| Serious Games for Psychological Health Education | | BIBAK | Full-Text | 3-10 | |
| Anya Andrews | |||
| This paper presents a summary of recent research efforts aiming to address
modern psychological health education needs through the use of innovative
instructional tools. The current body of research on virtual learning
environments and serious games as they relate to psychological treatment shows
promising results, especially for instructional interventions that provide an
optimal blend of education and training and focus on psychological health
knowledge acquisition as well as on appropriate stress management skills and
behaviors. In concert with the theoretical and research
foundations within the psychological health domain and pedagogical precepts in
the area of simulation and game-based learning, this article also presents
design considerations for serious games for psychological health. Keywords: Serious Games; Psychological Health Education; Mental Health; Virtual
Learning Environments | |||
| Mixed Reality as a Means to Strengthen Post-stroke Rehabilitation | | BIBAK | Full-Text | 11-19 | |
| Ines Di Loreto; Liesjet Van Dokkum; Abdelkader Gouaich; Isabelle Laffont | |||
| The purpose of this paper is to present a mixed reality system (MRS) for the
rehabilitation of the upper limb after stroke. The aim of the system is to
increase the amount of training by using fun as a driver. While the acceptance
of such a system can be assessed with patients, true clinical validity can be
assessed only through a long randomized clinical trial. However, a first
important impression of usefulness can be based on therapists' expertise. For
this reason, before testing the MRS with patients, we carried out a study with
therapists involving the rehabilitation staff of a French hospital. Three
sessions were held: one using the Wii system with a commercial game, another
using an ad hoc game developed for a PC, and another using a mixed reality
version of the same game. In summary, the results show that the MR system is
regarded as useful for a larger number of patients, in particular those in the
more acute phase after stroke. Keywords: Mixed reality; Post stroke rehabilitation; Serious Games | |||
| A Virtual Experiment Platform for Mechanism Motion Cognitive Learning | | BIBAK | Full-Text | 20-29 | |
| Xiumin Fan; Xi Zhang; Huangchong Cheng; Yanjun Ma; Qichang He | |||
| In order to give students a more intuitive understanding of mechanism motion
systems, a virtual experiment platform is designed and developed. First,
experimental component models, which contain both visual information and
logical information, are built. The logical information is modeled with the
unified modeling language Modelica (MO). Then, a virtual experiment scene, also
described in Modelica, is assembled in a virtual reality environment. The
virtual scene MO model is flattened into a set of equations, which are compiled
and solved so that the mechanism motion data can be output. Last, the motion
data are exported into the Virtual Reality environment for visualization of the
simulation results. Students can use the platform to build mechanism system
experiments and simulate the component motion for a better understanding of
mechanism composition and its movement principles. The platform is universal
and can be expanded to other subjects easily because the experimental
components are built with the unified modeling method. Keywords: virtual experiment; mechanism motion experiment; modelica modeling; virtual
reality | |||
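To illustrate the kind of flattened equation system and motion data such a platform produces, here is a minimal, hypothetical Python sketch (not the authors' Modelica tool chain): the loop-closure equation of a slider-crank mechanism is solved in closed form and sampled into motion data that a VR scene could animate.

```python
import numpy as np

def slider_crank_motion(r=0.05, l=0.20, rpm=60.0, steps=360):
    """Kinematics of a slider-crank mechanism (hypothetical example component):
    crank radius r and connecting-rod length l in metres, crank speed in rpm.
    Returns time stamps, crank angles and slider positions for visualization."""
    omega = 2.0 * np.pi * rpm / 60.0                  # crank angular velocity [rad/s]
    t = np.linspace(0.0, 2.0 * np.pi / omega, steps)  # one full crank revolution
    theta = omega * t
    # Closed-form solution of the flattened loop-closure equation
    x = r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)
    return t, theta, x

t, theta, x = slider_crank_motion()
print(x.min(), x.max())   # slider stroke endpoints: l - r and l + r
```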
| Mechatronic Prototype for Rigid Endoscopy Simulation | | BIBAK | Full-Text | 30-36 | |
| Byron Perez-Gutierrez; Camilo Ariza-Zambrano; Juan Camilo Hernández | |||
| Haptic systems include hardware and software components for providing
programmable sensations of mechanical nature, such as those related to the
sense of touch. This article covers the mechatronic design of a rigid endonasal
endoscopy simulator that allows the user to feel force feedback from collisions
with anatomical structures during navigation in a virtual environment. The
mechatronic system design provides tactile feedback information with three
degrees of freedom to the user, based on an open-loop control implementation.
The tests were performed on a computational prototype that allows the
visualization of medical image volumes and navigation with a collision
detection system. Keywords: Haptic; Virtual Reality; Virtual Endoscopy | |||
| Patterns of Gaming Preferences and Serious Game Effectiveness | | BIBAK | Full-Text | 37-43 | |
| Katelyn Procci; James Bohnsack; Clint A. Bowers | |||
| According to the Technology Acceptance Model (TAM), important predictors of
system use include application-specific self-efficacy, ease of use, and
perceived usefulness. Current work with the TAM includes extending the
assessment framework to domains such as serious games, as well as examining how
typically under-researched factors, such as gender, affect technology use. The
current work reports gender differences in both game playing behaviors and
general game genre preferences, offers implications for serious game designers
regarding the development of effective learning interventions based on these
differences, and finally suggests avenues for future research in this area. Keywords: gender differences; serious games; technology acceptance model; user
preferences | |||
| Serious Games for the Therapy of the Posttraumatic Stress Disorder of Children and Adolescents | | BIBA | Full-Text | 44-53 | |
| Rafael Radkowski; Wilfried Huck; Gitta Domik; Martin Holtmann | |||
| Posttraumatic stress disorder (PTSD) is a mental health problem that can emerge as a delayed reaction to a traumatic incident. A common treatment is so-called exposure therapy. However, children and adolescents cannot be treated with this common therapy. In this paper we describe a serious game for the therapy of PTSD in children and adolescents. The objective of this paper is to introduce a concept for the game development and a method to balance the game, based on a so-called impulse hierarchy. The game prototype, together with the underlying concept and methods, has been tested with healthy test persons. The results are a strong indication of the effectiveness of the developed game concept and the usefulness of the key principles. | |||
| Virtual Reality as Knowledge Enhancement Tool for Musculoskeletal Pathology | | BIBAK | Full-Text | 54-63 | |
| Sophia Sakellariou; Vassilis Charissis; Stephen Grant; Janice Turner; Dianne Kelly; Chistodoulos Christomanos | |||
| Contemporary requirements for medical explanatory resources have sparked the
initiative to develop a unique pilot application that uses real-time 3D
visualisation in order to inform General Practitioners (GPs) and allied health
professionals as well as educate patients on musculoskeletal issues and
particularly lower back pain. The proposed application offers a selection of 3D
spinal anatomical and pathological models with embedded information. The
interface elements adhere to previous studies' suggestion that knowledge
acquisition, and ultimately understanding, of such complex three-dimensional
subjects typically entails a strong grasp of the 3D anatomy to which they
relate. The Human-Computer Interaction is simplified in order to empower the
user to explore the healthy and pathological anatomy of the spine without the
typical real-life constraints. The paper presents the design philosophy of the
interface and the evaluation results from twenty user trials. Finally, the
paper discusses the results and offers a future plan of action. Keywords: VR; 3D; HCI; Musculoskeletal; Medical Education; visual interface; Low Back
Pain | |||
| Study of Optimal Behavior in Complex Virtual Training Systems | | BIBAK | Full-Text | 64-72 | |
| Jose San Martin | |||
| In previous work we have studied the behavior of simple training systems
built around a haptic device, based on criteria derived from the concept of
manipulability. The study of complex systems requires re-defining the criteria
of optimal design for these systems. It is necessary to analyze how the
workspaces of two different haptic devices operating simultaneously on the same
model limit each other's movement. Results of the newly proposed measures are
applied to the Insight ARTHRO VR training system. Minimally Invasive Surgery
(MIS) techniques use miniature cameras with microscopes, fiber-optic
flashlights and high-definition monitors. The camera and the instruments are
inserted through small incisions in the skin called portals. The trainer uses
two PHANToM OMNi haptic devices, one representing the camera and the other the
surgical instruments. Keywords: Haptics; Workspace Interference; Manipulability; Optimal Designing | |||
| Farming Education: A Case for Social Games in Learning | | BIBAK | Full-Text | 73-79 | |
| Peter A. Smith; Alicia Sanchez | |||
| Social games have skyrocketed in popularity, much to the surprise of many in
the game development community. By reinforcing individualized internalization
of concepts while framing those experiences in terms of social activities,
social games are filling a void not adequately filled by other games and may
turn out to be powerful learning tools. Their potential use in education is
still in its infancy as many consider how the characteristics unique to social
games could be used within a learning paradigm. By creating asynchronous
multiplayer environments and play dynamics designed to leverage both individual
and collaborative goals, social games may foster long-distance relationships
and encourage reflection on the tasks performed. Keywords: Social Games; Social Networks; Learning Games; Serious Games | |||
| Sample Size Estimation for Statistical Comparative Test of Training by Using Augmented Reality via Theoretical Formula and OCC Graphs: Aeronautical Case of a Component Assemblage | | BIBAK | Full-Text | 80-89 | |
| Fernando Suárez-Warden; Yocelin Cervantes-Gloria; Eduardo González-Mendívil | |||
| Advances in Augmented Reality applied to learning assembly operations must
certainly be evaluated in terms of productivity. We propose a coherent sequence
of statistical procedures that leads to the estimated work sample size (n)
according to the level of significance required by the aeronautical sector (or
a similarly justified one) and the estimated, sometimes preconceived, value of
the plus-minus error (E or E±). We used the Kolmogorov-Smirnov test, a
nonparametric (distribution-free) test, to verify that a normal distribution
fits the data. Assuming a normal population, the confidence interval is then
determined using Student's t distribution with (n-1) degrees of freedom. We
obtained the error E and various sample sizes via the theoretical formula.
Additionally, we used a significance level α and a power of the test (related
to β), both selected for the aeronautical segment, to estimate the sample size
via Operating Characteristic Curves (OCC), one of the statistically more
rigorous approaches. Several scenarios with different n values make up the
outcome herein, and we discuss the options offered by the different estimation
approaches. Keywords: Augmented Reality (AR); plus-minus error or margin of error; confidence
interval (CI); Operating Characteristic Curves (OCC) | |||
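For reference, the sample size implied by a Student's t confidence interval with margin of error E is usually obtained by iterating a standard textbook relation (shown below in LaTeX); the authors' exact derivation may differ:

$$ E = t_{\alpha/2,\,n-1}\,\frac{s}{\sqrt{n}} \quad\Longrightarrow\quad n = \left(\frac{t_{\alpha/2,\,n-1}\; s}{E}\right)^{2}, $$

where s is the sample standard deviation and t_{\alpha/2, n-1} itself depends on n, so the relation is solved iteratively until n converges.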
| Enhancing English Learning Website Content and User Interface Functions Using Integrated Quality Assessment | | BIBAK | Full-Text | 90-99 | |
| Dylan Sung | |||
| The present study investigated the applicability of an integrated quality
assessment approach to assess English learning website quality. The study used
the Kano Model to identify attractive quality attributes of the content and
user interface functions of an English learning website. The
Importance-Satisfaction Model was used to determine the interface functions
that need to be improved. Findings of the study led to the conclusion that the
content and user interface functions of English learning websites should be
specially developed according to the satisfaction level of the learners and
also the degree of importance perceived by them. On the basis of the key
quality attributes identified by utilizing the integrated quality assessment
model developed in this study, English learning website designers can make
important decisions on specific areas for enhancing the quality of the website
and improving the learning efficiency of the users. Keywords: English as a foreign language (EFL); English learning; computer-assisted
language learning (CALL); Internet-assisted language learning; e-learning;
educational technology | |||
| The Influence of Virtual World Interactions toward Driving Real World Behaviors | | BIBAK | Full-Text | 100-109 | |
| Hari Thiruvengada; Paul Derby; Wendy Foslien; John Beane; Anand Tharanathan | |||
| In recent years, virtual worlds have gained widespread popularity and
acceptance in a variety of application domains including training, education,
social networking, and conceptual demonstrations. This is largely due to their
ability to support modeling of fully textured high-resolution real world
objects, to provide a compelling user experience, and to offer novel, rich and
exploratory interactions to the user. However, the impact of familiarity with
the real world domain and objects on user behavior is still unclear. In this
study, we discuss the findings from a pilot study on a virtual world facility
tour that was based on a real world facility. The objectives of the tour were
threefold. First, we sought to understand the feasibility of using a virtual
tour in lieu of the actual real world tour. Second, the tour was used as an
educational tool to demonstrate several sustainable or efficient energy
initiatives to the facility occupants. Specifically, the virtual tour consisted
of an interactive energy dashboard, a low voltage LED based lighting
demonstration, an illustration of Heating, Ventilation and Air Conditioning
(HVAC) equipment operations during day and night, and renewable energy sources
within the facility. Third, we sought to understand the impact of the tour on
participants' future behaviors and attitudes toward sustainable energy. In
order to address these overarching objectives, user feedback was collected
using a survey after the users participated in the tour. We administered the
survey to both occupants and nonoccupants of the facility to also understand
the impact of familiarity on their behaviors. Users who were familiar with the
facility were more optimistic about learning to navigate the virtual replica
than those who were not familiar. Our preliminary findings from the survey
indicate that virtual worlds can have a positive impact on users' behavior.
Overall, we found that users' engagement during the virtual tour could
contribute to learning and the development of lasting positive behaviors within
the virtual world, which can, in turn, translate into real world behaviors. Keywords: Virtual Worlds; Experiential Learning; Human-in-the-loop simulation | |||
| Interactive Performance: Dramatic Improvisation in a Mixed Reality Environment for Learning | | BIBAK | Full-Text | 110-118 | |
| Jeff Wirth; Anne E. Norris; Daniel P. Mapes; Kenneth E. Ingraham; J. Michael Moshell | |||
| A trained interactive performer uses a combination of head-motion capture
and a new desktop gesture/posture control system to enact five avatars on a
screen, as those avatars interact face-to-face with a participant/trainee. The
inter-actor, assisted by a narrator/operator, provides voices for all five
on-screen characters and leads the participant through a story-driven
improvisational experience. This paper focuses on the processes of scenario
development, inter-actor training and production management in the new creative
discipline of interactive performance in mixed reality environments. Keywords: interactive performance; mixed reality; avatar; role-playing; learning;
training | |||
| Emotions and Telerehabilitation: Pilot Clinical Trials for Virtual Telerehabilitation Application Using Haptic Device and Its Impact on Post Stroke Patients' Mood and Motivation | | BIBAK | Full-Text | 119-128 | |
| Shih-Ching Yeh; Margaret McLaughlin; Yujung Nam; Scott Sanders; Chien-Yen Chang; Bonnie Kennedy; Sheryl Flynn; Belinda Lange; Lei Li; Shu-ya Chen; Maureen Whitford; Carolee J. Winstein; Younbo Jung; Albert A. Rizzo | |||
| We describe a pilot clinical trial with a flexible telerehabilitation
platform that allows a therapist to remotely monitor the exercise regimen and
progress of a patient who previously suffered from a stroke. We developed
virtual game environments which were host to a progressive set of training
tasks from precise fine motor movements to reaching movements that involve full
arm and shoulder activity. Concurrently, the therapist monitored the progress
of the patient through a video channel. Assessment of psychosocial variables
shows that negative feelings (confusion, t(13)=2.54, p<.05, depression
t(13)=2.58, p<.05, and tension, t(13)=2, p<.1) were significantly
lessened after the game play. Patients' overall satisfaction with the
telerehabilitation system was positively correlated with the feeling of
co-presence of the therapist, r(8)=.770, p<.005. Patients felt less
efficacious in continuing therapy after participating in the telerehabilitation
game compared to their reported perseverance self-efficacy before the game,
t(5)=2.71, p<.05 and showed decreased willingness to persist in therapy
regardless of fatigue after the game play, t(5)=2.67, p<.05. However, when
patients' pretest mood scores were taken into account, this trend was reversed.
Patients' active mood before the game was positively correlated with their
willingness to persist in the therapy after the game, r(14)=.699, p<.005.
Telerehabilitation significantly enhanced stroke patients' psychological
states. Keywords: Virtual reality; stroke rehabilitation; telerehabilitation; haptics Note: Best Paper Award | |||
| An Interactive Multimedia System for Parkinson's Patient Rehabilitation | | BIBAK | Full-Text | 129-137 | |
| Wenhui Yu; Catherine Vuong; Todd Ingalls | |||
| This paper describes a novel real-time Multimedia Rehabilitation Environment
for the rehabilitation of patients with Parkinson's Disease (PD). The system
integrates two well-known physical therapy techniques, multimodal sensory
cueing and the BIG protocol, with visual and auditory feedback to create an
engaging mediated environment. The environment has been designed to fulfill the
needs of both the physical therapist and the patient. Keywords: Parkinson's Disease; Physical Therapy; Mediated Rehabilitation; Sensory
Cueing; Multimodal Feedback; Virtual Environment | |||
| VClav 2.0 -- System for Playing 3D Virtual Copy of a Historical Clavichord | | BIBAK | Full-Text | 141-150 | |
| Krzysztof Gardo; Ewa Lukasik | |||
| The VClav 2.0 system presented in this paper enables the user to interact
with a digital 3D reconstruction of a historical clavichord in a lifelike
manner, using Virtual Reality gloves to "play" music. The real clavichord was
constructed in the 18th century by the famous maker Johann Adolph Hass of
Hamburg and is on display in the Museum of Musical Instruments in Poznan,
Poland (a department of the National Museum). The system is powered by the
NeoAxis game engine and equipped with a 5DT Data Glove 14 and a Polhemus
Patriot tracker. It is an exemplary solution for museums to actively present
musical instruments. Keywords: cultural heritage; Virtual Reality; 3D modeling; clavichord; gesture driven
HCI | |||
| A System for Creating the Content for a Multi-sensory Theater | | BIBAK | Full-Text | 151-157 | |
| Koichi Hirota; Seichiro Ebisawa; Tomohiro Amemiya; Yasushi Ikei | |||
| This paper reports on the current progress in a project to develop a
multi-sensory theater. The project is focused not only on the development of
hardware devices for multi-sensory presentations but also on an investigation
into the framework and method of expression for creating the content.
Olfactory, wind, and pneumatic devices that present the sensation of odor, wind
and gusts, respectively, were developed and integrated into an audio-visual
theater environment. All the devices, including the video device, are
controlled through a MIDI interface. Also, a framework for creating the
multi-sensory content by programming the sequence of device operations was
proposed and implemented. Keywords: multi-sensory theater; odor; sensation of wind; multi-sensory content | |||
| Wearable Display System for Handing Down Intangible Cultural Heritage | | BIBAK | Full-Text | 158-166 | |
| Atsushi Hiyama; Yusuke Doyama; Mariko Miyashita; Eikan Ebuchi; Masazumi Seki; Michitaka Hirose | |||
| In recent years, much traditional craftsmanship has been declining because
skilled craftspeople are aging and successors are few. Therefore, methods for
digitally archiving such traditional craftsmanship are needed. We have
constructed a wearable skill handing-down system focused on a craftsman's
first-person visual and audio information and biological information. We used
instrumental information associated with tool usage to evaluate the effect of
the proposed wearable display system for intangible cultural heritage. In this
paper, we show the results of archiving and training on the skills of Kamisuki,
traditional Japanese papermaking. Keywords: Intangible cultural heritage; Skill transfer; Tacit Knowledge; Wearable
computer | |||
| Stroke-Based Semi-automatic Region of Interest Detection Algorithm for In-Situ Painting Recognition | | BIBAK | Full-Text | 167-176 | |
| Youngkyoon Jang; Woontack Woo | |||
| Under changes in illumination and view direction, the ability to accurately
detect Regions of Interest (ROI) is important for robust recognition. In this
paper, we propose a stroke-based semi-automatic ROI detection algorithm using
adaptive thresholding and a Hough-transform method for in-situ painting
recognition. The proposed algorithm handles both simple and complicated
painting textures by adaptively finding the threshold. It provides dominant
edges by using the determined threshold, thereby enabling the Hough-transform
method to succeed. The proposed algorithm is also easy to learn, as it requires
only minimal participation from the user, who draws a diagonal line from one
end of the ROI to the other. Even though the stroke specifies only two vertex
search regions, the algorithm detects the unspecified vertices by estimating
probable vertex positions from appropriate lines passing through the
pre-detected vertices. In this way, it detects the painting region accurately
(1.16 pixels of error), even when the user views the painting from the side and
gives inaccurate (4.53 pixels of error) input points. Finally, the proposed
algorithm achieves fast processing times on mobile devices by adopting the
Local Binary Pattern (LBP) method and normalizing the size of the detected ROI;
the ROI image becomes smaller in a general code format for recognition, while a
high recognition accuracy (99.51%) is preserved. As such, it is expected that
this work can be used for a mobile gallery viewing system. Keywords: Semi-automatic ROI Detection; Hough-transform; Planar Object Recognition;
Local Binary Pattern | |||
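As a rough illustration of the edge-extraction stage described above (adaptive thresholding followed by a Hough transform), the following OpenCV sketch is not the authors' implementation; the parameter values and the file name are assumptions.

```python
import cv2
import numpy as np

def dominant_border_lines(gray, block_size=31, c=5):
    """Sketch of the edge stage: adaptive thresholding copes with uneven
    illumination, Canny extracts edges, and a Hough transform returns the
    dominant straight lines from which a painting border could be selected."""
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, block_size, c)
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    return lines  # array of (rho, theta) pairs, or None if nothing was found

gray = cv2.imread("painting.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
print(dominant_border_lines(gray) is not None)
```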
| Personalized Voice Assignment Techniques for Synchronized Scenario Speech Output in Entertainment Systems | | BIBAK | Full-Text | 177-186 | |
| Shinichi Kawamoto; Tatsuo Yotsukura; Satoshi Nakamura; Shigeo Morishima | |||
| The paper describes voice assignment techniques for synchronized scenario
speech output in an instant casting movie system that enables anyone to be a
movie star using his or her own voice and face. Two prototype systems were
implemented, and both systems worked well for various participants, ranging
from children to the elderly. Keywords: Instant casting movie system; post-recording; speaker similarity; voice
morphing; synchronized speech output | |||
| Instant Movie Casting with Personality: Dive into the Movie System | | BIBAK | Full-Text | 187-196 | |
| Shigeo Morishima; Yasushi Yagi; Satoshi Nakamura | |||
| "Dive into the Movie (DIM)" is a name of project to aim to realize a world
innovative entertainment system which can provide an immersion experience into
the story by giving a chance to audience to share an impression with his family
or friends by watching a movie in which all audience can participate in the
story as movie casts. To realize this system, we are trying to model and
capture the personal characteristics instantly and precisely in face, body,
gait, hair and voice. All of the modeling, character synthesis, rendering and
compositing processes have to be performed on real-time without any manual
operation. In this paper, a novel entertainment system, Future Cast System
(FCS), is introduced as a prototype of DIM. The first experimental trial
demonstration of FCS was performed at the World Exposition 2005 in which
1,630,000 people have experienced this event during 6 months. And finally
up-to-date DIM system to realize more realistic sensation is introduced. Keywords: Personality Modeling; Gait Motion; Entertainment; Face Capture | |||
| A Realtime and Direct-Touch Interaction System for the 3D Cultural Artifact Exhibition | | BIBAK | Full-Text | 197-205 | |
| Wataru Wakita; Katsuhito Akahane; Masaharu Isshiki; Hiromi T. Tanaka | |||
| We propose a realtime, direct-touch interaction system for 3D cultural
artifact exhibition based on a texture-based haptic rendering technique. In the
field of digital archiving, it is important to archive and exhibit cultural
artifacts in high definition. To archive the shape, color and texture of a
cultural artifact, it is important to capture and represent not only the visual
appearance but also the haptic impression. Multimodal digital archiving,
realtime multisensory rendering, and an intuitive and immersive exhibition
system are therefore necessary. To this end, we developed a realtime,
direct-touch interaction system for 3D cultural artifact exhibition based on a
texture-based haptic rendering technique. In our system, the viewer can
directly touch a stereoscopic view of a digitally archived 3D cultural artifact
with the string-based, scalable haptic interface device "SPIDAR" and a
vibration motor. Keywords: Digital Museum; Virtual Reality; Computer Graphics; Haptics | |||
| Digital Display Case: A Study on the Realization of a Virtual Transportation System for a Museum Collection | | BIBAK | Full-Text | 206-214 | |
| Takafumi Watanabe; Kenji Inose; Makoto Ando; Takashi Kajinami; Takuji Narumi; Tomohiro Tanikawa; Michitaka Hirose | |||
| This paper describes our proposed virtual transportation system. Our
proposed system is a display case for use at art museums, which is based on
computer graphics and image-based rendering (IBR) techniques. Using this
system, anyone can simply create and realistically represent virtual cultural
assets. This system consists of two main components: a display unit and a
capture unit. The display unit is in the shape of a conventional display case
in order to represent virtual cultural assets. The capture unit, which is
created by attaching cameras to a conventional display case, reconstructs
cultural assets using IBR techniques. In our experiment, we implemented a basic
system using View Morphing as the IBR technique. The results show that this
system can represent virtual cultural assets as 3D objects on the display unit
by using arbitrary view images that are interpolated by View Morphing. Keywords: digital display case; digital museum; image based rendering; virtual reality | |||
| Integrating Multi-agents in a 3D Serious Game Aimed at Cognitive Stimulation | | BIBAK | Full-Text | 217-226 | |
| Priscilla F. de Abreu; Luís Alfredo Vidal de Carvalho; Vera Maria Benjamim Werneck; Rosa Maria Esteves Moreira da Costa | |||
| Therapies for cognitive stimulation must be developed when some of the
cognitive functions are not working properly. In many applications there is a
strong dependence on the therapist's intervention to control the patient's
navigation in the environment and to change the difficulty level of a task. In
general, these interventions cause distractions, reducing the level of user
immersion in the activities. As an alternative, the inclusion of intelligent
agents can help to alleviate this problem by reducing the need for therapist
involvement. This paper presents a serious game that combines the technologies
of Virtual Reality and Multi-Agent Systems, designed to improve the cognitive
functions of patients with neuropsychiatric disorders. The integration of
different technologies and the modelling methodology are described and open new
software development perspectives for the construction of 3D environments. Keywords: Virtual reality; Multi-Agents Systems; Serious Games; Cognitive Stimulation | |||
| Automatic 3-D Facial Fitting Technique for a Second Life Avatar | | BIBA | Full-Text | 227-236 | |
| Hiroshi Dohi; Mitsuru Ishizuka | |||
| This paper describes an automatic 3-D facial fitting technique for a Second Life avatar. It is often difficult to create by yourself an original avatar that resembles a real person. In Second Life, combinations of system-defined parameters deform the shape of the avatar, and we cannot control each vertex directly, so the deformation information must be encoded into many parameters. As a reference target, we make use of MRI data scanned from the real face. It is stable and not affected by lighting and diffuse reflection. In our experiments, we picked 428 vertices on the base model for facial fitting. Using the iteration technique, more than 50% of the vertices lie exactly on the reference targets, and more than 85% are within +/- 3 mm of error. | |||
| Reflected in a Liquid Crystal Display: Personalization and the Use of Avatars in Serious Games | | BIBAK | Full-Text | 237-242 | |
| Shan Lakhmani; Clint A. Bowers | |||
| Personalization, in the realm of Serious Games, is the extent to which users
believe that the digital environment is tailored to their characteristics and
preferences. This belief can have major repercussions for a user's experience
with the game and can subsequently be used to maximize the return on investment
for serious game designers. Other factors that influence users' personalization
in games include how the game affects the users' perception of self, presence
in the game, and social relationships developed in the game. Users' avatars
influence all of these factors. The goal of this paper is to examine the
research done on avatars and personalization and to present it in the context
of serious games research. Keywords: avatars; personalization; presence; immersion; serious games | |||
| Leveraging Unencumbered Full Body Control of Animated Virtual Characters for Game-Based Rehabilitation | | BIBAK | Full-Text | 243-252 | |
| Belinda Lange; Evan A. Suma; Brad Newman; Thai Phan; Chien-Yen Chang; Albert A. Rizzo; Mark T. Bolas | |||
| The use of commercial video games as rehabilitation tools, such as the
Nintendo® Wii Fit™, has recently gained much interest in the physical
therapy arena. However, physical rehabilitation requires accurate and
appropriate tracking and feedback of performance, often not provided by
existing commercial console devices or games. This paper describes the
development of an application that leverages recent advances in commercial
video game technology to provide full-body control of animated virtual
characters with low cost markerless tracking. The aim of this research is to
develop and evaluate an interactive game-based rehabilitation tool for balance
training of adults with neurological injury. This paper outlines the
development and evaluation of a game-based rehabilitation tool using the
PrimeSense depth sensing technology, designed to elicit specific therapeutic
motions when controlling a virtual avatar in pursuit of in-game goals. A sample
of nine adults participated in the initial user testing, providing feedback on
the hardware and software prototype. Keywords: video game; balance; stroke; camera tracking | |||
| Interactive Exhibition with Ambience Using Video Avatar and Animation on Huge Screen | | BIBAK | Full-Text | 253-259 | |
| Hasup Lee; Yoshisuke Tateyama; Tetsuro Ogi; Teiichi Nishioka; Takuro Kayahara; Kenichi Shinoda | |||
| In this paper, we develop an interactive exhibition system using a video
avatar and an animation on a huge screen. For the video avatar, we separate the
subject from the background of the recorded video stream using chroma keying
and send the stream to the remote site using TCP/UDP protocols. The animation
on the huge screen provides immersion and conveys the ambience of the era of
the exhibits. We restore clothes, ceremonies, crowds, etc. using computer
animation. A 4K-resolution projector with a 300-inch screen is used in our
system, allowing viewers to feel the ambience of the environment in which the
exhibits once existed. Keywords: Video Avatar; Museum Digitalization; Interactive Exhibition; Digital
Restoration; Huge Screen | |||
| Realistic Facial Animation by Automatic Individual Head Modeling and Facial Muscle Adjustment | | BIBAK | Full-Text | 260-269 | |
| Akinobu Maejima; Hiroyuki Kubo; Shigeo Morishima | |||
| We propose a technique for automatically generating realistic facial
animation with precise individual facial geometry and characteristic facial
expressions. Our method comprises two key processes: the head modeling process
automatically generates a whole head model from facial range scan data alone,
and the facial animation setup process automatically generates key shapes that
represent individual facial expressions based on physics-based facial muscle
simulation, with an individual muscle layout estimated from facial expression
videos. Facial animations reflecting individual characteristics can be
synthesized using the generated head model and key shapes. Experimental results
show that the proposed method can generate facial animations in which 84% of
subjects can identify themselves. Therefore, we conclude that our head modeling
techniques are effective for entertainment systems such as the Future Cast. Keywords: Individual Head Model; Automatic Mesh Completion; Facial Muscle Layout
Estimation; Key Shape Generation; Facial Animation Synthesis | |||
| Geppetto: An Environment for the Efficient Control and Transmission of Digital Puppetry | | BIBAK | Full-Text | 270-278 | |
| Daniel P. Mapes; Peter Tonner; Charles E. Hughes | |||
| An evolution of remote control puppetry systems is presented. These systems
have been designed to provide high-quality trainer-to-trainee communication in
game scenarios containing multiple digital puppets, with interaction occurring
over long-haul networks. The design requirements were to support dynamic
switching of control between multiple puppets; suspension of disbelief when
communicating through puppets; sensitivity to network bandwidth requirements;
and affordability as a tool for professional interactive trainers
(Interactors). The resulting system uses a novel pose blending solution guided
by a scaled-down, desktop-range motion capture controller as well as
traditional button devices running on a standard game computer. This work incorporates aspects of
motion capture, digital puppet design and rigging, game engines, networking,
interactive performance, control devices and training. Keywords: Digital puppetry; avatar; gesture; motion capture | |||
| Body Buddies: Social Signaling through Puppeteering | | BIBAK | Full-Text | 279-288 | |
| Magy Seif El-Nasr; Katherine Isbister; Jeffery Ventrella; Bardia Aghabeigi; Chelsea Hash; Mona Erfani; Jacquelyn Ford Morie; Leslie Bishko | |||
| While virtual worlds have evolved to provide a good medium for social
communication, they are very primitive in their social and affective
communication design. The social communication methods within these worlds have
progressed from early text-based social worlds, e.g. MUDS (multi-user dungeons)
to 3D graphical interfaces with avatar control, such as Second Life. Current
communication methods include triggering gestures by typed commands, and/or
selecting a gesture by name through the user interface. There are no
agreed-upon standards for organizing such gestures or interfaces. In this
paper, we address this problem by discussing a Unity-based avatar puppeteering
prototype we developed called Body Buddies. Body Buddies sits on top of the
communication program Skype, and provides additional modalities for social
signaling through avatar puppeteering. Additionally, we discuss results from an
exploratory study we conducted to investigate how people use the interface. We
also outline steps to continuously develop and evolve Body Buddies. Keywords: avatar puppeteering; avatar nonverbal communication; social communication
with avatars; avatar design; CVE (Collaborative Virtual Environment) | |||
| Why Can't a Virtual Character Be More Like a Human: A Mixed-Initiative Approach to Believable Agents | | BIBAK | Full-Text | 289-296 | |
| Jichen Zhu; J. Michael Moshell; Santiago Ontañón; Elena Erbiceanu; Charles E. Hughes | |||
| Believable agents have applications in a wide range of human computer
interaction-related domains, such as education, training, arts and
entertainment. Autonomous characters that behave in a believable manner have
the potential to maintain human users' suspension of disbelief and fully engage
them in the experience. However, how to construct believable agents, especially
in a generalizable and cost effective way, is still an open problem. This paper
compares the two common approaches for constructing believable agents --
human-driven and artificial intelligence-driven interactive characters -- and
proposes a mixed-initiative approach in the domain of interactive training
systems. Our goal is to provide the user with engaging and effective
educational experiences through their interaction with our system. Keywords: Mixed-initiative system; character believability; interactive storytelling;
artificial intelligence; interactive virtual environment | |||
| Collaborative Mixed-Reality Platform for the Design Assessment of Cars Interior | | BIBAK | Full-Text | 299-308 | |
| Giandomenico Caruso; Samuele Polistina; Monica Bordegoni; Marcello Aliverti | |||
| The paper describes a collaborative platform to support the development and
evaluation of car interiors using a Mixed Prototyping (MP) approach.
platform consists of two different systems: the 3D Haptic Modeler (3DHM) and
the Mixed Reality Seating Buck (MRSB). The 3DHM is a workbench that allows us
to modify the 3D model of a car dashboard by using a haptic device, while the
MRSB is a configurable structure that enables us to simulate different driving
seats. The two systems allow the collaboration among designers, engineers and
end users in order to get, as final result, a concept design of the product
that satisfies both design constraints and final users' preferences. The
platform has been evaluated by means of several testing sessions, based on two
different scenarios, so as to demonstrate the benefits and the potentials of
our approach. Keywords: Collaborative design; Mixed Reality; Virtual Prototype; Haptic modeling;
Ergonomic assessment | |||
| Active Location Tracking for Projected Reality Using Wiimotes | | BIBAK | Full-Text | 309-317 | |
| Siam Charoenseang; Nemin Suksen | |||
| Commonly addressed issues in projected reality include location acquisition,
limited workspace, and geometric distortion. This paper proposes a low-cost,
robust, fast, and simple method for handling these problems using the infrared
cameras of Nintendo Wiimotes and a pan-tilt camera head. Two Wiimotes are
attached to the horizontal and vertical axes of a portable projector mounted on
a pan-tilt camera head, so the system can detect 4 infrared LEDs on the corners
of a display surface within the perspective projection volume. The augmented
images are warped to fit the display area. To increase the system workspace,
the pan-tilt camera head is used to track the display surface. While the
display surface or the projector moves, the proposed fast location tracking
algorithm coordinates the two Wiimotes. Experimental results demonstrate
real-time location tracking at 97 fps, which is higher than the refresh rate of
a typical projector. Finally, the active location tracking using the pan-tilt
camera head provides a workspace more than 36 times larger than the normal
perspective projection workspace. Keywords: Perspective Location Tracking; Projected Reality; Augmented Reality | |||
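The geometric-distortion correction mentioned above amounts to a projective pre-warp of the augmented image onto the tracked quadrilateral. Here is a minimal sketch of that step, assuming corner coordinates already recovered from the Wiimote tracking; the paper's actual pipeline may differ.

```python
import cv2
import numpy as np

def prewarp_for_surface(image, corners_px, proj_size=(1280, 800)):
    """Pre-warp 'image' so that, once projected, it fills the quadrilateral whose
    four corners (top-left, top-right, bottom-right, bottom-left, in projector
    framebuffer coordinates) were located by the IR tracking stage.
    corners_px and proj_size are hypothetical inputs, not values from the paper."""
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(corners_px)
    homography = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, homography, proj_size)
```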
| Fast Prototyping of Virtual Replica of Real Products | | BIBAK | Full-Text | 318-326 | |
| Francesco Ferrise; Monica Bordegoni | |||
| The ability to capture customers' needs and the voice of the customer, and to
translate them into a set of product specifications that best satisfy the
target customers, has increasingly become a key element of business strategy.
The common practice consists of evaluating products at the end of the design
process through physical prototypes with the participation of users and
potential customers. The same practice can be implemented by using virtual
replicas of real products, reducing the cost and time necessary to build the
variants. The paper presents a methodology for the development of a virtual
prototype of a piece of furniture, produced by a company that is interested in
studying how customers perceive and evaluate some variants of the hinge
mechanism. The virtual prototype has been implemented using a tool for virtual
reality applications oriented to non-expert programmers. The modularity and
flexibility of the approach used for implementing the virtual replica have
allowed us to re-use the components and to easily change the parameters, even
during the test activities. Keywords: Virtual Products; Virtual Prototyping; Fast Prototyping | |||
| Effectiveness of a Tactile Display for Providing Orientation Information of 3d-patterned Surfaces | | BIBA | Full-Text | 327-332 | |
| Nadia Vanessa Garcia-Hernandez; Ioannis Sarakoglou; Nikolaos G. Tsagarakis; Darwin G. Caldwell | |||
| This paper studies the effectiveness of a tactile display in providing information about the orientation of 3d-patterned surfaces. In particular, it investigates the perception of the orientation of sinusoidal gratings rendered through the display in a passive guided touch modality. The results of this study have revealed that participants could successfully perceive variations in the orientation of the rendered sinusoidal gratings. Moreover, they indicate a small difference in the perception of orientation between touching virtual gratings and touching real gratings. | |||
| ClearSpace: Mixed Reality Virtual Teamrooms | | BIBAK | Full-Text | 333-342 | |
| Alex Hill; Matthew N. Bonner; Blair MacIntyre | |||
| We describe ClearSpace, a tool for collaboration between distributed
teamrooms that combines components of virtual worlds and mixed presence
groupware. This prototype is a starting point for exploring solutions to
display and presence disparity by leveraging model-based user representations.
We describe our deployed system and a mirroring approach that solves several
problems with scaling up ClearBoard style portals to a common virtual space. We
also describe techniques for enforcing consistency between heterogeneous
virtual and physical contexts through system-managed awareness. Keywords: Mixed Reality; Distributed Groupware; Mixed Presence Groupware | |||
| Mesh Deformations in X3D via CUDA with Freeform Deformation Lattices | | BIBAK | Full-Text | 343-351 | |
| Yvonne Jung; Holger Graf; Johannes Behr; Arjan Kuijper | |||
| In this paper we present a GPU-accelerated implementation of the well-known
freeform deformation algorithm to allow for deformable objects within fully
interactive virtual environments. We furthermore outline how our real-time
deformation approach can be integrated into the X3D standard for more
accessibility of the proposed methods. The presented technique can be used to
deform complex detailed geometries without pre-processing the mesh by simply
generating a lattice around the model. The local deformation is then computed
for this lattice instead of the complex geometry, which can be carried out
efficiently on the GPU using CUDA. Keywords: Deformable objects; real-time simulation; FFD; CUDA; X3D | |||
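The lattice idea lends itself to a compact reference implementation. The sketch below is a plain NumPy (CPU) version of classic trilinear Bernstein FFD, offered only to illustrate the per-vertex computation that the paper maps onto CUDA; variable names and lattice layout are assumptions.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def ffd_deform(vertices, lattice, box_min, box_max):
    """Classic trilinear free-form deformation (Sederberg-Parry style):
    'lattice' holds the displaced control points of a box enclosing the mesh,
    shaped (l+1, m+1, n+1, 3). Vertices are deformed without touching the mesh
    topology, which is what makes the scheme easy to parallelize per vertex."""
    l, m, n = (d - 1 for d in lattice.shape[:3])
    stu = (vertices - box_min) / (box_max - box_min)   # local lattice coordinates
    deformed = np.zeros_like(vertices)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                weight = (bernstein(l, i, stu[:, 0]) *
                          bernstein(m, j, stu[:, 1]) *
                          bernstein(n, k, stu[:, 2]))
                deformed += weight[:, None] * lattice[i, j, k]
    return deformed
```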
| Visualization and Management of u-Contents for Ubiquitous VR | | BIBAK | Full-Text | 352-361 | |
| Kiyoung Kim; JongHyun Han; Changgu Kang; Woontack Woo | |||
| Ubiquitous Virtual Reality, where ubiquitous computing meets mixed reality,
is coming to our lives based on recent developments in the two fields. In this
paper, we focus on the conceptual properties of contents including definition
rather than infrastructures or algorithms for Ubiquitous Virtual Reality. For
this purpose, we define u-Content and its descriptor with three conceptual key
properties: u-Realism, u-Intelligence, and u-Mobility. Then we address the
overall scheme of the descriptor with a Context-aware Augmented Reality Toolkit
for visualization and management. We also show how the proposed concept is
applied in recent applications. Keywords: Ubiquitous VR; u-Contents; Augmented Reality; Context | |||
| Semi Autonomous Camera Control in Dynamic Virtual Environments | | BIBAK | Full-Text | 362-369 | |
| Marcel Klomann; Jan-Torsten Milde | |||
| We present a system for controlling the camera movement in an interactive
dynamic virtual world. The camera control is scripted in a specialized
scripting language. A portable script interpreter has been implemented,
allowing the scripts to run on both standard PCs and Xbox 360 systems. Keywords: Camera control; script interpreter; PC and XBOX | |||
| Panoramic Image-Based Navigation for Smart-Phone in Indoor Environment | | BIBAK | Full-Text | 370-376 | |
| Van Vinh Nguyen; Jin Guk Kim; Jong Weon Lee | |||
| In this paper, we propose a vision-based indoor navigation system for a
smart-phone. The proposed system is designed to help a user traveling around an
indoor environment determine his or her current position and indicate the
direction toward a chosen destination. To sense the user's position and
orientation, the system utilizes panoramic images, which are pre-captured in
the environment and then processed to create a database. To match images
captured by the user's smart-phone against the database, we use SURF [1], a
robust feature detector and descriptor. In addition, to minimize response time,
the system employs a client-server architecture in which a server module
handles the time-consuming processes. A tracking mechanism is also applied to
reduce matching time on the server. The experimental results show that the
system works well on a smart-phone at interactive rates. Keywords: Indoor navigation; panorama tracking; augmented reality | |||
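A minimal sketch of the image-to-database matching step, assuming a pre-built descriptor database; ORB is used here as a stand-in for SURF because SURF requires an OpenCV contrib/nonfree build, and all names are illustrative rather than taken from the paper.

```python
import cv2

# ORB stands in for SURF; the flow (descriptors, ratio test, best-scoring
# panorama) illustrates the same matching idea.
detector = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def best_location(query_gray, database):
    """database: list of (location_id, descriptors) built offline from the
    pre-captured panoramic images (a hypothetical structure, not the paper's)."""
    _, query_desc = detector.detectAndCompute(query_gray, None)
    best_id, best_score = None, 0
    for location_id, pano_desc in database:
        pairs = matcher.knnMatch(query_desc, pano_desc, k=2)
        # Lowe's ratio test keeps only distinctive matches
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_score:
            best_id, best_score = location_id, len(good)
    return best_id, best_score
```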
| Foundation of a New Digital Ecosystem for u-Content: Needs, Definition, and Design | | BIBA | Full-Text | 377-386 | |
| Yoosoo Oh; Sébastien Duval; Sehwan Kim; Hyoseok Yoon; Taejin Ha; Woontack Woo | |||
| In this paper, we analyze and classify digital ecosystems to demonstrate the need for a new digital ecosystem, oriented towards contents for ubiquitous virtual reality (U-VR), and to identify appropriate designs. First, we survey the digital ecosystems, explore their differences, identify unmet challenges, and consider their appropriateness for emerging services tightly linking real and virtual (i.e. digital) spaces. Second, we define a new type of content ecosystem (u-Content ecosystem) and describe its necessary and desirable features. Finally, the results of our analysis show that our proposed ecosystem surpasses the existing ecosystems for U-VR applications and contents. | |||
| Semantic Web-Techniques and Software Agents for the Automatic Integration of Virtual Prototypes | | BIBA | Full-Text | 387-396 | |
| Rafael Radkowski; Florian Weidemann | |||
| A virtual prototype is a computer-internal representation of a real prototype. It is composed of a set of different aspect models. A common technique for analyzing a virtual prototype is the use of virtual reality applications. However, this requires a composition of different aspect models and their integration into a virtual environment. In this paper, an agent-based technique is presented that facilitates the automatic integration of different aspect models. The Resource Description Framework is utilized to annotate the aspect models. A software agent compares the annotations of different models and thereby identifies similar models. A software prototype has been created that shows the usefulness of the approach. | |||
| Virtual Factory Manager | | BIBAK | Full-Text | 397-406 | |
| Marco Sacco; Giovanni Dal Maso; Ferdinando Milella; Paolo Pedrazzoli; Diego Rovere; Walter Terkaj | |||
| The current challenges in manufacturing engineering are the integration of
the product/process/factory worlds (data and tools) and the synchronization of
their lifecycles. Major ICT players already offer all-comprehensive Product
Lifecycle Management suites supporting most of the processes. However, they do
not offer all the required functionalities and they lack interoperability. An
answer will be given by the development of a Virtual Factory Framework (VFF):
an integrated virtual environment that supports factory processes throughout
all the phases of the factory lifecycle. This paper will focus on the Virtual Factory
Manager (VFM) that acts as a server supporting the I/O communications within
the framework for the software tools needing to access its data repository. The
VFM will ensure data consistency and avoid data loss or corruption while
different modules access/modify partial areas of the data repository at
different times. Finally, an industrial case study will show the potentiality
of the VFM. Keywords: Virtual Factory; Interoperability; Reference Model | |||
| FiveStar: Ultra-Realistic Space Experience System | | BIBAK | Full-Text | 407-414 | |
| Masahiro Urano; Yasushi Ikei; Koichi Hirota; Tomohiro Amemiya | |||
| This paper describes the development of the FiveStar system, which stimulates
the participant's five senses to create ultra-realistic experiences. We
performed an upgraded demonstration of the system to evaluate its individual
technologies at Asiagraph 2010 in Tokyo. The content of the exhibit was an
encounter with a yokai character, which produces effects of extraordinary
interaction between the participant and the imaginary characters. The
experiences of the participants were investigated as an exploratory effort
toward the elucidation of this type of ultra-reality created in a fantasy
world. Keywords: Multiple-modality; Interactive Experience; Augmented Reality; Ultra Reality | |||
| Synchronous vs. Asynchronous Control for Large Robot Teams | | BIBAK | Full-Text | 415-424 | |
| Huadong Wang; Andreas Kolling; Nathan Brooks; Michael Lewis; Katia P. Sycara | |||
| In this paper, we discuss and investigate the advantages of an asynchronous
display, called image queue, for foraging tasks with emphasis on Urban Search
and Rescue. The image queue approach mines video data to present the operator
with a relevant and comprehensive view of the environment, which helps the user
to identify targets of interest such as injured victims. This approach allows
operators to search through a large amount of data gathered by autonomous robot
teams, and fills the gap for comprehensive and scalable displays to obtain a
network-centric perspective for UGVs. We find that the image queue reduces
errors and operator workload compared with the traditional synchronous display.
Furthermore, it disentangles target detection from concurrent system operations
and enables a call-center approach to target detection. With such an approach,
the system could scale up to larger multi-robot systems gathering huge amounts
of data, with multiple operators. Keywords: Human-robot interaction; metrics; evaluation; multi-robot system; interface
design; simulation | |||
| Acceleration of Massive Particle Data Visualization Based on GPU | | BIBAK | Full-Text | 425-431 | |
| Hyun-Roc Yang; Kyung-Kyu Kang; Dongho Kim | |||
| When ray tracing is used to render massive numbers of particles, it usually
takes more time than rendering mesh-based models; here, the particle data used
to represent the fluid are generated by a fluid simulator. The data also form a
large, densely packed particle set. In this paper, we apply two schemes that
exploit the characteristics of the particles to reduce the rendering time and
address these problems. We apply GPGPU parallel processing to improve the
efficiency of the operations, together with a modified Kd-tree spatial
subdivision algorithm for further speed-up. Keywords: GPGPU; CUDA; Ray tracing; Kd-tree; octree | |||
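To make the spatial-subdivision idea concrete, here is a small CPU-side sketch of a median-split Kd-tree over particle positions; it is only a baseline illustration under assumed data shapes, not the authors' modified GPU algorithm.

```python
import numpy as np

def build_kdtree(points, indices=None, depth=0, leaf_size=32):
    """Median-split Kd-tree over particle positions: interior nodes store the
    split plane, leaves keep the indices of the particles they contain, ready
    for ray traversal."""
    if indices is None:
        indices = np.arange(len(points))
    if len(indices) <= leaf_size:
        return {"leaf": True, "indices": indices}
    axis = depth % 3
    order = np.argsort(points[indices, axis])
    indices = indices[order]
    mid = len(indices) // 2
    return {
        "leaf": False,
        "axis": axis,
        "split": float(points[indices[mid], axis]),
        "left": build_kdtree(points, indices[:mid], depth + 1, leaf_size),
        "right": build_kdtree(points, indices[mid:], depth + 1, leaf_size),
    }

particles = np.random.rand(100000, 3).astype(np.float32)  # stand-in simulator output
tree = build_kdtree(particles)
```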