| VR-based edutainment | | BIB | Full-Text | 1 | |
| Zhigeng Pan; Jim Chen | |||
| Age invaders: social and physical inter-generational mixed reality family entertainment | | BIBAK | Full-Text | 3-16 | |
| Eng Tat Khoo; Adrian David Cheok; Ta Huynh Duy Nguyen; Zhigeng Pan | |||
| Age invaders (AI) is a novel interactive intergenerational social-physical
game which allows the elderly to play harmoniously together with children in
the physical space, while parents can participate in the game play remotely
and in real time in the virtual world through the Internet. Traditional
digital games are designed for the young, and normally the player sits in
front of a computer or game console. Unlike standard computer games, Age
invaders brings the game play to a physical platform, and requires and
encourages physical body movement rather than constraining the user in front
of a computer for many hours. Age invaders is an interactive social-physical
family digital game designed specifically for harmonious game play between the
elderly and the young. Adjusting game properties automatically compensates for
potential disadvantages of elderly players, such as slower reaction times and
slower movement. Keywords: Elderly entertainment; Mixed reality; Physical computing; Social computing | |||
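To make the compensation idea concrete, the sketch below scales timing-related game properties per player type so that elderly and young players compete on even terms. It is a minimal illustration only; the player categories, property names, and scale factors are hypothetical and are not taken from Age Invaders.

```python
# A minimal sketch of automatic game-parameter adjustment: timing-related
# properties are scaled per player so that elderly and young players can
# compete on even terms. Player categories, parameters and scale factors
# are hypothetical, not the game's real values.
BASE = {"reaction_window_s": 1.0, "projectile_speed": 1.0, "step_distance": 1.0}

SCALE = {
    "child":   {"reaction_window_s": 1.0, "projectile_speed": 1.0, "step_distance": 1.0},
    "elderly": {"reaction_window_s": 2.0, "projectile_speed": 0.6, "step_distance": 0.7},
}

def adjusted_properties(player_type: str) -> dict:
    """Return per-player game properties scaled for that player's capabilities."""
    factors = SCALE[player_type]
    return {name: BASE[name] * factors[name] for name in BASE}

if __name__ == "__main__":
    print("child:  ", adjusted_properties("child"))
    print("elderly:", adjusted_properties("elderly"))
```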
| FARMTASIA: an online game-based learning environment based on the VISOLE pedagogy | | BIBAK | Full-Text | 17-25 | |
| Kevin K. F. Cheung; Morris S. Y. Jong; F. L. Lee; Jimmy H. M. Lee | |||
| Virtual interactive student-oriented learning environment (VISOLE) is a
game-based constructivist pedagogical approach that encompasses the creation of
an online interactive world modeled upon a set of interdisciplinary domains, in
which students participate as "citizens", cooperatively and competitively
shaping the development of the virtual world as a means of constructing their
knowledge and skills. FARMTASIA is the first online game designed using the
VISOLE philosophy, encompassing the subject areas of biology, government,
economics, technology, production systems and the natural environment. The
"virtual world" deployed is a farming system covering the domains of
cultivation, horticulture and pasturage, situated in a competitive economy
governed by good public policies. The design and implementation of FARMTASIA
pursue three vital principles. The first is to make the game as realistic as
possible so that students can learn in a near-real-life environment; the
second is to inject motivational elements so that students sustain their
learning and acquire various knowledge and skills through the game; and the
third is to make it easy for teachers to conduct the various VISOLE
facilitation tasks. Our exploratory educational study provides evidence that
students who participated in VISOLE learning with FARMTASIA developed positive
perceptions and advanced their subject-specific and interdisciplinary
knowledge. Keywords: VISOLE; Virtual learning environment; Educational game; Edutainment | |||
| Synchronization between audiovisual and haptic feeling for constructing edutainment systems | | BIBAK | Full-Text | 27-36 | |
| Yoshihiro Tabuchi; Norihiro Abe; Hirokazu Taki; Shoujie He | |||
| Virtual reality (VR) technology has matured considerably over the last
decade. The development of virtual environments for training purposes is
considered one of the most practical applications of VR technology. Since VR
technology involves many kinds of sensors exchanging information between the
real world and the virtual environment, it is computationally intensive both
in data processing at each individual sensor and in integrating information
across all the sensors. In general, this information integration has to be
well synchronized in order to meet training needs, and real-time processing
capability is also critical. Many practical issues are uncovered only when a
virtual training environment is actually developed. Based on this belief, this
study experiments with the development of a virtual environment for training
billiards players. The technical difficulties encountered and the
corresponding resolutions are considered beneficial to the development of
other practical virtual training environments. This paper summarizes the
design and implementation details of our experimental virtual training
environment for edutainment systems such as a virtual billiards game, a
virtual air hockey game and a virtual drum performance, together with the
algorithms for synchronizing the information from the different sources. Keywords: Training system; Virtual reality; Synchronized real-time processing;
Billiards game | |||
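As an illustration of the kind of synchronization problem the paper addresses, the sketch below keeps a fast haptic loop and a slower audiovisual loop consistent by timestamping every haptic state and interpolating it at render time. It is a minimal sketch under assumed update rates (1 kHz and 60 Hz) and a trivial motion model; it is not the authors' algorithm.

```python
# Illustrative only: a single-clock scheduler that keeps a fast haptic loop
# (e.g. 1 kHz) and a slower audiovisual loop (e.g. 60 Hz) consistent by
# timestamping every haptic state and interpolating it at render time.
# All names and rates here are hypothetical, not taken from the paper.
from dataclasses import dataclass

HAPTIC_DT = 1.0 / 1000.0   # haptic update period (s)
RENDER_DT = 1.0 / 60.0     # audiovisual update period (s)

@dataclass
class HapticSample:
    t: float          # timestamp (s)
    position: float   # 1-D cue/stick position, for illustration

def interpolate(a: HapticSample, b: HapticSample, t: float) -> float:
    """Linearly interpolate the haptic state at render time t."""
    if b.t == a.t:
        return b.position
    w = (t - a.t) / (b.t - a.t)
    return a.position + w * (b.position - a.position)

def run(duration: float = 0.1) -> None:
    t, next_render = 0.0, 0.0
    prev = curr = HapticSample(0.0, 0.0)
    while t < duration:
        # Fast loop: advance the haptic state (here, a trivial motion model).
        prev, curr = curr, HapticSample(t, curr.position + 0.5 * HAPTIC_DT)
        # Slow loop: render whenever its deadline has passed, using the two
        # most recent haptic samples so audio/visuals match what is felt.
        if t >= next_render:
            print(f"render at t={t:.3f}s, pos={interpolate(prev, curr, t):.4f}")
            next_render += RENDER_DT
        t += HAPTIC_DT

if __name__ == "__main__":
    run()
```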
| Tangled reality | | BIBAK | Full-Text | 37-45 | |
| Kevin Ponto; Falko Kuester; Robert Nideffer; Simon Penny | |||
| Leonardo da Vinci was a strong advocate for using sketches to stimulate the
human imagination. Sketching is often considered to be integral to the process
of design, providing an open workspace for ideas. For these same reasons,
children use sketching as a simple way to express visual ideas. By merging the
abstraction of human drawings and the freedom of virtual reality with the
tangibility of physical tokens, Tangled Reality creates a rich mixed reality
workspace. Tangled Reality allows users to build virtual environments based on
simple colored sketches and traverse them using physical vehicles overlaid
with virtual imagery. This setup allows the user to "build" and "experience"
mixed reality simulations without ever touching a standard computer interface. Keywords: Mixed reality; Augmented reality; Teleoperations; Edutainment | |||
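As a rough illustration of how a colored sketch might be turned into a virtual world, the snippet below classifies each sketch pixel to the nearest entry in a small palette of terrain types. The palette and the nearest-color rule are assumptions for illustration, not the method used by Tangled Reality.

```python
# A minimal sketch of turning a simple colored sketch into a virtual-world
# tile map: each drawn color is classified to the nearest entry in a small
# palette of terrain types. The palette and the nearest-color rule are
# illustrative assumptions, not the system's actual method.
import numpy as np

# Hypothetical palette: sketch color (RGB) -> terrain type.
PALETTE = {
    (0, 0, 255): "water",
    (0, 255, 0): "grass",
    (128, 128, 128): "road",
    (255, 255, 255): "empty",
}

def classify(sketch: np.ndarray) -> np.ndarray:
    """Map each sketch pixel to the terrain whose palette color is closest."""
    colors = np.array(list(PALETTE.keys()), dtype=float)   # (K, 3)
    names = np.array(list(PALETTE.values()))
    dist = np.linalg.norm(sketch[..., None, :].astype(float) - colors, axis=-1)
    return names[np.argmin(dist, axis=-1)]                  # (H, W) terrain names

if __name__ == "__main__":
    sketch = np.full((4, 4, 3), 255, dtype=np.uint8)   # blank page
    sketch[0, :] = (10, 10, 250)                        # a blue stroke -> water
    sketch[:, 1] = (120, 130, 120)                      # a grey stroke -> road
    print(classify(sketch))
```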
| Cognitive and synthetic behavior of avatars in intelligent virtual environments | | BIBAK | Full-Text | 47-54 | |
| Ronghua Liang; Mingmin Zhang; Zhen Liu; Meleagros Krokos | |||
| In intelligent virtual environments (IVEs), it is a challenging research
issue to provide the intelligent virtual actors (or avatars) with the ability
of visual perception and rapid response to virtual world events. Modeling an
avatar's cognitive and synthetic behavior appropriately is of paramount
importance in IVEs. We propose a new cognitive and behavior modeling
methodology that integrates two previously developed complementary approaches.
We present expression cloning, walking synthetic behavior modeling, and an
autonomous agent cognitive model for driving an avatar's behavior. Facial
expressions are generated using a rule-based state transition system that we
developed, and are further personalized for individuals by expression cloning.
An avatar's walking behavior is modeled using a skeleton model implemented
with seven motion sequences and finite state machines (FSMs). We discuss
experimental results demonstrating the benefits of our approach. Keywords: Expression animation; Walking synthetic behavior; Expression cloning;
Autonomous agent models; FSM | |||
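The snippet below is a minimal finite-state-machine sketch for sequencing an avatar's walking motions, in the spirit of the "seven motion sequences and FSMs" mentioned above. The state names and transition events are hypothetical and do not reproduce the authors' model.

```python
# A minimal FSM sketch for sequencing an avatar's walking motions.
# State names and transition events are hypothetical illustrations.

# Allowed transitions: current state -> {event: next state}
TRANSITIONS = {
    "idle":        {"start": "start_walk"},
    "start_walk":  {"step": "left_step"},
    "left_step":   {"step": "right_step", "stop": "stop_walk"},
    "right_step":  {"step": "left_step",  "stop": "stop_walk"},
    "stop_walk":   {"done": "idle"},
}

class WalkFSM:
    def __init__(self, state: str = "idle") -> None:
        self.state = state

    def handle(self, event: str) -> str:
        """Advance the FSM; unknown events leave the state unchanged."""
        self.state = TRANSITIONS.get(self.state, {}).get(event, self.state)
        return self.state

if __name__ == "__main__":
    fsm = WalkFSM()
    for ev in ["start", "step", "step", "step", "stop", "done"]:
        print(ev, "->", fsm.handle(ev))
```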
| Towards adaptive Web scriptable user interfaces for virtual environments | | BIBAK | Full-Text | 55-64 | |
| Emad Barsoum; Falko Kuester | |||
| The vast majority of Web-based technology, its ability to visualize static
and time-varying information and the pervasive nature of its content have led
to the development of applications and user interfaces that port between a
broad range of operating systems, databases and devices. However, the
integration of this immense resource into virtual environments (VEs) remains
elusive. In this paper we present a Web scriptable user interface that
utilizes Web browser technology to enable the user to search the Internet for
arbitrary information and to seamlessly augment the VE with this information.
WebVRI provides access to the standard data input and query mechanisms offered
by conventional Web browsers, with the difference that it generates active
texture-skins of the Web content that can be mapped onto arbitrary surfaces
within the environment. Once mapped, the corresponding texture functions as a
fully integrated Web browser that responds to traditional events such as the
selection of links or text input. As a result, any surface within the
environment can be turned into a Web-enabled resource that provides access to
user-definable data. Using WebVRI, users can merge Web content into their VE,
control its behavior and collaborate with other users inside and outside the
VE. This provides a completely new mechanism to access readily available
Web-based data, documents, images, animations, simulations and visualizations.
WebVRI also enables game-based education by providing the ability to create
content-rich, pervasive VR-based edutainment environments. Keywords: Virtual reality; 3D Web browser; WWW; Navigation; 2D/3D user interface | |||
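One piece of the texture-skin idea is routing a 3D interaction back to the underlying page. The sketch below maps the (u, v) texture coordinate at a picked point on a surface to a browser pixel coordinate where a click could be injected. The page resolution and the v-axis convention are assumptions; the abstract does not specify how WebVRI performs this mapping.

```python
# Illustrative only: map the (u, v) texture coordinate of a picked point on a
# Web-textured surface to a pixel coordinate in the rendered page, where an
# event could then be injected. Resolution and axis convention are assumed.
def uv_to_page_pixel(u: float, v: float,
                     page_width: int = 1024, page_height: int = 768) -> tuple:
    """Map texture coordinates (u, v) in [0, 1] to a browser pixel (x, y)."""
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    x = int(u * (page_width - 1))
    y = int((1.0 - v) * (page_height - 1))   # texture v runs bottom-up, pages top-down
    return x, y

if __name__ == "__main__":
    # Example: the user picks the middle of a textured wall in the VE.
    print(uv_to_page_pixel(0.5, 0.5))   # -> roughly the page centre
```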
| A ship motion simulation system | | BIBAK | Full-Text | 65-76 | |
| Shyh-Kuang Ueng; David Lin; Chieh-Hong Liu | |||
| In this article, efficient computational models for ship motions are
presented. These models are used to simulate ship movements in real time.
Compared with traditional approaches, our method can cope with different ship
shapes, engines, and sea conditions without loss of efficiency. Based on our
models, we create a ship motion simulation system for both entertainment and
educational applications. Our system helps users learn the motions of a ship
encountering waves, currents, and winds. Users can adjust engine power,
rudders, and other ship facilities via a graphical user interface to create
their own ship models. They can also change the environment by altering wave
frequencies, wave amplitudes, wave directions, currents, and winds. Numerous
combinations of ships and environments can therefore be generated, making the
learning more engaging. In our system, a ship is treated as a rigid body
floating on the sea surface. Its motion comprises six degrees of freedom:
pitch, heave, roll, surge, sway, and yaw. These motions are divided into two
categories: the first three are induced by sea waves, and the last three are
caused by propellers, rudders, currents, and winds. Based on Newton's laws and
other basic physical motion models, we derive algorithms to compute the
magnitudes of the motions. Our methods run in real time and possess high
fidelity. According to ship theory, the net effect of external forces on the
ship hull depends on the ship's shape, so the behavior of the ship is
influenced by its shape. To refine our physics models, we classify ships into
three basic types: flat ships, thin ships, and slender ships. Each type is
associated with predefined parameters that specify its characteristics. Users
can tune ship behavior by varying these parameters even if they have only a
little knowledge of ship theory. Keywords: Ship motions; Physics engine; Computer games; Simulation | |||
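The sketch below illustrates the kind of 6-DOF update the abstract describes: pitch, heave, and roll driven by a simple sinusoidal wave forcing, and surge, sway, and yaw driven by propeller, rudder, and current forces via Newton's second law with explicit Euler integration. All coefficients and the wave model are hypothetical placeholders, not the paper's actual equations.

```python
# A minimal 6-DOF ship-motion sketch: wave-driven pitch/heave/roll plus
# propulsion/current-driven surge/sway/yaw via F = m*a and N = I_z*alpha.
# Coefficients and the wave model are hypothetical placeholders.
import math

def step(state, t, dt, thrust=0.0, rudder=0.0, current=0.0,
         wave_amp=1.0, wave_freq=0.5, mass=1.0e6, inertia_z=1.0e8):
    """Advance the ship state by one time step with explicit Euler integration."""
    # Wave-induced motions (pitch, heave, roll): a simple forced oscillation.
    phase = 2.0 * math.pi * wave_freq * t
    state["pitch"] = 0.05 * wave_amp * math.sin(phase)
    state["heave"] = wave_amp * math.sin(phase)
    state["roll"] = 0.10 * wave_amp * math.cos(phase)

    # Propulsion/current-induced motions (surge, sway, yaw), with linear
    # damping standing in for hull resistance.
    surge_acc = (thrust - 5.0e4 * state["u"]) / mass
    sway_acc = (current - 5.0e4 * state["v"]) / mass
    yaw_acc = (rudder * thrust * 10.0 - 1.0e6 * state["r"]) / inertia_z
    state["u"] += surge_acc * dt   # forward (surge) speed
    state["v"] += sway_acc * dt    # lateral (sway) speed
    state["r"] += yaw_acc * dt     # yaw rate
    state["yaw"] += state["r"] * dt
    state["x"] += (state["u"] * math.cos(state["yaw"]) - state["v"] * math.sin(state["yaw"])) * dt
    state["y"] += (state["u"] * math.sin(state["yaw"]) + state["v"] * math.cos(state["yaw"])) * dt
    return state

if __name__ == "__main__":
    s = {"pitch": 0.0, "heave": 0.0, "roll": 0.0,
         "u": 0.0, "v": 0.0, "r": 0.0, "yaw": 0.0, "x": 0.0, "y": 0.0}
    for i in range(600):                               # one minute at 10 Hz
        s = step(s, t=i * 0.1, dt=0.1, thrust=2.0e5, rudder=0.1, current=1.0e3)
    print(f"after 60 s: x={s['x']:.1f} m, y={s['y']:.1f} m, heading={s['yaw']:.3f} rad")
```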
| The feasibility of a mixed reality surgical training environment | | BIBAK | Full-Text | 77-86 | |
| Louise Moody; Alan Waterworth; Avril D. McCarthy; Peter J. Harley | |||
| The Sheffield knee arthroscopy training system (SKATS) was originally a
visual-based virtual environment without haptic feedback, but has been further
developed as a mixed reality-training environment through the use of tactile
augmentation (or passive haptics). The design of the new system is outlined and
then tested. In the first experiment described, the effect of tactile
augmentation on performance is considered by comparing novice performance using
the original and mixed reality system. In the second experiment the mixed
reality system is assessed in terms of construct validity by comparing the
performance of users with differing levels of surgical expertise. The results
are discussed in terms of the validity of a mixed reality environment for
training knee arthroscopy. Keywords: Tactile augmentation; Passive haptics; Surgical simulator; Training;
Arthroscopy | |||
| Sensory-motor enhancement in a virtual therapeutic environment | | BIBAK | Full-Text | 87-97 | |
| Richard A. Foulds; David M. Saxe; Arthur W. Joyce, III; Sergei Adamovich | |||
| The sensory-motor skills of persons with neuromuscular disabilities have
been shown to be enhanced by intensive and repetitive therapeutic
interventions. This paper describes a form of low immersion virtual reality and
a prototype, open-source system that allows a user with significant physical
disability to actively interact with computer-generated objects whose behaviors
promote a game-like interaction. Unlike fully immersive and haptic virtual
reality, this approach frees the user from head-mounted displays and gloves. It
extracts the user's real-time silhouette from the output of a remote video
camera and uses that two-dimensional outline to interact with graphical objects
on screen. In contrast to video games that have been modified with specialized
interfaces, this virtual interaction system promotes the repetitive use of goal
directed movements of the arms and body, which are essential to promote
cortical reorganization, as well as discourage unwanted changes in muscle
tissue that result in contracture. A prototype system demonstrates the
potential of low immersion technology to motivate users and encourage
participation in therapy. It also offers the potential of accommodating the
sensory-motor skills of individuals with very significant impairment. The
behaviors of the computer-generated graphics can be altered to allow use by
those with very limited range of motion and/or motor control. These behaviors
can be adjusted to provide a continuing challenge as the user's skills improve.
This prototype is described in terms of functional capabilities that include a
silhouette extraction from a video image, and generation of graphical objects
that interact with the silhouette. The work is extended with a discussion of a
more sophisticated region of interest detection algorithm that can select
specific parts of the body. Keywords: Low-immersion; Rehabilitation; Cortical reorganization; Therapy;
Sensory-motor skills; Biomedical engineering | |||
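A minimal sketch of the silhouette-based interaction described above: the user's 2D outline is extracted by thresholded differencing of the current video frame against a stored background, and interaction is detected when the silhouette overlaps an on-screen object's region. The synthetic frames, threshold, and overlap test are illustrative assumptions, not the authors' implementation.

```python
# Illustrative silhouette extraction and object interaction using pure NumPy
# and synthetic frames; thresholds and object placement are hypothetical.
import numpy as np

def silhouette(frame: np.ndarray, background: np.ndarray,
               threshold: float = 30.0) -> np.ndarray:
    """Return a boolean mask of pixels that differ from the background."""
    return np.abs(frame.astype(float) - background.astype(float)) > threshold

def touches(mask: np.ndarray, box: tuple) -> bool:
    """True if any silhouette pixel falls inside the object's (y0, y1, x0, x1) box."""
    y0, y1, x0, x1 = box
    return bool(mask[y0:y1, x0:x1].any())

if __name__ == "__main__":
    h, w = 240, 320
    background = np.full((h, w), 100, dtype=np.uint8)
    frame = background.copy()
    frame[60:180, 140:180] = 200                            # a synthetic "user" region
    mask = silhouette(frame, background)
    print("object hit:", touches(mask, (50, 100, 150, 200)))  # overlaps -> True
    print("object hit:", touches(mask, (0, 40, 0, 40)))       # empty corner -> False
```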
| Stable haptic interaction using a damping model to implement a realistic tooth-cutting simulation for dental training | | BIBAK | Full-Text | 99-106 | |
| Guanyang Liu; Yuru Zhang; Dangxiao Wang; William T. Townsend | |||
| It is difficult to implement a stable and realistic haptic simulation for
cutting rigid objects that is based on a damping model because of an inevitable
conflict between stability and high output force. This paper presents passivity
techniques to show that an excessive damping coefficient causes the output
stiffness to exceed the maximum output stiffness of the haptic device, leading
to instability. By analysing the damping model of a haptic dental-training
simulator, we construct a relationship among the damping coefficient, position
resolution, sampling frequency, human operation, and the maximum achievable
device stiffness that will still maintain device stability. A method is also
provided to restrict the output stiffness of the haptic device to ensure
stability while enabling the realistic haptic simulation of cutting rigid
objects (teeth) based on a damping model. Our analysis and conclusions
are verified by a damping model that is constructed for a dental-training
haptic display. Three types of haptic devices are used in our analysis and
experiments. Keywords: Haptic display system; Output stiffness; Damping model; Stability criterion | |||
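To illustrate how output stiffness can be restricted for stability, the sketch below clamps the commanded stiffness to a passivity-style bound of the form K <= 2b/T, where b is the device's physical damping and T the sampling period. The paper derives a richer relation that also involves position resolution and the human operator; that relation is not reproduced here, and the device parameters below are hypothetical.

```python
# A minimal sketch of keeping rendered stiffness inside a stability bound
# using the widely cited passivity-style condition K <= 2*b/T. The paper's
# own relation (involving position resolution and the human operator) is not
# reproduced; all numbers are hypothetical device parameters.

def max_stable_stiffness(damping: float, period: float) -> float:
    """Upper bound on renderable stiffness (N/m) for damping b (N*s/m) and period T (s)."""
    return 2.0 * damping / period

def clamp_stiffness(requested: float, damping: float, period: float,
                    margin: float = 0.8) -> float:
    """Limit the requested stiffness to a safety margin below the bound."""
    return min(requested, margin * max_stable_stiffness(damping, period))

if __name__ == "__main__":
    b, T = 2.0, 1.0 / 1000.0        # hypothetical: 2 N*s/m damping, 1 kHz servo rate
    for k_req in (1000.0, 3000.0, 6000.0):
        k = clamp_stiffness(k_req, b, T)
        print(f"requested {k_req:.0f} N/m -> commanded {k:.0f} N/m")
```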
| The incorporation of challenge enhances the learning of chronology from a virtual display | | BIBAK | Full-Text | 107-113 | |
| Nigel Foreman; Liliya Korallo; David Newson; Natalie Sarantos | |||
| In earlier studies investigating the learning of historical chronology,
virtual fly-throughs were used in which successive historical events were
represented by images on virtual screens, placed in temporal-spatial sequence.
Undergraduate students benefited more than school-age children from virtual 3D
(compared to 2D) training, perhaps because they took on the task as a
challenge. In this study a modification of the earlier paradigm was used, in a
game-like format, in which successive screens (paintings, representing epochs
of art history) had to be memorised and anticipated during training, the
participant's score accumulating on the screen. Compared with PowerPoint and
verbal-semantic training conditions, VE training resulted in more rapid
learning, better recall of associated semantic information and error-free
recall of the picture sequence. Possible applications of this paradigm for
teaching are discussed. Keywords: Chronology; Art history; Virtual environment; Game format; Undergraduates | |||
| Visuals are not what they look | | BIBAK | Full-Text | 115-123 | |
| Karsten Bormann | |||
| When developing virtual environments (VEs), most effort goes into developing
the visuals. For many, the ideal is to create virtual worlds of photo-realistic
quality or otherwise of high fidelity. The purpose is to make the VE seem real
to the user. This paper takes a closer look at subjects' ratings of the
visuals, and of the extent to which the VE feels real to them, in the context
of an experiment on audio in which subjects performed two search tasks: the
first in an ordinary, textured house; the second in a bare structure consisting
almost exclusively of barren, white walls. Audio was never relevant to the
search task in the first experiment, while in the second experiment it was
relevant to the search task for half of the subjects. Subjects for whom audio
was irrelevant to both search tasks rated their visual involvement as high in
the barren VE as in the higher-quality one. However, subjects for whom audio
was relevant to their search task in the second experiment saw their visual
involvement plummet, while their auditory involvement surged. Finally, the
extent to which the VE felt real to the subjects did not correlate with their
visual involvement, but instead showed a strong correlation with the extent to
which the interaction felt natural. Keywords: Cross-sensory perception; Interaction between visual and auditory
perception; Visual perception; Auditory perception | |||
| Force modeling for tooth preparation in a dental training system | | BIBAK | Full-Text | 125-136 | |
| Guanyang Liu; Yuru Zhang; William T. Townsend | |||
| Feedback force is very important when novices simulate tooth preparation
using a haptic interaction system (dental training system) in a virtual
environment. In haptic simulation, the fidelity of the forces generated by the
haptic device determines whether the simulation is successful. The feedback
force is computed by a force model, and we present an analytical force model
for computing the force between a tooth and a dental pin during tooth
preparation. The force between a tooth and a dental pin is modeled in two
parts: (1) a force resisting the operator's motion and (2) friction resisting
the rotation of the dental engine. The force resisting the operator's motion
is decomposed into three components in a coordinate frame constructed at the
bottom center of the dental pin. In addition, we also consider the effects of
dental-pin type, tooth stiffness, and contact geometry in the force model. To
determine the parameters of the force model, we construct a measuring system
using machine vision and a force/torque sensor to track the operator's motions
and measure the forces between dental pins and teeth. Based on these
measurements, we establish the relation between the force and the operator's
motion. The force model is implemented in the prototype of a dental training
system that uses the Phantom as the haptic interface. Dentists performing
virtual operations have confirmed the fidelity of the feedback force. Keywords: Force model; Haptic interaction; Contact geometry; Force measure; Human's
operating tracking | |||
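A minimal sketch of the two-part decomposition described above: a force resisting the operator's motion, modeled here as a stiffness-plus-damping response along the penetration direction, and a Coulomb-style friction torque opposing the rotating bur. The linear-spring form, the friction law, and all constants are illustrative assumptions rather than the paper's measured model.

```python
# Illustrative two-part tool-tooth force: (1) a stiffness/damping response
# resisting the operator's motion and (2) Coulomb-style friction opposing the
# rotating dental engine. Constants and model forms are assumptions.
import numpy as np

def contact_force(penetration_depth: float, penetration_dir: np.ndarray,
                  approach_speed: float, tooth_stiffness: float = 2000.0,
                  damping: float = 5.0) -> np.ndarray:
    """Force (N) resisting the operator, along the unit penetration direction."""
    if penetration_depth <= 0.0:
        return np.zeros(3)
    magnitude = tooth_stiffness * penetration_depth + damping * approach_speed
    return -magnitude * penetration_dir

def friction_torque(normal_force: float, bur_radius: float = 0.0005,
                    mu: float = 0.4) -> float:
    """Torque (N*m) opposing the dental engine's rotation, Coulomb-style."""
    return mu * abs(normal_force) * bur_radius

if __name__ == "__main__":
    n = np.array([0.0, 0.0, 1.0])                      # unit normal into the tooth
    f = contact_force(0.0005, n, approach_speed=0.01)  # 0.5 mm penetration
    print("resisting force (N):", f)
    print("friction torque (N*m):", friction_torque(np.linalg.norm(f)))
```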
| Interaction styles in tools for developing virtual environments | | BIBAK | Full-Text | 137-150 | |
| Jesper Kjeldskov; Jan Stage | |||
| This article discusses and compares interaction styles in development tools
for virtual environments (VE). The comparison relies on a qualitative empirical
study of two development processes where a command language and a direct
manipulation based tool were used to develop the same virtual environment
application. The command language tool proved very flexible and facilitated an
even distribution of effort and progress over time, but debugging and
identification of errors were very difficult. In contrast, the direct
manipulation tool enabled faster implementation of a first prototype but did
not facilitate a shorter implementation process as a whole. On the basis of
these findings, the strengths and weaknesses of direct manipulation for
developing virtual environment applications are explored further through a
comparison with a successful direct manipulation tool for developing
interactive multimedia applications. The comparisons are used to identify and
emphasize key requirements for virtual environment development tool interface
design. Keywords: Virtual environments; Development tools; Interaction styles; Empirical study | |||
| Transfer of learning in virtual environments: a new challenge? | | BIBAK | Full-Text | 151-161 | |
| Cyril Bossard; Gilles Kermarrec; Cédric Buche; Jacques Tisseau | |||
| The aim of all education is to apply what we learn in different contexts and
to recognise and extend this learning to new situations. Virtual learning
environments can be used to build skills. Recent research in cognitive
psychology and education has shown that what is acquired remains linked to the
initial learning context. This poses a challenge for virtual reality in education or
training. A brief overview of transfer issues highlights five main ideas: (1)
the type of transfer enables the virtual environment (VE) to be classified
according to what is learned; (2) the transfer process can create conditions
within the VE to facilitate transfer of learning; (3) specific features of VR
must match and comply with transfer of learning; (4) transfer can be used to
assess a VE's effectiveness; and (5) future research on transfer of learning
must examine the singular context of learning. This paper discusses how new
perspectives in cognitive psychology influence and promote transfer of learning
through the use of VEs. Keywords: Transfer of learning; Training; Virtual environment; Learning models | |||
| Do virtual worlds create better real worlds? | | BIBAK | Full-Text | 163-179 | |
| Mark P. Mobach | |||
| Over recent years, virtual reality (VR) has been said to offer promise for
design visualisation and has started to be included in participatory design
methodology. This research provides an overview of the use of VR in
architectural design and organizational space design, and explores how this
application can be integrated with participatory design. The effects of the
proposed integration of participatory design, VR, architecture and organization
were studied in two pharmaceutical case studies. It was assessed whether the
participants actually changed the design and to what extent this affected staff
satisfaction and construction costs. The results show that the design was
changed, staff satisfaction improved, and costs were reduced. Keywords: Architecture; Business practice; Organization; Participatory design;
Immersive virtual reality | |||
| Computer game engines for developing first-person virtual environments | | BIBAK | Full-Text | 181-187 | |
| David Trenholme; Shamus P. Smith | |||
| Building realistic virtual environments is a complex, expensive and time
consuming process. Although virtual environment development toolkits are
available, many only provide a subset of the tools needed to build complete
virtual worlds. One alternative is the reuse of computer game technology. The
current generation of computer games present realistic virtual worlds featuring
user-friendly interaction and the simulation of real-world phenomena. Using
computer games as the basis for virtual environment development has a number of
advantages. Computer games are robust and extensively tested, both for
usability and performance, work on off-the-shelf systems and can be easily
disseminated, for example via online communities. Additionally, a number of
computer game developers provide tools, documentation and source code, either
with the game itself or separately available, so that end-users can create new
content. This short report overviews several currently available game engines
that are suitable for prototyping virtual environments. Keywords: Virtual environments; Computer game technology; Game engines; Reuse;
Prototyping | |||
| Introduction | | BIB | Full-Text | 189-190 | |
| Richard Satava | |||
| The Visible Human® at the University of Colorado 15 years later | | BIBAK | Full-Text | 191-200 | |
| Victor M. Spitzer; Michael J. Ackerman | |||
| The Visible Human has come of age, providing a foundation of
photorealistic anatomy for learner-centered, interactive education. Pathways
for improvement of the Visible Human process for reverse engineering the
macrostructure of the human body have been developed to provide higher
resolution and decreased production time for segmentation and modeling human
form. The assignment of physical properties, the development of algorithms for
the interaction of surgical tools with this virtual anatomy and the
availability of high-fidelity haptic interfaces provide the basis for fully
immersive surgical training and certification in an environment that is zero
direct-risk to patients. Interactive journal publishing, 3D stereoscopic
anatomical visualization software and surgical simulators, all based on the
Visible Human, together with the history of the Project and its utilization, provide a
framework for its evolution and role in delivering education, training,
certification and credentialing through virtual reality to the health care
workforce of tomorrow. Keywords: Virtual anatomy; Surgical simulation; Visible Human; Medical education;
Anatomy | |||
| Medical interface research at the HIT Lab | | BIBAK | Full-Text | 201-214 | |
| Suzanne Weghorst; Eric Seibel; Peter Oppenheimer; Hunter Hoffman | |||
| The Human Interface Technology Laboratory (HIT Lab) is a multi-disciplinary
research and development lab whose work centers on novel approaches to human
interface technology. Lab researchers represent a wide range of disciplines
from across the University of Washington campus, including engineering,
medicine, education, social sciences, architecture, and the design arts. We
describe here a representative sampling of past and current HIT Lab research
and development activities related to medicine, including virtual reality and
augmented/mixed reality applications for direct patient therapy, tools for
basic medical education and procedure training, novel approaches to medical
image acquisition and display, and new visualization methods in medical
informatics. Keywords: Virtual reality; Mixed reality; Endoscopy; Medical informatics;
Rehabilitation; Surgical simulation | |||
| Learning medicine through collaboration and action: collaborative, experiential, networked learning environments | | BIBAK | Full-Text | 215-234 | |
| Parvati Dev; Wm. LeRoy Heinrichs | |||
| The SUMMIT Lab and William LeRoy Heinrichs, at Stanford University, were
honored to be the 2002 awardees of the Satava Award for Virtual Reality in
Medicine. Since the award, the group has followed two main threads of research,
which we describe below. The first, "building a high-performance,
network-aware, collaborative learning environment" has investigated the
framework and components needed when students in multiple locations collaborate
using computation-intensive simulations and large image datasets. The second
thread, "online, interactive human physiology for medical education and
training", has focused on the application of interactive physiology models
embedded in 3D visualizations of virtual patients in naturalistic medical
environments. These environments support immersive, experiential learning where
students act as medical providers and manage authentic medical events and
crises. These research efforts, and our conclusions, are presented in the
chapter below. Keywords: Collaborative learning; Human anatomy; Human physiology; Online; Distance
learning; Virtual patients; Virtual physiology models; Virtual worlds; Stereo
anatomy | |||
| Medical imaging and virtual reality: a personal perspective | | BIBAK | Full-Text | 235-257 | |
| Richard A. Robb | |||
| The evolution of medical imaging, and concomitantly virtual reality (VR)
technology, especially over the past 2-3 decades, has significantly accelerated
the use of multi-modality images and VR instrumentation in guiding medical
procedures, including surgery. The imaging capabilities have not only increased
in variety of modalities (CT, MRI, PET, ultrasound, etc.), but also in
dimensions and resolution. It is becoming more common to talk about 3D, 4D and
even 5D images produced by modern imaging modalities. However, a relatively
unexploited potential and capability of this increase in multimodality,
multidimensional image data is the synergistic fusion of these datasets into a
unified form that describes more accurately and extensively the complex nature
of human anatomy, physiology, biology and pathology. The role of VR technology
in achieving this potential, through realistic simulation, training, rehearsal
and delivery of surgery and other interventional procedures, has become
increasingly evident, particularly in education. This paper attempts an
overview of this potential, describing the evolution of medical imaging systems
and VR that has led to the development of powerful computational techniques to
fuse, visualize, analyze and use these images for advanced use in medical
practice. This overview is based primarily on the author's experience, opinion
and perspective, explaining the preponderance of citations to his own work. A
brief history of medical imaging and VR, a description of current imaging
systems, and a summary of important image processing methods used in
image-guided interventions will be given. Examples of use of these methods on
several types of multidimensional image datasets will be illustrated, and
several real clinical applications will be described that use 3D, 4D and 5D
fused image datasets and VR technology for image-guided interventions,
image-guided surgery, and image-guided therapy. Finally, the paper will discuss
some barriers to progress and provide some prognostic views on the promising
future of image-guided medical procedures and surgical interventions. Keywords: Multi-modality imaging; Multi-dimensional imaging; Image fusion; Image
visualization; Image modeling; Evolution of imaging; Virtual reality technology | |||
| Virtual reality with fMRI: a breakthrough cognitive treatment tool | | BIBAK | Full-Text | 259-267 | |
| Brenda K. Wiederhold; Mark D. Wiederhold | |||
| The impact of virtual reality (VR) has been felt in a wide range of fields
over the past 10 years. VR has been shown to be an effective treatment for
anxiety, phobia, pain, post-traumatic stress disorder, stress inoculation
training, and drug and alcohol addiction. The emerging application of VR in
conjunction with functional magnetic resonance imaging (fMRI) is helping to
improve upon current VR systems, and in the future will aid in creating more
effective treatments for patients. With the advent of fMRI-safe VR goggles,
brain activity can be studied in real time as a patient undergoes a VR
treatment. The use of brain imaging during a VR session allows for the study of
the brain itself as a patient interacts in a real-world environment. Studies
are showing that by using VR in combination with fMRI, new data about
previously elusive functions of the brain can be expected. Keywords: Virtual reality; fMRI; Brain imaging; Post-traumatic stress disorder; Cue
exposure; Physiology; Exposure therapy | |||
| Design and implementation of medical training simulators | | BIBAK | Full-Text | 269-279 | |
| Nigel W. John | |||
| This paper discusses the design issues and implementation details of
building a medical training simulator. Example projects that have been
undertaken by the Visualization and Medical Graphics group at Bangor University
and our collaborators are used to illustrate the points made. A detailed case
study is then presented of a virtual environment designed to train the
Seldinger Technique, a common procedure in interventional radiology. The paper
will introduce a medical practitioner to the technology behind a medical
virtual environment. It will also provide an engineer with an overview of many
of the issues that need to be considered when undertaking to build such an
application. The paper ends with the author's views on future developments in
this exciting domain. Keywords: Medical virtual environments; Haptics; 3D displays; Graphics hardware;
Seldinger technique; Augmented reality | |||
| The road to surgical simulation and surgical navigation | | BIBAK | Full-Text | 281-291 | |
| Naoki Suzuki; Asaki Hattori | |||
| Recent advances in the power of graphics workstations have made it
possible to handle 3D human structures in an interactive way. Real-time imaging
of medical 3D or 4D images can be used not only for diagnosis, but also for
various novel medical treatments. By elaborating on the history of the
establishment of our laboratory, which focuses on medical virtual reality, we
describe our experience of developing surgery simulation and surgery navigation
systems according to our research results. In the case of surgical simulation,
we mention two kinds of virtual surgery simulators that produce the haptic
sensation of surgical maneuvers in the user's fingers. Regarding surgical
navigation systems, we explain the need for augmented reality functions to
enhance the capabilities of robotic surgery, and describe their trial in a
clinical case. Keywords: Medical imaging; Surgical simulation; Virtual reality; Augmented reality;
Navigation surgery | |||