
Virtual Reality 18

Editors: Daniel Ballin; Robert D. Macredie
Dates: 2014
Volume: 18
Publisher: Springer-Verlag
Standard No: ISSN 1359-4338 (print); EISSN 1434-9957 (online)
Papers: 22
Links: link.springer.com
  1. VR 2014-03 Volume 18 Issue 1
  2. VR 2014-06 Volume 18 Issue 2
  3. VR 2014-09 Volume 18 Issue 3
  4. VR 2014-11 Volume 18 Issue 4

VR 2014-03 Volume 18 Issue 1

Validating Cyber-Interventions

Validation in cyberinterventions: an introduction to this themed issue BIBA Full-Text 1-4
  Anna Spagnolli; Cheryl Campanella Bracken
Introduction: The range of virtual reality applications and digital environments that support the promotion of personal and social change is remarkably rich and constantly growing. These applications can be found in very different, often unrelated domains; typical examples include computerised programs to support smoking cessation; widgets to monitor household energy consumption; serious games to increase sensitivity towards political or humanitarian issues; and mobile applications to plan everyday commutes so as to avoid contributing to traffic jams. The platforms supporting these and similar interventions also vary considerably, ranging from immersive virtual worlds, to stand-alone driving simulators, to mobile applications and many others. Nevertheless, despite this heterogeneity of application domains and technological platforms, technologically mediated interventions share several common issues. Examining common issues allows the extension of reflections and solutions to ap ...
Usability and feasibility of an internet-based virtual pedestrian environment to teach children to cross streets safely BIBAK Full-Text 5-11
  David C. Schwebel; Leslie A. McClure; Joan Severson
Child pedestrian injury is a preventable global health challenge. Successful training efforts focused on child behavior, including individualized streetside training and training in large virtual pedestrian environments, are laborious and expensive. This study considers the usability and feasibility of a virtual pedestrian environment "game" application to teach children safe street-crossing behavior via the internet, a medium that could be broadly disseminated at low cost. Ten 7- and 8-year-old children participated. They engaged in an internet-based virtual pedestrian environment and completed a brief assessment survey. Researchers rated children's behavior while engaged in the game. Both self-report and researcher observations indicated the internet-based system was readily used by the children without adult support. The children understood how to engage in the system and used it independently and attentively. The program was also feasible. It provided multiple measures of pedestrian safety that could be used for research or training purposes. Finally, the program was rated by children as engaging and educational. Researcher ratings suggested children used the program with minimal fidgeting or boredom. The pilot test suggests an internet-based virtual pedestrian environment offers a usable, feasible, engaging, and educational platform for child pedestrian safety training. If future research finds that children learn the cognitive and perceptual skills needed to cross streets safely within it, internet-based training may provide a low-cost medium to broadly disseminate child pedestrian safety training. The concept may be generalized to other domains of health-related functioning such as teen driving safety, adolescent sexual risk-taking, and adolescent substance use.
Keywords: Pedestrian; Safety; Injury; Evaluation; Internet
The role played by the concept of presence in validating the efficacy of a cybertherapy treatment: a literature review BIBAK Full-Text 13-36
  Anna Spagnolli; Cheryl Campanella Bracken; Valeria Orso
The present paper considers the existing research in cybertherapy, a psychological therapy carried out with the use of a mediated environment, and examines the way in which the users' sense of presence in the mediated environment can be of relevance for the validation of the intervention. To this end, a collection of 41 papers reporting the measurement of presence in the context of a cybertherapy treatment was identified and examined. The general relevance of presence in cybertherapy and the measurement techniques adopted in the studies collected here are described and discussed. The way in which presence corresponds to establishing internal validity, convergent or predictive validity, and external validity of a treatment is examined. In conclusion, a checklist to apply when planning a validation study is proposed, to improve the way in which presence is used.
Keywords: Presence; Cybertherapy; Validation
Using immersive virtual reality and anatomically correct computer-generated characters in the forensic assessment of deviant sexual preferences BIBAK Full-Text 37-47
  Patrice Renaud; Dominique Trottier; Joanne-Lucine Rouleau; Mathieu Goyette; Chantal Saumur; Tarik Boukhalfi; Stéphane Bouchard
Penile plethysmography (PPG) is the gold standard for the assessment of sexual interests, especially among sex offenders of children. Nonetheless, this method faces some ethical limitations inherent to the nature of its stimuli and could benefit from improved ecological validity. The use of computer-generated characters (CGC) in virtual immersion for PPG assessment might help address these issues. A new application developed to design made-to-measure anatomically correct virtual characters compatible with the Tanner developmental stages is presented. The main purpose of this study was to determine how the virtual reality (VR) modality compares to the standard auditory modality in its capacity to generate sexual arousal profiles and deviance differentials indicative of sexual interests. The erectile responses of 22 sex offenders of children and 42 non-deviant adult males were recorded. While both stimulus modalities generated significantly different genital arousal profiles for sex offenders of children and non-deviant males, deviance differentials calculated from the VR modality allowed for significantly higher classification accuracy. Receiver operating characteristic analyses further assessed discriminant potential: the auditory modality yielded an area under the curve (AUC) of 0.79 (SE=0.059), while CGC in VR yielded an AUC of 0.90 (SE=0.052). Overall, results suggest that the VR modality allows significantly better group classification accuracy and discriminant validity than audio stimuli, providing empirical support for the use of this new method for PPG assessment. Additionally, the potential use of VR in interventions pertaining to self-regulation of sexual offending is addressed in conclusion.
Keywords: Immersive virtual reality; Pedophilia; Penile plethysmography; Made-to-measure virtual characters; Sexual self-regulation
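Since the study above reports its classification results as areas under ROC curves (AUC 0.79 for audio vs. 0.90 for VR), a minimal sketch of how such an AUC comparison can be computed may be useful. The group sizes below match the paper, but the scores are simulated placeholders, not the authors' data; this assumes scikit-learn.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # 1 = sex offenders of children (n=22), 0 = non-deviant males (n=42)
    labels = np.concatenate([np.ones(22), np.zeros(42)])
    # Hypothetical deviance differentials for each stimulus modality
    audio_scores = rng.normal(loc=labels * 1.1, scale=1.0)
    vr_scores = rng.normal(loc=labels * 1.8, scale=1.0)

    print("Audio AUC:", roc_auc_score(labels, audio_scores))
    print("VR AUC:", roc_auc_score(labels, vr_scores))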
Learning disabilities and visual-motor skills: comparing assessment from a hapto-virtual reality tool and the Bender-Gestalt test BIBAK Full-Text 49-60
  Line Tremblay; Brahim Chebbi; Stéphane Bouchard; Krystel Cimon-Lambert; Jessica Carmichael
Previous investigations conducted on post-secondary adult students with learning disabilities (LD) suggest that deficits in visual-motor skills contribute to difficulties in written expression, which impact academic achievement. Intervention strategies for individuals with LD include assistive computer-based technologies (ATs) to compensate for or maximize performance. However, research has largely failed to assess the impact of ATs on the performance, learning, and motivation of students with LD. Also, one of the limitations of ATs is that they cannot be used for assessment and training, and there are very few methods to assess or train visual-motor skills in this population. The present study explores the usefulness of a hapto-visual virtual reality motor skills assessment (MSA) device for visual-motor functioning in adults with and without LD. This is a preliminary step toward developing an intervention to improve impaired visual-motor skills in adults with LD. A sample of 22 male and female university students with and without LD had their visual-motor skills pretested using the standard paper-and-pencil Bender-Gestalt (BG) test and were compared according to their performance on the MSA tool. We hypothesized that the performance of our LD participants would be significantly lower than that of our control participants on the VR task in terms of number of errors and speed. Results showed that participants without LD performed better and more rapidly on the VR task than participants with LD. There were no correlations between the BG and MSA performance. We did not find significant differences between the groups on the Bender-Gestalt scores, previous experience with video games, arousal, or mood. Our results suggest that a novel 3D virtual reality tool such as the MSA can potentially discriminate motor function of people with and without LD; however, the difference between the two groups may also be due to a lack of problem-solving ability in LD.
Keywords: Virtual reality; Haptics; Learning disability; Visual-motor skills; Bender-Gestalt test
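The comparisons reported above (group differences on the MSA task, and the absent BG-MSA correlation) follow a standard analysis pattern; a minimal sketch with simulated placeholder scores, assuming SciPy:

    import numpy as np
    from scipy.stats import pearsonr, ttest_ind

    rng = np.random.default_rng(1)
    bg_scores = rng.normal(size=22)                 # hypothetical Bender-Gestalt scores
    msa_errors = rng.normal(size=22)                # hypothetical MSA error counts
    has_ld = np.array([True] * 11 + [False] * 11)   # hypothetical group membership

    r, p_corr = pearsonr(bg_scores, msa_errors)     # BG vs. MSA association
    t, p_group = ttest_ind(msa_errors[has_ld], msa_errors[~has_ld])  # LD vs. control
    print(f"BG-MSA r={r:.2f} (p={p_corr:.3f}); group t={t:.2f} (p={p_group:.3f})")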
Creation of a new set of dynamic virtual reality faces for the assessment and training of facial emotion recognition ability BIBAK Full-Text 61-71
  José Gutiérrez-Maldonado; Mar Rus-Calafell; Joan González-Conde
The ability to recognize facial emotions is a target behaviour when treating people with social impairment. When assessing this ability, the most widely used facial stimuli are photographs. Although their use has been shown to be valid, photographs are unable to capture the dynamic aspects of human expressions. This limitation can be overcome by creating virtual agents with believable expressed emotions. The main objective of the present study was to create a new set of highly realistic dynamic virtual faces that could be integrated into a virtual reality (VR) cyberintervention to train people with schizophrenia in the full repertoire of social skills. A set of highly realistic virtual faces was created based on the Facial Action Coding System. Facial movement animation was also included so as to mimic the dynamism of human facial expressions. Consecutive healthy participants (n=98) completed a facial emotion recognition task using both natural faces (photographs) and virtual agents expressing five basic emotions plus a neutral one. A repeated-measures ANOVA revealed no significant difference in participants' recognition accuracy between the two presentation conditions. However, anger was better recognized in the VR images, and disgust was better recognized in photographs. Age, participant gender, and reaction times were also explored. Implications of the use of virtual agents with realistic human expressions in cyberinterventions are discussed.
Keywords: Emotion recognition; Virtual agents; Dynamism; Social skills; Cyberintervention
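The repeated-measures ANOVA above compares recognition accuracy for photographic versus virtual faces within the same participants; a minimal sketch using statsmodels' AnovaRM on simulated placeholder data (n=98, as in the paper) might look like this:

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(98), 2),
        "condition": np.tile(["photograph", "virtual"], 98),
        "accuracy": rng.uniform(0.6, 1.0, size=196),  # hypothetical recognition rates
    })
    # Within-subjects test of presentation condition on recognition accuracy
    res = AnovaRM(df, depvar="accuracy", subject="subject", within=["condition"]).fit()
    print(res.summary())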
Toward a validation of cyber-interventions for stress disorders based on stress inoculation training: a systematic review BIBAK Full-Text 73-87
  Silvia Serino; Stefano Triberti; Daniela Villani; Pietro Cipresso; Andrea Gaggioli; Giuseppe Riva
New advanced technologies have recently emerged as a potentially effective way of delivering stress management techniques. Specifically, stress inoculation training (SIT) represents a validated approach to managing stress in several settings, and research on combining this clinical protocol with advanced technologies is growing. This review aims to outline the state of the art of cyber-interventions based on the SIT methodology (cyber-SIT). We analyzed and discussed three aspects of the selected studies in depth: (1) the type of technological devices used for delivering cyber-SIT; (2) the sampling strategies; and (3) the stress-related measures for assessing the effectiveness of cyber-SIT. The results of this systematic review suggest the potential efficacy of cyber-SIT for managing psychological stress in several settings. However, controlled trials testing greater numbers of participants are needed. Other future challenges include adopting better inclusion/exclusion criteria, standardized outcome measures, and different conditions for comparing the effect and/or the integration of different technological devices. In conclusion, as cyber-SIT may play an important role in the future of clinical psychology, it is crucial to strengthen the methodological validation of this approach.
Keywords: Stress inoculation training; Virtual reality; Validation; Stress management; Systematic review; Cyber-interventions

VR 2014-06 Volume 18 Issue 2

Dynamic learning, retrieval, and tracking to augment hundreds of photographs BIBAK Full-Text 89-100
  Julien Pilet; Hideo Saito
Tracking is a major issue in virtual and augmented reality applications. Single-object tracking on monocular video streams is fairly well understood. However, when it comes to multiple objects, existing methods lack scalability and can recognize only a limited number of objects. Thanks to recent progress in feature matching, state-of-the-art image retrieval techniques can deal with millions of images. However, these methods do not focus on real-time video processing and cannot track retrieved objects. In this paper, we present a method that combines the speed and accuracy of tracking with the scalability of image retrieval. At the heart of our approach is a bi-layer clustering process that allows our system to index and retrieve objects based on tracks of features, thereby effectively summarizing the information available across multiple video frames. Dynamic learning of new viewpoints as the camera moves naturally yields the kind of robustness and reliability expected from an augmented reality engine. As a result, our system is able to track multiple objects in real time, recognizing them with low delay from a database of more than 300 entries. We released the source code of our system in a package called Polyora.
Keywords: Augmented reality; Multiple object tracking; Image retrieval
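The core idea above, summarizing a feature's appearance across the frames of a track and then retrieving objects from those summaries, can be sketched in a few lines. This is an illustrative simplification, not the authors' Polyora implementation; the mean-descriptor summary and nearest-neighbour voting are assumptions.

    import numpy as np

    def track_summary(descriptors):
        """Collapse one feature's per-frame descriptors into a single vector."""
        return np.mean(descriptors, axis=0)

    def retrieve(track_summaries, db_vectors, db_labels):
        """Each track votes for the object owning its nearest database vector."""
        votes = {}
        for s in track_summaries:
            dists = np.linalg.norm(db_vectors - s, axis=1)
            label = db_labels[int(np.argmin(dists))]
            votes[label] = votes.get(label, 0) + 1
        return max(votes, key=votes.get)  # best-supported object in the database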
The design space of dynamic interactive virtual environments BIBAK Full-Text 101-116
  Kristopher J. Blom; Steffi Beckhaus
Virtual environments have become a key component of many fields and the critical component of virtual reality applications. Due to their virtual nature, they can accommodate an infinite number of possibilities. A theoretical work is presented which decomposes those innumerable possibilities into concepts, to help clarify the vast design space and provide insights into future applied research. We propose that what makes environments interesting and engaging is having worlds that are both active and reactive. This article explores the manifestations of those actions and reactions in what we term dynamic components and interactions; we call worlds containing these dynamic interactive virtual environments (DIVEs). An analysis of each component was performed, with the purpose of providing a theoretical understanding of the respective design spaces. Initially, we collected the myriad possibilities of each component, e.g., the possible kinds of interactions, pointing to examples throughout the field to ground and explain the concepts presented. We then categorized each area into taxonomies. The result of the analyses provides insights into the design space of virtual environments, exposes several avenues of research that are as yet underexplored, and provides a better understanding of ways in which DIVE creation can be supported.
Keywords: Virtual environments; Dynamic interactive VEs; 3D user interaction; VR systems
Immersive front-projection analysis using a radiosity-based simulation method BIBAK Full-Text 117-128
  J. Dehos; É. Zéghers; L. Sarry; F. Rousselle; C. Renaud
Video projectors are designed to project onto flat white diffuse screens. Over the last few years, projector-based systems have been used in virtual reality applications to light non-specific environments such as the walls of a room. However, in these situations, the images seen by the user are affected by several radiometric disturbances, such as interreflection. Radiometric compensation methods have been proposed to reduce the disturbance caused by interreflection, but nothing has been proposed for evaluating the phenomenon itself and the effectiveness of compensation methods. In this paper, we propose a radiosity-based method to simulate light transfer in immersive environments, from a projector to a camera (the camera gives the image a user would see in a real room). This enables us to evaluate the disturbances resulting from interreflection. We also consider the effectiveness of interreflection compensation and study the influence of several parameters (projected image, projection onto a small or large part of the room, reflectivity of the walls). Our results show that radiometric compensation can reduce the influence of interreflection but is severely limited if we project onto a large part of the walls around the user, or if all the walls are bright.
Keywords: Immersive environments; Video projection; Radiometric compensation; Radiosity
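The simulation above rests on the classical radiosity balance B = E + diag(rho) F B, where E is the light injected by the projector, rho the patch reflectivities and F the form-factor matrix. A toy sketch of solving that system (with made-up patches and uniform form factors) follows:

    import numpy as np

    n = 4                                  # hypothetical wall patches
    F = np.full((n, n), 1.0 / (n - 1))     # crude uniform form factors
    np.fill_diagonal(F, 0.0)               # a patch does not see itself
    rho = np.array([0.7, 0.7, 0.3, 0.3])   # patch reflectivities
    E = np.array([1.0, 0.0, 0.0, 0.0])     # projector lights patch 0 directly

    # Solve (I - diag(rho) F) B = E for the equilibrium radiosity B
    B = np.linalg.solve(np.eye(n) - rho[:, None] * F, E)
    print("interreflection disturbance:", B - E)  # light reaching patches indirectly

Bright walls (large rho) inflate B - E, which is exactly the regime in which the study finds compensation to be severely limited.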
Real-time infinite horizon tracking with data fusion for augmented reality in a maritime operations context BIBAK Full-Text 129-138
  Olivier Hugues; Jean-Marc Cieutat; Pascal Guitton
In this paper, we propose a method for real-time horizon tracking (i.e., the separation line between sky and sea) in a maritime operations context. We present the fusion of an image processing algorithm with the data obtained from an inertial measurement unit (IMU). The initial aim is to filter out environmental conditions using inertial information in order to combine a video stream with onboard electronic charts. This is achieved by detecting the horizon with an image processing algorithm in an area defined by the IMU. We then present an evaluation of the algorithm with regard to the rate of detection of the horizon and the impact of image resolution on computation time. The purpose of developing this method is to create an augmented reality maritime operations application. We combine the video stream with electronic charts in a single display, use the position of the horizon in the image to split the display into different areas, and then use transparency to display the video, the electronic charts, or both.
Keywords: Image processing; Data fusion; Augmented reality; Electronic chart system; Geographical information system
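A plausible reading of the fusion described above is that the IMU attitude predicts a horizon row and the image algorithm refines it within that band; the OpenCV sketch below follows that reading, with the band margin and Hough parameters as assumptions (and the band assumed to lie inside the frame).

    import cv2
    import numpy as np

    def detect_horizon(frame_gray, predicted_row, margin=40):
        """Refine the IMU-predicted horizon row inside a +/- margin search band."""
        band = frame_gray[predicted_row - margin:predicted_row + margin]
        edges = cv2.Canny(band, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=band.shape[1] // 3, maxLineGap=20)
        if lines is None:
            return predicted_row          # fall back to the inertial estimate
        x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: abs(l[2] - l[0]))
        return predicted_row - margin + (y1 + y2) // 2  # longest near-horizontal line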
Why, when and how to use augmented reality agents (AuRAs) BIBAK Full-Text 139-159
  Abraham G. Campbell; John W. Stafford; Thomas Holz; G. M. P. O'Hare
In recent years, multiple research projects have begun to create augmented reality (AR) applications that use augmented reality agents, or AuRAs, as their principal interaction and development paradigm. This paper aims to address this new and distinct field of AuRAs by asking three questions: why should AuRAs be researched, when are they a useful paradigm, and how can they be developed? The first question explores the motivation behind applying AuRAs to AR. Specifically, it investigates whether AuRAs are purely an interaction paradigm or whether they can also serve as a development paradigm, by outlining in which circumstances it is appropriate for a project to use AuRAs and where their addition would only add unnecessary complexity. A navigational experiment, performed in simulated AR, explores the second question of when AuRAs can be a useful concept in AR applications. Results from this experiment suggest that an embodied virtual character allows for faster navigation along a shorter route than directional arrows or marking the target with an AR "bubble". An exploration of the limitations of the simulated AR environment illuminates how faithfully the experiment recreated the environmental challenges that AuRAs can help to address. Finally, the question of how to develop such applications is addressed through the introduction of the agent factory augmented reality toolkit, which allows the rapid prototyping of such applications. Results from a usability study on the toolkit are also presented.
Keywords: Augmented reality; Multi-agent systems; Virtual reality; AR simulation; Interaction techniques

VR 2014-09 Volume 18 Issue 3

Natural and hybrid bimanual interaction for virtual assembly tasks BIBAK Full-Text 161-171
  Yaiza Vélaz; Alberto Lozano-Rodero; Angel Suescun; Teresa Gutiérrez
This paper focuses on the simulation of bimanual assembly/disassembly operations for training or product design applications. Most assembly applications have been limited to simulating unimanual tasks, or bimanual tasks performed with one hand; however, recent research has introduced the use of two haptic devices for bimanual assembly. We propose a bimanual interaction based on markerless motion capture (Mocap) systems that is more natural and lower-cost than existing approaches. Specifically, this paper presents two interactions based on markerless Mocap technology and one interaction combining markerless Mocap with haptic technology. A set of experiments following a within-subjects design was implemented to test the usability of the proposed interfaces. The markerless Mocap-based interactions were validated against a two-haptic-device interaction, as the latter has been successfully integrated into bimanual assembly simulators. The pure markerless Mocap interaction proved to be either the most or the least efficient depending on the configuration (with 2D or 3D tracking, respectively). Usability results among the proposed interactions and the two-haptic-device interaction showed no significant differences. These results suggest that markerless Mocap or hybrid interactions are valid solutions for simulating bimanual assembly tasks when the precision of the motion is not critical. The decision on which technology to use should depend on the trade-off between the precision required to simulate the task, the cost, and the inner features of the technology.
Keywords: Virtual reality; Haptics; Markerless Mocap; Human-computer interaction; Assembly training; Bimanual assembly simulation
Measuring virtual experience in a three-dimensional virtual reality interactive simulator environment: a structural equation modeling approach BIBAK Full-Text 173-188
  Li-Keng Cheng; Ming-Hua Chieng; Wei-Hua Chieng
With the rapid development of the VR market, virtual experience has increasingly become an object of study in recent years. A growing number of studies have reported the positive effect that virtual experience can have on a user's mood and loyalty. However, few studies have investigated the mechanism by which virtual experience influences users' mood and loyalty. To address this research gap, this study evaluates consumers' virtual experience by examining the flow state in a virtual environment. A total of 368 valid questionnaires were collected, and a structural equation modeling approach was employed in the data analysis. The study reveals that forming flow involves many factors: the intrinsic characteristics of the mediated environment, the consumer's assumptions and perceptions prior to entering the flow state, the stage at which the customer enters the flow state, and the consequences of the flow experience.
Keywords: Flow; Telepresence; Interactivity; Vividness; Virtual experience
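For readers unfamiliar with the structural equation modeling approach used above, a sketch using the third-party semopy package shows the general shape of such an analysis; the construct names, indicators, and data file below are placeholders, not the authors' measurement model.

    import pandas as pd
    from semopy import Model

    # lavaan-style model description (hypothetical constructs and indicators)
    desc = """
    flow =~ enjoyment + concentration + control
    flow ~ telepresence + interactivity
    loyalty ~ flow
    """
    data = pd.read_csv("survey.csv")   # hypothetical 368-response questionnaire file
    model = Model(desc)
    model.fit(data)
    print(model.inspect())             # loadings, path coefficients, p-values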
An evaluation of immersive viewing on spatial knowledge acquisition in spherical panoramic environments BIBAK Full-Text 189-201
  Phillip E. Napieralski; Bliss M. Altenhoff; Jeffrey W. Bertrand; Lindsay O. Long; Sabarish V. Babu; Christopher C. Pagano; Timothy A. Davis
We report the results of an experiment conducted to examine the effects of immersive viewing on a common spatial knowledge acquisition task, spatial updating, in a spherical panoramic environment (SPE). A spherical panoramic environment, such as Google Street View, is an environment composed of spherical images captured at regular intervals in a real-world setting, augmented with virtual navigational aids such as paths, dynamic maps, and textual annotations. Participants navigated the National Mall area of Washington, DC, in Google Street View in one of two viewing conditions: a desktop monitor or a head-mounted display with a head orientation tracker. In an exploration phase, participants were first asked to navigate and observe landmarks on a pre-specified path. Then, in a testing phase, participants were asked to travel the same path and to rotate their view in order to look in the direction of the perceived landmarks at certain waypoints. The angular difference between participants' gaze directions and the landmark directions was recorded. We found no significant difference between the immersive and desktop viewing conditions in participants' accuracy of direction to landmarks, and no difference in their sense-of-presence scores. However, based on responses to a post-experiment questionnaire, participants in both conditions tended to use a cognitive or procedural technique to inform direction to landmarks. Taken together, these findings suggest that in both conditions, where participants experience travel based on teleportation between waypoints, the visual cues available in the SPE, such as street signs, buildings, and trees, seem to have a stronger influence in determining the directions to landmarks than egocentric cues such as the first-person perspective and natural head-coupled motion experienced in the immersive viewing condition.
Keywords: Immersive virtual environments; Immersive spherical panoramas; Spatial updating; 3D human-computer interaction
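The dependent measure above, the angular difference between a participant's gaze direction and the true landmark direction, reduces to a vector angle; a minimal NumPy sketch (the example vectors are hypothetical):

    import numpy as np

    def angular_error_deg(gaze_dir, landmark_dir):
        """Angle (degrees) between gaze and true landmark directions."""
        g = gaze_dir / np.linalg.norm(gaze_dir)
        t = landmark_dir / np.linalg.norm(landmark_dir)
        return np.degrees(np.arccos(np.clip(np.dot(g, t), -1.0, 1.0)))

    print(angular_error_deg(np.array([1.0, 0.0, 0.1]), np.array([1.0, 0.0, 0.0])))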
Evaluation of direct manipulation using finger tracking for complex tasks in an immersive cube BIBAK Full-Text 203-217
  Emmanuelle Chapoulie; Maud Marchal; Evanthia Dimara; Maria Roussou; Jean-Christophe Lombardo; George Drettakis
A solution for interaction using finger tracking in a cubic immersive virtual reality system (or immersive cube) is presented. Rather than using a traditional wand device, users can manipulate objects with fingers of both hands in a close-to-natural manner for moderately complex, general purpose tasks. Our solution couples finger tracking with a real-time physics engine, combined with a heuristic approach for hand manipulation, which is robust to tracker noise and simulation instabilities. A first study has been performed to evaluate our interface, with tasks involving complex manipulations, such as balancing objects while walking in the cube. The user's finger-tracked manipulation was compared to manipulation with a 6 degree-of-freedom wand (or flystick), as well as with carrying out the same task in the real world. Users were also asked to perform a free task, allowing us to observe their perceived level of presence in the scene. Our results show that our approach provides a feasible interface for immersive cube environments and is perceived by users as being closer to the real experience compared to the wand. However, the wand outperforms direct manipulation in terms of speed and precision. We conclude with a discussion of the results and implications for further research.
Keywords: Virtual reality; Direct manipulation; Immersive cube; Finger tracking
A framework to design 3D interaction assistance in constraints-based virtual environments BIBAK Full-Text 219-234
  Mouna Essabbah; Guillaume Bouyer; Samir Otmane; Malik Mallem
The equilibrium of complex systems often depends on a set of constraints. Thus, credible virtual reality modeling of these systems must respect these constraints, in particular for 3D interactions. In this paper, we propose a generic framework for designing assistance to 3D user interaction in constraints-based virtual environments that associates constraints, interaction tasks, and assistance tools such as virtual fixtures (VFs). This framework is applied to design assistance tools for molecular biology analysis. Evaluation shows that VFs designed using our framework improve the effectiveness of the manipulation task.
Keywords: Virtual reality; 3D interaction; Framework; Complex environments; Constraints; Assistance model; Virtual fixtures
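As a concrete illustration of a guidance virtual fixture of the kind the framework above generates, the sketch below constrains a tracked 3D point to a line; the hard projection (rather than a spring-like pull) is an assumption made for simplicity.

    import numpy as np

    def apply_line_fixture(pos, anchor, axis):
        """Project a manipulated 3D point onto a line fixture (guidance VF)."""
        axis = axis / np.linalg.norm(axis)
        return anchor + np.dot(pos - anchor, axis) * axis

    # e.g. constrain motion to the x-axis through the origin
    print(apply_line_fixture(np.array([0.2, 0.5, -0.1]),
                             np.zeros(3), np.array([1.0, 0.0, 0.0])))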

VR 2014-11 Volume 18 Issue 4

New wireless connection between user and VE using speech processing BIBAK Full-Text 235-243
  M. Ali Mirzaei; Frederic Merienne; James H. Oliver
This paper presents a novel speak-to-VR virtual-reality peripheral network (VRPN) server based on speech processing. The server uses a microphone array as its speech source and streams the results of the processing over a Wi-Fi network. The proposed VRPN server provides a handy, portable, and wireless human-machine interface that can facilitate interaction in a variety of interfaces and application domains, including HMD- and CAVE-based virtual reality systems, flight and driving simulators, and many others. The VRPN server is built on a speech processing software development kit and the VRPN library in C++. Speak-to-VR VRPN works well even in the presence of background noise or the voices of other users in the vicinity. The speech processing algorithm is not sensitive to the user's accent because it is trained while it is operating: speech recognition parameters are trained by a hidden Markov model in real time. The advantages and disadvantages of the speak-to-VR server are studied under different configurations. Then, the efficiency and precision of the speak-to-VR server for a real application are validated via a formal user study with ten participants. Two experimental test setups were implemented on a CAVE system using either a Kinect Xbox sensor or a microphone array as the input device. Each participant was asked to navigate in a virtual environment and manipulate an object. The experimental data analysis shows promising results and motivates additional research opportunities.
Keywords: Speak-to-VR; Wi-Fi network; Speech processing; VRPN server
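Architecturally, the server above recognizes a spoken command and streams it to VR clients over Wi-Fi. The sketch below conveys that flow using plain UDP for illustration only; the actual server speaks the VRPN protocol (whose default port is 3883), and the command source and host address here are placeholders.

    import socket

    def stream_commands(recognized_commands, host="192.168.0.10", port=3883):
        """Forward recognized voice commands to a listening VR host."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for command in recognized_commands:   # e.g. output of a speech recognizer
            sock.sendto(command.encode("utf-8"), (host, port))

    stream_commands(["move forward", "rotate left", "grab object"])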
Shadow removal of projected imagery by occluder shape measurement in a multiple overlapping projection system BIBAK Full-Text 245-254
  Daisuke Iwai; Momoyo Nagase; Kosuke Sato
This paper presents a shadow removal technique for a multiple overlapping projection system. In particular, this paper deals with situations where cameras cannot be placed between the occluder and projection surface. We apply a synthetic aperture capturing technique to estimate the appearance of the projection surface, and a visual hull reconstruction technique to measure the shape of the occluder. Once the shape is acquired, shadow regions on the surface can be estimated. The proposed shadow removal technique allows users to balance between the following two criteria: the likelihood of new shadow emergence and the spatial resolution of the projected results. Through a real projection experiment, we evaluate the proposed shadow removal technique.
Keywords: Shadow removal; Multiple overlapping projection; Synthetic aperture capturing; Visual hull
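Once the occluder's visual hull is known, the shadow region of each projector can be predicted and the light redistributed among the projectors that still reach each surface point. The mask arithmetic below is an illustrative simplification of that step; the per-projector occlusion masks are assumed inputs.

    import numpy as np

    def compensation_weights(occluded_masks):
        """Split each surface pixel's intensity among the projectors that reach it.

        occluded_masks: list of HxW booleans, True where a projector is blocked.
        Returns one HxW weight map per projector (weights sum to 1 where possible).
        """
        visible = [~m for m in occluded_masks]
        n_visible = np.sum(visible, axis=0).astype(float)
        safe = np.maximum(n_visible, 1.0)      # avoid division by zero
        return [v / safe for v in visible]     # all-zero where every projector is blocked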
Aerial full spherical HDR imaging and display BIBAK Full-Text 255-269
  Fumio Okura; Masayuki Kanbara; Naokazu Yokoya
This paper describes a framework for aerial imaging of high dynamic range (HDR) scenes for use in virtual reality applications, such as immersive panorama applications and the photorealistic superimposition of virtual objects using image-based lighting. We propose a complete and practical system to acquire full spherical HDR images from the sky, using two omnidirectional cameras mounted above and below an unmanned aircraft. The HDR images are generated by combining multiple omnidirectional images captured with different, automatically controlled exposures. Our system consists of methods for image completion, alignment, and color correction, as well as a novel approach to automatic exposure control that selects optimal exposures so as to avoid banding artifacts. Experimental results indicated that our system generated better spherical images than an ordinary spherical image completion system in terms of naturalness and accuracy. In addition to proposing an imaging method, we carried out an experiment on display methods for aerial HDR immersive panoramas utilizing spherical images acquired by the proposed system. The experiment demonstrated that HDR imaging is beneficial to immersive panoramas viewed on an HMD, in addition to the ordinary uses of HDR images.
Keywords: Omnidirectional camera; High dynamic range image; Immersive panorama; Image-based lighting; Tone-mapping
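Merging the bracketed exposures described above into one HDR image is a standard operation; a sketch with OpenCV's Debevec merge, where the file names and exposure times are hypothetical:

    import cv2
    import numpy as np

    files = ["sky_1ms.jpg", "sky_4ms.jpg", "sky_16ms.jpg"]     # bracketed captures
    times = np.array([0.001, 0.004, 0.016], dtype=np.float32)  # exposure times (s)
    images = [cv2.imread(f) for f in files]

    hdr = cv2.createMergeDebevec().process(images, times)      # radiance map
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)            # tone-map for display
    cv2.imwrite("sky_tonemapped.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))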
Using a virtual environment to assess cognition in the elderly BIBAK Full-Text 271-279
  Valerie E. Lesk; Syadiah Nor Wan Shamsuddin; Elizabeth R. Walters; Hassan Ugail
Early diagnosis of Alzheimer's disease (AD) is essential if treatments are to be administered at an earlier point in time, before neurons degenerate to a stage beyond repair. In order for early detection to occur, the tools used to detect the disorder must be sensitive to the earliest cognitive impairments. Virtual reality technology offers opportunities to provide products which attempt to mimic daily life situations, as much as is possible, within the computational environment. This may be useful for the detection of cognitive difficulties. We develop a virtual simulation designed to assess visuospatial memory in order to investigate cognitive function in a group of healthy elderly participants and those with a mild cognitive impairment (MCI). Participants were required to guide themselves along a virtual path to reach a virtual destination, which they were required to remember. The preliminary results indicate that this virtual simulation has the potential to be used for the detection of early AD, since significant correlations were found between scores on the virtual environment and existing neuropsychological tests. Furthermore, the test discriminated between healthy elderly participants and those with MCI.
Keywords: Virtual reality; Spatial memory; Mild cognitive impairment; Spatial navigation
Computer-based virtual reality simulator for phacoemulsification cataract surgery training BIBAK Full-Text 281-293
  Chee Kiang Lam; Kenneth Sundaraj; M. Nazri Sulaiman
Recent research in virtual reality indicates that computer-based simulators are an effective technology for surgeons learning to improve their surgical skills in a controlled environment. This article presents the development of a virtual reality simulator for training in phacoemulsification cataract surgery, the most common surgical technique currently used to remove cataracts from a patient's eyes. The procedure requires emulsifying the cloudy natural lens of the eye and restoring vision by implanting an artificial lens through a small incision. The four main procedures of cataract surgery, namely corneal incision, capsulorhexis, phacoemulsification, and intraocular lens implantation, are incorporated in the simulator for virtual surgical training by implementing several surgical techniques. The surgical activities applied to the anatomy of the human eye, such as incision, grasping, tearing, emulsification, rotation, and implantation, are simulated in the system using different types of mesh modifications. A virtual reality surgical simulator was developed, and the main procedures of phacoemulsification cataract surgery were successfully simulated in the system. The simulation results of the training system show that the developed simulator is capable of generating a virtual surgical environment with faithful force feedback, allowing medical residents and trainees to conduct their training via the computer using a pair of force-feedback haptic devices. In addition, the successful simulation of the mesh modifications on the human eyeball, with visual realism and faithful force feedback throughout the surgical operation, shows that the developed simulator can serve as a virtual surgical platform for surgeons to train their surgical skills.
Keywords: Phacoemulsification cataract surgery; Surgical training; Medical simulator; Virtual reality; Haptic device
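The "faithful force feedback" above is typically produced by penalty-based haptic rendering: when the virtual tool penetrates tissue, a restoring force proportional to the penetration depth is sent to the device. A minimal sketch, where the stiffness value is a made-up placeholder rather than the paper's tissue model:

    import numpy as np

    def contact_force(tip_pos, surface_point, surface_normal, k=300.0):
        """Spring-model reaction force; k is a hypothetical tissue stiffness (N/m)."""
        depth = np.dot(surface_point - tip_pos, surface_normal)
        if depth <= 0.0:
            return np.zeros(3)             # tool not in contact: no force
        return k * depth * surface_normal  # push the tool back out of the tissue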