
Proceedings of the 2012 Augmented Human International Conference

Fullname: Proceedings of the 3rd Augmented Human International Conference
Editors: Jean-Marc Seigneur; Hartmut Koenitz; Guillaume Moreau
Location: Megeve, France
Dates: 2012-Mar-08 to 2012-Mar-09
Standard No: ISBN 1-4503-1077-X, 978-1-4503-1077-2
Summary: The third Augmented Human International Conference (AH'12) again gathered scientific papers from many different disciplines: information technology, human-computer interfaces, brain-computer interfaces, sport and human performance, augmented reality, wearable computing, and more. This third edition is quite multidisciplinary for a research domain that requires ever more interdisciplinarity, as it touches the human person. Many papers concentrated on building human augmentation technologies, which is necessary for them to emerge in the real world. However, too few papers investigated the ethical or safety issues of augmented human technologies. The next edition may bring more papers on this essential aspect, which must be taken into account for the long-term success of these technologies.
An on-site programming environment for wearable computing BIBAFull-Text 1
  Shotaro Akiyama; Tsutomu Terada; Masahiko Tsukamoto
In wearable computing environments, it is difficult for users to prepare all the applications they will need in advance, since situations and places vary. Therefore, users want to define new services by themselves. In this study, we present a development framework and several tools for developing services in wearable computing environments. The framework consists of an event-driven rule processing engine and service implementation tools, which enable users to program services easily and quickly. The proposed system shows the elements of event-driven rules as chips, and users can program services by selecting chips in a graphical user interface. In addition, the proposed system has two functions that address programming features specific to wearable computing: genetic-algorithm-based programming and social-network-based programming.
Therapy: location-aware assessment and tasks BIBAFull-Text 2
  Luís Carriço; Marco de Sá; Luís Duarte; Tiago Antunes
In this paper, we present a system that allows therapists to assess and engage patients in activities triggered by specific stressful contexts. The system is composed of: 1) a web application that the therapist uses to specify the activities and their triggering conditions; and 2) a mobile app that measures physiological characteristics and challenges the patient to perform the activities according to those conditions. This toolset is part of an extended cognitive behaviour therapy framework. The preliminary evaluation results are encouraging and indicate that the system is both useful and usable for direct application in therapy procedures.
A study to understand lead-lag performance of subject vs rehabilitation system BIBAFull-Text 3
  Radhika Chemuturi; Farshid Amirabdollahian; Kerstin Dautenhahn
Robotic assistance in stroke rehabilitation is rapidly advancing based on recent developments in robotics, haptic interfaces and virtual reality. GENTLE/S is a rehabilitation system that utilized haptic and virtual reality technologies to deliver challenging and meaningful therapies to stroke subjects with upper limb impairment. The current research is working towards designing the GENTLE/A system with a better adaptive human-robot interface, which allows automatic tuning of assistance and resistance based on the provided input. This paper presents the results of a preliminary study conducted with three healthy subjects as part of this research. The aim of the investigation is to explore whether it is possible to identify if the robot or the person is leading the interaction, by comparing the actual performance of the subject with the minimum jerk model used to drive the robot. The final goal is to use these observations to probe various ways in which the contribution of the robot can be established and the adaptability of the robot during therapy can be enhanced.
Can you feel it?: sharing heart beats with Augmento BIBAFull-Text 4
  Luís Duarte; Tiago Antunes; Luís Carriço
This paper presents Augmento, a system which aims to provide individuals with an asynchronous way of reinforcing bonds with their relatives by sharing emotions when they are in the vicinity of places that hold special memories of their lives. Augmento capitalizes on existing technologies to accomplish its goal, ranging from location-based services to the retrieval of the individual's physiological signals, conveying typically occluded information between individuals, particularly in long-distance relationships. The paper presents the general vision for the system, its workflow, architecture, scenarios and early prototypes. We performed an early assessment of the system; in particular, we were interested in whether vibrotactile feedback would be suited to conveying and mimicking an individual's heart rate to other users. The results of this testing period are presented and discussed in the paper.
User performance tweaking in videogames: a physiological perspective of player reactions BIBAFull-Text 5
  Luís Duarte; Luís Carriço
The videogame industry has undergone significant changes in recent years, broadening its horizons towards a more casual market. This market expansion not only brings new opportunities from an interaction point of view, but also new challenges with the inclusion of users who are not accustomed to these games. This paper presents part of an ongoing study which aims to provide a better understanding of player behavior from both an interactive and a physiological standpoint. The experiment addressed here assesses how different gameplay mechanics influence not only a subset of the players' physiological signals, but also their performance and interactive behavior.
A method to evaluate metal filing skill level with wearable hybrid sensor BIBAFull-Text 6
  Yu Enokibori; Kenji Mase
This paper presents a method to evaluate a person's skill level for metal filing. Metal filing by expert engineers is an important manufacturing skill that supports basic areas of industry, although most sequences are already automated with industrial robots.
   However, there is no effective training method for this skill; training has relied mostly on "coaching", and most coaching has depended on the coaches' personal viewpoints. In addition, skill levels have been assessed subjectively by the coaches. Because of these problems, learners have to spend several hundred hours acquiring this basic manufacturing skill.
   Therefore, to develop an effective skill training scheme and an objective skill level assessment, we analyzed metal filing and implemented a method to evaluate metal-filing skill. We used wearable hybrid sensors that combine an accelerometer and a gyroscope, and collected data from 4 expert coaches and 10 learners. The data were analyzed from the viewpoint of the mechanical structure of the body during metal filing. Our analysis yielded three effective measures for skill assessment: "Class 2 Lever-like Movement Measure", "Upper Body Rigidity Measure", and "Pre-Acceleration Measure".
   The weighted total measure succeeded in distinguishing the coach group and the learner group as distinct skill level groups at a 95% confidence level. The highest-level learner, the lowest-level learner, and the group of remaining learners could also be distinguished as distinct skill level groups at a 95% confidence level, matching an expert coach's subjective scores.
Relation between location of information displayed by augmented reality and user's memorization BIBAFull-Text 7
  Yuichiro Fujimoto; Goshiro Yamamoto; Hirokazu Kato; Jun Miyazaki
This study aims to investigate the effectiveness of Augmented Reality (AR) on users' memory when it is used as an information display method. By definition, AR is a technology which displays virtual images in the real world. These computer-generated images naturally carry location information in the real world. It is also known that humans can more easily memorize and recall information if it is associated with locations in the real world. Thus, we hypothesize that displaying annotations using AR may benefit the user's memory if they are associated with the location of the target object in the real world rather than with an unrelated location. A user study was conducted with 30 participants to verify our hypothesis. A significant difference was found between the condition in which information was associated with the location of the target object in the real world and the condition in which it was connected with an unrelated location. In this paper, we present the test results and explain the verification based on them.
Facilitating a surprised feeling by artificial control of piloerection on the forearm BIBAFull-Text 8
  Shogo Fukushima; Hiroyuki Kajimoto
There have been many proposals that have added haptic stimulation to entertainment content such as music, games, and movies. These technologies enrich the quality of the experiences by improving the reality thereof. In contrast, we present a novel approach to enrich the quality of these experiences by facilitating the emotional feeling evoked by the content. In this paper, we focus on piloerection, which is a kind of involuntary emotional reaction. Our hypothesis is that not only is it an emotional "reaction", but it can also work as an emotional "input" that enhances the emotion itself. We have constructed a device that controls piloerection on the forearm through electrostatic force. Based on a psychophysical experiment, we confirm that the piloerection system enhances the feeling of surprise.
KUSUGURI: a shared tactile interface for bidirectional tickling BIBAFull-Text 9
  Masahiro Furukawa; Hiroyuki Kajimoto; Susumu Tachi
Tickling, a nonverbal form of communication, can provide entertainment, and is therefore a desirable form of content for remote communication. However, tickling is difficult to realize remotely because it requires both body contact and bidirectionality. In this paper, we propose a "Shared Tactile Interface", which allows sharing of a body part with another user at a distance. The interface has three features: direct contact, transfer of the tickling sensation, and bidirectionality. The first allows a user to view another person's finger as if it is directly contacting and moving on the user's own palm. The second delivers a vibration to the user's palm which generates an illusory perception of a tickling sensation. The third enables bidirectional tickling, because each user can also tickle the other user's palm in the same manner. We built prototypes based on this design method and evaluated it through two technical exhibitions. The users were able to tickle each other, which confirmed that the "Shared Tactile Interface" design works as expected. However, we found issues, especially regarding the reliability of the tickling sensation.
Stereo camera based wearable reading device BIBAFull-Text 10
  Roman Guilbourd; Noam Yogev; Raúl Rojas
The ability to access textual information is crucial for visually impaired people in terms of achieving greater independence in their everyday life. Thus, there is a need for a mobile easy-to-use reading device, capable of dealing with the complexity of the outdoor environment. In this paper a wearable camera-based solution is presented, aiming at improving the performance of existing systems through the use of stereo vision. Specific aspects of the stereo matching problem in document images are discussed and an approach for its integration into the document processing procedure is introduced. We conclude with the presentation of experimental results from a prototype system, which demonstrate the practical benefits of the presented approach.
Realtime sonification of the center of gravity for skiing BIBAFull-Text 11
  Shoichi Hasegawa; Seiichiro Ishijima; Fumihiro Kato; Hironori Mitake; Makoto Sato
Control of body position is important in skiing. During turns, novice skiers often lean back and lose control. Leaning back is a natural reaction for people who are afraid of the slope or their speed. We developed a device that provides realtime sonification feedback of the skier's center of gravity and thereby guides the skier's position. A preliminary experiment suggests that users quickly become able to control their position and even overcome their fear of the slope and speed.
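The abstract does not specify the sonification mapping; a minimal sketch of one plausible mapping, assuming hypothetical toe and heel pressure readings and a pitch that rises as the skier's weight shifts forward:

```python
def balance_to_pitch(toe_pressure, heel_pressure,
                     center_hz=440.0, span_hz=220.0):
    """Map fore-aft balance to a pitch: centered weight -> center_hz,
    heel-heavy (leaning back) -> lower, toe-heavy -> higher.
    Sensor names and the pitch range are illustrative assumptions."""
    total = toe_pressure + heel_pressure
    if total == 0:
        return center_hz  # no load: emit the neutral pitch
    # balance in [-1, 1]: -1 = all weight on heel, +1 = all on toe
    balance = (toe_pressure - heel_pressure) / total
    return center_hz + span_hz * balance
```

A skier leaning back would hear the tone drop below 440 Hz, an immediately perceivable cue to shift forward.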
A pointing method using accelerometers for graphical user interfaces BIBAFull-Text 12
  Tatsuya Horie; Tsutomu Terada; Takuya Katayama; Masahiko Tsukamoto
Graphical User Interfaces (GUIs) are widely used, and pointing devices are required to operate most of them. We have proposed Xangle, a pointing method using two accelerometers for wearable computing environments. The cursor is positioned at the intersection of two straight lines whose angles are synchronized with the angles of accelerometers worn on the fingers. However, Xangle is difficult to use in daily life, where the user frequently changes which part of the body they point with. Therefore, we propose a method for changing the body parts used for pointing according to the situation. Additionally, we propose a method to accelerate the pointer and a method to lay out menu items for Xangle, since these methods are suitable for using GUIs in wearable computing environments. We confirmed the effectiveness of the proposed method from the results of our evaluations.
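The cursor placement described above amounts to a ray-intersection computation. This is a sketch, not the authors' implementation: it assumes two hypothetical anchor points (e.g., the bottom corners of the screen) from which rays emanate at the accelerometer-derived angles:

```python
import math

def xangle_cursor(p1, a1, p2, a2):
    """Return the intersection of two rays anchored at points p1 and p2
    with angles a1 and a2 (radians), or None if the lines are parallel."""
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2D cross product of directions
    if abs(denom) < 1e-9:
        return None  # parallel lines never intersect
    # solve p1 + t*d1 = p2 + s*d2 for t, then evaluate the point
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

For example, rays at 45° from (0, 0) and 135° from (2, 0) meet at (1, 1); tilting either accelerometer sweeps the cursor along the other ray.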
Effects of auditory feedback for augmenting the act of writing BIBAFull-Text 13
  Junghyun Kim; Tomoko Hashida; Tomoko Ohtani; Takeshi Naemura
In this paper, focusing on the sound of writing with an ordinary pen on paper, we examine how auditory feedback augments the act of writing. Specifically, we evaluated the effectiveness of auditory feedback by comparing writing tasks, which involved tracing Chinese characters, under three conditions: without feedback (No), with monaural feedback (MF), and with stereo feedback (SF). The results showed that auditory feedback (MF and SF) led to more written characters and fewer negative impressions during the writing task than no feedback (No).
Stop motion goggle: augmented visual perception by subtraction method using high speed liquid crystal BIBAFull-Text 14
  Naoya Koizumi; Maki Sugimoto; Naohisa Nagaya; Masahiko Inami; Masahiro Furukawa
Stop Motion Goggle (SMG) expands visual perception by allowing users to perceive visual information selectively through a high-speed shutter. With this system, the user can easily observe not only periodic rotational motion, such as rotating fans or wheels, but also random motion, such as bouncing balls. In this research, we developed SMG and evaluated its effect on the visual perception of fast-moving objects. Furthermore, this paper describes users' behaviors under this expanded visual experience.
Quantifying Japanese onomatopoeias: toward augmenting creative activities with onomatopoeias BIBAFull-Text 15
  Takanori Komatsu
Onomatopoeias are used in Japanese when certain phenomena or events cannot be described literally, and it is said that one's ambiguous and intuitive feelings are embedded in these onomatopoeias. Therefore, an interface system that accepts onomatopoeia as input could comprehend such feelings, and could thereby help augment creative activities such as computer graphics, music, and choreography. The purpose of this study is to propose an objective quantification method for onomatopoeias, in the form of an expression vector, to be applied in an interface system for augmenting various creative activities.
Transmission of forearm motion by tangential deformation of the skin BIBAFull-Text 16
  Yuki Kuniyasu; Michi Sato; Shogo Fukushima; Hiroyuki Kajimoto
When teaching device-handling skills such as those required in calligraphy, sports or surgery, it is important that appropriate arm motion is transmitted from the trainer to the trainee. In this study, we present a novel wearable haptic device that induces arm motion using force sensation. The device produces skin deformation and a pseudo-force sensation similar to the force produced when the arm is "pulled". The device generates skin deformation in four directions, and in this paper we evaluate it with a direction perception experiment.
Ma petite chérie: what are you looking at?: a small telepresence system to support remote collaborative work for intimate communication BIBAFull-Text 17
  Kana Misawa; Yoshio Ishiguro; Jun Rekimoto
We present a telepresence system with a reduced-scale face-shaped display for supporting intimate telecommunication. In our previous work, we developed a real-size face-shaped display that tracks and reproduces the remote user's head motion and face image. It can convey the user's nonverbal information, such as facial expression and gaze awareness. In this paper, we examine the value and effect of reducing the scale of such face-shaped displays. We expect small face displays to retain the benefits of real-size talking-head telecommunication systems while providing a more intimate impression. A small display is easier to transport or put on a desk, and it can be worn on the shoulder of the local participant and carried like a small buddy. However, it is not clear how such a reduced-size face screen might change the quality of nonverbal communication. We thus conducted an experiment using a 1/14-scale face display and found that critical nonverbal information, such as gaze direction, is still correctly transmitted even when the face size is reduced.
A new typology of augmented reality applications BIBAFull-Text 18
  Jean-Marie Normand; Myriam Servières; Guillaume Moreau
In recent years Augmented Reality (AR) has become more and more popular, especially since the availability of mobile devices such as smartphones and tablets brought AR into our everyday life. Although the AR community has not yet agreed on a formal definition of AR, some work has focused on proposing classifications of existing AR methods and applications. Such applications cover a wide variety of technologies, devices and goals; consequently, existing taxonomies rely on multiple classification criteria that try to take the diversity of AR applications into account. In this paper we review existing taxonomies of augmented reality applications and propose our own, based on (1) the number of degrees of freedom required by the application's tracking, (2) the visualization mode used, (3) the temporal base of the displayed content, and (4) the rendering modalities used in the application. Our taxonomy covers location-based services as well as more traditional vision-based AR applications. Although AR is mainly based on the visual sense, other rendering modalities are also covered by the same degree-of-freedom criterion in our classification.
Investigating the use of brain-computer interaction to facilitate creativity BIBAFull-Text 19
  D. A. Todd; P. J. McCullagh; M. D. Mulvenna; G. Lightbody
Brain Computer Interaction (BCI) has mainly been utilized for communication and control, but it may also find application as a channel for creative expression, as part of an entertainment package. In this paper we provide an initial investigation of how creativity can be supported and assessed. An art-based approach was adopted to investigate the effects of achieving simple drawing and painting. Subjects were asked to complete three tasks using a Steady State Visual Evoked Potential (SSVEP) BCI: a drawing task called 'etch-a-sketch' (TASK 1), which relied entirely upon BCI control, and two painting tasks, the first (TASK 2) with a set goal and the second (TASK 3) with more potential for user expression. The tasks varied in the proportion of control to creativity required. Participants provided feedback on their perception of the control and creative aspects and their overall experience. The painting application (TASK 3), for which users perceived that they had more creativity, was well accepted; 50% of the users preferred this mode of interaction. The experimental approach described allows an initial assessment of the acceptance of BCI-mediated artistic expression.
Human-centric panoramic imaging stitching BIBAFull-Text 20
  Tomohiro Ozawa; Kris M. Kitani; Hideki Koike
We introduce a novel image mosaicing algorithm to generate 360° landscape images while taking into account the presence of people at the boundaries between stitched images. Current image mosaicing techniques tend to fail when there is extreme parallax caused by nearby objects or moving objects at the boundary between images; this parallax causes ghosting or unnatural discontinuities in the image. To address this problem, we present an image mosaicing algorithm that is robust to parallax and misalignment and is also able to preserve important human-centric content, specifically faces. In particular, we find an optimal path along the boundary between two images that preserves color continuity and people's faces in the scene. Preliminary experiments show promising results: close-up faces are preserved under parallax while a perceptually plausible 360° panoramic image is generated.
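Such an optimal boundary path can be found with dynamic programming. The authors' exact cost function is not given in the abstract; the sketch below assumes a per-pixel color-difference cost over the overlap region and adds a large penalty on detected face pixels so the seam routes around them:

```python
import numpy as np

def find_seam(cost, face_mask, face_penalty=1e6):
    """Minimal vertical-seam DP. cost[r, c]: color-difference cost
    between the two overlapping images at pixel (r, c); face_mask
    marks face pixels the seam must avoid. Returns seam[r] = column
    of the cut in row r."""
    c = cost + face_penalty * face_mask.astype(float)
    rows, cols = c.shape
    dp = c.copy()
    for r in range(1, rows):
        # each cell continues from the cheapest of the 3 cells above it
        left = np.roll(dp[r - 1], 1);   left[0] = np.inf
        right = np.roll(dp[r - 1], -1); right[-1] = np.inf
        dp[r] += np.minimum(np.minimum(left, dp[r - 1]), right)
    # backtrack from the cheapest bottom cell
    seam = [int(np.argmin(dp[-1]))]
    for r in range(rows - 2, -1, -1):
        lo, hi = max(seam[-1] - 1, 0), min(seam[-1] + 2, cols)
        seam.append(lo + int(np.argmin(dp[r, lo:hi])))
    return seam[::-1]
```

With a zero-cost column blocked by a face pixel at one row, the returned seam detours around that pixel and rejoins the cheap column, which is the behavior the paper's face-preserving stitching requires.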
Augmenting on-road perception: enabling smart and social driving with sensor fusion and cooperative localization BIBAFull-Text 21
  Chieh-Chih Wang; Jennifer Healey; Meiyuan Zhao
In many ways the car is the most common human augmentation: it increases our speed, renders us more powerful and enables us to reach distances that are otherwise impossible. In this paper, we show how advanced localization systems enable yet another dimension of human augmentation: allowing the driver to visually perceive data streams from other cars. These data streams may contain social messages from other drivers such as "Follow Me" or warnings from the sensor systems of the other cars themselves such as "Distracted Driver!" We describe both the technical work in progress that makes this system possible as well as the future vision of how this technology will enable smart and social driving through M2M communication with other vehicles that are encountered ad hoc on the road.
EyeSound: single-modal mobile navigation using directionally annotated music BIBAFull-Text 22
  Shingo Yamano; Takamitsu Hamajo; Shunsuke Takahashi; Keita Higuchi
In this paper, we propose a mobile navigation system that uses only auditory information, i.e., music, to guide the user. The sophistication of mobile devices has introduced the use of contextual information in mobile navigation, such as the location and the direction of motion of a pedestrian. Typically in such systems, a map on the screen of the mobile device is required to show the current position and the destination. However, this restricts the movements of the pedestrian, because users must hold the device to observe the screen. We have, therefore, implemented a mobile navigation system that guides the pedestrian in a non-restricting manner by adding direction information to music. By measuring the resolution of the direction that the user can perceive, the phase of the musical sound is changed to guide the pedestrian. Using this system, we have verified the effectiveness of the proposed mobile navigation system.
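The abstract does not detail how direction information is encoded into the music; one common way to lateralize a sound is an interaural time difference (ITD). As a stand-in, the sketch below uses the Woodworth ITD approximation to convert a target azimuth into a delay, in samples, applied to the far ear's channel:

```python
import math

def itd_delay_samples(azimuth_deg, sample_rate=44100,
                      head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth-model interaural time difference for a source at
    azimuth_deg (0 = straight ahead, 90 = fully to one side),
    returned as the number of samples to delay the far ear's channel.
    Head radius and sample rate are illustrative defaults."""
    a = math.radians(azimuth_deg)
    itd_seconds = (head_radius_m / speed_of_sound) * (a + math.sin(a))
    return int(round(abs(itd_seconds) * sample_rate))
```

Delaying one stereo channel by this amount shifts the perceived source toward the other ear, so a pedestrian can be steered without any screen.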
Looming silhouette: an approaching visual stimulus device for pedestrians to avoid collisions BIBAFull-Text 23
  Maki Yokoyama; Yu Okano; Michi Sato; Shogo Fukushima; Masahiro Furukawa; Hiroyuki Kajimoto
We are exposed daily to the risk of collision at numerous blind intersections. To reduce this risk, we propose a system that elicits an "approaching sensation" by presenting a visual stimulus. Possible factors for the approaching sensation are the "expansion" and "motion" of a silhouette. We compared the effects of these two factors and found that the expansion factor is important for eliciting an approaching sensation, while the motion factor has a certain effect in alerting pedestrians. On the basis of this result, we produced a system that presents an expanding and moving silhouette of an approaching pedestrian to the pedestrian user.
Augmentation of obstacle sensation by enhancing low frequency component for horror game background sound BIBAFull-Text 24
  Shuyang Zhao; Taku Hachisu; Asuka Ishii; Yuki Kuniyasu; Hiroyuki Kajimoto
Computer games provide users with mental stimulation that the real world cannot. In particular, horror games are a popular category. Current horror games thrill the user with visible ghosts and stereo background sound. Inspired by the obstacle sense -- the ability of blind people to localize obstacles using only hearing -- this paper proposes a novel method to augment the sense of presence conveyed by the game's background sound. We found that an effective sense of presence can be created by simultaneously decreasing the high-frequency component and increasing the low-frequency component.
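The frequency manipulation described can be sketched as a simple one-pole spectral tilt: split the signal at a crossover frequency, boost the band below it, and attenuate the band above it. The cutoff and gains here are illustrative assumptions, not the authors' parameters:

```python
import math

def tilt_filter(samples, sample_rate=44100, cutoff_hz=200.0,
                low_gain=2.0, high_gain=0.5):
    """Boost content below cutoff_hz by low_gain and attenuate
    content above it by high_gain, using a one-pole low-pass
    to split the signal into the two bands."""
    alpha = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    low, out = 0.0, []
    for x in samples:
        low = (1.0 - alpha) * x + alpha * low  # low-pass state
        high = x - low                         # residual high band
        out.append(low_gain * low + high_gain * high)
    return out
```

A steady (low-frequency) signal comes out roughly doubled, while a rapidly alternating (high-frequency) signal is roughly halved, which matches the low-boost/high-cut effect described above.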
Crowd augmented wireless access BIBAFull-Text 25
  Carlos Ballester Lafuente; Jean-Marc Seigneur
Environments such as ski slopes are highly dynamic: users are constantly moving at high speeds and in different directions, and many users are not locals and must therefore roam in order to connect through mobile data. These two factors make connectivity through regular means difficult to attain. This demo paper presents the simulation and validation of crowd-augmented wireless access designed to tackle this problem.
Usability of video-overlaying SSVEP based BCIs BIBAFull-Text 26
  Christoph Kapeller; Christoph Hintermüller; Christoph Guger
This work investigates the usability of a steady-state visual evoked potential (SSVEP) based brain-computer interface (BCI) with on-screen stimulation. The BCI controls were displayed over an underlying feedback video, each control flashing at a unique frequency. For classification, a combination of minimum energy (ME) and linear discriminant analysis (LDA) was used. Two experiments showed that the use of overlaid controls is possible, although it decreases performance.
Augmented control of an avatar using an SSVEP based BCI BIBAFull-Text 27
  Christoph Kapeller; Christoph Hintermüller; Christoph Guger
The demonstration shows the usage of an EEG-based brain-computer interface (BCI) for the real-time control of an avatar in World of Warcraft. Visitors can test the installation during the conference after about 5 minutes of training time. World of Warcraft is a common Massively Multiplayer Online Role-Playing Game (MMORPG) in which the player controls an avatar in a virtual environment.
   The user wears newly developed dry EEG electrodes connected to a biosignal amplifier. The data are then transmitted to a computer for real-time analysis of the EEG. The BCI system uses steady-state visual evoked potentials (SSVEPs) as the control signal: the system shows different icons flickering at different frequencies. If the user focuses on one of the icons, the flickering frequency becomes visible in the EEG data and can be extracted with frequency analysis algorithms.
   To control an avatar in World of Warcraft, 4 control icons are analyzed in real-time. Three icons are needed to turn left, turn right, or move forward. A 4th icon is required to perform certain actions such as grasping objects or attacking other objects, as shown in Figure 1. The visual stimulation took place via a 60 Hz LCD display with flickering frequencies of 15, 12, 10 and 8.57 Hz, in combination with an underlying video.
   To visualize the flickering controls, a BCI-Overlay library based on OpenGL was implemented, which can be used by any graphics application. It can generate BCI controls within a virtual reality environment or as overlays on video sequences.
   Figure 2 shows the components of the complete system. The user is connected with 8 EEG electrodes to the BCI system that is running under Windows and MATLAB. The BCI system uses the minimum energy algorithm and a linear discriminant analysis to determine if the user is looking at one of the icons or if the user is not attending.
   Via a UDP communication channel, the BCI system controls the BCI-Overlay module, which generates the 4 flickering icons around the WoW user interface. When the BCI system detects a command, it is transmitted to the game controller, which generates the corresponding WoW command. This is straightforward for the left, right and move-forward commands, but more complicated for the action command: action commands are context dependent, and the controller has to select among the possible actions. Finally, the command is transmitted to WoW and the avatar performs the action. This allows the user to play WoW with the BCI system by thought alone.
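A minimal sketch of the frequency-analysis step, not the implementation used in the demo: score each candidate stimulation frequency by its spectral power (fundamental plus second harmonic) in a windowed EEG epoch, and pick the strongest:

```python
import numpy as np

def detect_ssvep(eeg, sample_rate, candidate_hz=(15.0, 12.0, 10.0, 8.57)):
    """Return the candidate stimulation frequency whose spectral power
    (fundamental + 2nd harmonic) is largest in the EEG window.
    A Hann window reduces spectral leakage between nearby candidates."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / sample_rate)

    def power_at(f):
        # magnitude at the FFT bin nearest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    scores = [power_at(f) + power_at(2 * f) for f in candidate_hz]
    return candidate_hz[int(np.argmax(scores))]
```

A real system would add a "not attending" threshold (as the ME/LDA classifier here does) so the avatar stays idle when no icon is fixated.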
Augmentation of kinesthetic sensation by adding "rotary switch feeling" feedback BIBAFull-Text 28
  Yosuke Kurihara; Yuki Kuniyasu; Taku Hachisu; Michi Sato; Shogo Fukushima; Hiroyuki Kajimoto
In sports, dancing and playing music, it is important to achieve correct body movement as it greatly affects performance. However, matching one's movement with ideal movement is fundamentally difficult, because we do not have a detailed perception of our own body movement. In this study, we propose to present "rotary switch feeling" feedback as a new haptic cue. A periodical ticking sensation, like that of a rotary switch, can be presented at each joint so that the user vividly perceives his/her movement. This paper presents a simple mechanical prototype that is attached to the elbow.
Gesture keyboard with a machine learning requiring only one camera BIBAFull-Text 29
  Taichi Murase; Atsunori Moteki; Genta Suzuki; Takahiro Nakai; Nobuyuki Hara; Takahiro Matsuda
In this paper, the authors propose a novel gesture-based virtual keyboard (Gesture Keyboard) that uses a standard QWERTY keyboard layout, requires only one camera, and employs a machine learning technique. Gesture Keyboard tracks the user's fingers and recognizes finger motions to judge key input in the horizontal direction. Real-AdaBoost (Adaptive Boosting), a machine learning technique, uses HOG (Histograms of Oriented Gradients) features of an image of the user's hands to estimate keys in the depth direction. Each virtual key follows a corresponding finger, so it is possible to input characters at the user's preferred hand position even if the user moves his hands while typing. Additionally, because Gesture Keyboard requires only one camera, keyboard-less devices can implement this system easily. We show the effectiveness of utilizing a machine learning technique for estimating depth.
Kaleidoscopes for binocular rivalry BIBAFull-Text 30
  Yoichi Ochiai
When you look into two kaleidoscopes at the same time, a wonderful and strange scene comes into sight. We developed a stereo electronic kaleidoscope with a high-definition display. It shows images as beautiful as those of classic kaleidoscopes.
   We tested and selected images which cause the binocular rivalry effect. This work produces augmented kaleidoscopes which give us a wonderful feeling about the structure and functions of our brain.
Invisible feet under the vehicle BIBAFull-Text 31
  Yoichi Ochiai; Keisuke Toyoshima
When we drive a car, we have many blind spots, and the information from the outside is largely limited to vision and sound. To address this gap between the information outside and inside the car, we envision the driver and the car unified, moving as one [1]. We call this unity of driver and car a Homunculus, which communicates with the outside of the vehicle.
   With this concept, we developed a new haptic system. Our system maps the sense of the driver's foot onto the bottom of the car: it connects nine vibration motors arranged in a grid to nine IR distance sensors arranged in a grid. With this system, drivers can feel something passing under the car, such as a bump, as if the soles of their feet were being touched. It is as if an invisible foot (Figure 1) were sticking out of the bottom of the car.
   We applied our prototype to several driving scenarios and found several interesting points, which we discuss in this paper.
Presentation of directional information by sound field control BIBAFull-Text 32
  Yutaka Takase; Shoichi Hasegawa
We propose a novel method for presenting directional information. The system conveys the presence of obstacles by controlling the environmental sound field. Visual maps and voice prompts are practical methods of presenting directional information and are used in car navigation systems. However, they occupy the senses of sight and hearing.
   By contrast, our method can present directional information naturally without occupying these sensory channels. Therefore, users can benefit from directional information while still enjoying the surrounding environment.