
Proceedings of the 2011 Augmented Human International Conference

Fullname:Proceedings of the 2nd Augmented Human International Conference
Editors:Masahiko Inami; Jun Rekimoto; Hideki Koike; Hideo Saito
Location:Tokyo, Japan
Dates:2011-Mar-13 to 2011-Mar-13
Standard No:ISBN: 1-4503-0426-5, 978-1-4503-0426-9; ACM DL: Table of Contents hcibib: AH11
Links:Conference Home Page
Interacting with smart walls: a multi-dimensional analysis of input technologies for augmented environments BIBAFull-Text 1
  Felix Heidrich; Martina Ziefle; Carsten Röcker; Jan Borchers
This paper reports on a multi-dimensional evaluation of three typical interaction devices for wall-sized displays in augmented environments. Touch, trackpad and gesture input were evaluated regarding a variety of usability dimensions in order to understand the quality profile of each input device. Among the three interaction devices, the touch input showed the highest scores in performance and acceptance as well as hedonic value.
Interactive bookshelf surface for in situ book searching and storing support BIBAFull-Text 2
  Kazuhiro Matsushita; Daisuke Iwai; Kosuke Sato
We propose an interactive bookshelf surface to augment the human ability to search for and store books in situ. For book searching support, when a user touches the edge of the bookshelf, the cover image of the stored book located above the touched position is projected directly onto the book's spine. As a result, the user can search for a desired book by sliding a finger across the shelf edge. For book storing support, when a user brings a book close to the bookshelf, the place where the book should be stored is visually highlighted by projected light. This paper also presents the sensing technologies used to achieve these interaction techniques. In addition, by considering the properties of the human visual system, we propose a simple visual effect that reduces the degradation in legibility of the projected content caused by the complex textures and geometric irregularities of the spines. We confirmed the feasibility of the system and the effectiveness of the proposed interaction techniques through user studies.
Homunculus: the vehicle as augmented clothes BIBAFull-Text 3
  Yoichi Ochiai; Keisuke Toyoshima
In this paper we propose adding a new system with valuable functionality to vehicles. We call it "Homunculus". It is based on a new concept of interaction between humans and vehicles, and it promotes and augments the nonverbal communicability of people in vehicles.
   It is difficult to communicate with drivers inside vehicles by eye contact, hand gestures, or touch. Our "Homunculus" system solves these problems. It is composed of three modules. The first is the Robotic Eyes System, a set of robotic eyes that follows the driver's eye movements and head rotations. The second is the Projection System, which shows the driver's hand gestures on the road. The third is the Haptic Communication System, which consists of an array of IR distance sensors on the vehicle and vibration motors attached to the driver; it gives the driver a haptic sense of objects approaching the vehicle. These three systems are mounted on the vehicle's hood or sides.
   We propose that humans and vehicles can be unified as one unit by Homunculus. The system works as a middleman for communication between drivers and their vehicles, people in other cars, and even pedestrians on the street. We suggest that the relationship between people and their vehicles could become like that between people and their clothes.
Full body interaction for serious games in motor rehabilitation BIBAFull-Text 4
  Christian Schönauer; Thomas Pintaric; Hannes Kaufmann
Serious games and especially their use in healthcare applications are an active and rapidly growing area of research. A key aspect of games in rehabilitation is 3D input. In this paper we present our implementation of a full body motion capture (MoCap) system, which, together with a biosignal acquisition device, has been integrated in a game engine. Furthermore, a workflow has been established that enables the use of acquired skeletal data for serious games in a medical environment. Finally, a serious game has been implemented, targeting rehabilitation of patients with chronic pain of the lower back and neck, a group that has previously been neglected by serious games. The focus of this work is on the full body MoCap system and its integration with biosignal devices and the game engine. A short overview of the application and preliminary results are provided.
The PhantomStation: towards funneling remote tactile feedback on interactive surfaces BIBAFull-Text 5
  Hendrik Richter; Alina Hang; Benedikt Blaha
We present the PhantomStation, a novel interface that communicates tactile feedback to remote parts of the user's body. Thus, touch input on interactive surfaces can be augmented with synchronous tactile sensations. With the objective of reducing the number of tactile actuators on the user's body, we use the psychophysical Phantom Sensation (PhS) [1]. This illusion occurs when two or more tactile stimuli are presented simultaneously to the skin. The location of the pseudo-tactile sensation can be changed by modulating stimulus intensity or the interstimulus time interval. We compare three different actuator technologies for recreating the PhS. Furthermore, we discuss how remote tactile feedback of this kind can improve interaction accuracy. We present our prototype and propose application scenarios in conjunction with interactive surfaces.
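As an illustration of how such a phantom sensation can be placed between two actuators, one common model is an energy-preserving amplitude pan. This is a minimal sketch under that assumption; the function name and pan law are our own choices, not the authors' actual drive model:

```python
import math

def funneling_intensities(p, total_amplitude=1.0):
    """Drive amplitudes for two vibrotactile actuators so that the
    illusory (phantom) sensation appears at normalized position p
    in [0, 1] between them. Uses an energy-preserving square-root
    pan law, one common model of the funneling illusion."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must lie in [0, 1]")
    a_left = total_amplitude * math.sqrt(1.0 - p)
    a_right = total_amplitude * math.sqrt(p)
    return a_left, a_right
```

With this law, p = 0 drives only the left actuator, p = 1 only the right, and the summed vibration energy stays constant as the phantom location moves.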
Acquisition of 3D gaze information from eyeball movements using inside-out camera BIBAFull-Text 6
  Shoichi Shimizu; Hironobu Fujiyoshi
We propose a method for obtaining 3D gaze information using an inside-out camera. Information on 3D gaze points is useful not only for clarifying higher cognitive processes in humans, but also for reproducing the 3D shape of an object from eyeball movement simply by gazing at it, as an extension of the visual function. Using half-mirrors, an inside-out camera can capture a person's eyeball head-on while also capturing the person's visual field from a position equivalent to that of the eyeball. The relationship between the gaze vector obtained from images of the eyeball and the gaze point in images of the visual field is expressed by a conversion equation. The 3D position of the gaze point can then be estimated using stereo constraints between two scene cameras. In an evaluation experiment, the gaze point was estimated with an average error of about 15 pixels, and we also show the 3D scan path obtained by the proposed method from eyeball movements while gazing at an object.
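The stereo step can be illustrated with standard closest-point (midpoint) triangulation of two viewing rays. This is a generic sketch of that geometry, not the authors' exact conversion equation or calibration:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest-point (midpoint) triangulation of two 3D viewing rays,
    each given as origin o + t * direction d. Returns the midpoint of
    the shortest segment connecting the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # rays are (nearly) parallel
        raise ValueError("rays are parallel; no unique intersection")
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = o1 + t1 * d1               # closest point on ray 1
    p2 = o2 + t2 * d2               # closest point on ray 2
    return (p1 + p2) / 2.0
```

For exactly intersecting rays the midpoint is the intersection; for skew rays (the usual case with noisy gaze estimates) it is a least-squares compromise between them.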
Flying sports assistant: external visual imagery representation for sports training BIBAFull-Text 7
  Keita Higuchi; Tetsuro Shimada; Jun Rekimoto
Mental imagery is a quasi-perceptual experience emerging from past experiences. In sports psychology, mental imagery is used to improve athletes' cognition and motivation. Eminent athletes often create mental imagery as if they themselves were external observers; this ability plays an important role in sports training and performance. External visual imagery refers to a representation containing one's own self from the perspective of others. However, without technological support, it is difficult to obtain accurate external visual imagery during sports. In this paper, we propose a system that uses an aerial vehicle (a quadcopter) to capture an athlete's external visual imagery. The proposed system integrates various sensor data to autonomously track the target athlete and compute the camera angle and position. The athlete can see the captured image in real time through a head-mounted display or, more recently, through a hand-held device. We have applied this system to soccer and other sports and discuss how it can be used during training.
Peripheral vision annotation: noninterference information presentation method for mobile augmented reality BIBAFull-Text 8
  Yoshio Ishiguro; Jun Rekimoto
Augmented-reality (AR) systems present information about a user's surrounding environment by overlaying it on the user's real-world view. However, such overlaid information tends to obscure the user's field of view and thus impedes real-world activities. This problem is especially critical when the user is wearing a head-mounted display. In this paper, we propose an information presentation mechanism for mobile AR systems that focuses on the user's gaze and peripheral visual field. Gaze information is used to control the positions and level of detail of the information overlaid on the user's field of view. We also propose a method for switching displayed information based on the difference in human visual perception between the peripheral and central visual fields. To test the proposed method, we develop a mobile AR system consisting of a gaze-tracking system and a retinal imaging display. The eye-tracking system estimates whether the user's visual focus is on the information display area and changes the information from simple to detailed accordingly.
Ego-motion analysis using average image data intensity BIBAFull-Text 9
  Kojiro Kato; Kris M. Kitani; Takuya Nojima
In this paper, we present a new method for ego-motion analysis using intensity averaging of image data. The method can estimate general motion between two sequential images on the pixel plane by calculating cross-correlations. With distance information between the camera and objects, the method also enables estimation of camera motion. It is sufficiently robust even for out-of-focus images, and its computational overhead is quite low because it uses simple averaging. In the future, this method could be used to measure fast motions, such as in human head tracking or robot movement. We present a detailed description of the proposed method and experimental results demonstrating its basic capability. With these results, we verify that our proposed system can detect camera motion even with blurred images. Furthermore, we confirm that it can operate at up to 714 FPS when calculating one-dimensional translational motion.
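A minimal sketch of projection-based correlation in this spirit: average each frame's columns into a 1D intensity profile and cross-correlate the profiles to recover a horizontal pixel shift. This is our own simplification; the paper's method, sub-pixel handling, and the distance-based conversion to camera motion differ in detail:

```python
import numpy as np

def estimate_shift_1d(frame_a, frame_b):
    """Estimate horizontal translation (in pixels) of frame_b relative
    to frame_a by cross-correlating their column-averaged intensity
    profiles. Positive lag means frame_b is shifted right."""
    prof_a = frame_a.mean(axis=0)        # average over rows -> 1D profile
    prof_b = frame_b.mean(axis=0)
    prof_a = prof_a - prof_a.mean()      # zero-mean before correlating
    prof_b = prof_b - prof_b.mean()
    corr = np.correlate(prof_b, prof_a, mode="full")
    lag = corr.argmax() - (len(prof_a) - 1)
    return int(lag)
```

Because each frame is reduced to one profile before correlation, the cost per frame pair is linear in the image size, which is what makes very high frame rates plausible.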
Weight illusion by tangential deformation of forearm skin BIBAFull-Text 10
  Yuki Kuniyasu; Shogo Fukushima; Masahiro Furukawa; Hiroyuki Kajimoto
When we perform exercise or undergo rehabilitation, it is helpful to be supported by another person. To get this support, we normally take hold of the person's arm and pull it. In this paper, we investigate the use of a special device to produce a "pulling arm" sensation on the forearm. Using a weight comparison task, we performed an experiment to confirm the sensation of illusory external force with our device. We concluded that our current device presents a perceived weight of about 10 g to 20 g.
EdgeSonic: image feature sonification for the visually impaired BIBAFull-Text 11
  Tsubasa Yoshida; Kris M. Kitani; Hideki Koike; Serge Belongie; Kevin Schlei
We propose a framework to aid visually impaired users in recognizing objects in an image by sonifying image edge features and distance-to-edge maps. Visually impaired people usually touch objects to recognize their shape. However, it is difficult to recognize objects printed on flat surfaces, or objects that can only be viewed from a distance, solely with the haptic senses. Our ultimate goal is to aid a visually impaired user in recognizing basic object shapes by transposing them to aural information. Our proposed method provides two types of image sonification: (1) local edge gradient sonification and (2) sonification of the distance to the closest image edge. Our method was implemented on a touch-panel mobile device, which allows the user to aurally explore image content by sliding a finger across the image on the touch screen. Preliminary experiments show that the combination of local edge gradient sonification and distance-to-edge sonification is effective for understanding basic line drawings. Furthermore, our tests show a significant improvement in image understanding with the introduction of proper user training.
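The distance-to-edge sonification can be sketched as mapping the distance from the touched pixel to the nearest edge pixel onto a tone frequency (closer to an edge, higher the pitch). The function name, mapping, and constants below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def distance_to_edge_tone(edge_map, finger_xy,
                          f_near=880.0, f_far=220.0, d_max=50.0):
    """Map the distance from the touched pixel to the nearest edge
    pixel onto a tone frequency: touching an edge gives f_near, and
    distances of d_max pixels or more give f_far (linear in between)."""
    ys, xs = np.nonzero(edge_map)        # coordinates of all edge pixels
    if len(xs) == 0:
        return f_far                     # no edges anywhere in the image
    fx, fy = finger_xy
    d = np.sqrt((xs - fx) ** 2 + (ys - fy) ** 2).min()
    t = min(d / d_max, 1.0)              # 0 at an edge, 1 far away
    return f_near + t * (f_far - f_near)
```

In a real implementation the nearest-edge distance would come from a precomputed distance transform rather than this brute-force minimum, so each touch sample is a constant-time lookup.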
An augmented reality learning space for PC DIY BIBAFull-Text 12
  Heien-Kun Chiang; Yin-Yu Chou; Long-Chyr Chang; Chun-Yen Huang; Feng-Lan Kuo; Hown-Wen Chen
Because of advances in computer hardware and software, Computer Aided Instruction (CAI) makes learning effective and interesting through the use of interactive multimedia technology. Recently, Augmented Reality (AR) technology has begun to surge as a new CAI tool because of its ability to create tangible and highly interactive user interfaces. In addition, recent studies have shown that the learning content, as well as the participation of learners in learning activities, can greatly affect learners' performance. However, studies integrating PC DIY (Personal Computer Do It Yourself) learning with AR technology are still few in the current literature. Therefore, this study proposes an AR learning space for PC DIY, whose system architecture and implementation are detailed. To evaluate the usability of the proposed system, a questionnaire was given to twenty-six graduate students after hands-on experience with the prototype. Results show that the proposed AR learning space for PC DIY offers students a motivating, pleasant, and satisfying learning experience. Limitations, conclusions, and future work are also discussed.
Audiolizing body movement: its concept and application to motor skill learning BIBAFull-Text 13
  Naoyuki Houri; Hiroyuki Arita; Yutaka Sakaguchi
We propose the concept of "audiolization of body movement," which transforms the posture and movement of the human body or human-controlled tools into acoustic signals and feeds them back to the user in real time. It aims at helping people become aware of their body and tool states, thereby assisting their motor skill learning. This paper describes features of the concept and introduces some demonstrative applications.
ClippingLight: a method for easy snapshots with projection viewfinder and tilt-based zoom control BIBAFull-Text 14
  Yasuhiro Kajiwara; Keisuke Tajimi; Keiji Uemura; Nobuchika Sakata; Shogo Nishida
In this paper, we present a novel method for taking photos with a hand-held camera. Cameras are being used for new purposes in daily life, such as augmenting human memory or scanning visual markers (e.g. QR codes), and opportunities to take snapshots are increasing. However, taking snapshots with today's hand-held cameras is troublesome, because the viewfinder forces the user to view the real space through the device, and controlling the zoom level while pressing the shutter-release button requires complicated operation. Therefore, we propose ClippingLight, a method that combines a Projection Viewfinder with tilt-based zoom control. It enables users to take snapshots with little effort. We implemented this method in a prototype real-world projection camera. We conducted a user study to confirm the effect of ClippingLight in situations where photos are taken one after another. As a result, we found that ClippingLight is more comfortable and requires less effort than today's typical cameras when a user takes photos quickly.
Designing the sports prosthetic leg BIBAFull-Text 15
  Shunji Yamanaka; Yuki Tsuji; Mariko Higaki; Hideka Suzuki
From a prosthesis hidden under clothing to one in the spotlight: our shared perception of prostheses is changing through sports. To give amputees a more beautiful running form, we have developed prostheses focused on usability, exterior design, and safety. Here we introduce how we designed the lower-limb prosthesis, knee joints, and air stabilizer for the carbon-fiber foot.
Head orientation sensing by a wearable device for assisted locomotion BIBAFull-Text 16
  Keisuke Takahashi; Hideki Kadone; Kenji Suzuki
In this paper, we propose a novel wearable sensor device for measuring head orientation and its position relative to the trunk in real time. It is known that in natural walking, human locomotion is preceded by changes in head orientation [1, 2, 3], and the walking direction can therefore be predicted by observing the head orientation. We have been developing a wearable sensing device for measuring head orientation, which enables real-time prediction of the future walking direction for assistive locomotion technologies such as exoskeleton robots and wheelchairs. Existing body posture measurement devices tend to be large and non-portable [4], so measurement in everyday spaces is still difficult. In contrast, the developed system enables wireless, location-independent measurement of head orientation and can be applied to assisted locomotion.
   In evaluating the accuracy of the developed device, we observed head anticipation during natural walking. Additionally, we compared head anticipation in natural walking and in electric wheelchair locomotion using the developed device, and we discuss a novel wheelchair control based on head orientation.
Earthlings Attack!: a ball game using human body communication BIBAFull-Text 17
  Masato Takahashi; Charith Lasantha Fernando; Yuto Kumon; Shuhey Takeda; Hideaki Nii; Takuji Tokiwa; Maki Sugimoto; Masahiko Inami
In this paper, we describe a ball game, "Earthlings Attack!", that uses the contact between users and an active ball device as an information channel to the game content. When the ball device, with a built-in transmitter, comes into contact with a user wearing a receiver, the system transmits information from the ball to the receiver through the user's body using human body communication. With this method, we aim to augment the interaction by presenting information on the user's body according to the contact between each ball device and each user. The system can also be used across a wide playing field on the same network by collectively managing the contact information of both devices and users.
Parasitic Humanoid: the wearable robotics as a behavioral assist interface like oneness between horse and rider BIBAFull-Text 18
  Taro Maeda; Hideyuki Ando; Hiroyuki Iizuka; Tomoko Yonemura; Daisuke Kondo; Masataka Niwa
The Parasitic Humanoid (PH) is a wearable robotic human interface for sampling, modeling, and assisting nonverbal human behavior. This anthropomorphic robot senses the behavior of the wearer and has internal models for learning the process of human sensorimotor integration, after which it begins to predict the wearer's next behavior using the learned models. When the reliability of the prediction is sufficient, the PH outputs the difference from the actual behavior as a motion request to the wearer, using motion induction based on sensory illusion. Through this symbiotic interaction, the internal model and the process of human sensorimotor integration approximate each other asymptotically. This process can transmit modalities such as sight, hearing, touch, force, and balance with human embodiment. Such synergistic multimodal communication between distant people wearing PHs can realize experience sharing, skill transmission, and human behavior support.
"Vection field" for pedestrian traffic control BIBAFull-Text 19
  Masahiro Furukawa; Hiromi Yoshikawa; Taku Hachisu; Shogo Fukushima; Hiroyuki Kajimoto
Visual signs and audio cues are commonly used for pedestrian control in general traffic research. Because pedestrians need to first acquire and then recognize such cues, time delays invariably occur between cognition and action. To cope with this issue, wearable devices have been proposed to control pedestrians more intuitively. However, attaching and removing such devices can be cumbersome and impractical. In this study, we propose a new visual navigation method for pedestrians using a "Vection Field," in which optical flow is presented on the ground. The optical flow is presented using a lenticular lens, a passive optical element that generates a visual stimulus based on a pedestrian's own movement, without an electrical power supply. In this paper we present a design for the fundamental visual stimulus and evaluate the principle of our proposed method for directional navigation. Results revealed that the optical flow of stripe and random-dot patterns displaced pedestrians' paths significantly, and that implementation with a lenticular lens is feasible.
Skill transmission for hand positioning task through view-sharing system BIBAFull-Text 20
  Keitaro Kurosaki; Hiroki Kawasaki; Daisuke Kondo; Hiroyuki Iizuka; Hideyuki Ando; Taro Maeda
In this paper, we describe skill transmission through our view-sharing system, which can mix or exchange first-person perspectives from the partner's exact viewpoint. Since a non-skilled person can see the first-person perspective of a skilled person, the motion of the non-skilled person is intuitively modified and supported. The task used for skill transmission is playing the theremin, which requires precise hand motions. As a result, we show that skill transmission happens more effectively with our view-sharing system than with the conventional method, side-by-side teaching. Effective ways of augmenting human ability are also discussed.
Smart skincare system: remote skincare advice system using life logs BIBAFull-Text 21
  Maki Nakagawa; Koji Tsukada; Itiro Siio
Many women find it difficult to maintain beautiful skin as skincare approaches require a great deal of effort, time, and special knowledge. Women often ask experts in cosmetic stores for skincare advice. However, this approach has limitations in terms of time, place, and privacy. To solve these problems, we propose a remote skincare advice system using life logs. This system helps users automatically log information related to their skin condition and share these data with skincare experts in order to obtain appropriate advice. First, we performed a feasibility study to select proper life log data for our system, and then we built prototype systems. Finally, we verified the effectiveness of our system through two studies.
Skill evaluation method based on variability of antagonism power of EMG BIBAFull-Text 22
  Yuta Takahashi; Masashi Toda; Shigeru Sakurazawa; Junichi Akita; Kazuaki Kondo; Yuichi Nakamura
We can more effectively take the physical skills of individual people into consideration from various points of view when we focus on evaluating their skills while exercising. We can focus on their maximum speed and power, their smoothness through a series of exercises, their instantaneous force, their repeatability, and their adjustability to disturbances such as surrounding people or the environment. Many exercise skills can be quantitatively evaluated relatively easily by carefully analyzing results and performance.
   However, it is difficult to evaluate the "repeatability" aspect of exercise skill when judging its degree from observation alone. A physical exercise process that contributes stable results, such as hitting a home run every time, would qualify as such a "skill". We believe that the acquisition of such a skill is very useful in fields such as physical training. Therefore, we examined the repeatability aspect from this point of view.
   We used an antagonism power index calculated from EMG to achieve this purpose. The index represents adjustments made to the output power of the muscles. We expected that the muscles' power-adjustment function would be very useful for evaluating the exercise skills of an individual. The antagonism power was calculated using quasi-muscular tension and a skeletal muscle model consisting of one joint and two muscles. We also compared an unskilled state with a skilled state. As a result, differences in exercise skill appeared as differences in antagonism power. We therefore consider antagonism power effective as a new exercise-skill evaluation index, which we define in this paper.
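For illustration, a widely used stand-in for such an antagonism measure is the sample-wise co-contraction index computed over flexor and extensor EMG envelopes. This is a generic sketch; the paper's antagonism power, derived from quasi-muscular tension in a one-joint two-muscle model, differs in detail:

```python
import numpy as np

def cocontraction_index(flexor, extensor, eps=1e-12):
    """Sample-wise co-contraction index for a one-joint, two-muscle
    model: 2 * min(flexor, extensor) / (flexor + extensor), in [0, 1].
    0 = purely one-sided activation, 1 = fully antagonistic activation.
    Inputs are rectified, normalized EMG envelopes of equal length."""
    f = np.asarray(flexor, dtype=float)
    e = np.asarray(extensor, dtype=float)
    return 2.0 * np.minimum(f, e) / (f + e + eps)
```

A skilled, repeatable movement would then show a consistent co-contraction profile across trials, while an unskilled one would show large trial-to-trial variability in this index.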
Coordinated behavior between visually coupled dyads BIBAFull-Text 23
  Hiroyuki Iizuka; Daisuke Kondo; Hiroki Kawasaki; Hideyuki Ando; Taro Maeda
We describe how visually coupled people start synchronized behavior under two visual coupling conditions: view-swapping and view-blending. In the view-swapping condition, the two people's first-person views are exchanged so that each sees the partner's view. The view-blending condition lets both people see a blend of the two views. We report the different coordination strategies for starting synchronization observed under each condition. In terms of the time required to start synchronization, view-swapping outperforms view-blending.
The emotional economy for the augmented human BIBAFull-Text 24
  Jean-Marc Seigneur
Happiness research findings are increasingly being taken into account in standard economics. However, most findings are based on a posteriori surveys trying to infer how happy people have been. In this paper, we argue that advances in wearable computing, especially brain-computer interfaces, can lead to realtime measurements of happiness. We then propose a new kind of economic model where people pay depending on the emotions they have experienced. We have combined current commercial off-the-shelf software and hardware components to create a proof-of-concept of the model.
Wearable MC system: a system for supporting MC performances using wearable computing technologies BIBAFull-Text 25
  Tomonari Okada; Tetsuya Yamamoto; Tsutomu Terada; Masahiko Tsukamoto
A master of ceremonies (MC) plays an important role in ensuring that events progress smoothly, because unexpected interruptions can make them unsuccessful. MCs must have various abilities, such as memorizing the content of given scenarios and managing problems that occur unexpectedly. Moreover, since unskilled MCs cannot intuit the atmosphere of the audience during an event, they cannot control it smoothly. Therefore, we propose a wearable system that solves these problems for MCs using wearable computing technologies. Our system has functions to support MCs in carrying out their duties smoothly, such as a robust voice-tracking function for reading scripts, a user interface that does not interrupt other tasks, and a function that enables MCs to intuitively grasp the atmosphere of the audience. We implemented a prototype of the wearable MC system and used it at several events. The results confirmed that it worked well and helped MCs carry out their official duties smoothly.
View sharing system for motion transmission BIBAFull-Text 26
  Daisuke Kondo; Keitaro Kurosaki; Hiroyuki Iizuka; Hideyuki Ando; Taro Maeda
We are developing a 'view sharing' system for supporting remote cooperative work. The system consists of video-see-through head-mounted displays (VST-HMDs) and motion trackers. It allows two users in remote places to share their first-person views with each other. The users can see what the other user is seeing, and furthermore can align their spatial perception, motion, and head movement. By sharing these sensations, non-verbal skills can be transmitted from a skilled person to a non-skilled person. Using this system, an expert in a remote place can instruct a non-skilled person to improve task performance.
HASC Challenge: gathering large scale human activity corpus for the real-world activity understandings BIBAFull-Text 27
  Nobuo Kawaguchi; Nobuhiro Ogawa; Yohei Iwasaki; Katsuhiko Kaji; Tsutomu Terada; Kazuya Murao; Sozo Inoue; Yoshihiro Kawahara; Yasuyuki Sumi; Nobuhiko Nishio
Understanding human activity through wearable sensors will enable the next generation of human-oriented computing. However, most research on activity recognition so far has been based on small numbers of test subjects and is not well adapted for real-world applications. To overcome this situation, we have started a project named "HASC Challenge" to collect a large-scale human activity corpus. By the end of 2010, through the collaboration of 20 teams, more than 6,700 accelerometer recordings from 540 subjects had been collected. We also developed a tool named "HASC Tool" for managing, evaluating, and collecting this large volume of activity sensor data.
Effective galvanic vestibular stimulation in synchronizing with ocular movement BIBAFull-Text 28
  Aru Sugisaki; Yuki Hashimoto; Tomoko Yonemura; Hiroyuki Iizuka; Hideyuki Ando; Taro Maeda
It is known that galvanic vestibular stimulation (GVS) can cause ocular movement. Our final goal is to use GVS to support ocular movements. However, the effects of GVS on ocular movement have mainly been investigated while gazing at a fixed point, despite the fact that we have two different strategies for following a moving target: saccades and smooth pursuit. The effect might differ because the two use different mechanisms. Therefore, this paper investigates the effects of GVS during saccades. As a result, we show that the effect of GVS depends on the timing at which GVS is given after the target marker moves.
Catchy account: a system for acquiring a realistic sense of expenditures BIBAFull-Text 29
  Mieko Nakamura; Homei Miyashita
In this paper, we propose a new household accounting system for realistically sensing expenditures. In 2D mode, expenditures are visualized through the placement of rectangles whose areas are proportional to the amount spent; thus, each item can be understood within the context of the total expenditure. In AR mode, spheres whose volumes are proportional to the amount spent appear to be floating in the camera image. The spheres fill the entire room and the size of expenditure can be realistically sensed. We designed this system in an attempt to "augment" the experience, so that the user can acquire a more realistic sense of expenditures.
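The size scalings follow directly from the proportionality claims: a square's side must grow with the square root of the amount, and a sphere's radius with the cube root. A small illustrative sketch (the function names and unit scale factors are our own arbitrary assumptions):

```python
import math

def rectangle_side(amount, unit_area=1.0):
    """Side of a square whose area is proportional to the amount
    spent (2D mode): area = amount * unit_area."""
    return math.sqrt(amount * unit_area)

def sphere_radius(amount, unit_volume=1.0):
    """Radius of a sphere whose volume is proportional to the amount
    spent (AR mode): (4/3) * pi * r^3 = amount * unit_volume."""
    volume = amount * unit_volume
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
```

Note the perceptual consequence of the cube-root law: an expenditure eight times larger produces a sphere only twice the radius, which is one reason volume-proportional display makes large expenditures feel concrete rather than overwhelming.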
Designing augmented environment with hybrid prototyping using virtual simulation and physical device BIBAFull-Text 30
  Koji Sekiguchi; Yasuto Nakanishi; Soh Kitahara; Takuro Ohmori; Daisuke Akatsuka
In this paper, we describe hybrid prototyping, which combines virtual simulation with physical devices, and we discuss the possibilities of hybrid prototyping through the simulation of an augmented environment.
Smart glasses: linking real live and social network's contacts by face recognition BIBAFull-Text 31
  Martin Kurze; Axel Roselius
Imagine you participate in a big meeting with several people remotely known to you. You remember their faces but not their names. This is where "Smart Glasses" supports you. Smart Glasses consists of a (wearable) display, a tiny camera, some local processing power, and an uplink to a backend service. The current implementation is based on Android and runs on smartphones; early research prototypes with different types of wearable displays have been evaluated as well. The system performs face detection and face tracking locally on the device (e.g. a smartphone) and then connects to a service running in the cloud to perform the actual face recognition based on the user's personal contact list (gallery). Recognized and identified persons are then displayed with their names and latest social network activities.
   The approach is directed towards an AR ecosystem for mobile use. Therefore, open interfaces are provided both on the device and to the service backend. We intend to take today's location-based AR systems one step further towards computer-vision-based AR to really fit the needs of today's and tomorrow's users.
Inducing human motion by visual manipulation BIBAFull-Text 32
  Shin Okamoto; Hiroki Kawasaki; Hiroyuki Iizuka; Takumi Yokosaka; Tomoko Yonemura; Yuki Hashimoto; Hideyuki Ando; Taro Maeda
This paper reports a study of augmenting human motions by manipulating the visual images displayed to users. The target motions include not only motions that can be seen in the subject's view (i.e. hand or foot motions) but also full-body motions that cannot be captured from the subject's own perspective. As a result, it is shown that the motions are modulated without any physical contact, solely by means of the manipulated images.
FlexTorque, FlexTensor, and HapticEye: exoskeleton haptic interfaces for augmented interaction BIBAFull-Text 33
  Dzmitry Tsetserukou
In order to realize haptic interaction (e.g., holding, pushing, and contacting an object) in a virtual environment and mediated haptic communication with human beings (e.g., handshaking), force feedback is required. Recently there has been a substantial need for, and interest in, haptic displays that can provide realistic and high-fidelity physical interaction in virtual environments. The aim of our research is to implement wearable haptic displays for the presentation of realistic feedback (kinesthetic stimuli) to the human arm. We developed the wearable devices FlexTorque and FlexTensor, which induce forces on the human arm and do not require holding any additional haptic interface in the hand. This is a new technology for Virtual Reality that allows the user to explore the surroundings freely. The concept of Karate (empty hand) Haptics that we propose is the opposite of conventional interfaces (e.g., Wii Remote, SensAble's PHANTOM, SPIDAR [1]), which require holding a haptic interface in the hand, thus restricting the motion of the fingers in midair. The HapticEye interface allows a blind person to explore an unknown environment in a natural and effective manner. The wearer can literally see the environment by hand.
Augmented gustation using electricity BIBAFull-Text 34
  Hiromi Nakamura; Homei Miyashita
In this paper, we propose a method to augment gustation and increase the number of perceptible tastes. Electric taste is the sensation elicited upon stimulating the tongue with electric current. We used this phenomenon to convey information that humans cannot perceive with their tongue. Our method involves changing the taste of foods and drinks by using electric taste. First, we propose a system to drink beverages using straws that are connected to an electric circuit. Second, we propose a system to eat foods using a fork or chopsticks connected to an electric circuit. Finally, we discuss augmented gustation using various sensors.
FutureBody: design of perception using the human body BIBAFull-Text 35
  Makoto Okamoto; Takanori Komatsu; Kiyohide Ito; Junichi Akita; Tetsuo Ono
We created a new interactive design concept, "FutureBody," that generates or augments new perceptions for users. The FutureBody concept consists of two elements, "active searching" and "embodiment": users actively search their environment, and the system provides indirect feedback that engages the user's body. We believe this concept will form the basis of a new perception design methodology.
Augmented perception through mirror worlds BIBAFull-Text 36
  Don Kimber; Jim Vaughan; Eleanor Rieffel
We describe a system that mirrors a public physical space into cyberspace to provide people with augmented awareness of that space. Through views on web pages, portable devices, or 'Magic Window' displays located in the physical space, remote people may 'look in' to the space, while people within the space are provided information not apparent through unaided perception. For example, by looking at a mirror display, people can learn how long others have been present, where they have been, etc. People in one part of a building can get a sense of the activities in the rest of the building, see who is present in an office, look in on a talk in another room, etc. We describe a prototype of such a system developed in our research lab and office space.
Training support system for violin bowing BIBAFull-Text 37
  Yuuki Tanjo; Junichi Ogawa; Sadanori Ito; Ryuuki Sakamoto; Ichiro Umata; Hiroshi Ando
The purpose of this paper is to propose a multimodal data viewer for teaching the violin. There are many studies of motor skills that use multimodal data captured from motion capture systems. Using motion capture data alone, however, it is difficult for experts to give explanations when teaching their skills to beginners. For example, not only the motion of the right arm and wrist but also shifting the pressure of the bow on the strings is a critical skill to master when playing the violin. These shifting pressures can be obtained with strain gauge sensors. In this paper, we propose a system designed to provide training support by visualizing composited motion data together with other sensor data, such as strain gauge readings. As an example, we present a violin teaching support system and experimental data.
An arm wrestling robot system for human upper extremity wear BIBAFull-Text 38
  Takashi Yamada; Tomio Watanabe
In this study, we develop a prototype arm wrestling robot system, called AssistRobot, worn on the human upper extremity. Further, we introduce into the system a force display response model based on the impact absorption of the human hand, proposed earlier by the authors. The effectiveness of the system for arm wrestling is demonstrated by sensory evaluation from the viewpoints of operability and enjoyment.