
Proceedings of the 2014 Augmented Human International Conference

Fullname: Proceedings of the 5th Augmented Human International Conference
Editors: Tsutomu Terada; Masahiko Inami; Kai Kunze; Takuya Nojima
Location: Kobe, Japan
Dates: 2014-Mar-07 to 2014-Mar-09
Publisher: ACM
Standard No: ISBN 978-1-4503-2761-9; hcibib: AH14
Papers: 57
  1. Touch
  2. Sports
  3. Look into Your Eyes
  4. Beyond Smartphones
  5. Look Inside Your Body
  6. Wearables
  7. Driving
  8. Super Perception
  9. Posters
  10. Demos

1. Touch

Haptic foot interface for language communication (Article 9)
  Erik Hill; Hiroyuki Hatano; Masahiro Fujii; Yu Watanabe
This paper examines the feasibility of language transmission through a haptic foot interface. Several devices were tested to explore the placement, timing, and complexity of an array of vibrating electromagnets; the optimal device used an array of ten electromagnets placed under the arch of one foot. Moderate proficiency was reached after only an hour of training, at which point the subjects were able to read a short e-mail through the device.
A half-implant device on fingernails (Article 10)
  Emi Tamaki; Ken Iwasaki
Hand gesture feedback systems using tactile or visual information can only be used in limited situations because of device constraints such as the need for a battery. In this paper, a half-implant device is proposed. The half-implant device consists of a radio frequency (RF) receiving antenna, small electronic parts, and UV gel. The UV gel is used to glue the parts onto the user's filed nail and to cover them so that they are waterproof. The device receives power from the RF antenna; therefore, it does not require a battery to function. The device notifies the user whether the finger is in a gesture space by lighting an LED or activating a vibration motor. The primary benefit of this device is that the user can feel hand gesture feedback anytime and anywhere. The device can remain on the user's fingernail for approximately three weeks. To verify the device's influence on the user's gesture task, we conducted a preliminary user study. The experiment revealed that the tactile notification reduced the task time by 2.62 seconds compared with the condition with no feedback. We also investigated users' acceptance of this kind of technology. It revealed that this technology is acceptable only when it can be removed at the user's will and used in daily life.
Anywhere surface touch: utilizing any surface as an input area (Article 37)
  Takehiro Niikura; Yoshihiro Watanabe; Masatoshi Ishikawa
The current trend towards smaller and smaller mobile devices may cause considerable difficulties in using them. In this paper, we propose an interface called Anywhere Surface Touch, which allows any flat or curved surface in a real environment to be used as an input area. The interface uses only a single small camera and a contact microphone to recognize several kinds of interaction between the fingers of the user and the surface. The system recognizes which fingers are interacting and in which direction the fingers are moving. Additionally, the fusion of vision and sound allows the system to distinguish the contact conditions between the fingers and the surface. Evaluation experiments showed that users became accustomed to our system quickly, soon being able to perform input operations on various surfaces.
Illusion cup: interactive controlling of beverage consumption based on an illusion of volume perception (Article 39)
  Eiji Suzuki; Takuji Narumi; Sho Sakurai; Tomohiro Tanikawa; Michitaka Hirose
This paper proposes a system and an interaction design for implicitly influencing the satisfaction we experience while drinking a beverage and for controlling beverage consumption by creating a volume perception illusion using augmented reality. Recent studies have revealed that consumption of food and beverages is influenced by both their actual volume and external factors during eating/drinking. We focus on the fact that the shape of the beverage container influences beverage consumption. Based on this fact, we constructed a system that changes the apparent size of the cup. We investigated how beverage consumption changes when using the proposed system. The results showed that subjects consumed significantly greater amounts when they drank from a visually lengthened cup and significantly smaller amounts when they drank from a visually shortened cup. This technique can be used for daily health-care applications with wearable computers.
Let me grab this: a comparison of EMS and vibration for haptic feedback in free-hand interaction (Article 46)
  Max Pfeiffer; Stefan Schneegass; Florian Alt; Michael Rohs
Free-hand interaction with large displays is getting more common, for example in public settings and exertion games. Adding haptic feedback offers the potential for more realistic and immersive experiences. While vibrotactile feedback is well known, electrical muscle stimulation (EMS) has not yet been explored in free-hand interaction with large displays. EMS offers a wide range of different strengths and qualities of haptic feedback. In this paper we first systematically investigate the design space for haptic feedback. Second, we experimentally explore differences between strengths of EMS and vibrotactile feedback. Third, based on the results, we evaluate EMS and vibrotactile feedback with regard to different virtual objects (soft, hard) and interaction with different gestures (touch, grasp, punch) in front of a large display. The results provide a basis for the design of haptic feedback that is appropriate for the given type of interaction and the material.

2. Sports

Video generation method based on user's tendency of viewpoint selection for multi-view video contents (Article 1)
  Yuki Muramatsu; Takatsugu Hirayama; Kenji Mase
A multi-view video makes it possible for users to watch video content, for example, live concerts or sports events, more freely from various viewpoints. However, the users need to select a camera that captures a scene from their own preferred viewpoint at each event. In this paper, we propose a video generation method based on the user's View Tendency, which is the tendency of viewpoint selection according to the user-dependent interest for multi-view video content. The proposed method learns the View Tendency with a Support Vector Machine (SVM) using several measures such as the geometric features of an object. Then, this method estimates the consistency of each viewpoint with the learned View Tendency and integrates the estimation results to obtain a temporal sequence of viewpoints. The proposed method reduces the users' burden of viewpoint selection and lets them watch a viewpoint sequence that reflects their interest, serving as viewing assistance for multi-view video content.
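The viewpoint-selection step described above (score each camera's frames against a learned View Tendency and pick the best per time step) can be illustrated with a minimal sketch; scikit-learn, the feature layout, and the training data below are assumptions for illustration, not the authors' actual pipeline.

    # Sketch: score candidate viewpoints with an SVM trained on viewer-preferred frames.
    # Features, labels, and data are hypothetical placeholders.
    import numpy as np
    from sklearn.svm import SVC

    # Rows: geometric features of frames (e.g. object size, centering, occlusion, motion);
    # labels: 1 = frame from a viewpoint the viewer chose, 0 = not chosen.
    X_train = np.random.rand(200, 4)
    y_train = np.random.randint(0, 2, 200)
    clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

    def select_viewpoints(per_camera_features):
        """per_camera_features: list of (T, 4) arrays, one per camera.
        Returns the index of the best-scoring camera for each of the T time steps."""
        scores = np.stack([clf.predict_proba(f)[:, 1] for f in per_camera_features])
        return scores.argmax(axis=0)

    sequence = select_viewpoints([np.random.rand(100, 4) for _ in range(3)])  # 3 cameras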
HoverBall: augmented sports with a flying ball (Article 13)
  Kei Nitta; Keita Higuchi; Jun Rekimoto
Balls are the most popular equipment for sports. To play with balls, certain physical methods, or "vocabularies," such as throwing, hitting, spinning, or kicking have been developed, reflecting the fact that balls obey physical dynamics. This feature forms the foundation of ball-based sports; however, we consider that it also limits the possibility of such sports. For instance, the speed of a ball may be too fast for small children, senior people, or people with physical disabilities. In this paper, we propose a flying ball based on quadcopter technology. This ball has the ability to hover and to change its location and behavior based on the context of the sport or game. With this technology, the physical dynamics of a ball can be re-programmed by sports designers, and new ball-playing vocabularies, such as hovering, anti-gravity, proximity, or remote manipulation, can be introduced to extend the ways in which people interact with balls. In this paper, we introduce this concept as a method of augmenting sports, present our initial flying ball system, which consists of a grid shell enclosing a micro quadcopter, and demonstrate new sports interactions with the ball.
PhotoelasticBall: a touch detectable ball using photoelasticity (Article 16)
  Kei Nitta; Toshiki Sato; Hideki Koike; Takuya Nojima
Balls are a key piece of equipment for sports and entertainment such as juggling. Much research has therefore been conducted on developing next-generation balls to enhance ball-related entertainment. Such balls offer plenty of special effects such as sound and light, but have limited input methods. Those effects are often controlled through the ball's native motion by using accelerometers and similar sensors. However, as the variety of special functions of such balls increases, more appropriate input methods are required. In this research, we developed a force vector sensor sheet unit that can be mounted on the surface of a ball. In this paper, we report the details of the sensing system and its experimental results.
Around me: a system with an escort robot providing a sports player's self-images (Article 41)
  Junya Tominaga; Kensaku Kawauchi; Jun Rekimoto
Providing self-images is an effective approach to identifying sports players' body movements that should be corrected. Traditional means of providing self-images, however, such as mirrors and videos, are not effective in terms of mobility and immediacy. In this paper we propose a system, Around Me, that provides self-images through a display attached to an escort robot running in front of the user. This system captures the user's posture from the front and recognizes his/her position relative to the robot. The user's movements are synchronized with the robot's movements because the robot's movements are determined by the user's location. In this research we developed an experimental prototype specialized for assistance in jogging. In pilot studies we observed that the ability of Around Me to provide real-time images can potentially encourage the user to improve his/her jogging form, which is essential for performance and for injury prevention. In addition, by comparing a robot running in front of the user with one following behind the user, we clarified the frontal robot's characteristics: the robot can control the jogging speed, and the user needs to adjust the robot's steering and the distance between the robot and himself/herself as required. We also found indications that Around Me can, with various jogging support functions, encourage the user to practice jogging with ideal form.
TAMA: development of trajectory changeable ball for future entertainment (Article 48)
  Tomoya Ohta; Shumpei Yamakawa; Takashi Ichikawa; Takuya Nojima
In this paper, we propose a ball interface, "TAMA" (Trajectory chAnging, Motion bAll; "tama" means a ball in Japanese), that can change its own trajectory dynamically. Naturally, a ball cannot go against the laws of physics. However, if the trajectory of a ball can be changed during flight, that virtually means the ball can fly against physical laws. This should enhance the pleasure of ball-related games such as baseball, basketball, and juggling. In this research, we used the force of compressed gas expelled from within the ball itself to change the ball's trajectory. Previously, we developed a ball prototype equipped with a gas-jet unit. However, this initial prototype was too heavy for use in amusement. Additionally, there was no control of the timing of the jet, and the prototype was wired for its power supply. In this paper, we introduce the latest prototype of TAMA, which trims the weight and adds new functionality. We discuss the feasibility of this system through experimentation in changing the ball's trajectory during downward flight.

3. Look into Your Eyes

Tearsense: a sensor system for illuminating and recording teardrops (Article 2)
  Marina Mitani; Yasuaki Kakehi
People shed tears when they become emotional watching movies or reading novels. In this research, we propose a sensor system for illuminating and recording teardrops and describe its applications pertaining to entertainment content such as movies and novels. A tape sensor is attached below the eyes, which is controlled by a microcontroller and a PC. Two parallel lines are drawn on the tape with conductive ink, and when a teardrop runs over these two lines, a microcontroller senses the change in electrical voltage. In this paper, we propose a system that provides a new way to experience and communicate reactions to entertainment content by converting tears into light or sound in real-time, or by sharing with others archives of the content that has moved us to tears.
Present information through afterimage with eyes closed (Article 3)
  Kozue Nojiri; Suzanne Low; Koki Toda; Yuta Sugiura; Youichi Kamiyama; Masahiko Inami
We propose a display method using the afterimage effect to illustrate images so that people can perceive the images with their eyes closed. The afterimage effect is an everyday phenomenon that we often experience, and it is commonly utilized in many practical situations such as movie creation. However, many of us are not aware of it. We strongly believe that this afterimage effect is an interesting phenomenon for displaying information to users. We conducted an experiment to compare the duration of the afterimage effect with the duration of participant exposure to the image projection. We also prototyped a wearable display to give more flexibility and mobility to our proposal. With this, one can utilize this method for various applications such as confirming a password at a bank.
Eyefeel & EyeChime: a face to face communication environment by augmenting eye gaze information (Article 7)
  Asako Hosobori; Yasuaki Kakehi
In face-to-face communication, humans convey nonverbal information to supplement verbal language. Eye gaze in particular is a critical element. While a variety of studies on communication support focusing on eye gaze have been performed in the past, most of these studies have aimed to support communication between people in remote locations. In contrast, this study aims to extend and transform gaze to lower the hurdles of establishing communication, or to induce a new form of communication through gaze, in face-to-face settings. As a specific proposal, in this study we developed two types of systems: Eyefeel, which converts and delivers the gaze of another person as tactile information, and EyeChime, which produces a spatial presentation by converting events such as gazing at another person or making eye contact into sound. A preliminary study suggested that these interfaces induced communication through active use of eye gaze, gave users the opportunity to increase the amount of time gazes were directed at the conversation partner, and lowered the hurdles to making eye contact. In this study, we discuss the design and implementation of the system as well as the details of its use.
In the blink of an eye: combining head motion and eye blink frequency for activity recognition with Google Glass (Article 15)
  Shoya Ishimaru; Kai Kunze; Koichi Kise; Jens Weppner; Andreas Dengel; Paul Lukowicz; Andreas Bulling
We demonstrate how information about eye blink frequency and head motion patterns derived from Google Glass sensors can be used to distinguish different types of high-level activities. While it is well known that eye blink frequency is correlated with user activity, our aim is to show that (1) eye blink frequency data from an unobtrusive, commercial platform which is not a dedicated eye tracker is good enough to be useful and (2) adding head motion pattern information significantly improves the recognition rates. The method is evaluated on a data set from an experiment containing five activity classes (reading, talking, watching TV, mathematical problem solving, and sawing) performed by eight participants, showing 67% recognition accuracy for eye blinking only and 82% when extended with head motion patterns.
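As a rough illustration of the kind of window-level features such a classifier could combine (blink rate plus head-motion statistics), here is a small sketch; the input layout and the feature choice are assumptions, not the study's exact pipeline.

    # Sketch: per-window features combining blink frequency and head-motion energy.
    import numpy as np

    def window_features(blink_times, gyro, window=(0.0, 60.0)):
        """blink_times: blink timestamps in seconds within the window;
        gyro: (N, 3) angular-rate samples from the head-worn IMU."""
        start, end = window
        blink_rate = np.sum((blink_times >= start) & (blink_times < end)) / (end - start)
        motion = np.linalg.norm(gyro, axis=1)
        return np.array([blink_rate, motion.mean(), motion.var()])

    # Feature vectors from labeled windows can then feed any standard classifier
    # (e.g. k-NN or an SVM) to separate reading, talking, watching TV, and so on.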
Boundary conditions for information visualization with respect to the user's gaze (Article 42)
  Marcus Tönnis; Gudrun Klinker
Gaze tracking in Augmented Reality is mainly used to trigger buttons and access information. Such selectable objects are usually placed in the world or in screen coordinates of a head- or hand-mounted display. Yet, no work has investigated options to place information with respect to the line of sight.
   This work presents our first steps towards gaze-mounted information visualization and interaction, determining boundary conditions for such an approach. We propose a general concept for information presentation at an angular offset to the line of sight. A user can look around freely, yet having information attached nearby the line of sight. Whenever the user wants to look at the information and does so, the information is placed directly at the axis of sight for a short time.
   Based on this concept we investigate how users understand frames of reference, specifically, whether users relate directions and alignments to head or world coordinates. We further investigate whether information may have a preferred motion behavior. Prototypical implementations of three variants are presented to users in guided interviews. The three variants resemble a rigid offset and two different floating motion behaviors of the information. The floating algorithms implement an inertia-based model and either allow the user's gaze to surpass the information or to push the information with the gaze. Testing our prototypes yielded the findings that users strongly prefer information that maintains a relation to the world and that less extra motion is preferred.

4. Beyond Smartphones

Pressure detection on mobile phone by camera and flash (Article 11)
  Suzanne Low; Yuta Sugiura; Dixon Lo; Masahiko Inami
This paper proposes a method to detect pressure exerted on a mobile phone by utilizing the back camera and flash on the phone. There is a gap between the palm and the camera when the phone is placed on the palm. This allows the light from the flashlight to be reflected to the camera. However, when pressure is applied to the phone, the gap shrinks, reducing the brightness captured by the camera. This phenomenon is applied to detect two gestures: pressure applied on the screen and pressure applied when the user squeezes the phone. We also conducted an experiment to measure the change in brightness level depending on the amount of force exerted on the phone when it is placed in two positions: parallel to the palm and perpendicular to the palm. The results show that when the force increases, the brightness level decreases. Using the phone's ability to detect fluctuations in brightness, various pressure interaction applications, such as for gaming purposes, may be developed.
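The sensing principle (mean back-camera brightness drops as the palm closes the gap around the flash) can be reduced to a toy mapping; the calibration constants below are illustrative assumptions only.

    # Sketch: map mean camera brightness to a coarse pressure level in [0, 1].
    import numpy as np

    BRIGHTNESS_NO_PRESS = 180.0   # mean brightness with no pressure (hypothetical)
    BRIGHTNESS_FULL_PRESS = 60.0  # mean brightness under a firm squeeze (hypothetical)

    def pressure_level(frame):
        """frame: grayscale back-camera image as a 2-D uint8 array."""
        brightness = float(np.mean(frame))
        level = (BRIGHTNESS_NO_PRESS - brightness) / (BRIGHTNESS_NO_PRESS - BRIGHTNESS_FULL_PRESS)
        return float(np.clip(level, 0.0, 1.0))   # 0 = no pressure, 1 = full press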
Emotional priming of mobile text messages with ring-shaped wearable device using color lighting and tactile expressions (Article 14)
  Gilang Andi Pradana; Adrian David Cheok; Masahiko Inami; Jordan Tewell; Yongsoon Choi
In this paper, as a hybrid approach to place a greater emphasis on existing cues in Computer Mediated Communication (CMC), the authors explore the emotional augmentation benefit of vibro-tactile stimulation, color lighting, and simultaneous transmission of both signals to accompany text messages. Ring U, a ring-shaped wearable system aimed at promoting emotional communication between people using vibro-tactile and color lighting expressions, is proposed as the implementation method. The results of the experiment show that non-verbal stimuli can prime the emotion of a text message, and that the emotion can be driven in the direction of the emotional characteristic of the stimuli. Positive stimuli can prime the emotion to a more positive valence, and negative stimuli can invoke a more negative valence. Another finding from the experiment is that, compared to their effect on valence, touch stimuli have more effect on the activity level.
Using smart phone mobility traces for the diagnosis of depressive and manic episodes in bipolar patients (Article 36)
  Agnes Gruenerbl; Venet Osmani; Gernot Bahle; Jose C. Carrasco; Stefan Oehler; Oscar Mayora; Christian Haring; Paul Lukowicz
In this paper we demonstrate how smart phone sensors, specifically inertial sensors and GPS traces, can be used as an objective "measurement device" for aiding psychiatric diagnosis. In a trial with 12 bipolar disorder patients conducted over a total (summed over all patients) of over 1000 days (on average 12 weeks per patient), we achieved state change detection with a precision/recall of 96%/94% and a state recognition accuracy of 80%. The paper describes the data collection, which was conducted as a medical trial in a real-life everyday environment in a rural area, outlines the recognition methods, and discusses the results.

5. Look Inside Your Body

Unloading muscle activation enhances force perception (Article 4)
  Yuichi Kurita; Jumpei Sato; Takayuki Tanaka; Minoru Shinohara; Toshio Tsuji
In this study, we examined (1) the weight discrimination capability of human subjects with different body postures, and (2) the sensorimotor capability of human subjects when using muscle-assistive equipment. According to previous studies, humans can sense the intensity of an external stimulus more accurately when voluntary muscle activation is lower. We developed a three-dimensional musculoskeletal model based on an upper extremity model, and calculated the muscle activity required to maintain a posture. We also conducted human experiments and revealed that the weight discrimination capability improves as voluntary muscle activation decreases. Based on the experimental results, we developed muscle-assistive equipment that unloads the weight of one's upper limb and evaluated the improvement in sensorimotor capability when using the equipment. The results show that assisting the muscle load is effective in improving sensorimotor performance.
On the tip of my tongue: a non-invasive pressure-based tongue interface (Article 12)
  Jingyuan Cheng; Ayano Okoso; Kai Kunze; Niels Henze; Albrecht Schmidt; Paul Lukowicz; Koichi Kise
Mobile and wearable devices have become pervasive in daily life. The dominant input techniques for mobile and wearable technology are touch and speech. Both approaches are not appropriate in all settings. Therefore, we propose a novel interface that is controlled through the tongue. It is based on an array of textile pressure sensors attached to the user's cheek. It can be easily integrated into helmets or face masks in a non-invasive way. In an initial study, we investigate gestures for a tongue-based interface. Six participants repeatedly performed five simple tongue gestures. We show that the gestures can be recognized with 98% accuracy. Based on feedback from participants, we discuss potential use cases and provide an outlook on further improvement of the system.
Towards emotional regulation through neurofeedback (Article 40)
  Marc Cavazza; Fred Charles; Gabor Aranyi; Julie Porteous; Stephen W. Gilroy; Gal Raz; Nimrod Jakob Keynan; Avihay Cohen; Gilan Jackont; Yael Jacob; Eyal Soreq; Ilana Klovatch; Talma Hendler
This paper discusses the potential of Brain-Computer Interfaces based on neurofeedback methods to support emotional control and pursue the goal of emotional control as a mechanism for human augmentation in specific contexts. We illustrate this discussion through two proof-of-concept, fully-implemented experiments: one controlling disposition towards virtual characters using pre-frontal alpha asymmetry, and the other aimed at controlling arousal through activity of the amygdala. In the first instance, these systems are intended to explore augmentation technologies that would be incorporated into various media-based systems rather than permanently affect user behaviour.

6. Wearables

Evaluating effect of types of instructions for gesture recognition with an accelerometer (Article 6)
  Kazuya Murao; Tsutomu Terada
Mobile phones and remotes for video games that use gesture recognition technologies enable easy and intuitive operations such as scrolling a browser and drawing objects. Gesture input has the advantage of richer expressive power over conventional interfaces, but it is difficult to share a gesture trajectory with other people through writing or verbally. In this paper, we evaluate how user gestures change according to the type of instruction. We obtained acceleration data for 10 kinds of gestures instructed through three types of media (texts, figures, and videos), totalling 44 patterns, from 13 test subjects, for a total of 2,630 data samples.
On achieving dependability for wearable computing by device bypassing (Article 8)
  Tsutomu Terada; Seiji Takeda; Masahiko Tsukamoto; Yutaka Yanagisawa; Yasue Kishino; Takayuki Sugyama
When wearable computers are used for medical operations and aviation safety, disrupted information presentation due to hardware/software problems can have serious consequences, including medical accidents. We propose a mechanism that maintains information presentation in such situations by I/O device bypassing. In our method, I/O devices communicate directly with other devices if a system failure happens. The proposed method selects appropriate data converters by considering recognizability in order to present information that is easily understandable to users. We confirmed that the proposed method works effectively by implementing several applications with our prototype system.
Representing indoor location of objects on wearable computers with head-mounted displays (Article 18)
  Markus Funk; Robin Boldt; Bastian Pfleging; Max Pfeiffer; Niels Henze; Albrecht Schmidt
With head-mounted displays becoming more ubiquitous, the vision of extending human object search capabilities using a wearable system becomes feasible. Wearable cameras can recognize known objects and store their indoor location. But how can the location of objects be represented on a wearable device like Google Glass and how can the user be navigated towards the object? We implemented a prototype on a wearable computer with a head-mounted display and compared a last seen image representation against a map representation of the location. We found a significant interaction effect favoring the last seen image with harder hidden objects. Additionally, all objective and subjective measures generally favor the last seen image. Results suggest that a map representation is more helpful for gross navigation and an image representation is more supportive for fine navigation.
An information presentation method for head mounted display considering surrounding environments (Article 45)
  Masayuki Nakao; Tsutomu Terada; Masahiko Tsukamoto
When wearing a head mounted display (HMD), the degree of concentration on the HMD varies depending on the surrounding environment. In this work, we developed an information presentation method that considers cognitive cost and safety in wearable computing environments. The proposed method changes its information presentation based on the possible gazing time, which varies in accordance with the surrounding environment and user context. We used an eye tracker to measure the relationship between eyestrain and watching an HMD and also clarified the relationship between gaze time and the surrounding environment. We then used the results to develop an algorithm to change the information presentation method. Evaluation results revealed cases in which it was difficult and dangerous to gaze at an HMD and therefore necessary to change the information presentation.
Implementation and evaluation on a concealed interface using abdominal circumference (Article 49)
  Hirotaka Sumitomo; Takuya Katayama; Tsutomu Terada; Masahiko Tsukamoto
The downsizing of computers enables users to operate computers anywhere. Generally, since a user operates his/her computer with his/her hands, the operation is visible to surrounding people. On the other hand, there are demands for hidden operation of the computer in situations such as face-to-face communication and during a meeting. Operating a computer in those situations often gives a bad impression to others and disrupts communication. Therefore, in this study, we propose an interface that uses the user's abdominal circumference as input. The motion of the abdomen is hard for surrounding people to notice, and a user can move the abdomen independently of other body parts. We implemented a prototype input method that uses the velocity of movement and the absolute size of the abdomen as input. We then evaluated the granularity, reproducibility, parallelism, resistance to the environment, confidentiality, and resistance to mis-recognition of the proposed interface.

7. Driving

Multi-touch steering wheel for in-car tertiary applications using infrared sensors (Article 5)
  Shunsuke Koyama; Yuta Sugiura; Masa Ogata; Anusha Withana; Yuji Uema; Makoto Honda; Sayaka Yoshizu; Chihiro Sannomiya; Kazunari Nawa; Masahiko Inami
This paper proposes a multi-touch steering wheel for in-car tertiary applications. Existing interfaces for in-car applications such as buttons and touch displays have several operating problems. For example, drivers have to consciously move their hands to the interfaces, as the interfaces are fixed in specific positions. Therefore, we developed a steering wheel where touch positions can correspond to different operating positions. This system can recognize hand gestures at any position on the steering wheel by utilizing 120 infrared (IR) sensors embedded in it. The sensors are lined up in an array surrounding the whole wheel. A Support Vector Machine (SVM) algorithm is used to learn and recognize the different gestures through the data obtained from the sensors. The gestures recognized are flick, click, tap, stroke, and twist. Additionally, we implemented a navigation application and an audio application that utilize the torus shape of the steering wheel. We conducted an experiment to observe the ability of our proposed system to recognize flick gestures at three positions. The results show that an average of 92% of flick gestures could be recognized.
CarCast: a framework for situated in-car conversation sharing (Article 17)
  Kohei Matsumura; Yasuyuki Sumi
In this paper, we propose a situated in-car conversation sharing framework. People often have conversations in the car. In those conversations, people talk about points of interest that they have just passed. These conversations may contain valuable information because they reflect situations such as the season and the passengers' own experiences. However, in-car conversations are transient and cannot be shared with others. We therefore aim to share these valuable in-car conversations with others. This paper describes the framework of our in-car conversation sharing system and discusses the challenges in realizing it.
Augmenting expressivity of artificial subtle expressions (ASEs): preliminary design guideline for ASEs (Article 38)
  Takanori Komatsu; Kazuki Kobayashi; Seiji Yamada; Kotaro Funakoshi; Mikio Nakano
Unfortunately, there is little hope that information-providing systems will ever be perfectly reliable. The results of some studies have indicated that imperfect systems can reduce users' cognitive load in interacting with them by expressing their level of confidence to users. We focus on artificial subtle expressions (ASEs), machine-like artificial sounds added just after a system's suggestions to express confidence information to users, because of their simplicity and efficiency. The purpose of the work reported here was to develop a preliminary design guideline for ASEs in order to determine the expandability of ASEs. We believe that augmenting the expressivity of ASEs would lead to reducing users' cognitive load in processing the information provided by the systems, and that this would also lead to augmenting users' various cognitive capacities. Our experimental results showed that ASEs with decreasing pitch conveyed a low confidence level to users. This result was used to formulate a concrete design guideline for ASEs.

8. Super Perception

What's on your mind?: mental task awareness using single electrode brain computer interfaces (Article 43)
  Alireza Sahami Shirazi; Mariam Hassib; Niels Henze; Albrecht Schmidt; Kai Kunze
Recognizing and summarizing a person's activities has proven effective for increasing self-awareness and enables users to improve their habits. Reading improves one's language skills and periodic relaxing improves one's health. Recognizing these activities and conveying the time spent on them would make it possible to ensure that users read and relax for an adequate time. Most previous attempts at activity recognition deduce mental activities by requiring expensive or bulky hardware or by monitoring behavior from the outside. Not all mental activities can, however, be recognized from the outside. Whether a person is sleeping, relaxing, or thinking intensively about a problem can hardly be differentiated by observing outward reactions. In contrast, we use simple wearable off-the-shelf single-electrode brain-computer interfaces. These devices have the potential to directly recognize users' mental activities. Through a study with 20 participants, we collected data for five representative activities. We describe the dataset collected and derive potential features. Using a Bayesian classifier, we show that reading and relaxing can be recognized with 97% and 79% accuracy. We discuss how sensory tasks associated with different brain lobes can be classified using a single dry-electrode BCI.
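A minimal sketch of the classification step, a Bayesian (Gaussian naive Bayes) classifier over single-electrode features, is shown below; the band-power feature layout and the random data are placeholders, not the study's dataset.

    # Sketch: Gaussian naive Bayes over EEG band-power features from one electrode.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    # Each row: [delta, theta, alpha, beta] band power for one time window (placeholder data).
    X = np.random.rand(400, 4)
    y = np.random.choice(["reading", "relaxing"], size=400)

    clf = GaussianNB()
    print(cross_val_score(clf, X, y, cv=5).mean())  # near chance here; real EEG features are needed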
JackIn: integrating first-person view with out-of-body vision generation for human-human augmentation (Article 44)
  Shunichi Kasahara; Jun Rekimoto
JackIn is a new human-human communication framework for connecting two or more people. With first-person view video streaming from a person (called Body) wearing a transparent head-mounted display and a head-mounted camera, the other person (called Ghost) participates in the shared first-person view. With JackIn, people's activities can be shared and assistance or guidance can be given through other peoples expertise. This can be applied to daily activities such as cooking lessons, shopping navigation, education in craft-work or electrical work, and sharing experiences of sporting and live events. For a better viewing experience with first-person view, we developed the out-of-body view in which first-person images are integrated to construct a scene around a Body, and a Ghost can virtually control the viewpoint to look around the space surrounding the Body. We also developed a tele-pointing gesture interface. We conducted an experiment to evaluate how effective this framework is and found that Ghosts can understand the spatial situation of the Body.
SpiderVision: extending the human field of view for augmented awareness (Article 47)
  Kevin Fan; Jochen Huber; Suranga Nanayakkara; Masahiko Inami
We present SpiderVision, a wearable device that extends the human field of view to augment a user's awareness of things happening behind one's back. SpiderVision leverages a front and back camera to enable users to focus on the front view while employing intelligent interface techniques to cue the user about activity in the back view. The extended back view is only blended in when the scene captured by the back camera is analyzed to be dynamically changing, e.g. due to object movement. We explore factors that affect the blended extension, such as view abstraction and blending area. We contribute results of a user study that explore 1) whether users can perceive the extended field of view effectively, and 2) whether the extended field of view is considered a distraction. Quantitative analysis of the users' performance and qualitative observations of how users perceive the visual augmentation are described.

9. Posters

Two-level fast-forwarding using speech detection for rapidly perusing video (Article 19)
  Kazutaka Kurihara; Yoko Sasaki; Jun Ogata; Masataka Goto
In video content such as feature films, the main themes and messages are often sufficiently conveyed through dialogue and narration. To augment human capability to consume video content, here we propose a system for watching such videos at very high speed while ensuring that speech is still comprehensible. Specifically, we employ a purpose-built automatic speech detector to realize two-level fast-forwarding for a wide variety of video content: very fast during segments without speech, and understandably fast during segments with speech. In our experiments, practical performance was achieved by frame-by-frame audio classification using Gaussian mixture models trained on subtitle information from 120 commercial DVD movies.
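Once each audio frame is labeled speech or non-speech, the two-level policy itself is a one-line decision; the sketch below uses scikit-learn GMMs and illustrative playback rates as stand-ins for the paper's trained models.

    # Sketch: two-level fast-forwarding driven by frame-by-frame GMM speech detection.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    SPEED_SPEECH = 1.5      # "understandably fast" during speech (illustrative)
    SPEED_NO_SPEECH = 8.0   # "very fast" elsewhere (illustrative)

    # One GMM per class, fit on per-frame audio features (random stand-ins here).
    gmm_speech = GaussianMixture(n_components=4).fit(np.random.rand(500, 13))
    gmm_other = GaussianMixture(n_components=4).fit(np.random.rand(500, 13))

    def playback_rate(frame_features):
        """frame_features: feature vector (e.g. 13 MFCCs) for one audio frame."""
        f = np.asarray(frame_features).reshape(1, -1)
        is_speech = gmm_speech.score_samples(f)[0] > gmm_other.score_samples(f)[0]
        return SPEED_SPEECH if is_speech else SPEED_NO_SPEECH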
Real-time typing action detection in a 3D pointing gesture interface (Article 20)
  Risa Ishijima; Kayo Ogawa; Masakazu Higuchi; Takashi Komuro
In this paper, we propose a method to detect typing actions in the air by applying principal component analysis and linear discriminant analysis to time-series finger scale data in real time. The proposed method was implemented in an experimental in-air typing interface system, and a preliminary user study using a keyboard typing application was conducted. The results showed a typing-action detection rate of about 95% and an input recognition rate of more than 80%.
The health bar: a persuasive ambient display to improve the office worker's well being (Article 21)
  Victor Mateevitsi; Khairi Reda; Jason Leigh; Andrew Johnson
Recent research studies have shown the serious health risks associated with prolonged sitting. Standing up and walking on a regular basis has been shown to improve an office worker's well-being. Small behavioral changes like these are the basis of preventive medicine, and even though they appear easy to follow, in practice they are difficult to apply. Advances in technology miniaturization and smart sensors are paving the way for the development of devices that empower preventive medicine. These devices use the power of persuasion to help people change behavior and maintain well-being. They act as an ambient personal 'coach' that monitors and intervenes at the right time. In this paper we present the HealthBar, an ambient persuasive device that helps users break up their prolonged sitting habits.
Single-trial decoding for an event-related potential-based brain-computer interface (Article 22)
  Yaming Xu; Yoshikazu Nakajima
We propose a trial-based decoding method for an event-related potential (ERP)-based brain-computer interface (BCI). In contrast to conventional methods that decode based on a single ERP epoch, we integrate within-trial ERP epochs and combine them with a 5-gram language model to ease BCI spelling. Experimental results on 10 subjects show that the proposed method improves the ERP decoding accuracy by 18.05% when compared with a state-of-the-art method.
BubStack: a self-revealing chorded keyboard on touch screens to type for remote wall displays (Article 23)
  Hyeonjoong Cho; Chulwon Kim
The common soft keyboard on touchscreens for numerous emerging smart devices requires users to look repeatedly at their fingertip locations. To reduce this visual dependency on size-restricted touchscreens, chorded keyboards have been restudied recently. However, one of their intrinsic problems, learning difficulty, limits their widespread use. Here, we introduce a visual guide that makes a chorded keyboard self-revealing to alleviate the learning difficulty. Next, we propose a simple supplementary instrument and a finger-recognition algorithm for tablet computers to be used as remote controllers for wall displays. In this use case, we claim that our self-revealing chorded keyboard and the proposed configuration provide complete visual independence.
AnyButton: unpowered, modeless and highly available mobile input using unmodified clothing buttons (Article 24)
  Liwei Chan; Chien-Ting Weng; Rong-Hao Liang; Bing-Yu Chen
This paper presents wearable opportunistic controls using unmodified clothing buttons. Buttons are commonly sewn on formal clothing and often come with multiple duplicates. In this paper, we turn passive buttons into dial widgets. Each button provides simple input modalities (e.g., tap and spin inputs). Multiple buttons allow for modeless and rich interactions. We present AnyButton, a wearable motion-sensor set that turns buttons on clothing into mobile input on the move. Our prototype consists of three motion sensors attached to the index fingernail, the wrist, and the elbow. We infer which button is being interacted with from the wrist and elbow orientations, and how the button pinched in the user's fingers is being operated from the motions of the fingertip. Each button allows for partial tap, discrete spin, and dwell spin inputs. By distributing the interface across the buttons, applications such as music players and call centers can use opportunistic clothing buttons as wearable controls.
TongueDx: a tongue diagnosis for health care on smartphones (Article 25)
  Ini Ryu; Itiro Siio
With the TongueDx system, users can keep track of their health condition by recording the color of the tongue coating and tongue body on smartphones. Our system uses tongue diagnosis techniques originating from Traditional Chinese Medicine (TCM) theories, in which tongue symptoms are an important diagnostic indicator of the health of the human body. To avoid color errors caused by surrounding light, a tongue color calibration using teeth color is proposed to adjust the white balance of the tongue picture. A k-means algorithm is used to separate the tongue coating from the tongue body. From the line graph of tongue coating and body color displayed on smartphones, people can monitor their health condition in a timely manner. We evaluated the performance of TongueDx in a preliminary one-month user study.
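The coating/body separation step can be sketched with k-means on pixel colors; treating the brighter cluster as the coating is an assumption made for illustration, not necessarily the authors' rule.

    # Sketch: split a cropped, white-balanced tongue image into coating and body pixels.
    import numpy as np
    from sklearn.cluster import KMeans

    def separate_coating(tongue_rgb):
        """tongue_rgb: (H, W, 3) float array of a cropped tongue region."""
        pixels = tongue_rgb.reshape(-1, 3)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(pixels)
        means = [pixels[labels == k].mean(axis=0) for k in (0, 1)]
        coating = int(np.argmax([m.mean() for m in means]))   # assume brighter cluster = coating
        mask = (labels == coating).reshape(tongue_rgb.shape[:2])
        return mask, means[coating], means[1 - coating]        # mask plus mean coating/body colors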
Narratology and narrative generation: expanded literary theory and the integration as a narrative generation system (2) (Article 26)
  Takashi Ogata; Shohei Imabuchi; Taisuke Akimoto
This paper first overviews our approaches towards narrative generation using various literary theories in the field of narratology, and then proposes the synthetic use of three literary theories in the integrated narrative generation system. Although we have studied several literary theories in relation to narrative generation, the three literary theories by Propp, Genette, and Jauss are organically incorporated in the current version of the integrated system.
Carrier pigeon-like sensing system: animal-computer interface design for opportunistic data exchange interaction for a wildlife monitoring application (Article 27)
  Keijiro Nakagawa; Hiroki Kobayashi; Kaoru Sezaki
The carrier pigeon-like sensing system is a future-present archetype of a human interface that will enable humans to observe inaccessible and contaminated forests, such as those around the Fukushima nuclear power plant. The system employs wildlife-borne sensing devices, which have Animal-Touch'n Go (ATG) and animal-to-animal Internet sharing capability, and can be used to expand the size of monitoring areas where the electricity supply and information infrastructure are either limited or nonexistent. Thus, monitoring information can be collected from remote areas cost-effectively and safely. The system is based on the concept of human-computer-biosphere interaction. This paper presents an overview of the concept, the methods employed, and the work in progress.
An interface for unconscious learning using mismatch negativity neurofeedback (Article 28)
  Ming Chang; Hiroyuki Iizuka; Yasushi Naruse; Hideyuki Ando; Taro Maeda
Many skills take time for us to learn in our life, and it is often not clear what to learn and how. For example, one of the biggest problems in language learning is that learners cannot recognize novel sounds that do not exist in their native language, and it is difficult to gain listening ability for these novel sounds [1]. Here, we developed a novel neurofeedback (NF) method, using the mismatch negativity (MMN) responses elicited by similar sounds, that can help people unconsciously improve their auditory perceptual skills. In our method, the strength of the participants' MMN, as a measure of perceptual discriminability, is presented as visual feedback to provide a continuous, not binary, cue for learning. We found evidence that significant improvement in behavioral auditory discrimination and in the neurophysiological measure occurs unconsciously. Based on our findings, the method has great potential to provide effortless auditory perceptual training and to support the development of an unconscious learning interface device.
BITAIKA: development of self posture adjustment system (Article 29)
  Haruna Ishimatsu; Ryoko Ueoka
We define "BITAIKA" as a self posture adjustment system for correcting the user's posture while sitting. As a first step in developing the BITAIKA system, we developed a system that visually induces a user to correct his/her posture. BITAIKA monitors the user's posture with a Kinect and multiple piezoelectric sensors to detect a continuously bad posture. When a bad posture continues, a window that assists in adjusting the posture pops up on the PC monitor. We conducted a prototype experiment to evaluate the effectiveness of the system during PC work and confirmed that BITAIKA works effectively as a posture adjustment system.
Toward practical implementation of emotion driven digital camera using EEG (Article 30)
  Tomomi Takashina; Miyuki Yanagi; Yoshiyuki Yamariku; Yoshikazu Hirayama; Ryota Horie; Michiko Ohkura
Photography is closely tied to people's emotions. Therefore, the concept of an emotion-driven camera is a natural consequence for future photography. There have been many works on EEG emotion detection in research laboratories, but it is considered difficult to apply such methodologies in generic environments. For measuring ERPs (event-related potentials), cues of some kind are typically used to know the exact time at which a stimulus is given, but such cues are difficult to obtain in the real world. Therefore, we propose a cue detection mechanism based on the architecture of a digital single-lens reflex camera. It makes it possible to reproduce an environment similar to that of research laboratories while remaining a natural configuration for ordinary digital cameras.
Haven't we met before?: a realistic memory assistance system to remind you of the person in front of you (Article 31)
  Masakazu Iwamura; Kai Kunze; Yuya Kato; Yuzuko Utsumi; Koichi Kise
This paper presents a perceived real-time system for memory augmentation. We propose a realistic approach to realize a memory assistance system, focusing on retrieving the person in front of you. The proposed system is capable of fully automatic indexing and is scalable in the database size. We utilize face recognition to show the user previous encounters with the person they are currently looking at. The system works fast (under 200 ms, perceived real time) with a decent database size (45 videos of 24 people). We also provide evidence in terms of an online questionnaire that our proposed memory augmentation system is useful and would be worn by most of the participants if it can be implemented in an unobtrusive way.
Development of tactile biofeedback system for amplifying horror experience (Article 32)
  Kouya Ishigaki; Ryoko Ueoka
Adding physical effects to a 3D film is called 4D. This type of attraction has become a common entertainment system that generates more realistic sensations. A previous study in our laboratory concluded that changing the viewing environment amplified viewers' horror experience. To develop a further horror-amplifying system, we focus attention on biofeedback. In this paper, we describe a prototype of our tactile biofeedback system and conduct a preliminary experiment to evaluate whether tactile feedback of the heart rate, or pseudo feedback of the heart rate, causes entrainment of the subjects' heart rate.
A fault diagnostic system by line status monitoring for ubiquitous computers connected with multiple communication lines (Article 33)
  Shintaro Kawabata; Shoji Sano; Tsutomu Terada; Masahiko Tsukamoto
In ubiquitous computing environments, many computers must be controlled in cooperation to support human daily life. We focus on dance performances in which performers wear a large number of LEDs to combine body expression with lighting effects. In such situations, failures frequently occur because of the dance movements. In an environment involving hundreds of computers, it takes much time and effort to check where failures have occurred. To solve this problem, we propose a system for easily identifying the position of a failure by checking the status of communication among computers. In addition, we implemented a fault diagnostic system that provides a series of flows for detecting a failure, estimating why the failure occurred, and taking countermeasures. Furthermore, we actually used our system in LED dance performances, conducted an evaluative experiment, and confirmed its usefulness.
User-centered design of a lamp customization tool (Article 34)
  Monica Perusquía-Hernández; Hella Kriening; Carina Palumbo; Barbara Wajda
Unique self-designed products are currently in great demand. The customization process for these products requires a good understanding of the customer's needs as well as tools that allow them to make the right choices. We provide a solution that enables users to design lamps that fit their needs and the interior design of their home. Our proposal is an Augmented Reality (AR) tablet application that allows customization in context. The solution was prototyped at low and high fidelity in several iterations. Users enjoyed the customization process and expressed satisfaction that the app would enable them to create a lamp that is personalized and unique.
Fall prevention using head velocity extracted from visual based VDO sequences (Article 35)
  Nuth Otanasap; Poonpong Boonbrahm
More than ten million elderly people fall each year. Falls are an important cause of injury-related death and a range of other symptoms. Most "fall detection" systems focus on the critical and post-fall phases, which means that the faller may already be injured. In this study, we propose early fall detection in the critical fall phase using velocity characteristics collected by a Kinect sensor at 30 frames per second. A series of normal and falling activities was performed by 5 volunteers in the first experiment and 11 volunteers in the second experiment. The fall-velocity base point was calculated from the first experiment, in which 50 postures and 2,210 frames were recorded.
   From the first experiment, a velocity ratio of 90.35 pixels per millisecond, calculated by adding μ and σ, was defined as the velocity base point for fall detection in the second experiment. In the second experiment, 85.07 percent of the 134 fall activities were detected. The mean time by which fall activities were detected before impact with the ground was 391.15 milliseconds. This lead time may be useful for developing a preventive fall system, e.g. a wearable airbag system, that protects the faller before injury. For higher prediction accuracy, automatically adjusting the vertical fall base point using a training dataset for the individual person will be investigated in future work.
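The stated base point (the mean plus one standard deviation of head velocities from the first experiment) is straightforward to reproduce; a short sketch, assuming per-frame head velocities are already available:

    # Sketch: velocity base point as mu + sigma of training head velocities,
    # then flag frames whose velocity exceeds it.
    import numpy as np

    def velocity_base_point(training_velocities):
        """training_velocities: 1-D array of head velocities (pixels per millisecond)."""
        v = np.asarray(training_velocities, dtype=float)
        return v.mean() + v.std()

    def detect_fall_frames(frame_velocities, base_point):
        """Return indices of frames whose head velocity exceeds the base point."""
        return np.flatnonzero(np.asarray(frame_velocities, dtype=float) > base_point)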

10. Demos

Pseudo-transparent tablet based on 3D feature tracking (Article 50)
  Makoto Tomioka; Sei Ikeda; Kosuke Sato
This demonstration shows geometrically consistent image rendering that realizes pseudo-transparency in tablet-based augmented reality. The rendering method is based on the homography estimated by feature tracking and face detection using on-board rear and front cameras, respectively. This configuration is the most practical for typical off-the-shelf tablets in the sense that it does not require any special devices or designed environments. Although the local misalignment of images rectified by homography is an unavoidable artifact in a non-planar scene, the rendered images are globally consistent with the real scene. This is the first demonstration in which such pseudo-transparency can be experienced in an unprepared environment.
KinecDrone: enhancing somatic sensation to fly in the sky with Kinect and AR.Drone (Article 51)
  Kohki Ikeuchi; Tomoaki Otsuka; Akihito Yoshii; Mizuki Sakamoto; Tatsuo Nakajima
KinecDrone enhances our somatic sensation of flying in the sky with Kinect and AR.Drone. A video stream captured by the AR.Drone is transmitted to the user's head mounted display. While the user makes flying motions in a room, he/she can watch the scene captured by the flying AR.Drone. Thus, the user feels that he/she is really flying in the sky. The user can also control the AR.Drone with natural gestures without losing the feeling of flight, as if he/she had become the AR.Drone. This significantly increases the immersiveness of the flying experience.
Early gesture recognition method with an accelerometer (Article 52)
  Ryo Izuta; Kazuya Murao; Tsutomu Terada; Masahiko Tsukamoto
An accelerometer is installed in most current mobile devices, such as iPhones, Android-powered devices, and video game controllers for the Wii or PS3, enabling easy and intuitive operations such as scrolling browsers and drawing 3D objects by detecting the inclination and motion of the device. Therefore, many gesture-based user interfaces with accelerometers are expected to appear in the future. Gesture recognition systems with accelerometers generally have to construct gesture models from the user's gesture data before use, and recognize unknown gestures by comparing them with the training data. As the recognition process generally starts after the gesture has finished, the output of the recognition result and the feedback, e.g. scrolling, are delayed, which may cause users to retry gestures and degrade interface usability. We propose an early gesture recognition method that calculates the distance between the input data and the training data sequentially, and outputs recognition results only when one output candidate has a stronger likelihood than the others. Additionally, we implemented a gesture-based photo viewer as an example of a useful application of our proposed method.
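The early-decision idea (compare the growing input prefix against each gesture template and commit as soon as one candidate clearly dominates) can be sketched as follows; the prefix distance and the decision margin are illustrative assumptions rather than the authors' exact criterion.

    # Sketch: early gesture recognition by sequential prefix matching on accelerometer data.
    import numpy as np

    def prefix_distance(prefix, template):
        """Mean Euclidean distance between the prefix and the template's first samples."""
        n = min(len(prefix), len(template))
        return float(np.mean(np.linalg.norm(prefix[:n] - template[:n], axis=1)))

    def early_recognize(stream, templates, margin=0.3, min_samples=10):
        """stream: iterable of (3,) acceleration samples; templates: dict name -> (T, 3) array.
        Returns a gesture name as soon as one candidate beats all others by `margin`."""
        prefix, best = [], None
        for sample in stream:
            prefix.append(sample)
            if len(prefix) < min_samples:
                continue
            p = np.asarray(prefix)
            ranked = sorted((prefix_distance(p, t), name) for name, t in templates.items())
            best = ranked[0][1]
            if len(ranked) > 1 and ranked[1][0] - ranked[0][0] > margin:
                return best          # early decision: clear winner before the gesture ends
        return best                  # otherwise fall back to the best match at the end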
A system for practicing formations in dance performance using a two-axis movable electric curtain track (Article 53)
  Shuhei Tsuchida; Tsutomu Terada; Masahiko Tsukamoto
Improving physical expression and the sense of rhythm in dance performances has become important in recent years due to the increase in child dancers and dance studios. Even beginners in dance have more opportunities to perform dances in groups. When dancing in a group, a collapsed formation will greatly reduce the quality of the dance performance even if the choreography is synchronized with the music. Therefore, learning the dance formation in a group is as important as learning its choreography. It is also important to be aware of keeping the proper formation and moving smoothly into the next formation in order to perform group dances at a professional level. However, it is difficult to obtain the sense of a proper formation if some members of the dance cannot participate in the practice. We previously proposed a practice-support system for performing the formation smoothly using a self-propelled screen even when there is no dance partner. However, the movement of people was limited more than necessary by the imposing presence of the self-propelled screen moving irregularly and by the fear of collision with the screen. Therefore, the reproducibility of the trajectory compared to the case where the user danced with another dancer was low. In this work, we propose a practice-support system for performing the formation using a two-axis movable curtain track, whose movement direction does not drift and whose projection material is soft. These characteristics reduce the fear of collision and improve the accuracy of the screen's movement.
iMake: computer-aided eye makeup (Article 54)
  Ayano Nishimura; Itiro Siio
Many women enjoy applying makeup. Eye makeup is especially important in face makeup, because eyeshadow color and eye line shape can dramatically change the impression a person gives to others. In addition to standard eye makeup, there is "artistic eye makeup," which tends to have a greater variety of designs and is more ostentatious than standard eye makeup. Artistic eye makeup often has a motif of characters or symbols, such as a butterfly or a musical note. Needless to say, it is often difficult for non-artistic people to apply this type of eye makeup. Artistic eye makeup requires special techniques; therefore, we propose and implement a computer-aided eye makeup design system called "iMake." This system generates eye makeup designs from the colors and shapes of a favorite character selected by the user. Once the user has selected the desired eye makeup pattern, an ink-jet color printer prints it on a transfer sheet that the user can apply to his/her eyelids. The user can design any type of eye makeup with a simple operation, and then apply the transfer-sheet makeup without any special techniques.
An interactive system for recognizing user actions on a surface using accelerometers (Article 55)
  Naoya Isoyama; Tsutomu Terada; Masahiko Tsukamoto
There are various approaches to recognizing user actions for interactive art. By making a system interactive, the audience has more fun because they are participating, and the artists can translate what is in their imagination more richly. Since user actions have great variety and the restrictions on installations are diverse, conventional systems use mechanisms for recognizing user motions that are specialized to their own work, i.e., that are not general. In this paper, we propose a method that adds interactivity to any surface and recognizes the position and intensity of a performed action by using multiple accelerometers.
A multi-modal interface for performers in stuffed suits (Article 56)
  Yoshiyuki Tei; Tsutomu Terada; Masahiko Tsukamoto
In wearable computing environments, a user can obtain information safely and efficiently without disturbing their daily life. However, since the surrounding conditions change frequently according to the situation, the information presentation method of the wearable system needs to adapt to the change. The main purpose of our research is to construct a system that changes information presentation methods according to the user situation. We focus on supporting performers in stuffed suits with multi-modal information presentation. We investigated the interfaces for these performers who cannot acquire sufficient information of the surrounding environments.
A sound-based lifelog system using ultrasound (Article 57)
  Hiroki Watanabe; Tsutomu Terada; Masahiko Tsukamoto
We propose an activity and context recognition method in which the user carries a neck-worn receiver comprising a microphone, and wears small speakers on his/her wrists that generate ultrasound. The system recognizes gestures on the basis of the volume of the received sound and the Doppler effect. The former indicates the distance between the neck and the wrists, and the latter indicates the speed of the motions. Thus, our approach substitutes ultrasound for the wired or wireless communication typically required in body-area motion sensing networks. Our system also recognizes the place the user is in and the people who are near the user by means of ID signals generated from speakers placed in rooms and on people. The strength of the approach is that, for offline recognition, a simple audio recorder can be used as the receiver. In this paper, we introduce our new device.
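The Doppler relation behind the speed estimate is standard; the sketch below recovers radial wrist speed from the frequency shift of an assumed 20 kHz tone (carrier frequency, band, and peak picking are simplifications, not the authors' full pipeline).

    # Sketch: wrist speed from the Doppler shift of a wrist-worn ultrasound tone.
    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C
    F_CARRIER = 20000.0      # Hz, assumed emitted ultrasound frequency

    def dominant_frequency(signal, sample_rate):
        """Strongest frequency near the carrier in one audio frame."""
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        band = (freqs > F_CARRIER - 1000) & (freqs < F_CARRIER + 1000)
        return freqs[band][np.argmax(spectrum[band])]

    def radial_speed(signal, sample_rate):
        """Approximate wrist speed toward (+) or away from (-) the microphone, in m/s."""
        shift = dominant_frequency(signal, sample_rate) - F_CARRIER
        return SPEED_OF_SOUND * shift / F_CARRIER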