
Proceedings of the 2013 Augmented Human International Conference

Fullname: Proceedings of the 4th Augmented Human International Conference
Editors: Albrecht Schmidt; Andreas Bulling; Christian Holz
Location: Stuttgart, Germany
Dates: 2013-Mar-07 to 2013-Mar-08
Publisher: ACM
Standard No: ISBN 978-1-4503-1904-1; hcibib: AH13
Papers: 48
Pages: 244
Summary:With technological advances, computing has progressively moved beyond the desktop into new physical and social contexts. As physical artifacts gain new computational behaviors, they become reprogrammable, customizable, repurposable, and interoperable in rich ecologies and diverse contexts. They also become more complex, and require intense design effort in order to be functional, usable, and enjoyable. Designing such systems requires interdisciplinary thinking. Their creation must not only encompass software, electronics, and mechanics, but also the system's physical form and behavior, its social and physical milieu, and beyond.
FingerDraw: more than a digital paintbrush BIBAFull-Text 1-4
  Anuruddha Hettiarachchi; Suranga Nanayakkara; Kian Peen Yeo; Roy Shilkrot; Pattie Maes
Research in cognitive science shows that engaging in visual arts has great benefits for children, particularly when it allows them to bond with nature [7]. In this paper, we introduce FingerDraw, a novel drawing interface that aims to keep children connected to the physical environment by letting them use their surroundings as templates and color palette. The FingerDraw system consists of (1) a finger-worn input device [13] that allows children to upload visual content such as shapes, colors, and textures that exist in the real world; and (2) a tablet with a touch interface that serves as a digital canvas for drawing. In addition to real-time drawing activities, children can also collect a palette of colors and textures in the input device and later feed them into the drawing interface. Initial reactions from a case study indicated that the system could keep a child engaged with their surroundings for hours, drawing with the wide range of shapes, colors, and patterns found in the natural environment.
SmartFinger: an augmented finger as a seamless 'channel' between digital and physical objects BIBAFull-Text 5-8
  Shanaka Ransiri; Suranga Nanayakkara
Connecting devices in the digital domain to exchange data is an essential task in everyday life. Additionally, our physical surroundings are full of valuable visual information. However, existing approaches for transferring digital content and for extracting information from physical objects require separate equipment. SmartFinger aims to create a seamless 'channel' between digital devices and the physical surroundings using a finger-worn, vision-based system. It is an always-available and intuitive interface for 'grasping' and semantically analyzing visual content from physical objects, as well as for sharing media between digital devices. We hope that SmartFinger will lead to a seamless digital-information 'channel' among all entities that have a presence in both the physical and digital worlds.
AugmentedForearm: exploring the design space of a display-enhanced forearm BIBAFull-Text 9-12
  Simon Olberding; Kian Peen Yeo; Suranga Nanayakkara; Jurgen Steimle
Recent technical advances allow traditional wristwatches to be equipped with high processing power. They not only allow glancing at the time but also let users interact with digital information. However, the display space is very limited. Extending the screen to cover the entire forearm is promising: the display can be worn like a wristwatch while providing a large display surface. In this paper we present the design space of a display-augmented forearm, focusing on two specific properties of the forearm: its hybrid nature as a private and a public display surface, and the way clothing influences information display. We show a wearable prototypical implementation along with interactions that instantiate the design space: sleeve-store, sleeve-zoom, public forearm display, and interactive tattoo.
EyeRing: a finger-worn input device for seamless interactions with our surroundings BIBAFull-Text 13-20
  Suranga Nanayakkara; Roy Shilkrot; Kian Peen Yeo; Pattie Maes
Finger-worn interfaces remain a vastly unexplored space for user interfaces, despite the fact that our fingers and hands are naturally used for referencing and interacting with the environment. In this paper we present design guidelines and implementation of a finger-worn I/O device, the EyeRing, which leverages the universal and natural gesture of pointing. We present use cases of EyeRing for both visually impaired and sighted people. We discuss initial reactions from visually impaired users which suggest that EyeRing may indeed offer a more seamless solution for dealing with their immediate surroundings than the solutions they currently use. We also report on a user study that demonstrates how EyeRing reduces effort and disruption to a sighted user. We conclude that this highly promising form factor offers both audiences enhanced, seamless interaction with information related to objects in the environment.
Whole hand modeling using 8 wearable sensors: biomechanics for hand pose prediction BIBAFull-Text 21-28
  Christopher-Eyk Hrabia; Katrin Wolf; Mathias Wilhelm
Although data gloves allow for modeling of the human hand, they can reduce usability, as they cover the entire hand and limit both the sense of touch and hand flexibility. As modeling the whole hand has many advantages (e.g., for complex gesture detection), we aim to model the whole hand while keeping the hand's natural degrees of freedom (DOF) and its tactile sensibility as high as possible, still allowing manual tasks like grasping tools and devices. Therefore, we attach motion sensor boards (accelerometer, magnetometer, and gyroscope) to the human hand. We conducted a user study and found a biomechanical dependence between the joint angles of the fingertip-close joint (DIP) and the palm-close joint (PIP), in the relation DIP = 0.88 PIP for all four fingers (SD = 0.10, R² = 0.77). This allows the data glove to be reduced to 8 sensor boards: one per finger, three for the thumb, and one on the back of the hand as an orientation baseline for modeling the whole hand. Even though we also found a joint-flexing relationship for the thumb, we decided to retain 3 sensor units there, as the relationship varied more (R² = 0.59). Our hand model could potentially serve for rich hand-model-based gestural interaction, as it covers all 26 DOF of the human hand.
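   As a minimal illustration of how such a coupling removes the need for a distal sensor, the following Python sketch applies the reported ratio; the function name and degree units are assumptions for illustration.
      # Reported mean coupling for the four fingers: DIP = 0.88 * PIP (SD = 0.10).
      DIP_PIP_RATIO = 0.88

      def estimate_dip_angle(pip_angle_deg):
          """Predict the distal (DIP) joint angle from the measured PIP angle."""
          return DIP_PIP_RATIO * pip_angle_deg

      # Example: 60 degrees of PIP flexion predicts 52.8 degrees of DIP flexion,
      # so no dedicated sensor is needed on the fingertip segment.
      print(estimate_dip_angle(60.0))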
Tangential force sensing system on forearm BIBAFull-Text 29-34
  Yasutoshi Makino; Yuta Sugiura; Masa Ogata; Masahiko Inami
In this paper, we propose a sensing system that can detect one-dimensional tangential force on a forearm. Some previous tactile sensors can detect touch conditions when a user touches a human skin surface, but those sensors are usually attached to a fingernail, and therefore the user cannot touch the skin with two fingers or with their palm. In the field of cosmetics, for example, companies want to measure contact forces when a customer applies their products to the skin; in such cases, it is preferable that the sensor can detect contact forces for many different ways of touching. In this paper, we restrict the target area to the forearm. Since the forearm has a cylindrical shape, its surface deformation propagates to neighboring areas around the wrist and the elbow, and this deformation can be used to estimate tangential force on the forearm. Our system does not require any equipment on the active side (i.e., the fingers or palm), so the user can touch the forearm in arbitrary ways. We show basic numerical simulations and experimental results indicating that the proposed system can detect tangential force on the forearm, and we show some possible applications that use the forearm as a human-computer interface device.
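   The following Python sketch illustrates the estimation idea under a deliberately simplified linear model; the linear form and its fitted coefficients are assumptions here, whereas the paper derives the relation from simulation and experiment.
      import numpy as np

      def fit_force_model(deform_wrist, deform_elbow, measured_force):
          """Least-squares fit of tangential force ~ a*wrist + b*elbow + c."""
          A = np.column_stack([deform_wrist, deform_elbow,
                               np.ones(len(measured_force))])
          coeffs, *_ = np.linalg.lstsq(A, measured_force, rcond=None)
          return coeffs  # (a, b, c) for later force estimation

      def estimate_force(coeffs, wrist, elbow):
          # Estimate tangential force from the two deformation readings.
          a, b, c = coeffs
          return a * wrist + b * elbow + c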
Manipulation of an emotional experience by real-time deformed facial feedback BIBAFull-Text 35-42
  Shigeo Yoshida; Tomohiro Tanikawa; Sho Sakurai; Michitaka Hirose; Takuji Narumi
The main goals of this paper are to assess the efficacy of computer-generated emotion and to establish a method for manipulating emotional experience. The human internal processing mechanisms that evoke an emotion from a relevant stimulus have not been clarified, so there are few reliable techniques for evoking an intended emotion by reproducing this process.
   However, in the field of cognitive science, altering a bodily response has been shown to unconsciously generate emotions. We therefore hypothesized that emotional experience could be manipulated by having people recognize pseudo-generated facial expressions as changes in their own facial expressions, and we propose an emotion-evoking system based on the facial feedback hypothesis. Our results suggest that this system was able to manipulate an emotional state via visual feedback from artificial facial expressions.
Development of roller-type itch-relief device employing alternating hot and cold stimuli BIBAFull-Text 43-46
  Ryo Watanabe; Taku Hachisu; Michi Sato; Shogo Fukushima; Hiroyuki Kajimoto; Naoki Saito; Yuichiro Mori
Painful thermal stimulation is known to inhibit itch, which is a significant problem in many diseases. We focused on the thermal grill illusion and synthetic heat, well-known phenomena that can generate pain or a burning sensation without physical damage, and tried to achieve a similar effect via thermal stimulation in a harmless range. We developed a roller-type itch-relief device. The roller is composed of an aluminum pipe cut into two parts along the longitudinal axis; one part is set hot and the other cold by embedded Peltier devices, so that when the device is rolled on the user's skin, the skin is alternately exposed to hot and cold stimuli. In addition, vibration is applied so that a virtual scratching sensation is presented without damage to the skin. We evaluated the device by eliciting an itch using a lactic acid solution and then applying the device. The results showed that the device provides effective temporary relief from itch and that its effect lasts for a few minutes.
Paired vibratory stimulation for haptic feedback BIBAFull-Text 47-50
  Yasutoshi Makino; Takashi Maeno
In this paper, we present a haptic feedback method named "Paired Vibratory Stimulation." We use two vibrators: one attached to the device and the other to the fingernail. When the two vibrators are driven at slightly different frequencies, a beat vibration occurs only when the finger touches the device. A human can feel the beat vibration even when each original vibration is hard to perceive, so the system can deliver a vibratory sensation only at the contact area. This is well suited to haptic feedback on a handheld mobile device, where the sensation should arise only at the contact area and not at the holding hand or the fingernail. It is also applicable to skin-interface systems: some researchers have recently proposed systems that use the human skin surface as an input medium, and our method is suitable for providing vibratory haptic feedback in that situation. We present experimental results clarifying that Paired Vibratory Stimulation can be achieved and applied to such a skin-interface system.
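   A minimal Python sketch of the beat phenomenon the method relies on, with assumed vibrator frequencies: two vibrations at slightly different frequencies superimpose into a slow beat at |f1 - f2| Hz at the contact point.
      import numpy as np

      fs = 10_000                      # sampling rate (Hz)
      t = np.arange(0, 1.0, 1 / fs)    # one second of signal
      f_device, f_nail = 200.0, 205.0  # assumed vibrator frequencies (Hz)

      # Superposition of the two vibrations at the contact point:
      signal = np.sin(2 * np.pi * f_device * t) + np.sin(2 * np.pi * f_nail * t)
      # By the sum-to-product identity, this is a 202.5 Hz carrier whose
      # amplitude swells at |f_nail - f_device| = 5 Hz -- the perceivable beat.
      envelope = np.abs(2 * np.cos(np.pi * (f_nail - f_device) * t))
      print("beat:", abs(f_nail - f_device), "Hz; peak envelope:", envelope.max())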
Sensing the environment through SpiderSense BIBAFull-Text 51-57
  Victor Mateevitsi; Brad Haggadone; Jason Leigh; Brian Kunzer; Robert V. Kenyon
Recent scientific advances allow the use of technology to expand the number of forms of energy that can be perceived by humans. Smart sensors can detect hazards that human senses are unable to perceive, for example radiation. This fusing of technology with human forms of perception enables exciting new ways of perceiving the world around us. In this paper we describe the design of SpiderSense, a wearable device that projects the wearer's near environment onto the skin and allows for directional awareness of surrounding objects. The millions of sensory receptors that cover the skin present opportunities for conveying alerts and messages. We discuss the challenges and considerations of designing similar wearable devices.
Tactile distance feedback for firefighters: design and preliminary evaluation of a sensory augmentation glove BIBAFull-Text 58-64
  Anthony Carton; Lucy E. Dunne
In this paper, we describe the design and preliminary evaluation of a vibrotactile glove for distance display in low-vision search contexts. Specifically, this glove was developed for firefighting applications in which users experience compromised vision due to a combination of smoke and low ambient light levels. The glove maps an ultrasonic rangefinder to a pair of vibrating motors on the dorsal surface of the hand. Initial perceptibility testing with 15 participants showed that participants were consistently able to detect the presence and absence of obstacles in a gap-detection task (93% correct detection) and to detect relative changes in the proximity of an obstacle (74% correct identification of relative position). Mapping tactile stimuli to absolute position was more challenging, with an accuracy rate of 57% (adjusted to 89% within one unit of actual position). Challenges to implementing the concept include response time lag, the difficulty of absolute judgments, and the width of the sensor's signal cone.
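   One plausible rangefinder-to-motor mapping is sketched below in Python; the maximum range and PWM scale are assumptions, not the glove's calibrated encoding.
      def distance_to_pwm(distance_cm, max_range_cm=300.0):
          """Closer obstacles produce stronger vibration; out of range -> off."""
          if distance_cm < 0 or distance_cm >= max_range_cm:
              return 0
          intensity = 1.0 - (distance_cm / max_range_cm)
          return int(round(255 * intensity))  # PWM duty cycle in [0, 255]

      for d in (25, 150, 400):  # example obstacle distances in cm
          print(d, "cm ->", distance_to_pwm(d))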
Device-free interaction in smart domestic environments BIBAFull-Text 65-68
  Felix Heidrich; Ivan Golod; Peter Russell; Martina Ziefle
This paper contributes to the exploration of user preferences for device-free interaction with smart appliances and services in the domestic environment. We presented a prototype system for on-surface gesture control to users in a natural environment and surveyed the perceived advantages of a potentially truly ubiquitous input method. Results show a positive attitude of users towards augmenting domestic environments with such a system. By reporting the most influential user characteristics and our experience in designing the system, we want to help developers of future systems that support multiple input devices to better understand the role of device-free input in domestic spaces.
SEMarbeta: mobile sketch-gesture-video remote support for car drivers BIBAFull-Text 69-76
  Sicheng Chen; Miao Chen; Andreas Kunz; Asim Evren Yantaç; Mathias Bergmark; Anders Sundin; Morten Fjeld
Uneven knowledge distribution is often an issue in remote support systems, creating the occasional need for additional information layers that extend beyond plain videoconference and shared workspaces. This paper introduces SEMarbeta, a remote support system designed for car drivers in need of help from an office-bound professional expert. We introduce a design concept and its technical implementation using low-cost hardware and techniques inspired by augmented reality research. In this setup, the driver uses a portable Android tablet PC while the expert mechanic uses a stationary computer equipped with a video camera capturing his gestures and sketches. Hence, verbal instructions can be combined with supportive gestures and sketches added by the expert mechanic to the car's video display. To validate this concept, we carried out a user study involving two typical automotive repair tasks: checking engine oil and examining fuses. Based on these tasks and following a between-group (drivers and expert mechanics) design, we compared voice-only with additional sketch- and gesture-overlay on video screenshots measuring objective and perceived quality of help. Results indicate that sketch- and gesture-overlay can benefit remote car support in typical breakdown situations.
Urine computer interaction to avoid spattering: study of urination handling BIBAFull-Text 77-80
  Katsufumi Matsui; Kazunori Ogasawara; Emi Tamaki; Ken Iwasaki
To prevent spattering accidents during urination, we propose an electrophone system that informs the user whether the urine flow reaches a "safety area." Eight sensors are mounted in this safety area; to define it, spatter was checked as a function of the position of the urine flow. We determined appropriate distances and sizes for the targets and examined whether the data conform to Fitts' law. We then built an electrophone prototype that generates sound feedback: the user controls their urine flow and enjoys the sounds. Future applications such as behavioural therapy and health care are discussed.
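   For reference, a Fitts'-law analysis assigns each target an index of difficulty; the Python sketch below uses the Shannon formulation with illustrative numbers (the abstract does not state which variant the authors used).
      import math

      def index_of_difficulty(distance, width):
          """Fitts' index of difficulty in bits (Shannon formulation)."""
          return math.log2(distance / width + 1)

      print(index_of_difficulty(30, 5))  # a 5 cm target at 30 cm -> ~2.81 bits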
A depth cue method based on blurring effect in augmented reality BIBAFull-Text 81-88
  Lin Xueting; Takefumi Ogawa
In this paper, a depth cue method based on a blurring effect in augmented reality is proposed. In contrast to previous research, the proposed method offers an algorithm that estimates the blurring effect over the whole scene based on the spatial information of the real world and the intrinsic parameters of the camera. Through a one-time checkerboard calibration, the camera parameters are registered, the point spread function parameters are measured, and the blur circle radius at a particular position can be predicted for later virtual object rendering. The measurement procedure for the blur circle radius and the estimation algorithm are discussed. A prototype of the proposed AR system is implemented. Finally, an evaluation of the blur circle radius estimation algorithm is provided and future work is discussed.
RoomSense: an indoor positioning system for smartphones using active sound probing BIBAFull-Text 89-95
  Mirco Rossi; Julia Seiter; Oliver Amft; Seraina Buchmeier; Gerhard Tröster
We present RoomSense, a new method for indoor positioning with smartphones at two resolution levels: rooms and within-room positions. Our technique is based on active sound fingerprinting and needs no infrastructure. Rooms and within-room positions are characterized by impulse response measurements; using acoustic features of the impulse response and pattern classification, the position is estimated. An evaluation study was conducted to analyse the localization performance of RoomSense: impulse responses of 67 within-room positions in 20 rooms were recorded with smartphone hardware, for a total of 5360 impulse response measurements. The study showed that RoomSense achieves a room-level accuracy of over 98% and a within-room position accuracy of over 96%. Additionally, the implementation of RoomSense as an Android app is presented in detail; the app identifies an indoor location within one second.
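   A much-simplified Python sketch of active sound fingerprinting as described: play a known excitation, estimate the room's impulse response, reduce it to spectral features, and match against stored room fingerprints. The feature choice and the nearest-neighbour matcher are illustrative stand-ins for the paper's classifier.
      import numpy as np

      def impulse_response(excitation, recording):
          # Cross-correlation approximates the impulse response for a
          # broadband excitation such as a chirp.
          return np.correlate(recording, excitation, mode="full")

      def features(ir, n_bands=32):
          spectrum = np.abs(np.fft.rfft(ir))
          bands = np.array_split(spectrum, n_bands)
          return np.array([b.mean() for b in bands])  # coarse band energies

      def classify(query_features, fingerprints):
          # fingerprints: {room_name: stored feature vector}; nearest one wins.
          return min(fingerprints,
                     key=lambda r: np.linalg.norm(fingerprints[r] - query_features))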
A sensing architecture for empathetic data systems BIBAFull-Text 96-99
  Johannes Wagner; Daniele Mazzei; Alberto Betella; Riccardo Zucca; Pedro Omedas; Paul F. M. J. Verschure
Today's increasingly large and complex databases require novel, machine-aided ways of exploring data. To optimize the selection and presentation of data, we suggest an unconventional approach: instead of relying exclusively on explicit user input to specify relevant information or to navigate through a data space, we additionally exploit the power and potential of the user's unconscious processes. To this end, the user is immersed in a mixed reality environment while their bodily reactions are captured using unobtrusive wearable devices. These reactions are analyzed in real time and mapped onto higher-level psychological states, such as surprise or boredom, in order to trigger appropriate system responses that direct the user's attention to areas of potential interest in the visualizations. The realization of such a close experience-based human-machine loop raises a number of technical challenges, such as the real-time interpretation of psychological user states. This paper describes a sensing architecture for empathetic data systems that has been developed as part of such a loop and shows how it tackles these diverse challenges.
Device-free and device-bound activity recognition using radio signal strength BIBAFull-Text 100-107
  Markus Scholz; Till Riedel; Mario Hock; Michael Beigl
Background: We investigate the direct use of 802.15.4 radio signal strength indication (RSSI) for human activity recognition when 1) a user carries a wireless node (device-bound) and when 2) a user moves in the wireless sensor network (WSN) without a node (device-free). We investigate recognition feasibility with respect to network topology, subject, and room geometry (door open, half open, closed).
   Methods: In a two-person office room, 8 wireless nodes are installed in a 3D topology. Two subjects are outfitted with a sensor node on the hip. Acceleration and RSSI are recorded while a subject performs 6 different activities or the room is empty. We apply machine learning for analysis and compare our results to the acceleration data.
   Results: 10-fold cross-validation with all nodes gives accuracies of 0.896 (device-bound), 0.894 (device-free) and 0.88 (accelerometer). The topology investigation reveals that similar accuracies may be reached with only 5 (device-bound) or 4 (device-free) selected nodes. Applying data trained on one subject to the other, and vice versa, shows a higher recognition difference for RSSI than for acceleration. Changing the door state has a smaller effect on both systems than changing the subject, with the least impact when the door is closed.
   Conclusion: 802.15.4 RSSI is suited for activity recognition. A 3D topology is helpful with respect to the types of activities. Discrimination of subjects seems possible. Practical systems must adapt not only to long-term environmental dispersion but also consider typical geometric changes; adaptable, robust recognition models must be developed.
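   A rough Python sketch of the kind of pipeline the abstract implies: slice per-node RSSI streams into windows, compute simple statistics, and cross-validate a standard classifier. The window length, feature set, and classifier are assumptions, not the authors' exact configuration.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      def window_features(rssi, win=50):
          """rssi: (samples, nodes) array -> mean/std/range per node per window."""
          feats = []
          for start in range(0, len(rssi) - win + 1, win):
              w = rssi[start:start + win]
              feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                           np.ptp(w, axis=0)]))
          return np.array(feats)

      # X = window_features(recorded_rssi); y = one activity label per window
      # print(cross_val_score(RandomForestClassifier(), X, y, cv=10).mean())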
Improving activity recognition without sensor data: a comparison study of time use surveys BIBAFull-Text 108-115
  Marko Borazio; Kristof Van Laerhoven
Wearable sensing systems, through their proximity to the user, can be used to automatically infer the wearer's activity and thereby obtain detailed information on availability, behavioural patterns, and health. For this purpose, classifiers need to be designed and evaluated with sufficient training data from these sensors and from a representative set of users, which requires starting the procedure from scratch for every new sensing system and set of activities. To alleviate this procedure and optimize classification performance, the use of time use surveys has been suggested: these large databases typically contain several days' worth of detailed activity information from populations of hundreds of thousands of participants. This paper uses a strategy first suggested in [16] that utilizes time use diaries in an activity recognition method. We offer a comparison of the aforementioned North American data with a large European database, showing that although there are several cultural differences, certain important features are shared between both regions. By cross-validating across the 5160 households in this new data, with activity episodes from 13798 individuals, the time of day and the participant's location turn out to be especially distinctive features. Additionally, we identify for 11 different activities which features are best suited for later activity recognition.
Qualitative activity recognition of weight lifting exercises BIBAFull-Text 116-123
  Eduardo Velloso; Andreas Bulling; Hans Gellersen; Wallace Ugulino; Hugo Fuks
Research on activity recognition has traditionally focused on discriminating between different activities, i.e., predicting which activity was performed at a specific point in time. The quality of executing an activity, the how (well), has received little attention so far, even though it potentially provides useful information for a large variety of applications. In this work we define quality of execution and investigate three aspects that pertain to qualitative activity recognition: specifying correct execution, detecting execution mistakes, and providing feedback on the quality of execution to the user. We illustrate our approach on the example problem of qualitatively assessing and providing feedback on weight lifting exercises. In two user studies we try out a sensor-based and a model-based approach to qualitative activity recognition. Our results underline the potential of model-based assessment and the positive impact of real-time user feedback on the quality of execution.
Engineers meet clinicians: augmenting Parkinson's disease patients to gather information for gait rehabilitation BIBAFull-Text 124-127
  Sinziana Mazilu; Ulf Blanke; Daniel Roggen; Gerhard Tröster; Eran Gazit; Jeffrey M. Hausdorff
Many people with Parkinson's disease suffer from freezing of gait, a debilitating temporary inability to pursue walking. Rehabilitation with wearable technology is promising, but state-of-the-art approaches have difficulty providing the needed bio-feedback with sufficiently low latency and high accuracy, as they rely solely on the crude analysis of movement patterns allowed by commercial motion sensors. Yet the medical literature hints at more sophisticated approaches. In this work we present a first step to address this with a rich multimodal approach combining physical and physiological sensors. We present experimental recordings, comprising 35 motion and 3 physiological sensors, conducted with 18 patients and yielding 23 hours of data. We provide best practices to ensure a robust data collection that considers the real requirements of real-world patients. To this end we show evidence from a user questionnaire that the system is minimally invasive and that a multimodal view can leverage cross-modal correlations for detection or even prediction of gait freeze episodes.
Experiencing the ball's POV for ballistic sports BIBAFull-Text 128-133
  Kodai Horita; Hideki Sasaki; Hideki Koike; Kris M. Kitani
We place a small wireless camera inside an American football to capture the ball's point-of-view during flight to augment a spectator's experience of the game of football. To this end, we propose a robust video synthesis algorithm that leverages the unique constraints of fast spinning cameras to obtain a stabilized bird's eye point-of-view video clip. Our algorithm uses a coarse-to-fine image homography computation technique to progressively register images. We then optimize an energy function defined over pixel-wise color similarity and distance to image borders, to find optimal image seams to create panoramic composite images. Our results show that we can generate realistic videos from a camera spinning at speeds of up to 600 RPM.
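   The pairwise registration at the core of such a coarse-to-fine homography step might look as follows in Python with OpenCV; the calls are standard, but the authors' actual pipeline is considerably more involved.
      import cv2
      import numpy as np

      def register(prev_gray, cur_gray):
          """Estimate the homography mapping the previous frame into the current one."""
          orb = cv2.ORB_create(1000)
          k1, d1 = orb.detectAndCompute(prev_gray, None)
          k2, d2 = orb.detectAndCompute(cur_gray, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
          src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          return H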
Message bag: can assistive technology combat forgetfulness? BIBAFull-Text 134-137
  Christine Farion; Matthew Purver
Forgetfulness can be a cause for concern when it begins affecting our daily lives. It is associated with feelings of embarrassment and shame [1], and yet little attention is given to forgetfulness in a healthy population. Forgetfulness is a lived experience, something that happens in our day-to-day lives. We therefore propose the "message bag", carried throughout regular daily activities, which aims to alleviate cognitive load and thereby combat forgetfulness. We describe a prototype for a device that will be tested in the wild.
A tool for mental workload evaluation and adaptation BIBAFull-Text 138-141
  Inês Oliveira; Nuno Guimarães
This paper studies the use of mental workload patterns measured from electroencephalographic (EEG) signals in the adaptation of reading activities. Mental workload is associated with the user's feeling of (dis)comfort, based on the assumption that a higher mental workload involves greater discomfort.
   There is increasing interest in the use of physiological signals for the design of interactive systems, reinforcing the link between the application behavior and the user's emotional and mental states.
   Reading processes are pervasive in visual user interfaces. Previous work has integrated EEG signals in prototypical applications, designed to analyze reading tasks, and tried to identify the most relevant features for discriminating reading and non-reading mental states. In this paper we address the possibility of adjusting the reading conditions to the user's mental state.
   We start by analyzing the correlation between mental workload and the variation of relevant textual aspects of the interface, such as text size. We then developed applications that analyze the user's mental workload and adjust the speed of text presentation to it. The experiments were performed in a conventional HCI lab with non-clinical EEG equipment and setup; this is an explicit design condition, as the work targets ecological reading situations.
Investigation of fNIRS brain sensing as input to information filtering systems BIBAFull-Text 142-149
  Evan M. Peck; Daniel Afergan; Robert J. K. Jacob
Today's users interact with an increasing amount of information, demanding a similar increase in attention and cognition. To help cope with information overload, recommendation engines direct users' attention to content that is most relevant to them. We suggest that functional near-infrared spectroscopy (fNIRS) brain measures can be used as an additional channel to information filtering systems. Using fNIRS, we acquire an implicit measure that correlates with user preference, thus avoiding the cognitive interruption that accompanies explicit preference ratings. We explore the use of fNIRS in information filtering systems by building and evaluating a brain-computer movie recommender. We find that our system recommends movies that are rated higher than in a control condition, improves recommendations with increased interaction with the system, and provides recommendations that are unique to each individual.
Who are you?: A wearable face recognition system to support human memory BIBAFull-Text 150-153
  Yuzuko Utsumi; Yuya Kato; Kai Kunze; Masakazu Iwamura; Koichi Kise
Have you ever been unable to remember the name of a person you meet again? To circumvent such an awkward situation, it would be great to have a system that tells you the person's name in secret. In this paper, we propose a wearable system for real-time face recognition to support human memory. The contributions of our work are summarized as follows: (1) We discuss the design and implementation details of a wearable system capable of augmenting human memory by vision-based real-time face recognition. (2) We propose a two-step, coarse-to-fine recognition approach to push the execution time toward the socially acceptable limit of 900 ms. (3) In experiments, we evaluate the computational time and recognition rate: the proposed system recognized a face in 238 ms, with a cumulative recognition rate at rank 10 of 93.3%. The computational time with the coarse-to-fine search was 668 ms less than without it, and the results showed that the proposed system is able to recognize faces in real time.
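   The coarse-to-fine strategy can be sketched generically in Python: a cheap global descriptor prunes the database, and an expensive matcher ranks only the survivors. All functions here are placeholders, not the authors' implementation.
      import numpy as np

      def coarse_to_fine(query, database, cheap_desc, costly_score, keep=20):
          q = cheap_desc(query)
          # Coarse step: shortlist the `keep` nearest faces under the cheap descriptor.
          candidates = sorted(database,
                              key=lambda f: np.linalg.norm(cheap_desc(f) - q))[:keep]
          # Fine step: run the expensive matcher only on the shortlist.
          return max(candidates, key=lambda f: costly_score(query, f))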
The design of artifacts for augmenting intellect BIBAFull-Text 154-161
  Cassandra Xia; Pattie Maes
Fifty years ago, Doug Engelbart created a conceptual framework for augmenting human intellect in the context of problem-solving. We expand upon Engelbart's framework and use his concepts of process hierarchies and artifact augmentation for the design of personal intelligence augmentation (IA) systems within the domains of memory, motivation, decision making, and mood. This paper proposes a systematic design methodology for personal IA devices, organizes existing IA research within a logical framework, and uncovers underexplored areas of IA that could benefit from the invention of new artifacts.
Sonification of images for the visually impaired using a multi-level approach BIBAFull-Text 162-169
  Michael Banf; Volker Blanz
This paper presents a system that strives to give visually impaired persons direct perceptual access to images via an acoustic signal. The user explores the image actively on a touch screen and receives auditory feedback about the image content at the current position. The design of such a system involves two major challenges: what is the most useful and relevant image information, and how can as much information as possible be captured in an audio signal. We address both problems, and propose a general approach that combines low-level information, such as color, edges, and roughness, with mid- and high-level information obtained from Machine Learning algorithms. This includes object recognition and the classification of regions into the categories "man made" versus "natural". We argue that this multi-level approach gives users direct access to what is where in the image, yet it still exploits the potential of recent developments in Computer Vision and Machine Learning.
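   As a sketch of the low-level layer only, the following Python fragment maps the color under the user's finger to a tone; the hue-to-pitch and saturation-to-loudness mappings are assumptions, and the actual system layers edge, roughness, and object-recognition cues on top.
      import colorsys

      def pixel_to_tone(r, g, b):
          """Map the touched pixel's color to (frequency in Hz, amplitude 0..1)."""
          h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
          frequency = 220.0 + h * (880.0 - 220.0)  # hue spans roughly two octaves
          amplitude = s                            # saturated colors sound louder
          return frequency, amplitude

      print(pixel_to_tone(255, 0, 0))  # pure red -> (220.0, 1.0)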
Swimoid: a swim support system using an underwater buddy robot BIBAFull-Text 170-177
  Yu Ukai; Jun Rekimoto
In the field of sports and athletics, it is important for athletes to recognize their own performance in order to gain skills effectively. Although swimming is a popular lifelong sport all around the world, it is difficult for non-professional swimmers to understand how they swim. In other sports such as baseball, golf, and dancing, mirrors are used to examine the player's form, but a mirror is difficult to use for continuous sports such as swimming, running, and cycling. To solve this problem, we propose a buddy robot that can recognize, follow, and present information to the swimmer. We developed a swim support system called "Swimoid": a buddy robot that swims directly under the user and presents information on a display mounted on its main body. To follow the user, we apply image processing to the footage captured by two cameras mounted on the front and rear of the robot. Swimoid can augment the user's abilities in the underwater environment in two ways: first, it enables swimmers to recognize their swimming form in real time; second, it allows coaches on the pool side to give instructions to swimmers. These two functions aim at improving swimming technique, but we believe the buddy robot can also serve other purposes, such as entertaining novice swimmers; we therefore implemented a game function for getting familiar with water through touch interaction with the swimmer. User tests confirmed that the system works properly. Finally, we position our contribution in the research field by comparison with related work.
A system for practicing formations in dance performance supported by self-propelled screen BIBAFull-Text 178-185
  Shuhei Tsuchida; Tsutomu Terada; Masahiko Tsukamoto
A collapsed formation in a group dance greatly reduces the quality of the performance, even if the dancing is synchronized with the music. Therefore, learning the formation of a group dance is as important as learning its choreography. However, if a member cannot participate in practice, it is difficult for the remaining members to gain a sense of the proper formation. We propose a practice-support system that uses a self-propelled screen to practice formations smoothly even when a dance partner is absent. We developed a prototype of the system and investigated whether the sense of presence provided by two practice methods comes close to the sense obtained when dancing with a human. The results verified that dancing with a projected video came closest to the sense of dancing with a dancer, and that the trajectory information from dancing with a self-propelled robot was close to the trajectory information from dancing with a dancer; combining the two methods makes it possible to practice in situations similar to real ones. Furthermore, we investigated whether the self-propelled screen combines the advantages of both methods and found that it only retained the advantages of dancing with the projected video.
NeuroPlace: making sense of a place BIBAFull-Text 186-189
  Lulwah Al-Barrak; Eiman Kanjo
The ability to detect mental states, whether relaxed or stressed, would be useful for categorizing places according to their impact on our brains, among many other applications. Newly available, affordable dry-electrode devices make electroencephalography (EEG) headsets feasible to use outside the lab, for example in open spaces and shopping malls. The purpose of this pervasive experimental manipulation is to analyze brain signals in order to label outdoor places according to how users perceive them, with a focus on relaxing and stressful mental states, i.e., whether or not the user is experiencing tranquil brain waves when visiting a particular place. This paper demonstrates the potential of exploiting the temporal structure of EEG signals in making sense of outdoor places. The EEG signals induced by the place stimuli are analyzed and exploited to distinguish what we refer to as a place signature.
A monitoring device as assistive lifestyle technology: combining functional needs with pleasure BIBAFull-Text 190-193
  Florian Güldenpfennig; Geraldine Fitzpatrick
Assistive Technologies can be of enormous help for people with disabilities. Still, such supportive devices are often considered to be poor in aesthetics, leaving the person feeling stigmatised by the technology and resulting in a reduced usage and compliance. In this paper we report on a case study of a young person suffering from cerebral palsy and describe a wearable device, RemoteLogCam, that was designed to help him self-manage his hand spasms and at the same time provide his first opportunity to take his own photos. We call this an example of assistive lifestyle technologies (ALT), designed not only to assist people with special needs in a functional sense, but that also enhance the experience of such a device in a pleasing way. In this case, over the course of 6 months use to date, RemoteLogCam augmented our participant's own self-management of spasms and his creative and practical documentation needs.
Using RFID tags as reference for phone location and orientation in daily life BIBAFull-Text 194-197
  F. Wahl; O. Amft
This paper investigates a novel approach to obtain location and orientation annotation for smartphones in real-life recordings. We attached RFID tags to places where phones are located in daily life, such as pockets and backpacks. The RFID reader integrated in modern smartphones was used to continuously scan for registered tags. In a first evaluation across several full-day recordings and using nine locations, our approach achieved an accuracy of 80% when compared to a manual diary. Only 5.3% of all tags were missed. We conclude that RFID-based location and orientation tagging is a viable option to obtain ground truth reference for real-life activity recognition algorithm developments.
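   The core bookkeeping is a registry from tag IDs to annotated phone locations; a minimal Python sketch with made-up tag IDs and place names follows.
      # Registered tags at places where a phone is typically kept; IDs and
      # location/orientation labels here are invented for illustration.
      REGISTERED_TAGS = {
          "04:A2:3B:91": ("trouser pocket", "vertical"),
          "04:7F:0C:12": ("backpack", "horizontal"),
      }

      def annotate(scanned_tag_id):
          """Return (location, orientation) for a scan, or None if unregistered."""
          return REGISTERED_TAGS.get(scanned_tag_id)

      print(annotate("04:A2:3B:91"))  # -> ('trouser pocket', 'vertical')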
Recovering 3-D gaze scan path and scene structure from inside-out camera BIBAFull-Text 198-201
  Yuto Goto; Hironobu Fujiyoshi
First-Person Vision (FPV) is a wearable sensor that takes images from a user's visual field and interprets them, with available information about the user's head motion and gaze, through eye tracking [1]. Measuring the 3-D gaze trajectory of a user moving dynamically in 3-D space is interesting for understanding a user's intention and behavior. In this paper, we present a system for recovering 3-D scan path and scene structure in 3-D space on the basis of ego-motion computed from an inside-out camera. Experimental results show that the 3-D scan paths of a user moving in complex dynamic environments were recovered.
3D building reconstruction and thermal mapping in fire brigade operations BIBAFull-Text 202-205
  Christian Schönauer; Emanuel Vonach; Georg Gerstweiler; Hannes Kaufmann
Fire fighting remains a dangerous profession despite many recent technological and organizational measures. Sensors and technical systems can augment the performance of fire fighters to increase safety and efficiency during operation. An important aspect in that context is the awareness of location, structure and thermal properties of the environment.
   This paper focuses on the design and development of a mobile system that reconstructs a 3D model of a building's interior structure in real time and fuses the visualization with the image of a thermal camera. In addition, the position and viewing direction of the fire fighter within the model are determined, and a thermal map can be generated from the gathered data, which could help an operational commander guide his men during a mission.
   First tests with our system in different situations showed good results: it was able to reconstruct several larger scenes and create thermal maps of them.
Communication pedometer: a discussion of gamified communication focused on frequency of smiles BIBAFull-Text 206-212
  Yukari Hori; Yutaka Tokuda; Takahiro Miura; Atsushi Hiyama; Michitaka Hirose
Communication skills are essential in our everyday lives. Yet, it can be difficult for people with communication disorders to improve these skills without professional help. Quantifying communication and providing feedback advice in an automated manner would significantly improve that process. Therefore, we aim to propose a method to monitor communication that employs life-logging technology to evaluate parameters related to communication skills. In our study, we measured frequency of smiles as a metric for smooth communication. In addition, smiling can improve happiness even if a smile is mimicked. Ultimately, we provided feedback results to users in a gamified form and investigated the effects of feedback on communication.
A smile/laughter recognition mechanism for smile-based life logging BIBAFull-Text 213-220
  Kurara Fukumoto; Tsutomu Terada; Masahiko Tsukamoto
Most situations that cause people to smile are important and treasured events that happen in front of other people. In life-logging systems that record everything with wearable cameras and microphones, it is difficult to extract the important events from a large amount of recordings. In this research, we design and implement a smile-based life-logging system that uses smiles and laughter to index the interesting or enjoyable events in a recorded video. Our system features an original smile/laughter recognition device using photo interrupters that is comfortable enough for daily use, and we propose an algorithm that detects smiles and laughter separately by threshold-based clustering. The main challenge is that, since the reasons people smile and laugh are quite diverse, the system has to detect smiles and laughter as different events. Evaluation results showed that our mechanism achieved 73% and 94% accuracy in detecting smiles and laughter, respectively, while actual use of the system showed that it can accurately detect interesting scenes from a recorded life log.
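   Threshold-based separation of smiles from laughter can be sketched as below in Python, assuming a single deflection signal from the photo interrupters; the thresholds are illustrative, not the paper's calibrated values.
      def label_events(signal, t_smile=0.3, t_laugh=0.7):
          """Return one label per sample: 'none', 'smile', or 'laugh'."""
          labels = []
          for x in signal:
              if x >= t_laugh:
                  labels.append("laugh")  # strong deflection: laughter
              elif x >= t_smile:
                  labels.append("smile")  # moderate deflection: smiling
              else:
                  labels.append("none")
          return labels

      print(label_events([0.1, 0.4, 0.9, 0.5]))
      # -> ['none', 'smile', 'laugh', 'smile']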
A system for visualizing human behavior based on car metaphors BIBAFull-Text 221-228
  Hiroaki Sasaki; Tsutomu Terada; Masahiko Tsukamoto
Many accidents, such as pedestrians bumping into each other, occur in crowded places. One reason is that it is difficult for people to predict the behavior of others. Cars, on the other hand, implicitly communicate with other cars by presenting their contexts through equipment such as brake lights and turn signals. In this paper, we propose a system that visualizes the user's context using information presentation methods based on those found in cars, such as wearable LEDs acting as brake lights that can be seen by surrounding people. Evaluations of our prototype system confirmed that our method presents the user's context visually and intuitively. In addition, we evaluated how the mounting position of the wearable devices affects their visibility.
Geometrically consistent mobile AR for 3D interaction BIBAFull-Text 229-230
  Hikari Uchida; Takashi Komuro
In this study, we propose a method to present an image that maintains geometric consistency between the actual scene outside the mobile display and the camera image. We expect that this makes interaction with virtual objects through the mobile display more intuitive and improves operability. Cameras mounted on the front and back of the mobile display provide the user's face position and the distance to the subject; using this information, it is possible to present an image that maintains geometric consistency between the inside and outside of the display depending on the user's viewpoint.
Muscle-propelled force feedback: bringing force feedback to mobile devices using electrical stimulation BIBAFull-Text 231-232
  Pedro Lopes; Lars Butzmann; Patrick Baudisch
We propose mobile force feedback devices based on actuating the user's muscles using electrical stimulation. Since this allows us to eliminate exoskeletons and motors and to reduce battery size, our approach results in devices that are substantially smaller and lighter than traditional motor-based devices, and thus suitable for use on the go. We present a simple prototype mounted on the back of a mobile phone. It actuates the user's forearm muscles via four electrodes, causing the muscles to contract involuntarily, so that the user tilts the device sideways. As users resist this motion with their other arm, they perceive force feedback. We demonstrate the interaction with three interactive video games in which our approach to mobile force feedback provides a richer gaming experience.
Augmented reality using a 3D motion capturing suit BIBAFull-Text 233-234
  Ionut Damian; Mohammad Obaid; Felix Kistler; Elisabeth André
In this paper, we propose an approach that immerses the user in an Augmented Reality (AR) environment using an inertial motion capturing suit and a head-mounted display system. The proposed approach allows full-body interaction with the AR environment in real time and does not require any markers or cameras.
Virtual prototyping of a spatial audio interface for obstacle avoidance using image processing BIBAFull-Text 235-236
  Yoko Nakanishi; Yasuto Nakanishi
In this paper, we describe a process for prototyping a spatial audio user interface using virtual simulation. The interface is designed to help mobile users who are in motion avoid physical dangers while their eyes are otherwise engaged. Our prototype system uses a simple form of non-speech spatial audio driven by optical flow from a head-mounted camera. This paper describes our prototyping process, covering candidate image processing and audio mappings explored via 3D virtual simulation.
Optimal selection of electrodes for muscle electrical stimulation using twitching motion measurement BIBAFull-Text 237-238
  Manami Katoh; Narihiro Nishimura; Maki Yokoyama; Taku Hachisu; Michi Sato; Shogo Fukushima; Hiroyuki Kajimoto
Muscle electrical stimulation envisions a wide range of human augmentation applications, but these applications share the issue of optimal electrode placement. In this paper, we propose a method to select the optimal electrode placement for finger flexion using twitching motion measurement. We delivered electrical stimulation producing a twitching motion and measured the resulting acceleration. By summing and averaging the acceleration waveforms and taking the difference between the maximum and minimum values, we measured the contribution of the electrical stimulation and used it to select the optimal electrode pair for the movement. A preliminary experiment with four electrodes showed the feasibility of our method.
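   The selection criterion described above translates almost directly into code; in this Python sketch, each electrode pair is scored by the peak-to-peak value of its averaged twitch-acceleration waveform, and the data layout is an assumption.
      import numpy as np

      def pair_score(trials):
          """trials: (n_repetitions, n_samples) acceleration traces for one pair."""
          mean_waveform = trials.mean(axis=0)  # sum and average the repetitions
          # Peak-to-peak (max - min) of the average measures the contribution.
          return float(mean_waveform.max() - mean_waveform.min())

      def best_pair(recordings):
          """recordings: {pair_name: (n_reps, n_samples) array} -> best pair."""
          return max(recordings, key=lambda p: pair_score(recordings[p]))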
Caruso: augmenting users with a tenor's voice BIBAFull-Text 239-240
  Jochen Feitsch; Marco Strobel; Christian Geiger
In this poster paper we describe an ongoing project that aims to provide users with the experience of singing like a tenor of the 20th century. We combine 3D body tracking, face recognition and morphing, sound synthesis, and 3D character rendering into an interactive media application. Although this project is still at a preliminary stage, first prototypical results encourage us to continue this work towards a media installation for a science fair.
Towards participatory design for contextual visualization in education using augmented reality x-ray BIBAFull-Text 241
  Marc Ericson C. Santos; Goshiro Yamamoto; Mitsuaki Terawaki; Jun Miyazaki; Takafumi Taketomi; Hirokazu Kato
We propose Augmented Reality (AR) x-ray as an educational tool for contextual visualization -- presenting virtual information in the rich context of a real environment. Teachers and students evaluated a state-of-the-art implementation of AR x-ray. Results show that realism, visibility, and perception of depth in AR x-ray are not significantly different from viewing 3D models with no occlusion cues. Moreover, teachers perceive AR x-ray as useful.
Evaluation of a tactile device for augmentation of audiovisual experiences with a pseudo heartbeat BIBAFull-Text 242
  Narihiro Nishimura; Taku Hachisu; Michi Sato; Shogo Fukushima; Hiroyuki Kajimoto
The impression that the viewer has of characters is an important factor affecting the viewer's opinion of audiovisual media, such as movies, television and video games. In particular, when we feel affection toward characters, we sometimes go so far as to identify ourselves as one of them, leading to extreme immersion in the content of the media. Therefore, content technology could control affective feelings towards characters and create an immersive environment. We propose a device that fosters the user's affection by enhancing their positive feelings toward characters in the media content. Previous studies have shown that emotional or physiological states can be altered by the visual and auditory presentation of false heartbeats [1, 2, 3]. However, if these techniques are applied to audiovisual media such as movies, television, or video games, the audio and visual heartbeat cues may interfere with and pollute the audiovisual content.
A real-time gait improvement tool using a smartphone BIBAFull-Text 243
  Hirotaka Kashihara; Hiroki Shimizu; Hiroyoshi Houchi; Masato Yoshimi; Tsutomu Yoshinaga; Hidetsugu Irie
Recent handheld devices are equipped with various sensors and, as computers have become smaller and faster, provide many functions. Smartphones currently occupy a significant position among these multifunctional handheld devices, and one of their most notable features is that users carry them whenever they leave home. Analyzing the motion measured by such a device can be useful for improving lifestyle habits. Gait deserves particular focus as a representative behavior of daily living, as shown by the many exercises intended to improve it.
Applying augmented reality to industrial settings BIBAFull-Text 244
  Elina Vartiainen; Peder Boberg; Oskar Qvarnström; Jonas Brönmark
State-of-the-art mobile devices containing various sensors and interaction technologies have enabled the development of novel solutions for people working in industrial settings. In particular, introducing augmented reality into the mobile device domain could help maintenance engineers while they perform work tasks in a factory. This poster presents and discusses two concepts that explore how maintenance engineers could use augmented reality to view additional information related to equipment found in a factory setting.