
Proceedings of the 2016 Augmented Human International Conference

Fullname: Proceedings of the 7th Augmented Human International Conference
Editors: Albrecht Schmidt; Tsutomu Terada; Woontack Woo; Pranav Mistry; Jean-Marc Seigneur; Jose M. Hernandez-Munoz; Paul McCullagh
Location: Geneva, Switzerland
Dates: 2016-Feb-25 to 2016-Feb-27
Publisher: ACM
Standard No: ISBN: 978-1-4503-3680-2; ACM DL: Table of Contents; hcibib: AH16
Papers: 51
Links: Conference Website
Generating Materials for Augmented Reality Applications using Natural Language BIBAFull-Text 1
  Sebastian Buntin
In this paper, I present a novel method for parametrizing the BRDF (and other BxDFs) as well as the texture map of a material by using natural language. The visual properties of a material can be described by the user in rather complex phrases in real time. These phrases are then parsed and mapped to BRDF and texture parameters. This provides an easy way to create and specify material representations for various applications. In the context of this paper, I focus on the application in augmented reality.
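For illustration, a phrase-to-parameter mapping of this kind could be sketched as follows; the keyword table, parameter names, and value ranges are hypothetical, as the abstract does not specify the paper's grammar or BRDF model:

```python
# Hypothetical mapping from material adjectives to BRDF parameters.
# The keyword table and parameter ranges are illustrative only.
KEYWORDS = {
    "shiny":    {"specular": 0.9, "roughness": 0.1},
    "dull":     {"specular": 0.1, "roughness": 0.8},
    "metallic": {"metalness": 1.0},
    "rough":    {"roughness": 0.9},
}

def phrase_to_brdf(phrase):
    """Parse a free-form phrase and accumulate BRDF parameters."""
    params = {"specular": 0.5, "roughness": 0.5, "metalness": 0.0}
    for word in phrase.lower().split():
        params.update(KEYWORDS.get(word, {}))
    return params

print(phrase_to_brdf("a shiny metallic surface"))
# {'specular': 0.9, 'roughness': 0.1, 'metalness': 1.0}
```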
CASPER: A Haptic Enhanced Telepresence Exercise System for Elderly People BIBAFull-Text 2
  Azusa Kadomura; Akira Matsuda; Jun Rekimoto
Although the necessity and importance of exercise support for elderly people is widely recognized, the lack of skilled instructors often physically limits such activities. Remote exercise systems can be a solution to this problem because they may be able to support exercise activities even when instructors and participants are in separate locations. However, when simply using normal video-conferencing systems, instructors and participants have difficulty understanding each other's situation, particularly during guided physical actions. In addition, remote exercise systems cannot support the adjustment of each user's position, a task that is quite naturally performed in normal exercise activities. Our system, called CASPER, solves these problems by proposing a mirror-like image composition method in which all the participants and the instructor are shown on the same screen so that both sides can understand the situation clearly. We also introduce an airy haptic device that remotely sends tactile feedback to further enhance sensations. In this paper, we describe the system design and its evaluation. The evaluation confirms that our system can effectively allow users to perform exercise activities even at remote locations.
Workspace Awareness in Collaborative AR using HMDs: A User Study Comparing Audio and Visual Notifications BIBAFull-Text 3
  Marina Cidota; Stephan Lukosch; Dragos Datcu; Heide Lukosch
For most professional tasks nowadays, it is necessary to work in teams. Such collaboration often requires the exchange of visual context-related information among team members. For so-called shared workspace collaboration, awareness of other people's activities is of utmost importance. We have developed an augmented reality (AR) framework to support visual communication between a team of two people who are virtually co-located. We refer to these people as the remote user, who uses a laptop, and the local user, who wears a head-mounted display (HMD) with an RGB camera. The remote user can support the local user in solving a spatial problem by providing instructions as virtual objects in the view of the local user. For placing virtual objects in the shared workspace, we use a state-of-the-art algorithm for markerless localization and mapping. In this paper, we report on a user study that explores how automatic audio and visual notifications about the remote user's activities affect the collaboration. The results show that in our current implementation, visual notifications are preferred over audio or no notifications, independently of the task's level of difficulty.
Predicting Grasps with a Wearable Inertial and EMG Sensing Unit for Low-Power Detection of In-Hand Objects BIBAFull-Text 4
  Marian Theiss; Philipp M. Scholl; Kristof Van Laerhoven
Detecting the task at hand can often be improved when it is also known what object the user is holding. Several sensing modalities have been suggested to identify handheld objects, from wrist-worn RFID readers to cameras. A critical obstacle to using such sensors, however, is that they tend to be too power-hungry for continuous usage. This paper proposes a system that detects grasping using first inertial sensors and then electromyography (EMG) on the forearm, in order to selectively activate the object identification sensors. This three-tiered approach therefore only attempts to identify in-hand objects once it is known that a grasp has occurred. Our experiments show that high recall can be obtained for grasp detection, 95% on average across participants, with the grasping of lighter and smaller objects clearly being more difficult.
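The three-tiered gating described above could look roughly like this sketch; the thresholds and sensor interfaces are placeholders, not the authors' implementation:

```python
# Illustrative three-tiered pipeline: cheap inertial check first,
# EMG confirmation second, power-hungry object identification last.
import statistics

ACC_VAR_THRESHOLD = 0.5   # placeholder: wrist-motion variance gate
EMG_RMS_THRESHOLD = 0.2   # placeholder: forearm-muscle activation gate

def emg_rms(samples):
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def process_window(acc_window, emg_window, identify_object):
    """Only wake the expensive identification sensor after both the
    inertial and the EMG stage suggest a grasp."""
    if statistics.variance(acc_window) < ACC_VAR_THRESHOLD:
        return None                      # tier 1: no hand motion
    if emg_rms(emg_window) < EMG_RMS_THRESHOLD:
        return None                      # tier 2: no muscle activation
    return identify_object()             # tier 3: e.g. RFID or camera
```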
Exploring Eye-Tracking-Driven Sonification for the Visually Impaired BIBAFull-Text 5
  Michael Dietz; Maha El Garf; Ionut Damian; Elisabeth André
Most existing sonification approaches for the visually impaired restrict the user to the perception of static scenes by performing sequential scans and transformations of visual information into acoustic signals. This takes away the user's freedom to explore the environment and to decide which information is relevant at a given point in time. As a solution, we propose an eye tracking system that allows the user to choose which elements of the field of view should be sonified. More specifically, we enhance sonification approaches for color, text, and facial expressions with eye tracking mechanisms. To find out how visually impaired people might react to such a system, we applied a user-centered design approach. Finally, we explored the effectiveness of our concept in a user study with seven visually impaired persons. The results show that eye tracking is a very promising input method for controlling the sonification, but the large variety of visual impairment conditions restricts the applicability of the technology.
Feedback for Smooth Pursuit Gaze Tracking Based Control BIBAFull-Text 6
  Jari Kangas; Oleg Špakov; Poika Isokoski; Deepak Akkil; Jussi Rantala; Roope Raisamo
Smart glasses, like Google Glass or Microsoft HoloLens, can be used as interfaces that expand human perceptual, cognitive, and actuation capabilities in many everyday situations. Conventional manual interaction techniques, however, are not convenient with smart glasses, whereas eye trackers can be built into the frames. This makes gaze tracking a natural input technology for smart glasses. Not much is known about interaction techniques for gaze-aware smart glasses. This paper adds to this knowledge by comparing feedback modalities (visual, auditory, haptic, none) in a continuous adjustment technique for smooth pursuit gaze tracking. Smooth-pursuit-based gaze tracking has been shown to be a flexible and calibration-free method for spontaneous interaction situations. Continuous adjustment, on the other hand, is a technique that is needed in many everyday situations, such as adjusting the volume of a sound system or the intensity of a light source. We measured user performance and preference in a task where participants matched the shades of two gray rectangles. The results showed no statistically significant differences in performance, but clear user preference and acceptability for haptic and audio feedback.
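Smooth pursuit interfaces commonly select the on-screen target whose motion correlates best with the recorded gaze trajectory; a minimal sketch of that matching step (our illustration, not the authors' code), requiring Python 3.10+ for statistics.correlation:

```python
# Minimal Pursuits-style matching: the target whose motion correlates
# best with the gaze trajectory is taken as selected.
from statistics import correlation  # Pearson's r, Python 3.10+

def select_target(gaze_x, gaze_y, targets, threshold=0.8):
    """gaze_x/gaze_y: gaze samples; targets: name -> (xs, ys) of the
    target's on-screen positions over the same time window."""
    best, best_r = None, threshold
    for name, (tx, ty) in targets.items():
        r = min(correlation(gaze_x, tx), correlation(gaze_y, ty))
        if r > best_r:
            best, best_r = name, r
    return best  # None if no target correlates above the threshold
```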
Smart Handbag as a Wearable Public Display -- Exploring Concepts and User Perceptions BIBAFull-Text 7
  Ashley Colley; Minna Pakanen; Saara Koskinen; Kirsi Mikkonen; Jonna Häkkilä
Wearable computing has so far focused mostly on systems employing small displays, or no displays at all. In contrast, we explore the possibilities of a smart handbag that functions as a wearable public display, focusing on user perceptions of different design concepts. Our prototype smart handbag explores functionalities such as changing the bag's appearance to match clothing, displaying textual information, creating a see-through perception that enables items inside the bag to be seen, and enabling interaction with items inside the bag. We report on the findings from a Wizard-of-Oz based user study, which included the users walking in public with the smart handbag. The smart handbag concepts were positively received, especially from the utilitarian point of view, but issues related to privacy were raised. Key insights include the need for a 'handbag mode' for smartphones placed within the smart handbag and the importance of evaluating such wearables in real-world contexts.
A Lifelog System for Detecting Psychological Stress with Glass-equipped Temperature Sensors BIBAFull-Text 8
  Hiroki Yasufuku; Tsutomu Terada; Masahiko Tsukamoto
Stress is extremely harmful to one's health. It is important to know which situations or events cause us to feel stressed: if we know the factors behind the stress, we can take corrective action. However, it is hard to perceive stress in everyday life by ourselves. Automatically detecting stress from biological information is one method for dealing with this. Stress is generally detected by using physiological indices such as pulse, brain activity, and breathing in order to ensure universality and accuracy. This biological information reacts to sudden stressors, not chronic stressors. However, it is difficult to use measuring devices for such data in everyday life because the devices require expertise to operate and are expensive. Our goal in this study is to develop a lifelog system featuring glass-equipped sensors that can be used on a daily basis. We detect stress by examining nasal skin temperature, which is decreased by sudden stressors. In order to investigate the recognition accuracy of the proposed system, we performed experiments in situations where stress is felt. Results showed that the system can distinguish factors other than stress from the change in nasal skin temperature with sufficient precision. Moreover, we investigated the optimum locations for attaching the temperature sensors to ensure both reactivity and comfort. We also implemented an application for analyzing the measured data. The application calculates the times at which a user feels stress by analyzing the measured data and extracts stressful scenes from a video recorded from the user's point of view.
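The core detection idea, a drop in nasal skin temperature relative to a longer-term baseline, could be sketched as follows; the window lengths and the 0.3 °C threshold are illustrative guesses, not values from the paper:

```python
from collections import deque

class NasalStressDetector:
    """Flags a likely stressor when the recent nasal temperature falls
    a fixed amount below a longer-term baseline."""
    def __init__(self, baseline_len=600, recent_len=30, drop_c=0.3):
        self.baseline = deque(maxlen=baseline_len)  # long-term window
        self.recent = deque(maxlen=recent_len)      # short-term window
        self.drop_c = drop_c                        # required drop, deg C

    def update(self, temp_c):
        self.baseline.append(temp_c)
        self.recent.append(temp_c)
        base = sum(self.baseline) / len(self.baseline)
        now = sum(self.recent) / len(self.recent)
        return (base - now) >= self.drop_c   # True -> likely stressor
```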
Finding Motifs in Large Personal Lifelogs BIBAFull-Text 9
  Na Li; Martin Crane; Cathal Gurrin; Heather J. Ruskin
The term visual lifelogging describes the process of tracking personal activities by using wearable cameras. A typical example of such a wearable camera is Microsoft's SenseCam, which can capture a vast personal archive each day. A significant challenge is to organise and analyse such large volumes of lifelogging data. State-of-the-art techniques use supervised machine learning to search and retrieve useful information, which requires prior knowledge about the data. We argue that these so-called rule-based and concept-based techniques may not offer the best solution for analysing large and unstructured collections of visual lifelogs. Treating lifelogs as time series data, we study in this paper how motif techniques can be used to identify repeating events. We apply the Minimum Description Length (MDL) method to extract multi-dimensional motifs in time series data. Our initial results suggest that motif analysis provides a useful probe for the identification and interpretation of visual lifelog features, such as frequent activities and events.
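At its simplest, motif discovery finds the best-repeating pair of subsequences in a series; the brute-force sketch below shows that core step (the paper's MDL-based, multi-dimensional method is considerably more involved):

```python
def find_motif(series, m):
    """Return the start indices of the closest non-overlapping pair of
    length-m subsequences (a naive O(n^2 m) motif search)."""
    def dist(i, j):
        return sum((series[i + k] - series[j + k]) ** 2 for k in range(m))
    n, best, pair = len(series), float("inf"), None
    for i in range(n - m + 1):
        for j in range(i + m, n - m + 1):   # enforce non-overlap
            d = dist(i, j)
            if d < best:
                best, pair = d, (i, j)
    return pair

print(find_motif([0, 1, 2, 9, 0, 1, 2, 5], 3))  # -> (0, 4)
```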
Supporting Precise Manual-handling Task using Visuo-haptic Interaction BIBAFull-Text 10
  Akira Nomoto; Yuki Ban; Takuji Narumi; Tomohiro Tanikawa; Michitaka Hirose
Precise manual-handling skills are necessary to create art and to paint models. However, these skills are difficult to learn. Some research has approached this issue using mechanical devices. However, mechanical systems have high costs and limit the user's degrees of freedom. In our research, we propose a system using visuo-haptics to support accurate work without using any mechanical devices. We build on the principle that when a visuo-haptic force is generated on a user's hand in the direction opposite to a target path, the user reflexively moves her/his hand in the correct direction to repel the force. Based on this idea, we created a system that can modify users' hand movements by showing a dummy hand in a mixed reality display, which supports precise manual-handling tasks. To demonstrate this, we performed experiments with a video see-through system that uses a head-mounted display (HMD). The results showed that an expansion of the deviation between the target route and the actual hand position improved accuracy by up to 50%. We also saw a tendency for a larger expansion to give a greater improvement in quality, but to slow down working speed at the same time. According to the experimental results, a gain of about 2.5 gives an ideal balance between working precision and drawing speed.
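The reported gain of about 2.5 suggests a simple expansion of the deviation between the target path and the real hand when placing the dummy hand; a sketch of that computation (our reading of the method, with hypothetical names):

```python
# Expand the deviation between the target path and the real hand,
# and render the dummy hand at the expanded position.
GAIN = 2.5  # the paper reports ~2.5 as a good precision/speed balance

def dummy_hand_position(hand, target):
    """hand, target: (x, y) positions in display coordinates."""
    dx, dy = hand[0] - target[0], hand[1] - target[1]
    return (target[0] + GAIN * dx, target[1] + GAIN * dy)

# The user sees the error magnified and reflexively corrects toward
# the target, improving accuracy at the real (unexpanded) scale.
print(dummy_hand_position((1.2, 0.9), (1.0, 1.0)))  # (1.5, 0.75)
```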
Success Imprinter: A Method for Controlling Mental Preparedness Using Psychological Conditioned Information BIBAFull-Text 11
  Kyosuke Futami; Tsutomu Terada; Masahiko Tsukamoto
It is difficult to control one's mental preparedness in important situations such as sports performances. We propose Success Imprinter, a new mental control system that enables users to strengthen their mental preparedness simply by presenting information. With Success Imprinter, users can strengthen their mental preparedness more easily than with previous methods. We utilize the concept of conditioning, which is a learning principle. Our system repeatedly presents users with a stimulus upon successful performances in order to strengthen their mental preparedness. An evaluation confirmed that Success Imprinter has a consistent effect on users' mental preparedness and on the results of a darts competition, although its effect differed between users. Moreover, we discuss a method to identify the effect on each user on the basis of their individual characteristics. From these results, we implemented a prototype that presents conditioned information automatically.
A Dance Performance Environment in which Performers Dance with Multiple Robotic Balls BIBAFull-Text 12
  Shuhei Tsuchida; Tsutomu Terada; Masahiko Tsukamoto
In recent years, as robotics technology has progressed, various mobile robots have been developed to dance with humans. However, up until now there has been no system for interactively creating a performance using multiple mobile robots, so such performances remain difficult. In this study, we construct a mechanism by which a performer can interactively create a performance while he/she considers the correspondence between his/her motion and the mobile robots' movement and light. Specifically, we developed a system that enables performers to freely create performances with multiple robotic balls that can move omnidirectionally and have full-color LEDs. Performers can design both the movements of the robotic balls and the colors of the LEDs. To evaluate the effectiveness of the system, we had four performers use the system to create and demonstrate performances. Moreover, we confirmed that the system performed reliably in a real environment.
An Activity Recognition Method by Measuring Circumference of Body Parts BIBAFull-Text 13
  Kentaro Tsubaki; Tsutomu Terada; Masahiko Tsukamoto
This paper investigates an activity recognition method in which the user wears stretch sensors made of conductive fabric on several parts of the body. The system recognizes actions by measuring the circumference of body parts. The electrical resistivity of our stretch sensors changes in accordance with their expansion/shrinkage. The strengths of this approach are that the user's appearance is not interfered with and that the sensor is less expensive than alternatives such as myoelectric sensors. The results of an evaluation confirmed that our method can recognize eight contexts by measuring the circumference of four body parts (abdominal region, waist, wrist, ankle) with 99.92% accuracy.
"DreamHouse": NUI-based Photo-realistic AR Authoring System for Interior Design BIBAFull-Text 14
  Jinwoo Park; Sung Sil Kim; Hyerim Park; Woontack Woo
This paper proposes a system that gives users an enhanced interior designing and authoring tool through augmented reality (AR). The proposed system, which we refer to as DreamHouse, focuses on providing natural user interaction and a realistic AR experience by enabling the following features: 1) allowing users to use bare-hand interaction, via an attached egocentric RGB-D camera, when making detailed adjustments to virtual objects' size, location, and orientation; 2) rendering every virtual object in consideration of the environment and lighting conditions, creating photo-realistic scenes that help users have an immersive and realistic interior designing experience. As a result, DreamHouse allows users to freely and easily interact with virtual objects in physical space using bare hands and gives users an immersive and realistic AR authoring experience through photo-realistic rendering.
Empathizing Audiovisual Sense Impairments: Interactive Real-Time Illustration of Diminished Sense Perception BIBAFull-Text 15
  Fabian Werfel; Roman Wiche; Jochen Feitsch; Christian Geiger
This paper addresses the challenge of empathizing with audiovisual sense impairments, e.g. in the case of elderly people or people with special needs. The developed system aims at providing users with the experience of diminished sight and hearing. We designed a system that can simulate several disease patterns, focusing on, but not limited to, limitations experienced by older people.
   The system applies visual and auditory filters in real time to the user's actual perception by combining two cameras with a head-mounted display for stereoscopic view and a pair of microphones with equalized headphones for spatial hearing. We tested the system informally with expert and non-expert users. They qualified it as useful for enhancing empathy for audiovisual sense impairments, motivating further development of this idea.
Laplacian Vision: Augmenting Motion Prediction via Optical See-Through Head-Mounted Displays BIBAFull-Text 16
  Yuta Itoh; Jason Orlosky; Kiyoshi Kiyokawa; Gudrun Klinker
Naïve physics [7], or folk physics, is our ability to understand physical phenomena. We regularly use this ability in life to avoid collisions in traffic, to follow a tennis ball and time the return shot, or while working in dynamic industrial settings. Though this skill improves with practice, it is still imperfect, which leads to mistakes and misjudgments in time-sensitive tasks. People still often miss a tennis shot, which might cause them to lose the match, or fail to avoid a car or pedestrian, which can lead to injury or even death.
   As a step towards reducing these errors in human judgement, we present Laplacian Vision (LV), a vision augmentation system that assists the human ability to predict future trajectory information. By tracking real-world objects and estimating their trajectories, we can improve a user's prediction of the landing spot of a ball or the path of an oncoming car. We have designed a system that can track a flying ball in real time, predict its future trajectory, and visualize it in the user's field of view. The system is also calibrated to account for end-to-end delays so that the trajectory appears to emanate forward from the moving object. We also conduct a user study in which 29 subjects predict an object's landing spot, and show that prediction accuracy improves 3-fold using LV.
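For a ball in free flight, the future trajectory can be rolled forward from the tracked position and velocity with simple projectile physics; a sketch under that assumption (drag ignored; the actual system also compensates for end-to-end delay):

```python
# Ballistic trajectory prediction: roll the tracked state forward
# until the ball returns to ground height, and record the path.
G = 9.81  # gravitational acceleration, m/s^2

def predict_trajectory(pos, vel, dt=0.01, ground_z=0.0):
    """pos, vel: (x, y, z) in metres and m/s. Returns the sampled path."""
    x, y, z = pos
    vx, vy, vz = vel
    path = [(x, y, z)]
    while z >= ground_z or vz > 0:
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        vz -= G * dt
        path.append((x, y, z))
    return path  # path[-1] approximates the landing spot

landing = predict_trajectory((0.0, 0.0, 1.0), (3.0, 0.0, 4.0))[-1]
```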
Enhancing Effect of Mediated Social Touch between Same Gender by Changing Gender Impression BIBAFull-Text 17
  Keita Suzuki; Masanori Yokoyama; Yuki Kinoshita; Takayoshi Mochizuki; Tomohiro Yamada; Sho Sakurai
This study realizes a method to enhance the effect of touch in remote same-gender communication by changing the gender impression with a voice changer during telecommunication. We focused on touch in communication. Although psychological studies have revealed that touch has various positive effects, such as triggering altruistic behavior, these effects are restrained in some cases, especially in same-gender communication, because touch between persons of the same gender tends to cause unpleasant feelings. We aimed to address this problem and utilize these effects for telecommunication purposes, such as remote medical care and remote education, by hypothesizing that the use of telepresence could change people's gender impression, reduce this unpleasantness, and enhance the effect of touch. We tested the effectiveness of this method in a situation in which a male operator asked male participants to perform a monotonous task; the results showed that a touch by the male operator, whose voice was changed to sound female, could reduce the boredom of the task and improve friendliness toward the operator.
TalkingCards: Using Tactile NFC Cards for Accessible Brainstorming BIBAFull-Text 18
  Georg Regal; Elke Mattheiss; David Sellitsch; Manfred Tscheligi
Few accessible methods exist to support brainstorming sessions for blind and visually impaired users. We present an approach consisting of a smartphone application and tangible near field communication (NFC) cards enhanced with an additional layer of different tactile materials. With the smartphone application, blind and visually impaired users can write information, such as ideas in brainstorming sessions, on NFC cards using speech recognition or voice recording. Text stored on the cards can also be read out loud by the application using text-to-speech synthesis. The approach was developed in a user-centered design process with strong participation of blind and visually impaired users. It is low-cost, lightweight, and easily transportable. Evaluation results show that the approach is fun, easy to use, and appreciated by the users. It can be used by blind and visually impaired people for tasks that require spatial arrangement of information, thus supporting accessible brainstorming sessions.
Usability and Cost-effectiveness in Brain-Computer Interaction: Is it User Throughput or Technology Related? BIBAFull-Text 19
  Athanasios Vourvopoulos; Sergi Bermudez i Badia
In recent years, Brain-Computer Interfaces (BCIs) have been steadily gaining ground in the market, used either as an implicit or explicit input method in computers for accessibility, entertainment, or rehabilitation. Past research in BCI has heavily neglected the human aspect in the loop, focusing mostly on the machine layer. Further, due to the high cost of current BCI systems, many studies rely on low-cost and low-quality equipment, making it difficult to provide significant advancements in physiological computing. Open-source projects are offered as alternatives to expensive medical equipment. Nevertheless, the cost-effectiveness of such systems is still unclear, as is whether they can deliver the same level of experience as their more expensive counterparts. In this paper, we demonstrate that effective BCI interaction in a motor-imagery BCI paradigm can be accomplished without high-end/high-cost devices, by analyzing and comparing EEG systems ranging from open-source devices to medically certified systems.
HearThere: Networked Sensory Prosthetics Through Auditory Augmented Reality BIBAFull-Text 20
  Spencer Russell; Gershon Dublon; Joseph A. Paradiso
In this paper, we present a vision for scalable indoor and outdoor auditory augmented reality (AAR), as well as HearThere, a wearable device and infrastructure demonstrating the feasibility of that vision. HearThere preserves the spatial alignment between virtual audio sources and the user's environment, using head tracking and bone conduction headphones to achieve seamless mixing of real and virtual sounds. To scale between indoor, urban, and natural environments, our system supports multi-scale location tracking, using fine-grained (20 cm) Ultra-WideBand (UWB) radio tracking when in range of our infrastructure anchors and mobile GPS otherwise. In our tests, users were able to navigate through an AAR scene and pinpoint audio source locations down to 1 m. We found that bone conduction is a viable technology for producing realistic spatial sound, and show that users' audio localization ability is considerably better in UWB coverage zones than with GPS alone. HearThere is a major step towards realizing our vision of networked sensory prosthetics, in which sensor networks serve as collective sensory extensions into the world around us. In our vision, AAR would be used to mix spatialized data sonification with distributed, live-streaming microphones. In this concept, HearThere promises a more expansive perceptual world, or umwelt, where sensor data becomes immediately attributable to extrinsic phenomena, externalized in the wearer's perception. We are motivated by two goals: first, to remedy the fractured state of attention caused by existing mobile and wearable technologies; and second, to bring the distant or often invisible processes underpinning a complex natural environment more directly into human consciousness.
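The multi-scale tracking handover could be as simple as preferring a UWB fix whenever an infrastructure anchor responds and falling back to GPS otherwise; a sketch with hypothetical interfaces:

```python
# Illustrative position-source selection: fine-grained UWB when the
# infrastructure anchors respond, coarse GPS otherwise.
def get_position(uwb, gps):
    """uwb/gps: objects with .fix() -> (lat, lon, accuracy_m) or None."""
    fix = uwb.fix()
    if fix is not None:
        return fix, "uwb"    # ~20 cm accuracy inside coverage zones
    return gps.fix(), "gps"  # metres-level accuracy elsewhere
```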
Wearability Factors for Skin Interfaces BIBAFull-Text 21
  Xin Liu; Katia Vega; Pattie Maes; Joe A. Paradiso
As interfaces progress beyond wearables and into intrinsic human augmentation, the human body has become an increasingly important topic in the field of HCI. Wearables already act as a new layer of functionality located on the body that leads us to rethink the convergence between technology and fashion, not just in terms of the ability to wear, but also in how devices interact with us. Already, several options for wearable technology have emerged in the form of clothing and accessories. However, by applying sensors and other computing devices directly onto the body surface, wearables could also be designed as skin interfaces. In this paper, we review the wearability factors impacting wearables as clothes and accessories in order to discuss them in the context of skin interfaces. We classify these wearability factors in terms of body aspects (location, body movements and body characteristics) and device aspects (weight, attachment methods, accessibility, interaction, aesthetics, conductors, insulation, device care, connection, communication, battery life). We discuss these factors in the context of two different example skin interfaces: a rigid board embedded into special effects makeup and skin-mounted soft materials connected to devices.
Improvements on a Novel Hybrid Tracking System BIBAFull-Text 22
  Markus Zank; Leyla Kern; Andreas Kunz
Today's tracking systems typically require a fixed installation in a room in order not to drift quadratically with time like common inertial measurement units. This makes tracking a delicate task in ad-hoc VR installations. To overcome this problem, this paper describes a novel hybrid tracking system and presents further algorithmic improvements that increase tracking accuracy.
Sensible Shadow: Tactile Feedback from Your Own Shadow BIBAFull-Text 23
  Takefumi Hiraki; Shogo Fukushima; Takeshi Naemura
This paper proposes a new shadow interface system that provides tactile feedback from a user's shadow to the user's body. When users obstruct the projected image with their bodies, wearable photoreactive tactile displays receive the light information, decode it, and transmit tactile sensation. The proposed system does not require complex sensing, complicated setup, or communication systems; it can perform high-speed tactile feedback with information received directly from the projected light. The perspective projection between the projected image and the occluding objects that generate shadows is preserved in principle; all users have to do is obstruct the projected light. In this paper, we present the concept, the first prototype of the system, and basic evaluations. The system can easily serve as a human interaction system with a touch-sensitive shadow, and it has potential applications in interactive entertainment and user interfaces.
TombSeer: Illuminating the Dead BIBAFull-Text 24
  Isabel Pedersen; Nathan Gale; Pejman Mirza-Babaei
TombSeer immerses the wearer in a museum space, engaging two senses (sight and touch) through a holographic, augmented reality, heads-up interface that brings virtual historical artifacts "back to life" through gestural interactivity. The purpose of TombSeer is to introduce more embodied interaction to museum visits using an emerging hardware platform for 3D interactive holographic images (e.g., the META head-mounted display) in combination with customized software. The Tomb of Kitines case study was conducted at the Royal Ontario Museum in Canada. TombSeer's embodied gestural and visual augmented reality experience aesthetically enhances museum exhibits.
Haptic Assistive Bracelets for Blind Skier Guidance BIBAFull-Text 25
  Marco Aggravi; Gionata Salvietti; Domenico Prattichizzo
Blindness dramatically limits an individual's quality of life and has profound implications for the person affected and for society as a whole. Physical mobility and exercise are strongly encouraged as ways to maintain health and well-being. Such activities can be equally important for people with disabilities, and increasing them is paramount for well-being and for the assistive care system. In this work, we aim at improving the communication between an instructor and a visually impaired subject during skiing. Up to now, only the auditory channel has been used to communicate basic commands to the skier. We introduce a novel use of haptic feedback in this context. In particular, the skier can receive directional information through two vibrating bracelets worn on the forearms. Haptic interaction has been shown to be processed faster by the brain, demanding less cognitive effort than the auditory modality. The connection between the instructor and the skier uses the Bluetooth protocol. We tested different guiding modalities: audio commands only, audio and haptic commands, and haptic commands only. Preliminary results on the use of the system revealed the haptic channel to be a promising way to guide blind people in winter sports.
We Are Super-Humans: Towards a Democratisation of the Socio-Ethical Debate on Augmented Humanity BIBAFull-Text 26
  Maurizio Caon; Vincent Menuz; Johann A. R. Roduit
Research in human enhancement technologies (e.g., nanotechnology, genetic engineering, robotics, et cetera) is exploding, bringing unforeseen solutions that will expand human capabilities further. Therefore, new socio-ethical issues need to be continuously addressed. In this scenario, we argue that a revolution in addressing these issues is needed and that we should enable a democratic process to foster broader reflection on the future of augmented humanity. We present the SuperHumains.ch project as an example of educational and collaborative thinking on the future of human enhancement.
Reading-based Screenshot Summaries for Supporting Awareness of Desktop Activities BIBAFull-Text 27
  Tilman Dingler; Passant El Agroudy; Gerd Matheis; Albrecht Schmidt
Lifelogging augments people's ability to keep track of their daily activities, helps them create rich archives, and fosters memory. Information workers perform many of their key activities throughout the day on their desktop computers. We argue that activity summaries can be informed by eye-tracking data. We therefore investigate three heuristics for creating such summaries, based on screenshots, to help reconstruct people's work day: a fixed time interval, people's focus of attention as indicated by their eye gaze, and a reading detection algorithm. In a field study with 12 participants who logged their desktop activities for 3 consecutive days, we evaluated the usefulness of screenshot summaries based on these heuristics. Our results show the utility of eye tracking data, and more specifically of using reading detection, to determine key activities throughout the day and to inform the creation of activity summaries that are more relevant and require less time to review.
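The reading-based heuristic could gate screenshot capture on the output of a reading detector; a sketch with hypothetical callbacks (the fixed-interval baseline would simply drop the gate):

```python
import time

def capture_summary(take_screenshot, is_reading, hours=8, interval_s=60):
    """Reading-based heuristic: keep a screenshot only when the eye
    tracker reported reading during the last sampling interval.
    take_screenshot/is_reading are placeholder callbacks."""
    summary, end = [], time.time() + hours * 3600
    while time.time() < end:
        if is_reading():
            summary.append((time.time(), take_screenshot()))
        time.sleep(interval_s)
    return summary  # timestamped screenshots forming the day's summary
```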
Charting Design Preferences on Wellness Wearables BIBAFull-Text 28
  Juho Rantakari; Virve Inget; Ashley Colley; Jonna Häkkilä
This paper presents a study on people's preferences regarding wearable wellness devices. The results are based on an online survey (n=84) in which people assessed different features of wearable wellness devices. Our salient findings show that the highest rated features were the comfort of wearing the device and long battery life. Altogether, factors related to form factor and industrial design were emphasized, whereas social sharing features attracted surprisingly little attention.
Joint Trajectory Generation and Control for Overground Robot-based Gait Rehabilitation System MOPASS BIBAFull-Text 29
  Santiago Focke Martinez; Olena Kuzmicheva; Axel Graeser
Robotic gait rehabilitation systems have appeared in the last fifteen years as an alternative to traditional physiotherapy, offering tireless and precise performance. This paper presents the design of MOPASS, a robotic system for overground gait rehabilitation. Additionally, it presents the gait trajectory generator for hip and knee motion in the sagittal plane that was implemented in the system. This generator creates healthy-like gait patterns based on the extrema (or inflexion) points characteristic of each joint curve. Finally, the motion controllers designed for the active joints of MOPASS are presented.
Synthesizing Pseudo Straight View from A Spinning Camera Ball BIBAFull-Text 30
  Ryohei Funakoshi; Yoji Okudera; Hideki Koike
This paper proposes a spherical camera ball and its image processing algorithm, designed to provide a ball's point of view (POV) for spectating ball sports. The proposed spherical camera ball has six cameras embedded at fixed intervals around the surface of the ball. One of the main issues with such ball-type cameras is that the device is spinning, and it is therefore hard to obtain a stable video stream from the spinning cameras. This paper proposes automatic selection of cameras using matching scores between the anchor frame and the current frame. The resulting movie then always shows the point of interest within the frame. We then apply image translation to obtain a pseudo straight view in which the point of interest is always shown in the center of the frame.
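The per-frame camera selection can be sketched as scoring each camera's current frame against the anchor frame and keeping the best match; the scoring function is left abstract here, as the abstract does not specify the matcher:

```python
# Pick, per time step, the camera whose current frame best matches
# the anchor frame containing the point of interest.
def select_camera(anchor_frame, current_frames, match_score):
    """current_frames: camera_id -> frame; match_score(a, b) returns a
    similarity between two frames (e.g. a ratio of feature matches)."""
    best_cam = max(current_frames,
                   key=lambda cam: match_score(anchor_frame,
                                               current_frames[cam]))
    return best_cam, current_frames[best_cam]
```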
AR-Arm: Augmented Visualization for Guiding Arm Movement in the First-Person Perspective BIBAFull-Text 31
  Ping-Hsuan Han; Kuan-Wen Chen; Chen-Hsin Hsieh; Yu-Jie Huang; Yi-Ping Hung
In many activities, such as martial arts, physical exercise, and physiotherapy, users are asked to perform a sequence of body movements with highly accurate arm positions. Sometimes the movements are too complicated for users to learn, even by directly imitating the action of the coach. This paper presents a fully immersive augmented reality (AR) system, which provides egocentric hints to guide the arm movement of the user via a video see-through head-mounted display (HMD). With this system, the user can perform arm movements with exactitude simply by moving his arms to follow and match virtual arms, rendered from the coach's movements in a database, in the first-person view. To ensure the rendered virtual arms are correctly aligned with the user's real shoulders, a calibration method is proposed to estimate in advance the length of the user's arms and the positions of his head and shoulders. In addition, we applied the system to Tai-Chi-Chuan practice; our preliminary study has shown that the proposed egocentric hints can provide intuitive guidance for users to follow the arm movement of the coach with exactitude.
Come alive! Augmented Mobile Interaction with Smart Hair BIBAFull-Text 32
  Masaru Ohkubo; Shuhei Umezu; Takuya Nojima
Much research and many products have augmented mobile interaction by integrating shape-changing interfaces to overcome the limited interaction space. However, most of these control systems tend to be complex and hard to configure due to the mechanical limitations of the actuators. Furthermore, there are few studies on using a shape-changing I/O-integrated interface with mobile technology. Therefore, we introduce a way to augment interaction on mobile phones with a shape-changing interface called "Smart Hair". The "Smart Hair" is a sensor-integrated actuator that curves its shape according to the intensity of light. This simple system enables users to implement physical interaction that is synchronized with the mobile phone. Unlike previous work, we developed the input method for mobile use. This study describes the development of the shape-changing interface on mobile phones and discusses the practical use of the developed interface. As an application example, we developed a physical body for a smartphone that behaves as if it were an interactive robot. In addition, we evaluated how people perceive the motion of the interface. The results showed that the behavior of the actuator can evoke subtle emotions in users.
   This paper consists of three parts. First, the augmentation methods are described. Then, the applications are evaluated by users, and finally the possibilities and limitations of the study are discussed.
Projection Based Virtual Tablets System Involving Robust Tracking of Rectangular Objects and Hands BIBAFull-Text 33
  Yasushi Sugama; Taichi Murase; Yusaku Fujii
We propose a novel projection-based markerless AR system that realizes multiple virtual tablets. The system detects the position and posture of arbitrary rectangular objects, projects a GUI onto these objects, and detects touch gestures on them. As a result, the system requires no smart devices, only a mere rectangular object, e.g. a tissue box, book, cushion, or table. No tablet computer needs to be in the living room for browsing the internet, playing games, or controlling consumer devices such as a TV or air-conditioner. To realize this system, we developed a novel algorithm for detecting arbitrary rectangular objects, which can recognize the position and posture of a rectangular object without markers. We measured error in cases of overlapping; the experimental results show that our algorithm is more robust than existing algorithms.
Augmented Winter Ski with AR HMD BIBAFull-Text 34
  Kevin Fan; Jean-Marc Seigneur; Jonathan Guislain; Suranga Nanayakkara; Masahiko Inami
At the time of writing, several affordable Head-Mounted Displays (HMDs) are about to be released to the mass market, most of them for Virtual Reality (VR, e.g., Oculus Rift, Samsung Gear...) but also for indoor Augmented Reality (AR) with HoloLens. We have investigated how to adapt such HMDs, such as the Oculus Rift, for an outdoor AR ski slope. Rather than setting up physical obstacles such as poles, our system employs AR to render dynamic obstacles by different means. During the demo, skiers will wear a video see-through HMD while trying to ski on a real ski slope where AR obstacles are rendered.
Electrosmog Visualization through Augmented Blurry Vision BIBAFull-Text 35
  Kevin Fan; Jean-Marc Seigneur; Suranga Nanayakkara; Masahiko Inami
Electrosmog is the electromagnetic radiation emitted by wireless technology such as Wi-Fi hotspots or cellular towers, and it poses a potential hazard to humans. Electrosmog is invisible, and we rely on detectors that show the level of electrosmog as warnings such as numbers. Our system detects the electrosmog level from the number of Wi-Fi networks and connected cellular towers and their strengths, and shows it in an intuitive representation by blurring the vision of users wearing a Head-Mounted Display (HMD). The HMD displays in real time the user's augmented surrounding environment with blurriness, as though the electrosmog actually clouded the environment. For the demonstration, participants can walk around wearing a video see-through HMD and observe their vision gradually blurring as they approach our prepared dense wireless network.
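The mapping from the measured wireless environment to blurriness could be any monotone function; a sketch with invented weights, purely for illustration:

```python
def blur_radius(wifi_rssis_dbm, cell_towers, max_radius_px=25):
    """Map Wi-Fi signal strengths (dBm, ~-90 weak .. -30 strong) and
    the number of connected cell towers to a blur radius in pixels.
    The weights are illustrative, not the authors' calibration."""
    wifi_load = sum(max(0.0, (rssi + 90) / 60) for rssi in wifi_rssis_dbm)
    load = wifi_load + 0.5 * cell_towers
    return min(max_radius_px, int(5 * load))

print(blur_radius([-40, -55, -70], cell_towers=2))  # -> 13
```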
Augmentation of Human Protection Functions Using Wearable and Sensing System BIBAFull-Text 36
  Ryoichiro Shiraishi; Takehiro Fujita; Kento Inuzuka; Rintaro Takashima; Yoshiyuki Sankai
Our muscles have two characteristic features: stiffness and elasticity. We strengthen these features to protect our bones and organs from external impacts. The purpose of this study is to develop a voluntary protection suit that has stiffness and elasticity, and to confirm that wearers can control it voluntarily and enhance their protective functions. The suit consists of sensing, control, and pneumatic artificial muscle (PAM) units. When the sensing unit detects the wearer's protection intention based on bioelectrical signals, a solenoid valve in the control unit releases compressed air into the PAM. The PAM, expanded by the air, generates increased stiffness and elasticity. The suit has PAM units around both upper limbs and on the anterior and posterior of the chest and abdomen. In a basic experiment, we confirmed that the suit started putting compressed air into the PAM upon detecting the wearer's protection intention. The results of the wearing experiments showed that the suit could greatly decrease the pain and pressure felt by the wearer. Based on these results, we conclude that we could develop a voluntary protection suit enhancing human protection functions. The proposed strategies could be applied to a new full-body protection system utilizing artificial muscles with stiffness and elasticity.
EXILE: Experience based Interactive Learning Environment BIBAFull-Text 37
  Taihei Kojima; Atsushi Hiyama; Kenjirou Kobayashi; Sachiko Kamiyama; Naokata Ishii; Michitaka Hirose; Hiroko Akiyama
In a hyper-aged society, health-promoting activities for senior people have become important social issues. In this paper, we propose an interactive virtual learning environment for health-promoting exercises. It is designed for self-training in a standard home environment and provides training effects without a training instructor. The proposed system consists of three items: a PC, a Kinect camera, and a display. During exercises, the trainee's postures and the ideal postures that the trainee should imitate are visualized simultaneously in a 3D virtual environment. The trainee observes the differences in postures, which are emphasized by a real-time feedback system, and corrects his or her own training postures.
Diagram Presentation using Loudspeaker Matrix for Visually Impaired People: Sound Characteristics for their Pattern Recognition BIBAFull-Text 38
  Takahiro Miura; Junya Suzuki; Ken-ichiro Yabu; Kazutaka Ueda; Tohru Ifukube
People with total visual impairment experience difficulty in understanding graphical information. They can comprehend graphics through tactile devices, including pin matrices, with audio characteristics presented by voice. Although these systems can represent static figures, it is difficult to use them for presenting moving images or details of high-resolution images. To solve this issue, we focused on characteristics of auditory perception, including time resolution and frequency detection. In this paper, we propose a loudspeaker matrix system that displays 2D patterns such as trajectories or figures. We mainly investigated the design implications of acoustic presentation methods, including readily perceivable sound paths. The results indicate that effective presentation with a loudspeaker matrix can be realized under the following conditions: the sound type should be white noise, and the duration should be adjusted for the individual user.
PeaceKeeper: Augmenting the Awareness and Communication of Noise Pollution in Student Dormitories BIBAFull-Text 39
  Henrik Lanng; Anders Lykkegaard; Søren Thornholm; Philip Tzannis; Nervo Verdezoto
Noise pollution can be very problematic, especially in shared living facilities such as student dormitories. After conducting interviews with nine students, we found that students are usually not aware of the level of noise pollution they produce as part of their everyday activities. To address this challenge, we explore how to augment the awareness and communication of environmental noise among dorm residents through an ambient awareness system: PeaceKeeper. We describe the initial design, implementation, and concept validation of PeaceKeeper with two students living in adjacent rooms at the same dormitory. Based on our initial findings, we highlight how ambient awareness systems can make students aware of their own noise levels to avoid disturbing their neighbours.
Semi-automatic Multiple Player Tracking of Soccer Games using Laser Range Finders BIBAFull-Text 40
  Yuma Kabeya; Fumiharu Tomiyasu; Kenji Mase
The diversity of sports broadcasting formats increases fans' interest in watching games. Soccer video is one such example -- it may include data analyses of player performances, highlighting of target players, and other useful features. Such video also supports player coaching and training. Numerous studies have attempted to estimate exact trajectories using camera information. However, camera information is not robust in situations with varying brightness levels, complicated backgrounds, and so on. To address these issues, we propose a soccer player tracking method using laser range finders. Our proposed method can precisely acquire the positional information of players. Over 95% of the temporal connections can be easily established with a graph construction method. To further reduce manual connecting operations, we adopt automatic corrections using a particle filter. Experimental results show significant improvements in tracking results, which can reduce manual operation costs.
Expressing Human State via Parameterized Haptic Feedback for Mobile Remote Implicit Communication BIBAFull-Text 41
  Jeffrey R. Blum; Jeremy R. Cooperstock
As part of a mobile remote implicit communication system, we use vibrotactile patterns to convey background information between two people on an ongoing basis. Unlike systems that use memorized tactons (haptic icons), we focus on methods for translating parameters of a user's state (e.g., activity level, distance, physiological state) into dynamically created patterns that summarize the state over a brief time interval. We describe the vibration pattern used in our current user study to summarize a partner's activity, as well as preliminary findings. Further, we propose additional possibilities for enriching the information content.
Social Sensing: a Wi-Fi based Social Sense for Perceiving the Surrounding People BIBAFull-Text 42
  Yoni Halperin; Galit Buchs; Shachar Maidenbaum; Maya Amenou; Amir Amedi
People who are blind or have social disabilities can encounter difficulties in properly sensing and interacting with surrounding people. We suggest here the use of a sensory augmentation approach, which offers the user perceptual input via properly functioning sensory channels (e.g. visual, tactile) for this purpose. Specifically, we created a Wi-Fi-signal-based system to help the user determine the presence of one or more people in the room. The strength of the signal indicates the distance of people in near proximity. These distances are sonified and played sequentially. The Wi-Fi signals arise from common smartphones, and the system can therefore be adapted for everyday use in a simple manner.
   We demonstrate the use of this system by showing its effectiveness in determining the presence of others. Specifically, we show that it allows users to determine the location (i.e. close, inside, or outside) and the number of people at each distance. This system can be further adapted for purposes such as locating one's group in a crowd, following a group in a new location, enhancing identification for people with prosopagnosia, raising awareness of the presence of others as part of a rehabilitation behavioral program for people with ASD, or for real-life social networking.
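Distance estimation from Wi-Fi signal strength typically uses a log-distance path-loss model, and each estimate can then be mapped to a tone; a sketch with assumed model constants (not the authors' calibration):

```python
def rssi_to_distance_m(rssi_dbm, tx_power_dbm=-40, path_loss_n=2.5):
    """Log-distance path-loss model: d = 10^((P_tx - RSSI) / (10 n)).
    tx_power_dbm is the assumed RSSI at 1 m; n the path-loss exponent."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_n))

def distance_to_pitch_hz(d_m, near_hz=880, far_hz=220, max_d_m=10):
    """Closer people -> higher pitch; clamp beyond max_d_m."""
    t = min(d_m, max_d_m) / max_d_m
    return near_hz + t * (far_hz - near_hz)

for rssi in (-45, -60, -75):  # three detected phones, near to far
    print(round(distance_to_pitch_hz(rssi_to_distance_m(rssi))))
```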
A Pedagogical Virtual Machine for Assembling Mobile Robot using Augmented Reality BIBAFull-Text 43
  Malek Alrashidi; Ahmed Alzahrani; Michael Gardner; Vic Callaghan
In this paper, we propose a pedagogical virtual machine (PVM) model that aims to link physical-object activities with learning objects to reveal the related educational value of the physical objects in question. To examine the proposed method, we present an experiment based on a modularised mobile-robot assembly task called "Buzz-Boards." A between-group design was chosen, with an experimental group and a control group. Participants in the experimental group used an augmented reality application to help them assemble the robot, while the control group took a paper-based approach. Ten students from the University of Essex were randomly assigned to each group, for a total of 20 participants. The evaluation factors for this study were completion time, a post-test, cognitive overload, and the learning effectiveness of each method. Overall, assemblers who used the augmented reality application outperformed those who used the paper-based approach.
A Co-located Meeting Support System by Scoring Group Activity using Mobile Devices BIBAFull-Text 44
  Hiroyuki Adachi; Seiko Myojin; Nobutaka Shimada
In this paper, we present a co-located meeting support system using mobile devices such as tablets and smartphones, which can be used anywhere and easily set up. We developed a system using tablets and re-designed its visualization and scoring methods for brainstorming. The system provides visual feedback on how long a person speaks to another person and how long that person watches him/her. The system also provides two scores as measures of group activity, based on the balance of individual utterances and of pair conversations. The system is currently under experimentation.
s-Helmet: A Ski Helmet for Augmenting Peripheral Perception BIBAFull-Text 45
  Evangelos Niforatos; Ivan Elhart; Anton Fedosov; Marc Langheinrich
The growing popularity of winter sports, as well as the trend towards high-speed carving skis, has increased the risk of accidents on today's ski slopes. While many skiers now wear ski helmets, the helmets' bulk might in turn lower a skier's ability to sense their surroundings, potentially leading to dangerous situations. In this demo paper, we describe our Smart Ski Helmet (s-Helmet) prototype. s-Helmet uses a set of laser range finders mounted on the back of the helmet to detect skiers approaching from behind and warns the wearer about potential collisions using three LEDs. Below, we describe our motivation and how the system works.
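The rear-approach warning logic could combine each range reading with its closing speed into one of three LED levels; a sketch with invented thresholds, not the s-Helmet's actual parameters:

```python
def warning_level(distance_m, closing_speed_ms):
    """Map a rear laser-range reading and its rate of change to an LED
    warning level (0 = off .. 3 = imminent). Thresholds are
    illustrative only."""
    if distance_m > 30 or closing_speed_ms <= 0:
        return 0                      # nobody closing in from behind
    time_to_contact = distance_m / closing_speed_ms
    if time_to_contact < 1.5:
        return 3                      # all LEDs: imminent collision
    if time_to_contact < 3.0:
        return 2
    return 1

print(warning_level(12.0, 6.0))  # 2 s to contact -> level 2
```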
SkiAR: Wearable Augmented Reality System for Sharing Personalized Content on Ski Resort Maps BIBAFull-Text 46
  Anton Fedosov; Ivan Elhart; Evangelos Niforatos; Alexander North; Marc Langheinrich
Winter sports like skiing and snowboarding are often group activities. Groups of skiers and snowboarders traditionally use folded paper maps or board-mounted larger-scale maps near ski lifts to aid decision making: which slope to take next, where to have lunch, or what hazards to avoid when going off-piste. To enrich those static maps with personal content (e.g., pictures, prior routes taken, or hazards encountered), we developed SkiAR, a wearable augmented reality system that allows groups of skiers and snowboarders to share such content in-situ on a printed resort map while on the slope.
3D Position Estimation of Badminton Shuttle Using Unsynchronized Multiple-View Videos BIBAFull-Text 47
  Hidehiko Shishido; Yoshinari Kameda; Itaru Kitahara; Yuichi Ohta
In this paper, we introduce a method for estimating the 3D position of a badminton shuttle using unsynchronized multiple-view videos. Object tracking for sports is studied as an application of computer vision to improve the tactics involved in such sports. This paper proposes a technique to stably estimate an object's position by using motion blur, which is usually treated as observational noise in previous work. A badminton shuttle varies greatly in speed, follows an unpredictable trajectory, and is very small when observed; thus, it cannot be followed correctly by the human eye. We apply our proposed technique to badminton shuttle tracking to confirm its ability to enhance human vision. We also consider that it can contribute to augmenting sports in the future.
Neurogoggles for Multimodal Augmented Reality BIBAFull-Text 48
  Sylvain Cardin; Howard Ogden; Daniel Perez-Marcos; John Williams; Tomo Ohno; Tej Tadi
We present a neurogoggles system that offers truly immersive augmented and virtual reality with unique biofeedback based on brain and bodily signals. The system consists of a wearable head-mounted display that integrates two stereoscopic color cameras for a video see-through experience, a depth sensor for object tracking, an inertial sensor for head tracking, and electrophysiological measurements of brain and bodily signals. The system is designed to ensure low-latency synchronization across all functional modules, which is a key element for multimodal real-time data analysis. We showcase this technology with an immersive experience combining multimodal data and artistic digital content using the Kudan augmented reality engine. In this experience, the user is presented with an interactive world behind a brain-image wall poster, which reacts to the user's brain state.
On Control Interfaces for the Robotic Sixth Finger BIBAFull-Text 49
  Irfan Hussain; Gionata Salvietti; Domenico Prattichizzo
In this demo, we present two possible control interfaces for a robotic extra finger called the Robotic Sixth Finger. One interface is an instrumented glove able to measure the human hand posture; the aim is to integrate the motion of the robotic finger with that of the human hand so as to achieve complex manipulation skills. The second interface is a ring with an embedded push button that implements simple and intuitive control. An extra robotic finger on the human hand enlarges the workspace, increases the grasping capabilities, and improves manipulation dexterity. We will propose a series of grasping and manipulation tasks to be performed with the help of the Robotic Sixth Finger and the respective interface, so as to prove their effectiveness in augmenting human hand capabilities.
Repeated Cycling Sprints with Different Restricted Blood Flow Levels BIBAFull-Text 50
  Sarah J. Willis; Laurent Alvarez; Grégoire P. Millet; Fabio Borrani
We examined the effect of different levels of blood flow restriction (BFR) on repeated sprint performance in leg cycling. The aim of this study was to evaluate repeated sprint ability (RSA) in leg cycling between various levels of blood flow restriction in well-trained athletes. Eleven athletes (6 men; 5 women) performed four sessions (familiarization, 0% BFR, 45% BFR, 60% BFR) of repeated sprints (10-s sprint followed by 20-s recovery) to exhaustion or task failure. The number of sprints across conditions decreased with higher BFR (29.8±13.7, 13.1±6.5, 7.5±6.4, respectively, P < 0.05). Total work performed and maximal heart rate during the sprints were also reduced as the level of BFR increased. Mean power decreased across sprints as well as with increased occlusion (P < 0.001), and a faster decrease in power output was demonstrated as occlusion increased (significantly different from sprint 1 at sprint 3 in 0% BFR (P = 0.007), sprint 3 in 45% BFR (P < 0.001), and sprint 2 in 60% BFR (P < 0.05)). Results indicated, as expected, that RSA performance decreases with higher levels of BFR. It is likely that other mechanisms related to central and peripheral systems differ between conditions, as suggested by the increased discomfort in the legs versus decreased discomfort in breathing, as well as the lower maximal HR, with higher BFR. It is of interest to investigate whether the response elicited by BFR is similar to that of repeated sprinting in hypoxia (i.e., is a stimulus of localized hypoxia, BFR, similar to or different from a stimulus of systemic hypoxia).
Metamorphosis Hand: Dynamically Transforming Hands BIBAFull-Text 51
  Nami Ogawa; Yuki Ban; Sho Sakurai; Takuji Narumi; Tomohiro Tanikawa; Michitaka Hirose
Our body image is flexible enough to incorporate external objects. Moreover, a change in body image can sometimes evoke particular feelings and result in modified behavior. We focus on the body as an interface between the internal self and the external world, and have constructed a system that provides users with the interactive experience of playing a virtual piano with dynamically transforming virtual hands, whose appearance and movement are considerably different from their own. We demonstrate that even though the appearance and movement of a virtual hand differ considerably from a real hand, users feel strong body ownership. This illustrates the possibility of body ownership of a peculiar body image through a performance on the piano.