
Proceedings of the 2015 International Symposium on Wearable Computers

Fullname: ISWC 2015: 19th International Symposium on Wearable Computers
Editors: Kenji Mase; Marc Langheinrich; Daniel Gatica-Perez
Location: Osaka, Japan
Dates: 2015-Sep-07 to 2015-Sep-11
Publisher: ACM
Standard No: ISBN: 978-1-4503-3578-2; ACM DL: Table of Contents; hcibib: ISWC15
Papers: 39
Pages: 187
Links: Symposium Website | Umbrella Conference Website
  1. Keynote speakers
  2. Smart watches
  3. Wearable interfaces
  4. Design and textiles
  5. Activity recognition I
  6. Activity recognition II
  7. Towards new wearable applications
  8. Eyewear computing
  9. Environmental sensing systems

Keynote speakers

Visualizing and manipulating brain dynamics BIBAFull-Text 1
  Mitsuo Kawato
The brain is not a mere input-output information transformation system, but a dynamical system that generates spontaneous spatiotemporal patterns even without sensory inputs, executed movements, or cognitive tasks. These spontaneously generated patterns are called spontaneous brain activities in experimental animals, and resting-state brain activities in humans. The resting-state brain activity of an individual contains much information about age, cognitive capability, mental disorders, etc. By combining information decoding from brain activity with its neurofeedback in reinforcement learning paradigms, we can unconsciously control brain activity patterns corresponding to specific information. This leads to therapies for psychiatric disorders, unconscious manipulation of facial preferences, color qualia, and confidence in decision making, increases in cognitive capability, etc. The ubicomp community can expect this technology to soon be available in much cheaper and lighter devices such as EEG and near-infrared spectroscopy, instead of heavy and expensive fMRI or MEG.
Behind the scenes BIBAFull-Text 2
  Daito Manabe
Here is the real background to the stage show created by a team of Rhizomatiks Research engineers and director/choreographer MIKIKO of ELEVENPLAY.
   For the live production of an entertainment show, we can rarely expect the favorable conditions found in environments such as a museum, a cozy hall, or a laboratory. Even for a live show that rallies hundreds of thousands of people or a TV program watched by more than a hundred million people, there are barriers different from those found in art and research projects -- e.g. there is only one minute available for a conversion process, or insufficient time for rehearsals. Therefore, there are still very few cases in which a risky system is adopted, using real-time image analyses and/or image processing. In most productions, a performer moves, adjusting to the pre-rendered images.
   The topics to be covered in this lecture include how the producer realizes an ideal world under such conditions as the size of the venue or the time available for setting up, and how to use image recognition, image processing, control, and data analysis techniques at a huge arena for a live performance or on a live broadcast program; the actual software and data used in the production process are also introduced.

Smart watches

What can a dumb watch teach a smartwatch?: informing the design of smartwatches BIBAFull-Text 3-10
  Kent Lyons
With the release of Android Wear and the Apple Watch, we are seeing a resurgence in the industry of smartwatch offerings. While there has been research on the technical feasibility of smartwatches as well as research proposing novel watch interactions, there has been relatively little work trying to uncover what user-centered values a smartwatch might offer to its wearer. We detail a user study of 50 everyday watch wearers focused on eliciting usage practices of traditional dumb watches. We discuss themes uncovered in our participants' perceptions of watch features, aesthetics, and the daily patterns of wearing and not wearing a watch. We also present participant perceptions of smartwatches and draw upon their mobile phone use. Using this data, we discuss possible smartwatch apps and the implications these findings might have for smartwatches.
Exploring current practices for battery use and management of smartwatches BIBAFull-Text 11-18
  Chulhong Min; Seungwoo Kang; Chungkuk Yoo; Jeehoon Cha; Sangwon Choi; Younghan Oh; Junehwa Song
As an emerging class of wearable device, a number of commercial smartwatches have been released and are widely used. While many people have concerns about the battery life of a smartwatch, there is no systematic study of the main usage of a smartwatch, its battery life, or the battery discharging and recharging patterns of real smartwatch users. Accordingly, we know little about current practices for battery use and management of smartwatches. To address this, we conduct an online survey examining the usage behaviors of 59 smartwatch users and an in-depth analysis of battery usage data from 17 Android Wear smartwatch users. We investigate the unique characteristics of smartwatches' battery usage, users' satisfaction and concerns, and recharging patterns.
Smart-watch life saver: smart-watch interactive-feedback system for improving bystander CPR BIBAFull-Text 19-26
  Agnes Gruenerbl; Gerald Pirkl; Eloise Monger; Mary Gobbi; Paul Lukowicz
In this work we describe a smartwatch application that monitors the frequency and depth of Cardiopulmonary Resuscitation (CPR) and provides interactive corrective feedback. We evaluated the system with a total of 41 subjects who had undertaken a single episode of CPR training several years previously. This training was part of a First Aid course for lay people, commonly accessed in this population. The evaluation was conducted by measuring participant CPR competence using the "gold standard" of CPR training [10], namely frequency and compression depth. The evaluation demonstrated that the smartwatch feedback system produced a significant improvement in participant performance. For example, it doubled the number of people who could maintain both parameters in the recommended range for at least 50% of the time.
(Smart)watch your taps: side-channel keystroke inference attacks using smartwatches BIBAFull-Text 27-30
  Anindya Maiti; Murtuza Jadliwala; Jibo He; Igor Bilogrevic
In this paper, we investigate the feasibility of keystroke inference attacks on handheld numeric touchpads by using smartwatch motion sensors as a side-channel. The proposed attack approach employs supervised learning techniques to accurately map the uniqueness in the captured wrist movements to each individual keystroke. Experimental evaluation shows that keystroke inference using smartwatch motion sensors is not only fairly accurate, but also better than similar attacks previously demonstrated using smartphone motion sensors.

Wearable interfaces

ProximityHat: a head-worn system for subtle sensory augmentation with tactile stimulation BIBAFull-Text 31-38
  Matthias Berning; Florian Braun; Till Riedel; Michael Beigl
In this paper we present the iterative design process of our wearable sensory substitution system ProximityHat, which uses pressure actuators around the head to convey spatial information. It was already shown that the sense of touch can be used to augment our perception of reality. Existing systems focus on vibration signals for information transfer, but this is unsuitable for constant stimulation in everyday use. Our system determines the distance to surrounding objects with ultrasonic sensors and maps this information to an inward pressure. It was evaluated in a study with 13 blindfolded subjects in orientation and navigation tasks. The users were able to discern at least three different absolute pressure levels with high certainty. Combined with the sensors, they could also use continuous values to navigate hallways and find doors. Most users had only a few collisions, but a small group of three individuals struggled more. We attribute this to the latency and resolution of the final prototype, which led to slow walking speed and will be addressed in future work.
Hot & tight: exploring thermo and squeeze cues recognition on wrist wearables BIBAFull-Text 39-42
  Sunghyun Song; Geeyoung Noh; Junwoo Yoo; Ian Oakley; Jundong Cho; Andrea Bianchi
Wrist worn wearable computing devices are ideally suited for presenting notifications through haptic stimuli as they are always in direct contact with the user's skin. While prior work has explored the feasibility of haptic notifications, we highlight a lack of empirical studies on thermal and pressure feedback in the context of wearable devices. This paper introduces prototypes for thermal and pressure (squeeze) feedback on the wrist. It then presents a study characterizing recognition performance with thermal and pressure cues against baseline performance with vibrations.
Magnetic input for mobile virtual reality BIBAFull-Text 43-44
  Boris Smus; Christopher Riederer
Modern smartphones can create compelling virtual reality (VR) experiences through the use of VR enclosures, devices that encase the phone and project stereoscopic renderings through lenses into the user's eyes. Since the touch screen in such designs is typically hidden inside an enclosure, the main interaction mechanism of the device is not accessible. We present a new magnetic input mechanism for mobile VR devices which is wireless, unpowered, inexpensive, provides physical feedback, requires no calibration, and works reliably on the majority of modern smartphones. This is the main input mechanism for Google Cardboard, of which there are over one million units. We show robust gesture recognition at an accuracy of greater than 95% across smartphones, and assess the capabilities, accuracy, and limitations of our technique through a user study.
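The magnet gesture described in this abstract is sensed through the phone's magnetometer alone. A minimal sketch of such a detector, assuming we simply flag large jumps in the magnetic-field magnitude between consecutive samples (the units and the threshold value here are illustrative, not the paper's implementation):

```python
import math

def detect_magnet_pull(samples, jump_threshold=80.0):
    """Detect Cardboard-style magnet 'clicks' in a stream of
    magnetometer readings (x, y, z, e.g. in microtesla).

    The field magnitude jumps sharply when the ring magnet is
    pulled and released; we flag each sample whose magnitude
    differs from the previous one by more than jump_threshold.
    Returns the indices of flagged samples.
    """
    events = []
    prev = None
    for i, (x, y, z) in enumerate(samples):
        mag = math.sqrt(x * x + y * y + z * z)
        if prev is not None and abs(mag - prev) > jump_threshold:
            events.append(i)
        prev = mag
    return events
```

A deployed detector would likely use more robust features than a single-sample jump, e.g. a windowed baseline and the direction of the field change, to reject ambient magnetic disturbances.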
Controlling stiffness with jamming for wearable haptics BIBAFull-Text 45-46
  Timothy M. Simon; Bruce H. Thomas; Ross T. Smith
Layer jamming devices enhance wearable technologies by providing haptic feedback through stiffness control. In this paper we present a prototype that demonstrates improved haptic fidelity of a wearable layer jamming device, using a computer-controlled solenoid to enable fine-grained control of the garment's stiffness. We also explore variable stiffness configurations for virtual UI components. An evaluation was conducted to validate the methodology, demonstrating dynamic stiffness control with two waveforms.
PneuHaptic: delivering haptic cues with a pneumatic armband BIBAFull-Text 47-48
  Liang He; Cheng Xu; Ding Xu; Ryan Brill
PneuHaptic is a pneumatically-actuated arm-worn haptic interface. The system triggers a range of tactile sensations on the arm by alternately pressurizing and depressurizing a series of custom molded silicone chambers. We detail the implementation of our functional prototype and explore the possibilities for interaction enabled by the system.

Design and textiles

New directions in jewelry: a close look at emerging trends & developments in jewelry-like wearable devices BIBAFull-Text 49-56
  Yulia Silina; Hamed Haddadi
As wearables enter the domain of fashion, it is not uncommon to see criticisms of their unfashionable aesthetics and of gadgetry that does not necessarily consider consumer preferences and the need to create desire for wearable objects. Like other categories of wearable devices, jewelry-like devices are undergoing a profound and rapid change. In this paper, we examine 187 jewelry-like devices that are either already available on the market or at various stages of development and research. We then examine various parameters using descriptive statistics, and give an overview of some major emerging trends and developments in jewelry-like devices. Finally, we highlight and propose directions for technical features, use of materials, interaction modalities, and so on that could be applied in the development of future computational jewelry.
A ball-grid-array-like electronics-to-textile pocket connector for wearable electronics BIBAFull-Text 57-60
  Andreas Mehmann; Matija Varga; Karl Gönner; Gerhard Tröster
In this paper we introduce and characterize a new type of connector between smart textiles and electronic devices. The connector is based on a ball grid array structure which is pressed against conductive textile pads in a stretchable pocket. The connector keeps textile and electronics fabrication separate. It offers 56 connections on an area of 60 × 100 mm², which is about the size of a smartphone. The resistance of the connection is 1.4 Ω at DC, mostly constant up to 10 kHz, and below 3 Ω up to 100 kHz. This resistance is low compared to measured sensor resistances for resistive pressure sensing or bio-impedance sensing.
Addressing dresses: user interface allowing for interdisciplinary design and calibration of LED embedded garments BIBAFull-Text 61-64
  Zane Cochran; Clint Zeagler; Sonia McCall
Wearable technology projects afford the opportunity to work within interdisciplinary teams to create truly innovative solutions. Sometimes it is difficult for teams of designers and engineers to work together because of process differences and communication issues. Here we present a case study that describes how one team developed a system to overcome these obstacles, and propose viewing interdisciplinary collaboration tools as boundary objects. The system described here allows designers to work with programmers to create full-color light effects in real time, through a calibration process and interface that gives designers an easy entry into discussions about the placement of electronics in LED-embedded garments.
Surface-mount component attachment for e-textiles BIBAFull-Text 65-66
  Mary Ellen Berglund; Julia Duvall; Cory Simon; Lucy E. Dunne
Integration of electronic components into textile structures is a key requirement for smart clothing applications, particularly those in which electronics must be distributed over the body surface. Scalable manufacturing techniques for textile-integration of components are a key need in the wearables industry. Here, we introduce a novel technique for assembling surface-mount "fabric PCBs" using stitched traces and reflow soldering techniques. We present an initial evaluation of the durability of this method comparing three variables of manufacture. Results show that all configurations are sufficiently durable for low-intensity wear, and for high-intensity wear larger components and traces and perpendicular trace layout improve durability.

Activity recognition I

Recognizing new activities with limited training data BIBAFull-Text 67-74
  Le T. Nguyen; Ming Zeng; Patrick Tague; Joy Zhang
Activity recognition (AR) systems are typically built to recognize a predefined set of common activities. However, these systems need to be able to learn new activities to adapt to a user's needs. Learning new activities is especially challenging in practical scenarios when a user provides only a few annotations for training an AR model. In this work, we study the problem of recognizing new activities with a limited amount of labeled training data. Due to the shortage of labeled data, small variations of the new activity will not be detected resulting in a significant degradation of the system's recall. We propose the FE-AT (Feature-based and Attribute-based learning) approach, which leverages the relationship between existing and new activities to compensate for the shortage of the labeled data. We evaluate FE-AT on three public datasets and demonstrate that it outperforms traditional AR approaches in recognizing new activities, especially when only a few training instances are available.
Predicting daily activities from egocentric images using deep learning BIBAFull-Text 75-82
  Daniel Castro; Steven Hickson; Vinay Bettadapura; Edison Thomaz; Gregory Abowd; Henrik Christensen; Irfan Essa
We present a method to analyze images taken from a passive egocentric wearable camera along with the contextual information, such as time and day of week, to learn and predict everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6 month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person's activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data.
Activity classification at a higher level: what to do after the classifier does its best? BIBAFull-Text 83-86
  Rabih Younes; Thomas L. Martin; Mark Jones
Research in activity classification has focused on the sensors, the classification techniques, and the machine learning algorithms used in the classifier. In this work, we study a higher level of activity classification. We present two methods that can take the final observations of a classifier and improve them. The first method uses hidden Markov models to define a probabilistic model that can be used to improve classification accuracy. The second method is a novel method that uses probabilistic models along with matching costs to improve accuracy. Testing showed that both proposed methods provide a significant increase in classification accuracy, while also proving that both can run in real time.
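The first method in this abstract, HMM-based smoothing of a classifier's per-frame outputs, can be sketched as a standard Viterbi decode over the classifier's label probabilities. The labels, transition matrix, and probability values below are illustrative placeholders, not the authors' models:

```python
import math

def viterbi_smooth(obs_probs, trans, init):
    """Smooth a base classifier's per-frame label probabilities
    with a simple HMM via Viterbi decoding.

    obs_probs: list of dicts {label: P(label | frame)} from the classifier
    trans:     dict {(a, b): P(next=b | current=a)} transition probabilities
    init:      dict {label: P(label)} initial label distribution
    Returns the most likely smoothed label sequence.
    """
    states = list(init)
    # forward pass in the log domain to avoid underflow
    V = [{s: math.log(init[s]) + math.log(obs_probs[0][s]) for s in states}]
    back = []
    for t in range(1, len(obs_probs)):
        col, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] + math.log(trans[(p, s)]))
            col[s] = (V[-1][best] + math.log(trans[(best, s)])
                      + math.log(obs_probs[t][s]))
            ptr[s] = best
        V.append(col)
        back.append(ptr)
    # backtrack the best path
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With sticky self-transitions, an isolated frame where the base classifier briefly prefers the wrong label is overridden by its temporal context.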
Creating general model for activity recognition with minimum labelled data BIBAFull-Text 87-90
  Jiahui Wen; Mingyang Zhong; Jadwiga Indulska
Since people perform activities differently, a general model must be created from the activity data of various users before deployment for personal use, in order to avoid overfitting. However, annotating a large amount of activity data is expensive and time-consuming. In this paper, we create a general model for activity recognition with a limited amount of labelled data. We combine Latent Dirichlet Allocation (LDA) and AdaBoost to jointly train a general activity model with partially labelled data. After that, when AdaBoost is used for online prediction, we combine it with graphical models (such as HMM and CRF) to exploit the temporal information in human activities and smooth out accidental misclassifications. Experiments on publicly available datasets show that we are able to obtain an accuracy of more than 90% with 1% labelled data.
A wearable system for detecting eating activities with proximity sensors in the outer ear BIBAFull-Text 91-92
  Abdelkareem Bedri; Apoorva Verlekar; Edison Thomaz; Valerie Avva; Thad Starner
This paper presents an approach for automatically detecting eating activities by measuring deformations in the ear canal walls due to mastication activity. These deformations are measured with three infrared proximity sensors encapsulated in an off-the-shelf earpiece. To evaluate our method, we conducted a user study in a lab setting where 20 participants were asked to perform eating and non-eating activities. A user dependent analysis demonstrated that eating could be detected with 95.3% accuracy. This result indicates that proximity sensing offers an alternative to acoustic and inertial sensing in eating detection while providing benefits in terms of privacy and robustness to noise.

Activity recognition II

Sensor-based stroke detection and stroke type classification in table tennis BIBAFull-Text 93-100
  Peter Blank; Julian Hoßbach; Dominik Schuldhaus; Bjoern M. Eskofier
In this paper we present a sensor-based table tennis stroke detection and classification system. We attached inertial sensors to table tennis rackets and collected data on 8 different basic stroke types from 10 amateur and professional players. First, single strokes were detected by an event detection algorithm. Second, features were computed and used as input for stroke type classification. Multiple classifiers were compared with regard to classification rates and computational effort. The overall sensitivity of the stroke detection was 95.7%, and the best classifier reached a classification rate of 96.7%. Our approach is therefore able to detect table tennis strokes in time-series data and to classify each stroke into the correct stroke type category. The system has the potential to be implemented as an embedded real-time application for other racket sports, to analyze training exercises and competitions, to present match statistics, or to support the athletes' training progress. To our knowledge, this is the first paper that addresses a wearable support system for table tennis, and our future work aims at using the presented results to build a complete match analysis system for this sport.
An energy-aware method for the joint recognition of activities and gestures using wearable sensors BIBAFull-Text 101-108
  Joseph Korpela; Kazuyuki Takase; Takahiro Hirashima; Takuya Maekawa; Julien Eberle; Dipanjan Chakraborty; Karl Aberer
This paper presents an energy-aware method for recognizing time series acceleration data containing both activities and gestures using a wearable device coupled with a smartphone. In our method, we use a small wearable device to collect accelerometer data from a user's wrist, recognizing each data segment using a minimal feature set chosen automatically for that segment. For each collected data segment, if our model finds that recognizing the segment requires high-cost features that the wearable device cannot extract, such as dynamic time warping for gesture recognition, then the segment is transmitted to the smartphone where the high-cost features are extracted and recognition is performed. Otherwise, only the minimum required set of low-cost features are extracted from the segment on the wearable device and only the recognition result, i.e., label, is transmitted to the smartphone in place of the raw data, reducing transmission costs. Our method automatically constructs this adaptive processing pipeline solely from training data.
A framework for early event detection for wearable systems BIBAFull-Text 109-112
  Eva Dorschky; Dominik Schuldhaus; Harald Koerger; Bjoern M. Eskofier
A considerable number of wearable system applications necessitate early event detection (EED). EED is defined as the detection of an event with as much lead time as possible. Applications include physiological (e.g., epileptic seizure or heart stroke) or biomechanical (e.g., fall movement or sports event) monitoring systems. EED for wearable systems is under-investigated in literature. Therefore, we introduce a novel EED framework for wearable systems based on hybrid Hidden Markov Models. Our study specifically targets EED based on inertial measurement unit (IMU) signals in sports. We investigate the early detection of high intensive soccer kicks, with the possible pre-kick adaptation of a soccer shoe before the shoe-ball impact in mind. We conducted a study with ten subjects and recorded 226 kicks using a custom IMU placed in a soccer shoe cavity. We evaluated our framework in terms of EED accuracy and EED latency. In conclusion, our framework delivers the required accuracy and lead times for EED of soccer kicks and can be straightforwardly adapted to other wearable system applications that necessitate EED.
Multi-sensor data-driven synchronization using wearable sensors BIBAFull-Text 113-116
  Terrell R. Bennett; Nicholas Gans; Roozbeh Jafari
This paper presents a method to synchronize the data streams from multiple sensors, including wearables and sensors in the environment. Our approach exploits common events observed by the sensors as they interact. We detect physical and cyber couplings between the sensor data streams and determine which couplings will minimize the overall clock drift. We present a graph model to represent the event couplings between sensors and the drift in the sensor timing and propose a solution that employs a shortest path algorithm to minimize the overall clock drift in the system based on the graph model. Experimental results over two trials show an improvement of 21.5% and 43.7% for total drift and 59.4% and 60.7% for average drift.

Towards new wearable applications

Fast blur removal for wearable QR code scanners BIBAFull-Text 117-124
  Gábor Sörös; Stephan Semmler; Luc Humair; Otmar Hilliges
We present a fast restoration-recognition algorithm for scanning motion-blurred QR codes on handheld and wearable devices. We blindly estimate the blur from the salient edges of the code in an iterative optimization scheme, alternating between image sharpening, blur estimation, and decoding. The restored image is constrained to exploit the properties of QR codes which ensures fast convergence. The checksum of the code allows early termination when the code is first readable and precludes false positive detections. General blur removal algorithms perform poorly in restoring visual codes and are slow even on high-performance PCs. The proposed algorithm achieves good reconstruction quality on QR codes and outperforms existing methods in terms of speed. We present PC and Android implementations of a complete QR scanner and evaluate the algorithm on synthetic and real test images. Our work indicates a promising step towards enterprise-grade scan performance with wearable devices.
Wearing another's personality: a human-surrogate system with a telepresence face BIBAFull-Text 125-132
  Kana Misawa; Jun Rekimoto
ChameleonMask is a telepresence system that displays a remote user's face on another user's face. Whereas most telepresence systems are designed to give the remote user a presence via a teleoperated robot, our system uses a real human as a surrogate for the remote user. This is accomplished by the surrogate user wearing a mask-shaped display that shows the remote user's live face, with a voice channel transmitting the remote user's voice. The surrogate user mimics the remote user by following the remote user's directions. In initial experiments, the surrogate tended to be regarded as the actual person (i.e., the remote user). We implemented applications in which the remote user gives the surrogate directions visually. We conducted user studies to determine how the remote user felt about giving directions to the surrogate and how the surrogate felt about being the body of the director. In these studies, the director had the confidence to go outside with ChameleonMask, and the surrogate tended to fulfill the director's requests and felt positive about being the surrogate.
Comparing order picking assisted by head-up display versus pick-by-light with explicit pick confirmation BIBAFull-Text 133-136
  Xiaolong Wu; Malcolm Haynes; Yixin Zhang; Ziyi Jiang; Zhengyang Shen; Anhong Guo; Thad Starner; Scott Gilliland
Manual order picking is an important part of distribution. Many techniques have been proposed to improve pick efficiency and accuracy. Previous studies compared pick-by-HUD (Head-Up Display) with pick-by-light, but without the explicit pick confirmation that is typical in industrial environments. We compare a pick-by-light system designed to emulate deployed systems with a pick-by-HUD system using Google Glass. The pick-by-light system was 50% slower than pick-by-HUD and required a higher workload. The number of errors committed and picker preference showed no statistically significant difference.
Estimating physical ability of stroke patients without specific tests BIBAFull-Text 137-140
  Adrian Derungs; Julia Seiter; Corina Schuster-Amft; Oliver Amft
We estimate the Extended Barthel Index (EBI) in patients after stroke using inertial sensor measurements acquired during daily activity, rather than specific assessments. The EBI is a standard clinical assessment of patient independence in handling everyday tasks. Our work aims at providing a continuous ability estimate for patients and therapists that could be used without expert supervision. We extract nine activity primitives (AP), including sitting, standing, transition, etc., from the continuous sensor data using basic rules that do not require data-based training. Using the relative duration of activity primitives, we evaluate EBI score estimation with two regression methods: Generalised Linear Models (GLM) and Support-Vector Regression (SVR). We evaluated our approaches on full-day recordings from 11 stroke patients, covering a total of 102 days of ambulatory rehabilitation in a day-care centre. Our results show that the EBI can be estimated from the activity primitives with approximately 12% relative error on average across all study participants using SVR. This indicates that the EBI can be estimated from daily life activity, thus supporting patients and therapists in tracking rehabilitation progress.
Affordances for self-tracking wearable devices BIBAFull-Text 141-142
  Amon Rapp; Federica Cena
Wearable devices for self-tracking are gradually spreading on the market, allowing people to monitor their own behaviors almost anywhere at any time. However, their integration into people's daily life poses a variety of challenges. In this landscape the concept of affordance becomes crucial: it can reframe the design of wearable devices by focusing it on their "smart materiality".

Eyewear computing

An approach for user identification for head-mounted displays BIBAFull-Text 143-146
  Cynthia E. Rogers; Alexander W. Witt; Alexander D. Solomon; Krishna K. Venkatasubramanian
A head-mounted display (HMD) is a device, worn by a person, which has a display in front of one or both eyes. HMDs have applications in a variety of domains including gaming, virtual reality, and medicine. In this paper we present an approach that can identify a user, from among a group of users, by synchronously capturing their unconscious blinking and head-movements using integrated HMD sensors. We ask each user of the HMD to view a series of rapidly changing images of numbers and letters on the HMD display. Simultaneously, their blinks and head-movements are captured using infrared, accelerometer, and gyroscope sensors. Analysis of our approach using blink and head-movement data collected from 20 individuals demonstrates the feasibility of our approach with an accuracy of 94%.
Estimating visual attention from a head mounted IMU BIBAFull-Text 147-150
  Teesid Leelasawassuk; Dima Damen; Walterio W. Mayol-Cuevas
This paper concerns the evaluation of methods for estimating both temporal and spatial visual attention using a head-worn inertial measurement unit (IMU). Aimed at tasks involving wearer-object interaction, we estimate both when and where the wearer's interest is directed. We evaluate various methods on a new egocentric dataset from 8 volunteers and compare our results with those achievable with a commercial gaze tracker used as ground truth. Our approach is primarily geared towards sensor-minimal EyeWear computing.
Glass-physics: using google glass to support high school physics experiments BIBAFull-Text 151-154
  P. Lukowicz; A. Poxrucker; J. Weppner; B. Bischke; J. Kuhn; M. Hirth
We demonstrate how smart glasses can support high school science experiments. The vision is to (1) reduce the "technical" effort involved in conducting the experiments (measuring, generating plots, etc.) and (2) allow the students to interactively see and manipulate the theoretical representation of the relevant phenomena while simultaneously interacting with them in the real world. As a use case, we have implemented a Google Glass app for a standard high school acoustics experiment (determining the relationship between the frequency of the tone generated by hitting a glass filled with water and the amount of water in the glass). We evaluated the system with a group of 36 high school students, split into a group using our application and a control group using an existing tablet-based system. We show a statistically significant advantage in experiment execution speed, cognitive load, and curiosity.
MAGIC pointing for eyewear computers BIBAFull-Text 155-158
  Shahram Jalaliniya; Diako Mardanbegi; Thomas Pederson
In this paper, we propose a combination of head and eye movements for touchlessly controlling the "mouse pointer" on eyewear devices, exploiting the speed of eye pointing and the accuracy of head pointing. The method is a wearable-computer-targeted variation of the original MAGIC pointing approach, which combined gaze tracking with a classical mouse device. The results of our experiment show that the combination of eye and head movements is faster than head pointing for far targets and more accurate than eye pointing.
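The trade-off the abstract describes (eye pointing is fast but imprecise, head pointing is precise but slow) can be illustrated as a two-stage pointer update; the warp threshold, function name, and coordinate convention below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative MAGIC-style pointing: warp the pointer to the gaze
# position only for large jumps, then let head movement refine it.
# The 100-px warp threshold is an assumed value for illustration.

import math

WARP_THRESHOLD_PX = 100.0  # assumed: warp only for large gaze jumps

def update_pointer(pointer, gaze, head_delta):
    """Return the new pointer position.

    pointer    -- (x, y) current pointer position in pixels
    gaze       -- (x, y) current gaze estimate (fast, noisy)
    head_delta -- (dx, dy) head-motion displacement (slow, precise)
    """
    dist = math.hypot(gaze[0] - pointer[0], gaze[1] - pointer[1])
    if dist > WARP_THRESHOLD_PX:
        # Coarse stage: jump near the target using the eyes.
        pointer = gaze
    # Fine stage: precise adjustment via head movement.
    return (pointer[0] + head_delta[0], pointer[1] + head_delta[1])
```

For a far target, e.g. `update_pointer((0, 0), (300, 0), (5, 0))`, the pointer warps to the gaze point and the head motion fine-tunes it; for a near target the eye estimate is ignored and only head motion applies.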
WISEglass: multi-purpose context-aware smart eyeglasses BIBAFull-Text 159-160
  Florian Wahl; Martin Freund; Oliver Amft
We extend regular eyeglasses with multi-modal sensing and processing functions for context awareness. Our aim is to leverage eyeglasses as a platform to acquire and process context information according to the wearer's needs. The eyeglasses provide inertial motion, environmental light, and pulse sensors, data processing, and wireless functionality, as well as a rechargeable battery. We implemented prototypes of the smart eyeglasses and evaluated recognition performance in a study of daily activities with nine participants. The accuracy of recognising nine activity clusters from the smart eyeglasses' motion sensors was 77% on average, confirming the benefit of smart eyeglasses for context-aware applications.
Creating gaze annotations in head mounted displays BIBAFull-Text 161-162
  Diako Mardanbegi; Pernilla Qvarfordt
To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD's camera, looks at an object of interest in the image, and speaks the information to be associated with the object. The gaze location is recorded and visualized with a marker. The voice is transcribed using speech recognition. Gaze annotations can be shared. Our study showed that users found that gaze annotations add precision and expressiveness compared to annotating the image as a whole.

Environmental sensing systems

Tracking motion context of railway passengers by fusion of low-power sensors in mobile devices BIBAFull-Text 163-170
  Takamasa Higuchi; Hirozumi Yamaguchi; Teruo Higashino
In this paper we develop StationSense, a novel mobile sensing solution for precisely tracking the temporal stop-and-go patterns of railway passengers. While such motion context is a promising enabler of various traveler-support systems, our experiments in a major railway network in Japan showed that existing accelerometer-based passenger tracking systems perform poorly in modern trains, where jolts during motion have been dramatically reduced. For robust motion tracking, StationSense harnesses characteristic features of the ambient magnetic field in trains to identify candidate stationary periods, and subsequently filters out false positive detections with a tailored acceleration fusion mechanism. It then finds optimal boundaries between adjacent moving/stationary periods using unique signatures in the accelerometer readings. Through field experiments on 16 railway lines, we show that StationSense identifies periods of train stops with an accuracy of 81%, almost twice that of existing accelerometer-based solutions.
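The two-stage pipeline in the abstract (magnetic-field features propose stop candidates, acceleration data rejects false positives) might be sketched as below; the window size, the choice of signal variance as the feature, and both thresholds are assumptions for illustration, not the published algorithm.

```python
# Illustrative two-stage stop detection: low magnetic-field variance
# proposes candidate stationary windows; low accelerometer variance
# confirms them. All thresholds are assumed values for illustration.

from statistics import pvariance

MAG_VAR_THRESHOLD = 0.5   # assumed threshold on magnetic variance
ACC_VAR_THRESHOLD = 0.05  # assumed threshold on acceleration variance

def detect_stops(mag, acc, window=4):
    """Return start indices of windows classified as 'train stopped'.

    mag, acc -- equal-length lists of magnetometer / accelerometer
                magnitudes sampled at the same rate
    """
    stops = []
    for start in range(0, len(mag) - window + 1, window):
        m = mag[start:start + window]
        a = acc[start:start + window]
        # Stage 1: a stable magnetic field suggests the train is stopped.
        if pvariance(m) < MAG_VAR_THRESHOLD:
            # Stage 2: reject false positives with accelerometer data.
            if pvariance(a) < ACC_VAR_THRESHOLD:
                stops.append(start)
    return stops
```

The second stage mirrors the abstract's point that the magnetic cue alone over-triggers and acceleration fusion prunes the false positives.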
Improving floor localization accuracy in 3D spaces using barometer BIBAFull-Text 171-178
  Dipyaman Banerjee; Sheetal K. Agarwal; Parikshit Sharma
Technologies such as WiFi and BLE have proven effective for indoor localization in two-dimensional spaces with sufficiently good accuracy, but the same techniques have large margins of error in three-dimensional spaces. Popular 3D spaces such as malls or airports are marked by distinct structural features -- atriums/hollow spaces and large corridors -- which reduce the spatial variability of WiFi and BLE signal strengths, leading to erroneous location prediction. A large fraction of these errors can be attributed to vertical jumps, where the predicted location has the same horizontal coordinates as the actual location but differs in the vertical coordinate. Smartphones now come equipped with a barometer sensor, which can be used to solve this problem and create a 3D localization solution with better accuracy. Research shows that the barometer can determine relative vertical movement and its direction with nearly 100% accuracy. However, exact floor prediction requires repeated calibration of the barometer measurements, as pressure values vary significantly across devices, time, and location. In this paper we present a method for automatically calibrating smartphone-embedded barometers to provide accurate 3D localization. Our method combines a probabilistic learning method with a pressure-drift elimination algorithm. We also show that when the floor is accurately predicted, WiFi localization accuracy improves by 25% in 3D spaces. We validate our techniques in a real shopping mall and provide valuable insights from practical experience.
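As an illustration of the kind of barometric floor inference the abstract refers to, the sketch below converts a pressure difference into a relative height using the standard approximation of roughly 12 Pa of pressure drop per metre of altitude near sea level, then rounds to a floor index. The constants and function name are assumptions; the paper's actual method additionally handles calibration and drift, which this sketch does not.

```python
# Illustrative barometric floor estimation. Near sea level, pressure
# drops by roughly 12 Pa per metre of altitude gain; dividing the
# inferred height change by the storey height gives a floor index.
# Both constants below are assumed values for illustration.

PA_PER_METRE = 12.0    # approximate pressure gradient near sea level
FLOOR_HEIGHT_M = 4.0   # assumed storey height of the building

def floor_from_pressure(p_ref, p_now, floor_height=FLOOR_HEIGHT_M):
    """Estimate floor index relative to the floor where p_ref was read.

    p_ref -- calibrated reference pressure (Pa) at a known floor
    p_now -- current pressure reading (Pa)
    """
    height_m = (p_ref - p_now) / PA_PER_METRE  # higher up -> lower pressure
    return round(height_m / floor_height)
```

A 96 Pa drop relative to the ground-floor reference would map to about 8 m, i.e. two floors up; without the repeated calibration the paper describes, device-specific offsets and drift would corrupt exactly this computation.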
Robust in-situ data reconstruction from Poisson noise for low-cost, mobile, non-expert environmental sensing BIBAFull-Text 179-182
  Matthias Budde; Marcel Köpke; Michael Beigl
Personal and participatory environmental sensing, especially of air quality, is a topic of increasing importance. However, as the employed sensors are often cheap, they are prone to erroneous readings, e.g. due to sensor aging or low selectivity. Additionally, non-expert users make mistakes when handling the equipment. We present an elegant approach that deals with such problems at the sensor level. Instead of characterizing systematic errors to remove them from the noisy signal, we reconstruct the true signal solely from its Poisson noise. Our approach can be applied to data from any phenomenon that can be modeled as particles and is robust against both offset and drift, as well as, to a certain extent, against cross-sensitivity. We show its validity on two real-world datasets.
Fine-grained social relationship extraction from real activity data under coarse supervision BIBAFull-Text 183-187
  Kota Tsubouchi; Osamu Saisho; Junichi Sato; Seira Araki; Masamichi Shimosaka
Understanding social relationships plays an important role in smooth information sharing and project management. Recently, extracting social relationships from activity sensor data has gained popularity, and many researchers have tried to detect close-relationship pairs based on similarities between activity sensor data, i.e., unsupervised approaches. However, there is room for further research into social relationship analysis of sensor data in terms of extraction performance. We therefore focus on improving detection accuracy and propose novel fine-grained social relationship extraction from coarse supervision data using a supervised approach based on multiple instance learning. In this paper, a fine-grained relationship means a relationship that includes information about the time and duration people spend together, while coarse supervision data contains only information about whether they were together on a given day. We evaluate the feasibility of our extraction method and analyze the extracted fine-grained social relationships. Our approach improves detection accuracy and achieves extraction of fine-grained relationships from coarse supervision data.
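The multiple-instance framing in the abstract treats each day as a "bag" of time slots with only a day-level "together" label, from which per-slot labels must be inferred. The toy heuristic below, with assumed names and data layout, illustrates that framing: a threshold is learned from negative bags (where no slot can be "together") and applied to slots in positive bags. It is a sketch of the MIL setup, not the paper's learner.

```python
# Illustrative multiple-instance view of coarse supervision: each day
# is a bag of per-slot co-activity similarity scores with only a
# day-level label. A simple MIL-style rule: no slot in a negative bag
# is 'together', so use the largest negative-bag score as a threshold
# and mark higher-scoring slots in positive bags as fine-grained
# 'together' periods. This is a toy heuristic for illustration.

def fine_grained_together(bags):
    """bags -- list of (scores, day_label) pairs.

    Returns, for each positive bag in order, the slot indices
    inferred as 'together'.
    """
    # Learn a threshold from negative bags (all-slot negatives).
    neg_scores = [s for scores, label in bags if not label for s in scores]
    threshold = max(neg_scores) if neg_scores else 0.0
    result = []
    for scores, label in bags:
        if label:
            result.append([i for i, s in enumerate(scores) if s > threshold])
    return result
```

This captures the key asymmetry of coarse supervision: negative days constrain every slot, while positive days only assert that at least one slot is "together".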