Proceedings of the 2014 International Symposium on Wearable Computers

Fullname: ISWC 2014: 18th International Symposium on Wearable Computers
Editors: Lucy Dunne; Tom Martin; Michael Beigl
Location: Seattle, Washington
Dates: 2014-Sep-13 to 2014-Sep-17
Volume: 1
Publisher: ACM
Standard No: ISBN: 978-1-4503-2969-9; ACM DL: Table of Contents; hcibib: ISWC14-1
Papers: 30
Pages: 142
Links: Conference Website
  1. ISWC 2014-09-13 Volume 1
    1. Keynote speaker
    2. Activity and group interactions
    3. Contextual awareness on mobile devices
    4. Wearable input/output
    5. Eyewear computing
    6. Sensing the body
    7. Assistive devices
    8. Posters

ISWC 2014-09-13 Volume 1

Keynote speaker

Making space suits BIBAFull-Text 1
  Amy Ross
Successfully enabling a human to live and work in the vacuum of space requires a broad array of technologies, from textiles to hard goods to electronics. Space suit design is predicated on two things: 1) where you are going, and 2) what you are doing. Where you are going addresses the environment into which you are sending the suited astronaut. Considerations such as micrometeoroids, thermal variations, and dust are included; these considerations drive the primary functions of the space suit as a life support system. What you are doing depends on the goals for the mission. Are you deploying a habitat? Are you performing geologic experiments? Are you searching for life? The 'doing' question addresses the productive purpose for sending the astronauts to their destination. In this keynote, I will discuss these considerations and how they are realized in current and future space suit design.

Activity and group interactions

Group activity recognition using belief propagation for wearable devices BIBAFull-Text 3-10
  Dawud Gordon; Markus Scholz; Michael Beigl
Humans are social beings and spend most of their time in groups. Group behavior is emergent, generated by members' personal characteristics and their interactions. It is therefore difficult to recognize in peer-to-peer (P2P) systems where the emergent behavior itself cannot be directly observed. We introduce 2 novel algorithms for distributed probabilistic inference (DPI) of group activities using loopy belief propagation (LBP). We evaluate their performance using an experiment in which 10 individuals play 6 team sports and show that these activities are emergent in nature through natural processes. Centralized recognition performs very well, upwards of an F-score of 0.95 for large window sizes. The distributed methods iteratively converge to solutions which are comparable to centralized methods. DPI-LBP also reduces energy consumption by a factor of 7 to 40, where a centralized unit or infrastructure is not required.
Accommodating user diversity for in-store shopping behavior recognition BIBAFull-Text 11-14
  Sougata Sen; Dipanjan Chakraborty; Vigneshwaran Subbaraju; Dipyaman Banerjee; Archan Misra; Nilanjan Banerjee; Sumit Mittal
This paper explores the possibility of using mobile sensing data to detect certain in-store shopping intentions or behaviours of shoppers. We propose a person-independent activity recognition technique called CROSDAC, which captures the diversity in the manifestation of such intentions or behaviours in a heterogeneous set of users in a data-driven manner via a 2-stage clustering-cum-classification technique. Using smartphone-based sensor data (accelerometer, compass and Wi-Fi) from a directed, but real-life study involving 86 shopping episodes from 30 users in a mall's food court, we show that CROSDAC's mobile sensing-based approach can offer reasonably high accuracy (77.6% for a 2-class identification problem) and outperforms the traditional community-driven approaches that unquestioningly segment users on the basis of underlying demographic or lifestyle attributes.
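As an illustrative aside, the 2-stage clustering-cum-classification idea can be sketched in a few lines of Python; this is a minimal sketch under our own assumptions (hypothetical names, scikit-learn stand-ins), not the authors' CROSDAC implementation: episodes are first clustered by their sensor-feature profiles, then one intent classifier is trained per cluster.

    # Minimal sketch of a 2-stage clustering-cum-classification pipeline
    # (illustrative only; not the CROSDAC code from the paper).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    def train_two_stage(features, labels, n_clusters=3):
        """features: (episodes x dims) accelerometer/compass/Wi-Fi features,
        labels: per-episode shopping-intention class (both numpy arrays)."""
        clusterer = KMeans(n_clusters=n_clusters, random_state=0).fit(features)
        classifiers = {}
        for c in range(n_clusters):
            idx = clusterer.labels_ == c
            classifiers[c] = RandomForestClassifier(random_state=0).fit(
                features[idx], labels[idx])
        return clusterer, classifiers

    def predict_two_stage(clusterer, classifiers, features):
        clusters = clusterer.predict(features)
        return np.array([classifiers[c].predict(f.reshape(1, -1))[0]
                         for c, f in zip(clusters, features)])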
Detecting smoothness of pedestrian flows by participatory sensing with mobile phones BIBAFull-Text 15-18
  Tomohiro Nishimura; Takamasa Higuchi; Hirozumi Yamaguchi; Teruo Higashino
In this paper, we propose a novel system for estimating crowd density and the smoothness of pedestrian flows in public spaces by participatory sensing with mobile phones. By analyzing the walking motion of pedestrians and ambient sound in the environment, which can be monitored by the accelerometers and microphones in off-the-shelf smartphones, our system classifies the current situation in each area into four categories that well represent the crowd behavior. Through field experiments using Android smartphones, we show that our system can recognize the current situation with an accuracy of 60-78%.

Contextual awareness on mobile devices

Group affiliation detection using model divergence for wearable devices BIBAFull-Text 19-26
  Dawud Gordon; Martin Wirz; Daniel Roggen; Gerhard Tröster; Michael Beigl
Methods for recognizing group affiliations using mobile devices have been proposed that use centralized instances to aggregate and evaluate data. However, centralized systems do not scale well and fail when the network is congested. We present a method for distributed, peer-to-peer (P2P) recognition of group affiliations in multi-group environments, using the divergence of mobile phone sensor data distributions as an indicator of similarity. The method assesses pairwise similarity between individuals using model parameters instead of sensor observations, and then interprets that information in a distributed manner. An experiment was conducted with 10 individuals in different group configurations to compare P2P and conventional centralized approaches. Although the output of the proposed method fluctuates, we can still correctly detect 93% of group affiliations by applying a filter. We foresee applications in mobile social networking, life logging, smart environments, crowd situations and possibly crowd emergencies.
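To make the "model parameters instead of sensor observations" idea concrete, one possible realisation (ours, not the authors' implementation) is to fit a diagonal Gaussian to each device's recent motion features, exchange only the means and variances, and use a symmetric KL divergence as the pairwise similarity.

    # Sketch only: symmetric KL divergence between two diagonal Gaussians
    # fitted locally on each device (assumed feature model, not the paper's).
    import numpy as np

    def fit_diag_gaussian(features):
        """features: (samples x dims) local sensor features on one device."""
        return features.mean(axis=0), features.var(axis=0) + 1e-6

    def kl_diag(mu_p, var_p, mu_q, var_q):
        return 0.5 * np.sum(np.log(var_q / var_p)
                            + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

    def pairwise_divergence(model_a, model_b):
        """Lower divergence -> more similar motion -> likely same group."""
        return kl_diag(*model_a, *model_b) + kl_diag(*model_b, *model_a)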
Public restroom detection on mobile phone via active probing BIBAFull-Text 27-34
  Mingming Fan; Alexander Travis Adams; Khai N. Truong
Although there are clear benefits to automatic image capture services by wearable devices, image capture sometimes happens in sensitive spaces where camera use is not appropriate. In this paper, we tackle this problem by focusing on detecting when the user of a wearable device is located in a specific type of private space -- the public restroom -- so that the image capture can be disabled. We present an infrastructure-independent method that uses just the microphone and the speaker on a commodity mobile phone. Our method actively probes the environment by playing a 0.1-second sine-wave sweep sound and then analyzes the impulse response (IR) by extracting MFCC features. These features are then used to train an SVM model. Our evaluation results show that we can train a general restroom model which is able to recognize new restrooms. We demonstrate that this approach works on different phone hardware. Furthermore, the volume levels, occupancy and presence of other sounds do not affect recognition in significant ways. We discuss three types of errors that the prediction model has and evaluate two proposed smoothing algorithms for improving recognition.
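The probe-and-classify pipeline lends itself to a short sketch; the following Python fragment (our approximation using librosa and scikit-learn, not the authors' code) summarizes a recorded probe response into MFCC features and trains an SVM on labelled responses.

    # Sketch of the response-classification stage (assumed details).
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def response_features(recording, sr=44100, n_mfcc=13):
        """recording: mono audio captured right after playing the 0.1 s sweep."""
        mfcc = librosa.feature.mfcc(y=recording, sr=sr, n_mfcc=n_mfcc)
        return mfcc.mean(axis=1)          # one feature vector per probe

    def train_restroom_model(recordings, labels, sr=44100):
        """labels: 1 = recorded in a restroom, 0 = elsewhere."""
        X = np.vstack([response_features(r, sr) for r in recordings])
        return SVC(kernel="rbf").fit(X, labels)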
Exploiting usage statistics for energy-efficient logical status inference on mobile phones BIBAFull-Text 35-42
  Jon C. Hammer; Tingxin Yan
Logical statuses of mobile users, such as isBusy and isAlone, are the key enabler for a plethora of context-aware mobile applications. While on-board hardware sensors, such as motion, proximity, and location sensors, have been extensively studied for logical status inference, their continuous usage incurs formidable energy consumption and therefore degrades the user experience. In this paper, we argue that smartphone usage statistics can be used for logical status inference with negligible energy cost. To validate this argument, this paper presents a continuous inference engine that (1) intercepts multiple operating system events, in particular foreground app, notifications, screen states, and connected networks; (2) extracts informative features from OS events; and (3) efficiently infers the logical status of mobile users. The proposed inference engine is implemented for unmodified Android phones, and an evaluation over a four-week trial shows that it identifies four logical statuses of mobile users with over 87% accuracy, while the average impact on battery life is less than 0.5%.
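A minimal sketch of such usage-based inference (ours; the event names and classifier choice are assumptions, not the paper's implementation) simply counts OS events per time window and feeds the counts to a lightweight classifier.

    # Sketch: OS usage events in a window -> count features -> logical status.
    from collections import Counter
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    EVENT_TYPES = ["foreground_app_switch", "notification", "screen_on",
                   "screen_off", "network_change"]      # hypothetical names

    def window_features(events):
        """events: list of event-type strings observed in one time window."""
        counts = Counter(events)
        return np.array([counts[e] for e in EVENT_TYPES], dtype=float)

    def train_status_model(event_windows, statuses):
        """statuses: e.g. isBusy labels collected during the trial."""
        X = np.vstack([window_features(w) for w in event_windows])
        return LogisticRegression(max_iter=1000).fit(X, statuses)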
How much light do you get?: estimating daily light exposure using smartphones BIBAFull-Text 43-46
  Florian Wahl; Thomas Kantermann; Oliver Amft
We present an approach to estimate a person's light exposure using smartphones. We used web-sourced weather reports combined with smartphone light sensor data, time of day, and indoor/outdoor information to estimate illuminance around the user throughout a day. Since light dominates every human's circadian rhythm and influences the sleep-wake cycle, we developed a smartphone-based system that does not require additional sensors for illuminance estimation. To evaluate our approach, we conducted a free-living study with 12 users, each carrying a smartphone, a head-mounted light reference sensor, and a wrist-worn light sensing device for six consecutive days. Estimated light values were compared to the head-mounted reference, the wrist-worn device and a mean value estimate. Our results show that illuminance could be estimated at less than 20% error for all study participants, outperforming the wrist-worn device. In 9 out of 12 participants the estimation deviated less than 10% from the reference measurements.
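One plausible, deliberately simplified way to combine the cues named above is sketched below; the fusion rule and constants are our assumptions, not the paper's estimation model.

    # Sketch: fuse phone light sensor, weather-derived outdoor illuminance,
    # and an indoor/outdoor flag into a per-sample illuminance estimate.
    def estimate_illuminance(sensor_lux, weather_outdoor_lux, is_outdoor):
        """sensor_lux: phone light sensor (often occluded in a pocket);
        weather_outdoor_lux: estimate from cloud cover and sun elevation."""
        if is_outdoor:
            # Outdoors an occluded sensor under-reads, so prefer the larger cue.
            return max(sensor_lux, weather_outdoor_lux)
        # Indoors, fall back to the sensor with a crude artificial-light floor.
        return max(sensor_lux, 100.0)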

Wearable input/output

The tongue and ear interface: a wearable system for silent speech recognition BIBAFull-Text 47-54
  Himanshu Sahni; Abdelkareem Bedri; Gabriel Reyes; Pavleen Thukral; Zehua Guo; Thad Starner; Maysam Ghovanloo
We address the problem of performing silent speech recognition where vocalized audio is not available (e.g. due to a user's medical condition) or is highly noisy (e.g. during firefighting or combat). We describe our wearable system to capture tongue and jaw movements during silent speech. The system has two components: the Tongue Magnet Interface (TMI), which utilizes the 3-axis magnetometer aboard Google Glass to measure the movement of a small magnet glued to the user's tongue, and the Outer Ear Interface (OEI), which measures the deformation in the ear canal caused by jaw movements using proximity sensors embedded in a set of earmolds. We collected a data set of 1901 utterances of 11 distinct phrases silently mouthed by six able-bodied participants. Recognition relies on using hidden Markov model-based techniques to select one of the 11 phrases. We present encouraging results for user dependent recognition.
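The phrase-selection step can be illustrated with a standard HMM toolkit; this sketch (ours, using hmmlearn, with hyperparameters chosen arbitrarily) fits one model per phrase on TMI/OEI feature sequences and picks the phrase whose model scores a new sequence highest.

    # Sketch: one Gaussian HMM per phrase, maximum-likelihood selection.
    import numpy as np
    from hmmlearn import hmm

    def train_phrase_models(sequences_by_phrase, n_states=5):
        """sequences_by_phrase: phrase -> list of (frames x features) arrays."""
        models = {}
        for phrase, seqs in sequences_by_phrase.items():
            X = np.vstack(seqs)
            lengths = [len(s) for s in seqs]
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
            models[phrase] = m.fit(X, lengths)
        return models

    def recognize(models, sequence):
        """sequence: (frames x features) array; returns the best-scoring phrase."""
        return max(models, key=lambda p: models[p].score(sequence))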
Hands-free gesture control with a capacitive textile neckband BIBAFull-Text 55-58
  Marco Hirsch; Jingyuan Cheng; Attila Reiss; Mathias Sundholm; Paul Lukowicz; Oliver Amft
We present a novel sensing modality for hands-free gesture controlled user interfaces, based on active capacitive sensing. Four capacitive electrodes are integrated into a textile neckband, allowing continuous unobtrusive head movement monitoring. We explore the capability of the proposed system for recognising head gestures and postures. A study involving 12 subjects was carried out, recording data from 15 head gestures and 19 different postures. We present a quantitative evaluation based on this dataset, achieving an overall accuracy of 79.1% for head gesture recognition and 40.4% for distinguishing between head postures (69.9% when merging the most adjacent positions), respectively. These results indicate that our approach is promising for hands-free control interfaces. An example application scenario of this technology is the control of an electric wheelchair for people with motor impairments, where recognised gestures or postures can be mapped to control commands.
FabriTouch: exploring flexible touch input on textiles BIBAFull-Text 59-62
  Florian Heller; Stefan Ivanov; Chat Wacharamanotham; Jan Borchers
Touch-sensitive fabrics let users operate wearable devices unobtrusively and with rich input gestures similar to those on modern smartphones and tablets. While hardware prototypes exist in the DIY crafting community, HCI designers and researchers have little data about how well these devices actually work in realistic situations. FabriTouch is the first flexible touch-sensitive fabric that provides such scientifically validated information. We show that placing a FabriTouch pad onto clothing and the body instead of a rigid support surface significantly reduces input speed but still allows for basic gestures. We also show the impact of sitting, standing, and walking on horizontal and vertical swipe gesture performance in a menu navigation task. Finally, we provide the details necessary to replicate our FabriTouch pad, to enable both the DIY crafting community and HCI researchers and designers to build on our work.
SwitchBack: an on-body RF-based gesture input device BIBAFull-Text 63-66
  Dana Hughes; Halley Profita; Nikolaus Correll
We present SwitchBack, a novel e-textile input device that can register multiple forms of input (tapping and bi-directional swiping) with minimal calibration. The technique is based on measuring the input impedance of a 7 cm microstrip short-circuit stub, consisting of a strip of conductive fabric separated from a ground plane (also conductive fabric) by a layer of denim. The input impedance is calculated by measuring the stub's reflection coefficient using a simple RF reflectometer circuit operating at 900 MHz. The input impedance of the stub is affected by the dielectric properties of the surrounding material, and changes in a predictable manner when touched. We present the theoretical formulation, device and circuit design, and experimental results. Future work is also discussed.
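For readers unfamiliar with the underlying transmission-line relation, the standard textbook formulas for a lossless short-circuited stub of length l are:

    Z_{\mathrm{in}} = j Z_0 \tan(\beta l), \qquad
    \Gamma = \frac{Z_{\mathrm{in}} - Z_0}{Z_{\mathrm{in}} + Z_0}, \qquad
    \beta = \frac{2 \pi f \sqrt{\varepsilon_{\mathrm{eff}}}}{c}

where Z_0 is the line's characteristic impedance and ε_eff the effective permittivity of the fabric/denim stack. A touching finger changes ε_eff, which shifts β and therefore the measured reflection coefficient Γ; this gloss is ours, not a formula quoted from the paper.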
Wearable jamming mitten for virtual environment haptics BIBAFull-Text 67-70
  Timothy M. Simon; Ross T. Smith; Bruce H. Thomas
This paper presents a new mitten incorporating vacuum layer jamming technology to provide haptic feedback to a user. We demonstrate that layer jamming technology can be successfully applied to a mitten, and discuss advantages layer jamming provides as a wearable technology through its low profile form factor. Jamming differs from traditional wearable haptic systems by restricting a user's movement, rather than applying an actuation force on the user's body. Restricting the user's movement is achieved by varying the stiffness of wearable items, such as gloves. We performed a pilot study where the qualitative results showed users found the haptic sensation of the jamming mitten similar to grasping the physical counterpart.

Eyewear computing

A comparison of order picking assisted by head-up display (HUD), cart-mounted display (CMD), light, and paper pick list BIBAFull-Text 71-78
  Anhong Guo; Shashank Raghu; Xuwen Xie; Saad Ismail; Xiaohui Luo; Joseph Simoneau; Scott Gilliland; Hannes Baumann; Caleb Southern; Thad Starner
Wearable and contextually aware technologies have great applicability in task guidance systems. Order picking is the task of collecting items from inventory in a warehouse and sorting them for distribution; this process accounts for about 60% of the total operational costs of these warehouses. Current practice in industry includes paper pick lists and pick-by-light systems. We evaluated order picking assisted by four approaches: head-up display (HUD); cart-mounted display (CMD); pick-by-light; and paper pick list. We report accuracy, error types, task time, subjective task load and user preferences for all four approaches. The findings suggest that pick-by-HUD and pick-by-CMD are superior on all metrics to the current practices of pick-by-paper and pick-by-light.
The effects of visual displacement on simulator sickness in video see-through head-mounted displays BIBAFull-Text 79-82
  Sei-Young Kim; Joong Ho Lee; Ji Hyung Park
We present an experiment exploring the role of visual displacement in simulator sickness in a video see-through head-mounted display (HMD). To identify this effect, we examined visual displacement conditions ranging from 50 to 300 mm and investigated adaptation to simulator sickness over three days. The results indicated that the total symptom score of simulator sickness in the 300 mm visual displacement condition was significantly higher than in the other visual displacement conditions. In addition, the total symptom score became significantly lower over days 1-3 in the 200 mm condition and over days 1-2 in the 300 mm condition, indicating adaptation over the three days. However, only partial adaptation was observed in the 300 mm condition, suggesting that the higher sensory conflict at 300 mm increases the time needed to adapt. These results indicate that simulator sickness in video see-through HMDs is adaptable over time, which supports previous studies.
Understanding the wearability of head-mounted devices from a human-centered perspective BIBAFull-Text 83-86
  Vivian Genaro Motti; Kelly Caine
Extensive efforts have been dedicated to developing wearables, but existing solutions focus mainly on feasibility and innovation. Thus, although many devices are called 'wearable', users face wearability issues. Previously adopted trial-and-error approaches have effectively produced wearables, but without a focus on human factors. Through an extensive analysis of online comments about head-mounted devices, this paper presents their problem space from a human perspective. The analysis of online comments from existing and potential users enabled us to identify key aspects of the wearability of head-mounted devices, bridging the gap between design decisions and users' requirements.
Looking at or through?: using eye tracking to infer attention location for wearable transparent displays BIBAFull-Text 87-90
  Mélodie Vidal; David H. Nguyen; Kent Lyons
Wearable near-eye displays pose interesting challenges for interface design. These devices present the user with a duality of visual worlds, with a virtual window of information overlaid onto the physical world. Because of this duality, we suggest that the wearable interface would benefit from understanding where the user's visual attention is directed. We explore the potential of eye tracking to address this problem, and describe four eye tracking techniques designed to provide data about where the user's attention is directed. We also propose attention-aware user interface techniques that demonstrate the potential of the eyes for managing wearable display user interfaces.

Sensing the body

Unobtrusive gait verification for mobile phones BIBAFull-Text 91-98
  Hong Lu; Jonathan Huang; Tanwistha Saha; Lama Nachman
Continuously and unobtrusively identifying the phone's owner using accelerometer sensing and gait analysis has great potential to improve the user experience on the go. However, a number of challenges, including gait modeling and training data acquisition, must be addressed before unobtrusive gait verification is practical. In this paper, we describe a gait verification system for mobile phones that makes no assumptions about body placement or device orientation. Our system uses a combination of supervised and unsupervised learning techniques to verify the user continuously and to automatically learn unseen gait patterns from the user over time. We demonstrate that it is capable of recognizing the user in natural settings. We also investigated an unobtrusive training method that makes it feasible to acquire training data without explicit user annotation.
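As a rough illustration of orientation-independent verification (our sketch; the paper's actual combination of supervised and unsupervised learning is more involved), one can extract features from the accelerometer magnitude, which is invariant to device orientation, and train a one-class model on the owner's gait.

    # Sketch: orientation-invariant gait features + one-class owner model.
    import numpy as np
    from sklearn.svm import OneClassSVM

    def gait_features(accel_xyz, win=128):
        """accel_xyz: (samples x 3); the magnitude removes orientation effects."""
        mag = np.linalg.norm(accel_xyz, axis=1)
        windows = [mag[i:i + win] for i in range(0, len(mag) - win, win // 2)]
        return np.array([[w.mean(), w.std(),
                          np.abs(np.fft.rfft(w))[1:6].argmax()]  # dominant stride bin
                         for w in windows])

    def train_owner_model(owner_accel):
        return OneClassSVM(nu=0.1, gamma="scale").fit(gait_features(owner_accel))

    def is_owner(model, accel_xyz):
        return model.predict(gait_features(accel_xyz)).mean() > 0  # majority vote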
Enhancing action recognition through simultaneous semantic mapping from body-worn motion sensors BIBAFull-Text 99-106
  Michael Hardegger; Long-Van Nguyen-Dinh; Alberto Calatroni; Gerhard Tröster; Daniel Roggen
Locations and actions are interrelated: some activities tend to occur at specific places, for example a person is more likely to twist his wrist when he is close to a door (to turn the knob). We present an unsupervised fusion method that takes advantage of this characteristic to enhance the recognition of location-related actions (e.g., open, close, switch, etc.). The proposed LocAFusion algorithm acts as a post-processing filter: At run-time, it constructs a semantic map of the environment by tagging action recognitions to Cartesian coordinates. It then uses the accumulated information about a location i) to discriminate between identical actions performed at different places and ii) to correct recognitions that are unlikely, given the other observations at the same location. LocAFusion does not require prior statistics about where activities occur, which allows for seamless deployment to new environments. The fusion approach is agnostic to the sensor modalities and methods used for action recognition and localization.
   For evaluation, we implemented a fully wearable setup that tracks the user with a foot-mounted motion sensor and the ActionSLAM algorithm. Simultaneously, we recognize hand actions through template matching on the data of a wrist-worn inertial measurement unit. In 10 recordings with 554 performed object interactions, LocAFusion consistently outperformed location-independent action recognition (8-31% increase in F1 score), identified 96% of the objects in the semantic map and overall correctly labeled 82% of the actions in problems with up to 23 classes.
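The post-processing role of the semantic map can be illustrated with a small data structure; in this sketch (ours, with grid size, thresholds and the correction rule as assumptions rather than LocAFusion itself), action recognitions are accumulated per location cell, and weak recognitions that contradict a location's history are re-labelled.

    # Sketch: location-indexed filter in the spirit of LocAFusion.
    from collections import defaultdict, Counter

    class SemanticMapFilter:
        def __init__(self, cell_size=0.5):
            self.cell_size = cell_size
            self.history = defaultdict(Counter)       # grid cell -> action counts

        def _cell(self, x, y):
            return (round(x / self.cell_size), round(y / self.cell_size))

        def update(self, x, y, action, confidence):
            """Record a recognition at (x, y); return a possibly corrected label."""
            cell = self._cell(x, y)
            self.history[cell][action] += confidence
            dominant, weight = self.history[cell].most_common(1)[0]
            if confidence < 0.5 and dominant != action and weight > 3.0:
                return dominant                        # unlikely here; re-label
            return action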
Your activity tracker knows when you quit smoking BIBAFull-Text 107-110
  Ken Kawamoto; Takeshi Tanaka; Hiroyuki Kuriyama
This paper discusses outcomes of our exploratory research aiming to discover ways of utilising continuous long-term respiratory rate data collected from actigraphy (wrist-worn accelerometers). We show that by monitoring changes in respiratory rate during sleep, we can detect and visualise various physical conditions that were previously not detectable using such simple wearable sensors, namely the subjective level of drunkenness, fever, and smoking cessation. This study provides valuable insight into the potential of actigraphy, not simply as a tool for detecting common daily activities, but as a base for building a generic lifelog system that can evaluate the more qualitative aspects of life.

Assistive devices

Passive haptic learning of Braille typing BIBAFull-Text 111-118
  Caitlyn Seim; John Chandler; Kayla DesPortes; Siddharth Dhingra; Miru Park; Thad Starner
Passive Haptic Learning (PHL) is the acquisition of sensorimotor skills without active attention to learning. One method is to "teach" motor skills using vibration cues delivered by a wearable, tactile interface while the user is focusing on another, primary task. We have created a system for Passive Haptic Learning of typing skills. In a study containing 16 participants, users demonstrated significantly reduced error typing a phrase in Braille after receiving passive instruction versus control (32.85% average decline in error vs. 2.73% increase in error). PHL users were also able to recognize and read more Braille letters from the phrase (72.5% vs. 22.4%). In a second study, containing 8 participants thus far, we passively teach the full Braille alphabet over four sessions. Typing error reductions in participants receiving PHL were more rapid and consistent, with 75% of PHL vs. 0% of control users reaching zero typing error. By the end of the study, PHL participants were also able to recognize and read 93.3% of all Braille alphabet letters. These results suggest that Passive Haptic instruction facilitated by wearable computers may be a feasible method of teaching Braille typing and reading.
An assistive EyeWear prototype that interactively converts 3D object locations into spatial audio BIBAFull-Text 119-126
  Titus J. J. Tang; Wai Ho Li
We present an end-to-end prototype for an assistive EyeWear system aimed at Vision Impaired users. The system uses computer vision to detect objects on planar surfaces and sonifies their 3D locations using spatial audio. A key novelty of the system is that it operates in real time (15Hz), allowing the user to interactively affect the audio feedback by actively moving a headworn sensor. A quantitative user study was conducted on 12 blindfolded subjects performing an object localisation and placement task using our system. This detailed study of near field interactive spatial audio for users operating at around arm's length departs from existing studies focused on far-field audio and non-interactive systems. The object localisation accuracy achieved on naive users suggests that the EyeWear prototype has a lot of potential as a real world assistive device. User feedback collected from exit surveys and mathematical modelling of user errors provide several promising avenues to further improve system performance.
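The core of the sonification step (mapping a head-relative 3D object location to directional audio cues) can be sketched as follows; this is our simplification, with constant-power stereo panning standing in for full spatial-audio rendering, not the prototype's actual pipeline.

    # Sketch: head-relative 3D position -> azimuth/elevation/distance -> gains.
    import numpy as np

    def direction_cues(x, y, z):
        """x: right, y: up, z: forward (metres), relative to the head."""
        azimuth = np.degrees(np.arctan2(x, z))              # + right, - left
        elevation = np.degrees(np.arctan2(y, np.hypot(x, z)))
        distance = np.sqrt(x * x + y * y + z * z)
        return azimuth, elevation, distance

    def stereo_gains(azimuth_deg):
        """Constant-power panning as a crude stand-in for HRTF rendering."""
        pan = np.clip(azimuth_deg / 90.0, -1.0, 1.0)         # -1 left ... +1 right
        angle = (pan + 1.0) * np.pi / 4.0
        return np.cos(angle), np.sin(angle)                  # (left, right)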

Posters

Washability of e-textile stretch sensors and sensor insulation BIBAFull-Text 127-128
  Mary Ellen Berglund; James Coughlin; Guido Gioberto; Lucy E. Dunne
An effective way to monitor body movements and positions (including physiological signals like breathing) without causing discomfort is through integration of sensors and electronics into base layers of clothing. However, for many applications (including sports and fitness), such sensors must be washable. Here, we present results of experiments evaluating the impact of washing on an e-textile stretch and bend sensor. Two cases are investigated: un-insulated sensors and sensors insulated with a fusible polymer film. Results show small-scale drift in the un-insulated sensor, which is magnified by machine washing and further by machine drying. Similar results are observed in delamination effects for the insulating film.
Human joint angle estimation with an e-textile sensor BIBAFull-Text 129-130
  Yu Enokibori; Kenji Mase
We describe the results of human joint angle estimation with an e-textile-based stretch sensor. Joint angle estimation is necessary to predict the details of human posture and to implement posture instruction that prevents caregiver injury in health and medical care. In this study, we focused on the elbow angle. We installed e-textile sensors on an elbow support to maintain an ideal setting and on a knitted shirt to simulate a daily-use setting. With the elbow support, the correlation coefficient and the root-mean-square error (in degrees of arc) were 0.99 and 5.73 for complete bends. With the knitted shirt, the corresponding values were 0.92 and 16.52 for complete bends and 0.71 and 7.30 for a daily action sequence. The range of motion of the subject's elbow was about 132°. Thus, our proposed system showed the potential to detect up to 11 elbow angle patterns with the ideal setting and 9 patterns with the daily-use setting.
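For readers who want to reproduce this kind of calibration, a minimal sketch (ours, assuming a simple linear sensor-to-angle relationship, which the paper does not necessarily use) fits the mapping against a reference goniometer and reports the same two quality measures.

    # Sketch: linear stretch-sensor-to-elbow-angle calibration and evaluation.
    import numpy as np

    def calibrate(sensor_values, reference_angles_deg):
        slope, intercept = np.polyfit(sensor_values, reference_angles_deg, 1)
        return slope, intercept

    def evaluate(slope, intercept, sensor_values, reference_angles_deg):
        predicted = slope * np.asarray(sensor_values) + intercept
        reference = np.asarray(reference_angles_deg)
        rmse = np.sqrt(np.mean((predicted - reference) ** 2))
        corr = np.corrcoef(predicted, reference)[0, 1]
        return corr, rmse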
Lower-limb goniometry using stitched sensors: effects of manufacturing and wear variables BIBAFull-Text 131-132
  Guido Gioberto; Cheol-Hong Min; Crystal Compton; Lucy E. Dunne
Smart fabrics allow for convenient wearable sensing solutions to monitor body movements during daily life. However, garment-integrated sensing presents challenges for accurate sensing due to many variables, including variability in garment and sensor dimensions introduced by cut-and-sew manufacturing processes, and re-positioning of integrated sensors when the garment is donned and doffed. Here, we measure the effect of variability in garment positioning due to donning and doffing, garment dimension due to manufacturing tolerances, and sensor dimension due to manufacturing defects on the accuracy of a stitched goniometer used to measure flexion of the knee and hip. Results show that variability in garment positioning and garment dimension has a minimal effect on sensor accuracy, but sensor dimension has a more significant influence on accuracy.
Sensors vs. human: comparing sensor based state monitoring with questionnaire based self-assessment in bipolar disorder patients BIBAFull-Text 133-134
  Agnes Gruenerbl; Gernot Bahle; Stefan Oehler; Raphaela Banzer; Chrisitan Haring; Paul Lukowicz
We compare the performance of a smartphone-based state and state-change detection system to self-assessment and show that the automatic detection is much closer to the objective psychiatric diagnosis. Our work is based on a large, real-life dataset collected from 9 real patients over a total of 800 days. It consists of smartphone sensor data and a daily self-assessment questionnaire filled out by the patients, and is validated against standardized psychiatric scale tests.
MagNail: augmenting nails with a magnet to detect user actions using a smart device BIBAFull-Text 135-136
  Azusa Kadomura; Itiro Siio
In this study, we design and implement MagNail, a nail augmented with a magnet that allows user actions to be detected via the magnetic sensor integrated in smart devices such as smartphones or tablet PCs. With this system, the user can intuitively operate the smart device through finger motions. We also develop a drawing application with an intuitive mode-switching function and evaluate its performance.
Single capacitive touch sensor that detects multi-touch gestures BIBAFull-Text 137-138
  Hiroyuki Manabe; Hiroshi Inamura
A technique that allows a single capacitive touch sensor to recognize multi-touch gestures is proposed. Touch, multi-finger swipe, and swipe direction are recognized. It does not need a multiplexer or complicated wiring and is well suited to wearable devices. An experiment with 8 subjects confirms that the proposed technique can recognize multi-touch gestures.
Examination of human factors for wearable line-of-sight detection system BIBAFull-Text 139-140
  Miho Ogawa; Kota Sampei; Carlos Cortes; Norihisa Miki
We proposed a wearable line-of-sight (LOS) detection system that utilizes micro-fabricated transparent optical sensors on eyeglasses. These sensors detect the reflection of light from the eye, in which the intensity from the white of the eye is stronger than that from the pupil, and can thus deduce the position of the pupil. LOS detection was successfully demonstrated using the proposed system, but careful calibration was required for each user. Therefore, in the current study, we investigated the dominant factors that affect LOS detection accuracy. It was experimentally found that the distance between the sensors on the eyeglasses and the pupil was a dominant factor. Thus, we designed a frame that can be adjusted according to this distance, which enabled LOS detection for all subjects.
Canine reachability of snout-based wearable inputs BIBAFull-Text 141-142
  Giancarlo Valentin; Joelle Alcaidinho; Larry Freil; Clint Zeagler; Melody Jackson; Thad Starner
We designed an experiment to assess the reachability of wearable interfaces for canines. We investigated the effect of placement on the ability of dogs to reach on-body interfaces with their snouts. In our pilot study, seven placements along the front legs, rib cage, hip, and chest were tested with six dogs. The results showed that the front leg placements are reachable with the least amount of training and are also the most invariant to small changes in location. With training, the lower half of the rib cage area had the fastest access times across subjects. We hope that these results may be useful in mapping the constraint space of placements for snout interactions.