
Proceedings of the 2013 ACM Symposium on User Interface Software and Technology

Fullname: Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology
Editors: Shahram Izadi; Aaron Quigley; Ivan Poupyrev; Takeo Igarashi
Location: St. Andrews, United Kingdom
Dates: 2013-Oct-08 to 2013-Oct-11
Volume: 1
Publisher: ACM
Standard No: ISBN: 978-1-4503-2268-3; ACM DL: Table of Contents; hcibib: UIST13-1
Papers: 63
Pages: 544
Links: Conference Website
  1. UIST 2013-10-08 Volume 1
    1. Keynote address
    2. Hardware
    3. Mobile
    4. Visualization & video
    5. Crowd & creativity
    6. Sensing
    7. Vision
    8. GUI
    9. Applications and games
    10. Tangible and fabrication
    11. Development
    12. Haptics

UIST 2013-10-08 Volume 1

Keynote address

Humans and the coming machine revolution BIBAFull-Text 1-2
  Raffaello D'Andrea
The key components of feedback control systems -- sensors, actuators, computation, power, and communication -- are continually becoming smaller, lighter, more robust, higher performance, and less expensive. By using appropriate algorithms and system architectures, it is thus becoming possible to "close the loop" on almost any machine, and to create new capabilities that fully exploit their dynamic potential. In this talk I will discuss various projects -- involving mobile robots, flying machines, an autonomous table, and actuated wingsuits -- where these new machine competencies are interfaced with the ultimate dynamic entities: human beings.

Hardware

Lumitrack: low cost, high precision, high speed tracking with projected m-sequences BIBAFull-Text 3-12
  Robert Xiao; Chris Harrison; Karl D. D. Willis; Ivan Poupyrev; Scott E. Hudson
We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six-degree-of-freedom (6-DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high-precision, and low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.
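The defining property of the m-sequences used here is that every window of m consecutive bits occurs only once per period, so a sensor that observes any m bits of the projected pattern can recover its absolute position within it. The snippet below is a minimal Python sketch of that idea, assuming a standard LFSR construction; the register width and tap positions are illustrative, not taken from the paper.

    # Minimal sketch: generate a binary m-sequence with an LFSR and decode
    # absolute position from any m-bit window. Width/taps are illustrative.

    def lfsr_sequence(taps=(5, 2), m=5):
        """Maximal-length sequence of period 2^m - 1 from an m-bit LFSR."""
        state = [1] * m
        out = []
        for _ in range(2 ** m - 1):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]
        return out

    def build_index(seq, m=5):
        """Map each m-bit window (read cyclically) to its starting position."""
        n = len(seq)
        return {tuple(seq[(i + j) % n] for j in range(m)): i for i in range(n)}

    seq = lfsr_sequence()
    index = build_index(seq)
    window = tuple(seq[10:15])        # what a sensor might observe
    print(index[window])              # -> 10, the absolute position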
PneUI: pneumatically actuated soft composite materials for shape changing interfaces BIBAFull-Text 13-22
  Lining Yao; Ryuma Niiyama; Jifei Ou; Sean Follmer; Clark Della Silva; Hiroshi Ishii
This paper presents PneUI, an enabling technology for building shape-changing interfaces through pneumatically actuated soft composite materials. The composite materials integrate the capabilities of both input sensing and active shape output. This is enabled by the composites' multi-layer structures with different mechanical or electrical properties. The shape-changing states are computationally controllable through pneumatics and pre-defined structures. We explore the design space of PneUI through four applications: height-changing tangible phicons, a shape-changing mobile, a transformable tablet case and a shape-shifting lamp.
Paper generators: harvesting energy from touching, rubbing and sliding BIBAFull-Text 23-30
  Mustafa Emre Karagozler; Ivan Poupyrev; Gary K. Fedder; Yuri Suzuki
We present a new energy harvesting technology that generates electrical energy from a user's interactions with paper-like materials. The energy harvesters are flexible, light, and inexpensive, and they utilize a user's gestures such as tapping, touching, rubbing and sliding to generate electrical energy. The harvested energy is then used to actuate LEDs, e-paper displays and various other devices to create novel interactive applications, such as enhancing books and other printed media with interactivity.
Touch & activate: adding interactivity to existing objects using active acoustic sensing BIBAFull-Text 31-40
  Makoto Ono; Buntarou Shizuki; Jiro Tanaka
In this paper, we present a novel acoustic touch sensing technique called Touch & Activate. It recognizes a rich set of touches, including grasping, on existing objects by attaching only a vibration speaker and a piezo-electric microphone, paired as a sensor. It provides an easy hardware configuration for prototyping interactive objects that have touch input capability. We conducted a controlled experiment to measure the accuracy of our technique and the trade-off between accuracy and the number of training rounds. Per-user recognition accuracies were 99.6% for five touch gestures on a plastic toy (a simple example) and 86.3% for six hand postures (a complex example). Walk-up user recognition accuracies for the two applications were 97.8% and 71.2%, respectively. Since the results of our experiment showed promising accuracy for recognizing touch gestures and hand postures, Touch & Activate should be feasible for prototyping interactive objects that have touch input capability.
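Active acoustic sensing of this kind is commonly implemented by emitting a signal through the attached speaker, recording the object's response with the piezo microphone, and classifying the resulting frequency response, which changes with how the object is touched or grasped. The sketch below illustrates such a pipeline on pre-recorded signals; the feature choice, SVM classifier, and placeholder data are generic assumptions, not the authors' exact implementation.

    # Generic active-acoustic-sensing sketch: frequency-response features + SVM.
    # Assumes `recordings` holds captured piezo signals (one per sample) and
    # `labels` the touch gesture performed during each recording.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def frequency_response_features(signal, n_bins=64):
        """Log-magnitude spectrum, pooled into a fixed number of bins."""
        spectrum = np.abs(np.fft.rfft(signal))
        bins = np.array_split(spectrum, n_bins)
        return np.log1p([b.mean() for b in bins])

    # Placeholder data: in a real setup these come from the sound card while a
    # signal is played through the vibration speaker attached to the object.
    rng = np.random.default_rng(0)
    recordings = rng.normal(size=(100, 4096))
    labels = np.arange(100) % 5          # five touch-gesture classes

    X = np.array([frequency_response_features(s) for s in recordings])
    clf = SVC(kernel="rbf", C=10.0)
    print(cross_val_score(clf, X, labels, cv=5).mean())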
Fiberio: a touchscreen that senses fingerprints BIBAFull-Text 41-50
  Christian Holz; Patrick Baudisch
We present Fiberio, a rear-projected multitouch table that identifies users biometrically based on their fingerprints during each touch interaction. Fiberio accomplishes this using a new type of screen material: a large fiber optic plate. The plate diffuses light on transmission, thereby allowing it to act as a projection surface. At the same time, the plate reflects light specularly, which produces the contrast required for fingerprint sensing. In addition to offering all the functionality known from traditional diffused illumination systems, Fiberio is the first interactive tabletop system that authenticates users during touch interaction -- unobtrusively and securely using the biometric features of fingerprints, which eliminates the need for users to carry any identification tokens.

Mobile

Bayesian touch: a statistical criterion of target selection with finger touch BIBAFull-Text 51-60
  Xiaojun Bi; Shumin Zhai
To improve the accuracy of target selection for finger touch, we conceptualize finger touch input as an uncertain process and derive a statistical target selection criterion, the Bayesian Touch Criterion, by combining the basic Bayes' rule of probability with the generalized dual Gaussian distribution hypothesis of finger touch. The Bayesian Touch Criterion selects as the intended target the candidate with the shortest Bayesian Touch Distance to the touch point, which is computed from the distance between the touch point and the target center, and from the target size. We give the derivation of the Bayesian Touch Criterion and its empirical evaluation with two experiments. The results showed that for 2-dimensional circular target selection, the Bayesian Touch Criterion is significantly more accurate than the commonly used Visual Boundary Criterion (i.e., a target is selected if and only if the touch point falls within its boundary) and its two variants.
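In outline, the criterion replaces the geometric hit test with a posterior comparison over candidate targets. A minimal sketch of that idea, assuming a Gaussian touch-point likelihood whose covariance grows with target size (the paper derives its exact distance from the dual-Gaussian hypothesis; the form below is only illustrative):

    \hat{t} = \arg\max_{t_i} P(t_i \mid s)
            = \arg\max_{t_i} \frac{P(s \mid t_i)\,P(t_i)}{\sum_j P(s \mid t_j)\,P(t_j)},
    \qquad P(s \mid t_i) = \mathcal{N}\bigl(s;\, c_i,\, \Sigma(w_i)\bigr)

Here s is the touch point, c_i and w_i are the center and size of candidate target t_i, and \Sigma(w_i) is a size-dependent covariance; selecting the maximum-posterior target corresponds to selecting the target with the smallest Bayesian Touch Distance.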
Touch scrolling transfer functions BIBAFull-Text 61-70
  Philip Quinn; Sylvain Malacria; Andy Cockburn
Touch scrolling systems use a transfer function to transform gestures on a touch-sensitive surface into scrolling output. The design of these transfer functions is complex, as they must facilitate precise direct manipulation of the underlying content as well as rapid scrolling through large datasets. However, researchers' ability to refine them is impaired by: (1) limited understanding of how users express scrolling intentions through touch gestures; (2) a lack of knowledge of proprietary transfer functions, causing researchers to evaluate techniques that may misrepresent the state of the art; and (3) a lack of tools for examining existing transfer functions. To address these limitations, we examine how users express scrolling intentions in a human factors experiment; we describe methods to reverse engineer existing 'black box' transfer functions, including use of an accurate robotic arm; and we use the methods to expose the functions of Apple iOS and Google Android, releasing data tables and software to assist replication. We discuss how this new understanding can improve experimental rigour and assist iterative improvement of touch scrolling.
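A transfer function of this kind maps measured finger displacement (or velocity) into scroll output, typically with a velocity-dependent gain so that slow gestures stay near direct 1:1 manipulation while fast flicks are amplified. The snippet below is a deliberately generic illustration of that structure; it is not the Apple iOS or Google Android function the paper reverse engineers, and the curve shape and constants are assumptions.

    # Generic touch-scrolling transfer function: velocity-dependent gain.
    # The curve shape and constants are illustrative, not a reverse-engineered
    # OS function.

    def scroll_output(finger_dx_px, dt_s, base_gain=1.0, boost=0.01, exponent=1.6):
        """Map one frame of finger displacement to content displacement."""
        velocity = abs(finger_dx_px) / dt_s                 # finger speed in px/s
        gain = base_gain + boost * velocity ** (exponent - 1.0)
        return gain * finger_dx_px

    # A slow drag stays close to direct 1:1 manipulation...
    print(scroll_output(2.0, 1 / 60))     # ~2.4 px of content motion
    # ...while a fast flick is amplified for rapid traversal of long content.
    print(scroll_output(40.0, 1 / 60))    # ~83 px, roughly 2x gain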
Controlling widgets with one power-up button BIBAFull-Text 71-74
  Daniel Spelmezan; Caroline Appert; Olivier Chapuis; Emmanuel Pietriga
The Power-up Button is a physical button that combines pressure and proximity sensing to enable gestural interaction with one thumb. Combined with a gesture recognizer that takes the hand's anatomy into account, the Power-up Button can recognize six different mid-air gestures performed on the side of a mobile device. This gives it, for instance, enough expressive power to provide full one-handed control of interface widgets displayed on screen. This technology can complement touch input, and can be particularly useful when interacting eyes-free. It also opens up a larger design space for widget organization on screen: the button enables a more compact layout of interface components than what touch input alone would allow. This can be useful when, e.g., filling the numerous fields of a long Web form, or for very small devices.
Improving structured data entry on mobile devices BIBAFull-Text 75-84
  Kerry Shih-Ping Chang; Brad A. Myers; Gene M. Cahill; Soumya Simanta; Edwin Morris; Grace Lewis
Structure makes data more useful, but also makes data entry more cumbersome. Studies have found that this is especially true on mobile devices, as mobile users often reject structured personal information management tools because the structure is too restrictive and makes entering data slower. To overcome these problems, we introduce a new data entry technique that lets users create customized structured data in an unstructured manner. We use a novel notepad-like editing interface with built-in data detectors that allow users to specify structured data implicitly and reuse the structures when desired. To minimize the amount of typing, it provides intelligent, context-sensitive autocomplete suggestions using personal and public databases that contain candidate information to be entered. We implemented these mechanisms in an example application called Listpad. Our evaluation shows that people using Listpad create customized structured data 16% faster than with a conventional mobile database tool; the advantage increases to 42% when the fields can be autocompleted.
DigiTaps: eyes-free number entry on touchscreens with minimal audio feedback BIBAFull-Text 85-90
  Shiri Azenkot; Cynthia L. Bennett; Richard E. Ladner
Eyes-free input usually relies on audio feedback that can be difficult to hear in noisy environments. We present DigiTaps, an eyes-free number entry method for touchscreen devices that requires little auditory attention. To enter a digit, users tap or swipe anywhere on the screen with one, two, or three fingers. The 10 digits are encoded by combinations of these gestures that relate to the digits' semantics. For example, the digit 2 is input with a 2-finger tap. We conducted a longitudinal evaluation with 16 people and found that DigiTaps with no audio feedback was faster but less accurate than with audio feedback after every input. Throughout the study, participants entered numbers with no audio feedback at an average rate of 0.87 characters per second, with an uncorrected error rate of 5.63%.
Haptic feedback design for a virtual button along force-displacement curves BIBAFull-Text 91-96
  Sunjun Kim; Geehyuk Lee
In this paper, we present a haptic feedback method for a virtual button based on the force-displacement curves of a physical button. The original feature of the proposed method is that it provides haptic feedback not only for the "click" sensation but also for the moving sensation before and after transition points in a force-displacement curve. The haptic feedback is delivered by vibrotactile stimulation only and does not require a force feedback mechanism. We conducted user experiments to show that the resulting haptic feedback is realistic and distinctive. Participants were able to distinguish among six different virtual buttons with 94.1% accuracy, even in a noisy environment. In addition, participants were able to associate four virtual buttons with their physical counterparts with a correct answer rate of 79.2%.
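One way to realize feedback of this kind is to store a physical button's force-displacement curve, estimate the virtual displacement from the finger's applied force, and fire a vibrotactile pulse whenever the displacement crosses a transition point. The sketch below shows only that triggering logic; the curve data, snap point, and actuator call are placeholders, not the paper's measured curves or rendering parameters.

    # Sketch of transition-point triggering for a virtual button.
    # The force-displacement curve, snap point, and actuator call are placeholders.
    import numpy as np

    displacement_mm = np.linspace(0.0, 1.5, 16)
    force_mN = np.array([0, 40, 80, 120, 150, 160, 140, 120,     # snap-through region
                         130, 150, 180, 220, 270, 330, 400, 480])
    snap_mm = 0.5                                  # transition (click) point

    def displacement_for_force(f):
        """Invert the stored curve by nearest lookup (good enough for a sketch)."""
        return displacement_mm[np.argmin(np.abs(force_mN - f))]

    def update(prev_d, applied_force_mN, play_pulse):
        """Advance one frame; fire a vibrotactile pulse when the snap point is crossed."""
        d = displacement_for_force(applied_force_mN)
        if (prev_d - snap_mm) * (d - snap_mm) < 0:   # crossed in either direction
            play_pulse()                             # press or release click
        return d

    d = 0.0
    for f in [20, 60, 110, 155, 135, 90, 30]:        # simulated finger-force samples (mN)
        d = update(d, f, lambda: print("click pulse"))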

Visualization & video

Transmogrification: casual manipulation of visualizations BIBAFull-Text 97-106
  John Brosz; Miguel A. Nacenta; Richard Pusch; Sheelagh Carpendale; Christophe Hurter
A transmogrifier is a novel interface that enables quick, on-the-fly graphic transformations. A region of a graphic can be specified by a shape and transformed into a destination shape with real-time, visual feedback. Both origin and destination shapes can be circles, quadrilaterals or arbitrary shapes defined through touch. Transmogrifiers are flexible, fast and simple to create and invite use in casual InfoVis scenarios, opening the door to alternative ways of exploring and displaying existing visualizations (e.g., rectifying routes or rivers in maps), and enabling free-form prototyping of new visualizations (e.g., lenses).
TextTearing: opening white space for digital ink annotation BIBAFull-Text 107-112
  Dongwook Yoon; Nicholas Chen; François Guimbretière
Having insufficient space for making annotations is a problem that afflicts both paper and digital documents. We introduce the TextTearing technique for in situ expansion of inter-line whitespace and pair it with a lightweight interaction for margin expansion as a way to address this problem. The full system leverages the dynamism of digital documents and employs a bimanual design that combines the precision of pen with the fluidity of touch. Our evaluation found that a simpler unimanual variant of TextTearing was preferred over direct annotation and margin-only expansion. Direct annotation in naturally occurring whitespace was least preferred.
Content-based tools for editing audio stories BIBAFull-Text 113-122
  Steve Rubin; Floraine Berthouzoz; Gautham J. Mysore; Wilmot Li; Maneesh Agrawala
Audio stories are an engaging form of communication that combine speech and music into compelling narratives. Existing audio editing tools force story producers to manipulate speech and music tracks via tedious, low-level waveform editing. In contrast, we present a set of tools that analyze the audio content of the speech and music and thereby allow producers to work at a much higher level. Our tools address several challenges in creating audio stories, including (1) navigating and editing speech, (2) selecting appropriate music for the score, and (3) editing the music to complement the speech. Key features include a transcript-based speech editing tool that automatically propagates edits in the transcript text to the corresponding speech track; a music browser that supports searching based on emotion, tempo, key, or timbral similarity to other songs; and music retargeting tools that make it easy to combine sections of music with the speech. We have used our tools to create audio stories from a variety of raw speech sources, including scripted narratives, interviews and political speeches. Informal feedback from first-time users suggests that our tools are easy to learn and greatly facilitate the process of editing raw footage into a final story.
Panopticon: a parallel video overview system BIBAFull-Text 123-130
  Dan Jackson; James Nicholson; Gerrit Stoeckigt; Rebecca Wrobel; Anja Thieme; Patrick Olivier
Panopticon is a video surrogate system that displays multiple sub-sequences in parallel to present a rapid overview of the entire sequence to the user. A novel, precisely animated arrangement slides thumbnails to provide a consistent spatiotemporal layout while allowing any sub-sequence of the original video to be watched without interruption. Furthermore, this output can be generated offline as a highly efficient repeated animation loop, making it suitable for resource-constrained environments, such as web-based interaction. Two versions of Panopticon were evaluated using three different types of video footage with the aim of determining the usability of the proposed system. Results demonstrated an advantage in search times over a competing surrogate for surveillance footage, and this advantage was further improved with Panopticon 2. Eye tracking data suggests that Panopticon's advantage stems from the animated timeline that users heavily rely on.
Video collections in panoramic contexts BIBAFull-Text 131-140
  James Tompkin; Fabrizio Pece; Rajvi Shah; Shahram Izadi; Jan Kautz; Christian Theobalt
Video collections of places show contrasts and changes in our world, but current interfaces to video collections make it hard for users to explore these changes. Recent state-of-the-art interfaces attempt to solve this problem for 'outside->in' collections, but cannot connect 'inside->out' collections of the same place which do not visually overlap. We extend the focus+context paradigm to create a video-collections+context interface by embedding videos into a panorama. We build a spatio-temporal index and tools for fast exploration of the space and time of the video collection. We demonstrate the flexibility of our representation with interfaces for desktop and mobile flat displays, and for a spherical display with joypad and tablet controllers. In a user study, we examine the effect of our video-collections+context system on spatio-temporal localization tasks, and find significant improvements in accuracy and completion time for visual search tasks compared to existing systems. We measure the usability of our interface with the System Usability Scale (SUS) and task-specific questionnaires, and find that our system scores higher.
DemoCut: generating concise instructional videos for physical demonstrations BIBAFull-Text 141-150
  Pei-Yu Chi; Joyce Liu; Jason Linder; Mira Dontcheva; Wilmot Li; Bjoern Hartmann
Amateur instructional videos often show a single uninterrupted take of a recorded demonstration without any edits. While easy to produce, such videos are often too long as they include unnecessary or repetitive actions as well as mistakes. We introduce DemoCut, a semi-automatic video editing system that improves the quality of amateur instructional videos for physical tasks. DemoCut asks users to mark key moments in a recorded demonstration using a set of marker types derived from our formative study. Based on these markers, the system uses audio and video analysis to automatically organize the video into meaningful segments and apply appropriate video editing effects. To understand the effectiveness of DemoCut, we report a technical evaluation of seven video tutorials created with DemoCut. In a separate user evaluation, all eight participants successfully created a complete tutorial with a variety of video editing effects using our system.

Crowd & creativity

Chorus: a crowd-powered conversational assistant BIBAFull-Text 151-162
  Walter S. Lasecki; Rachel Wesley; Jeffrey Nichols; Anand Kulkarni; James F. Allen; Jeffrey P. Bigham
Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses. We also observed that, in terms of speed, quality, and breadth of assistance, Chorus has advantages over pairing an end user with a single crowd worker and over having end users complete their own tasks. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.
CrowdLearner: rapidly creating mobile recognizers using crowdsourcing BIBAFull-Text 163-172
  Shahriyar Amini; Yang Li
Mobile applications can offer improved user experience through the use of novel modalities and user context. However, these new input dimensions often require recognition-based techniques, with which mobile app developers or designers may not be familiar. Furthermore, the recruiting, data collection and labeling necessary for using these techniques are usually time-consuming and expensive. We present CrowdLearner, a framework based on crowdsourcing to automatically generate recognizers using mobile sensor input such as accelerometer or touchscreen readings. CrowdLearner allows a developer to easily create a recognition task, distribute it to the crowd, and monitor its progress as more data becomes available. We deployed CrowdLearner to a crowd of 72 mobile users over a period of 2.5 weeks. We evaluated the system by experimenting with 6 recognition tasks concerning motion gestures, touchscreen gestures, and activity recognition. The experimental results indicated that CrowdLearner enables a developer to quickly acquire a usable recognizer for their specific application by spending a moderate amount of money, often less than $10, in a short period of time, often on the order of 2 hours. Our exploration also revealed challenges and provided insights into the design of future crowdsourcing systems for machine learning tasks.
Cobi: a community-informed conference scheduling tool BIBAFull-Text 173-182
  Juho Kim; Haoqi Zhang; Paul André; Lydia B. Chilton; Wendy Mackay; Michel Beaudouin-Lafon; Robert C. Miller; Steven P. Dow
Effectively planning a large multi-track conference requires an understanding of the preferences and constraints of organizers, authors, and attendees. Traditionally, the onus of scheduling the program falls on a few dedicated organizers. Resolving conflicts becomes difficult due to the size and complexity of the schedule and the lack of insight into community members' needs and desires. Cobi presents an alternative approach to conference scheduling that engages the entire community in the planning process. Cobi comprises (a) community-sourcing applications that collect preferences, constraints, and affinity data from community members, and (b) a visual scheduling interface that combines community-sourced data and constraint-solving to enable organizers to make informed improvements to the schedule. This paper describes Cobi's scheduling tool and reports on a live deployment for planning CHI 2013, where organizers considered input from 645 authors and resolved 168 scheduling conflicts. Results show the value of integrating community input with an intelligent user interface to solve complex planning tasks.
The drawing assistant: automated drawing guidance and feedback from photographs BIBAFull-Text 183-192
  Emmanuel Iarussi; Adrien Bousseau; Theophanis Tsandilas
We present an interactive drawing tool that provides automated guidance over model photographs to help people practice traditional drawing-by-observation techniques. The drawing literature describes a number of techniques to help people gain consciousness of the shapes in a scene and their relationships. We compile these techniques and derive a set of construction lines that we automatically extract from a model photograph. We then display these lines over the model to guide its manual reproduction by the user on the drawing canvas. Finally, we use shape-matching to register the user's sketch with the model guides. We use this registration to provide corrective feedback to the user. Our user studies show that automatically extracted construction lines can help users draw more accurately. Furthermore, users report that guidance and corrective feedback help them better understand how to draw.
Attribit: content creation with semantic attributes BIBAFull-Text 193-202
  Siddhartha Chaudhuri; Evangelos Kalogerakis; Stephen Giguere; Thomas Funkhouser
We present AttribIt, an approach for people to create visual content using relative semantic attributes expressed in linguistic terms. During an off-line processing step, AttribIt learns semantic attributes for design components that reflect the high-level intent people may have for creating content in a domain (e.g. adjectives such as "dangerous", "scary" or "strong") and ranks them according to the strength of each learned attribute. Then, during an interactive design session, a person can explore different combinations of visual components using commands based on relative attributes (e.g. "make this part more dangerous"). Novel designs are assembled in real-time as the strengths of selected attributes are varied, enabling rapid, in-situ exploration of candidate designs. We applied this approach to 3D modeling and web design. Experiments suggest this interface is an effective alternative for novices performing tasks with high-level design goals.
dePENd: augmented handwriting system using ferromagnetism of a ballpoint pen BIBAFull-Text 203-210
  Junichi Yamaoka; Yasuaki Kakehi
This paper presents dePENd, a novel interactive system that assists in sketching using regular pens and paper. Our system utilizes the ferromagnetic feature of the metal tip of a regular ballpoint pen. The computer controlling the X and Y positions of the magnet under the surface of the table provides entirely new drawing experiences. By controlling the movements of a pen and presenting haptic guides, the system allows a user to easily draw diagrams and pictures consisting of lines and circles, which are difficult to create by free-hand drawing. Moreover, the system also allows users to freely edit and arrange prescribed pictures. This is expected to reduce the resistance to drawing and promote users' creativity. In addition, we propose a communication tool using two dePENd systems that is expected to enhance the drawing skills of users. The functions of this system enable users to utilize interactive applications such as copying and redrawing drafted pictures or scaling the pictures using a digital pen. Furthermore, we implement the system and evaluate its technical features. In this paper, we describe the details of the design and implementations of the device, along with applications, technical evaluations, and future prospects.

Sensing

Mirage: exploring interaction modalities using off-body static electric field sensing BIBAFull-Text 211-220
  Adiyan Mujibiya; Jun Rekimoto
Mirage proposes an effective non body contact technique to infer the amount and type of body motion, gesture, and activity. This approach involves passive measurement of static electric field of the environment flowing through sense electrode. This sensing method leverages electric field distortion by the presence of an intruder (e.g. human body). Mirage sensor has simple analog circuitry and supports ultra-low power operation. It requires no instrumentation to the user, and can be configured as environmental, mobile, and peripheral-attached sensor. We report on a series of experiments with 10 participants showing robust activity and gesture recognition, as well as promising results for robust location classification and multiple user differentiation. To further illustrate the utility of our approach, we demonstrate real-time interactive applications including activity monitoring, and two games which allow the users to interact with a computer using body motion and gestures.
StickEar: making everyday objects respond to sound BIBAFull-Text 221-226
  Kian Peen Yeo; Suranga Nanayakkara; Shanaka Ransiri
This paper presents StickEar, a network of distributed 'sticker-like' sound-based sensor nodes that enables sound-based interactions on everyday objects. StickEar encapsulates wireless sensor network technology in a form factor that is intuitive to reuse and redeploy. Each StickEar sensor node consists of a miniature microphone and speaker that provide sound-based input/output capabilities. We discuss the interaction and hardware design spaces of StickEar, which cut across domains such as remote sound monitoring, remote triggering of sound, autonomous response to sound events, and control of digital devices using sound. We implemented three applications to demonstrate the unique interaction capabilities of StickEar.
Mime: compact, low power 3D gesture sensing for interaction with head mounted displays BIBAFull-Text 227-236
  Andrea Colaço; Ahmed Kirmani; Hye Soo Yang; Nan-Wei Gong; Chris Schmandt; Vivek K. Goyal
We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction.
   Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.
uTrack: 3D input using two magnetic sensors BIBAFull-Text 237-244
  Ke-Yu Chen; Kent Lyons; Sean White; Shwetak Patel
While much progress has been made in wearable computing in recent years, input techniques remain a key challenge. In this paper, we introduce uTrack, a technique to convert the thumb and fingers into a 3D input system using magnetic field (MF) sensing. A user wears a pair of magnetometers on the back of their fingers and a permanent magnet affixed to the back of the thumb. By moving the thumb across the fingers, we obtain a continuous input stream that can be used for 3D pointing. Specifically, our novel algorithm calculates the magnet's 3D position and tilt angle directly from the sensor readings. We evaluated uTrack as an input device, showing an average tracking accuracy of 4.84 mm in 3D space -- sufficient for subtle interaction. We also demonstrate a real-time prototype and example applications allowing users to interact with the computer using 3D finger input.
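The underlying estimation problem amounts to inverting the standard magnetic dipole model: each 3-axis magnetometer reading constrains the magnet's pose through the dipole field equation, and two sensors provide enough measurements to solve for position and tilt. As a sketch of that model (the paper's specific solver is not reproduced here):

    \mathbf{B}(\mathbf{r}) \;=\; \frac{\mu_0}{4\pi}\,
      \frac{3\,(\mathbf{m}\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}} \;-\; \mathbf{m}}{\lVert \mathbf{r} \rVert^{3}}

where r is the vector from the magnet to a magnetometer, m is the magnet's dipole moment, and B is the field that magnetometer reads; two 3-axis sensors yield six measurements for the five unknowns (3D position plus two tilt angles), which can be recovered, for example, by non-linear least squares.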
A cuttable multi-touch sensor BIBAFull-Text 245-254
  Simon Olberding; Nan-Wei Gong; John Tiab; Joseph A. Paradiso; Jürgen Steimle
We propose cutting as a novel paradigm for ad-hoc customization of printed electronic components. As a first instantiation, we contribute a printed capacitive multi-touch sensor, which can be cut by the end-user to modify its size and shape. This very direct manipulation allows the end-user to easily make real-world objects and surfaces touch-interactive, to augment physical prototypes and to enhance paper craft. We contribute a set of technical principles for the design of printable circuitry that makes the sensor more robust against cuts, damages and removed areas. This includes novel physical topologies and printed forward error correction. A technical evaluation compares different topologies and shows that the sensor remains functional when cut to a different shape.
FingerPad: private and subtle interaction using fingertips BIBAFull-Text 255-260
  Liwei Chan; Rong-Hao Liang; Ming-Chang Tsai; Kai-Yin Cheng; Chao-Huai Su; Mike Y. Chen; Wen-Huang Cheng; Bing-Yu Chen
We present FingerPad, a nail-mounted device that turns the tip of the index finger into a touchpad, allowing private and subtle interaction while on the move. FingerPad enables touch input using magnetic tracking, by adding a Hall sensor grid on the index fingernail and a magnet on the thumbnail. Since it permits input through the pinch gesture, FingerPad is suitable for private use because the movements of the fingers in a pinch are subtle and are naturally hidden by the hand. Functionally, FingerPad resembles a touchpad, and also allows for eyes-free use. Additionally, since the necessary devices are attached to the nails, FingerPad preserves natural haptic feedback without affecting the native function of the fingertips. Through a user study, we analyze three design factors, namely posture, commitment method and target size, to assess the design of FingerPad. Though the results show some trade-offs among the factors, participants generally achieved 93% accuracy for very small targets (1.2 mm wide) in the seated condition, and 92% accuracy for 2.5 mm-wide targets in the walking condition.

Vision

Pursuit calibration: making gaze calibration less tedious and more flexible BIBAFull-Text 261-270
  Ken Pfeuffer; Melodie Vidal; Jayson Turner; Andreas Bulling; Hans Gellersen
Eye gaze is a compelling interaction modality but requires user calibration before interaction can commence. State of the art procedures require the user to fixate on a succession of calibration markers, a task that is often experienced as difficult and tedious. We present pursuit calibration, a novel approach that, unlike existing methods, is able to detect the user's attention to a calibration target. This is achieved by using moving targets, and correlation of eye movement and target trajectory, implicitly exploiting smooth pursuit eye movement. Data for calibration is then only sampled when the user is attending to the target. Because of its ability to detect user attention, pursuit calibration can be performed implicitly, which enables more flexible designs of the calibration task. We demonstrate this in application examples and user studies, and show that pursuit calibration is tolerant to interruption, can blend naturally with applications and is able to calibrate users without their awareness.
Gaze locking: passive eye contact detection for human-object interaction BIBAFull-Text 271-280
  Brian A. Smith; Qi Yin; Steven K. Feiner; Shree K. Nayar
Eye contact plays a crucial role in our everyday social interactions. The ability of a device to reliably detect when a person is looking at it can lead to powerful human-object interfaces. Today, most gaze-based interactive systems rely on gaze tracking technology. Unfortunately, current gaze tracking techniques require active infrared illumination, calibration, or are sensitive to distance and pose. In this work, we propose a different solution -- a passive, appearance-based approach for sensing eye contact in an image. By focusing on gaze *locking* rather than gaze tracking, we exploit the special appearance of direct eye gaze, achieving a Matthews correlation coefficient (MCC) of over 0.83 at long distances (up to 18 m) and large pose variations (up to ±30° of head yaw rotation) using a very basic classifier and without calibration. To train our detector, we also created a large publicly available gaze data set: 5,880 images of 56 people over varying gaze directions and head poses. We demonstrate how our method facilitates human-object interaction, user analytics, image filtering, and gaze-triggered photography.
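For reference, the Matthews correlation coefficient quoted above is the standard summary of a binary detector's confusion-matrix counts,

    \mathrm{MCC} \;=\; \frac{TP \cdot TN \;-\; FP \cdot FN}
      {\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}

and ranges from -1 to +1, with +1 indicating perfect agreement between predicted and actual eye contact.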
Open project: a lightweight framework for remote sharing of mobile applications BIBAFull-Text 281-290
  Matei Negulescu; Yang Li
The form factor of mobile devices remains small while their computing power grows at an accelerated rate. Prior work has explored expanding the output space by leveraging free displays in the environment. However, existing solutions often do not scale. In this paper we discuss Open Project, an end-to-end framework that allows a user to "project" a native mobile application onto a display using a phone camera, leveraging interaction spaces ranging from a PC monitor to a public wall-sized display. Any display becomes projectable instantaneously by simply accessing the lightweight Open Project server via a web browser. By distributing computation load onto each projecting mobile device, our framework easily scales for hosting many projection sessions and devices simultaneously. Our performance experiments and user studies indicated that Open Project supported a variety of useful collaborative, sharing scenarios and performed reliably in diverse settings.
Surround-see: enabling peripheral vision on smartphones during active use BIBAFull-Text 291-300
  Xing-Dong Yang; Khalad Hasan; Neil Bruce; Pourang Irani
Mobile devices are endowed with significant sensing capabilities. However, their ability to 'see' their surroundings, during active use, is limited. We present Surround-See, a self-contained smartphone equipped with an omni-directional camera that enables peripheral vision around the device to augment daily mobile tasks. Surround-See provides mobile devices with a field-of-view collinear to the device screen. This capability facilitates novel mobile tasks such as pointing at objects in the environment to interact with content, operating the mobile device at a physical distance, and allowing the device to detect user activity, even when the user is not holding it. We describe Surround-See's architecture, and demonstrate applications that exploit peripheral 'seeing' capabilities during active use of a mobile device. Users confirm the value of embedding peripheral vision capabilities on mobile devices and offer insights for novel usage methods.
GIST: a gestural interface for remote nonvisual spatial perception BIBAFull-Text 301-310
  Vinitha Khambadkar; Eelke Folmer
Spatial perception is a challenging task for people who are blind due to the limited functionality and sensing range of hands. We present GIST, a wearable gestural interface that offers spatial perception functionality through the novel appropriation of the user's hands into versatile sensing rods. Using a wearable depth-sensing camera, GIST analyzes the visible physical space and allows blind users to access spatial information about this space using different hand gestures. By allowing blind users to directly explore the physical space using gestures, GIST allows for the closest mapping between augmented and physical reality, which facilitates spatial interaction. A user study with eight blind users evaluates GIST in its ability to help perform everyday tasks that rely on spatial perception, such as grabbing an object or interacting with a person. Results of our study may help develop new gesture based assistive applications.
YouMove: enhancing movement training with an augmented reality mirror BIBAFull-Text 311-320
  Fraser Anderson; Tovi Grossman; Justin Matejka; George Fitzmaurice
YouMove is a novel system that allows users to record and learn physical movement sequences. The recording system is designed to be simple, allowing anyone to create and share training content. The training system uses recorded data to train the user using a large-scale augmented reality mirror. The system trains the user through a series of stages that gradually reduce the user's reliance on guidance and feedback. This paper discusses the design and implementation of YouMove and its interactive mirror. We also present a user study in which YouMove was shown to improve learning and short-term retention by a factor of 2 compared to a traditional video demonstration.

GUI

Skillometers: reflective widgets that motivate and help users to improve performance BIBAFull-Text 321-330
  Sylvain Malacria; Joey Scarr; Andy Cockburn; Carl Gutwin; Tovi Grossman
Applications typically provide ways for expert users to increase their performance, such as keyboard shortcuts or customization, but these facilities are frequently ignored. To help address this problem, we introduce skillometers -- lightweight displays that visualize the benefits available through practicing, adopting a better technique, or switching to a faster mode of interaction. We present a general framework for skillometer design, then discuss the design and implementation of a real-world skillometer intended to increase hotkey use. A controlled experiment shows that our skillometer successfully encourages earlier and faster learning of hotkeys. Finally, we discuss general lessons for future development and deployment of skillometers.
MenuOptimizer: interactive optimization of menu systems BIBAFull-Text 331-342
  Gilles Bailly; Antti Oulasvirta; Timo Kötzing; Sabrina Hoppe
Menu systems are challenging to design because design spaces are immense, and several human factors affect user behavior. This paper contributes to the design of menus with the goal of interactively assisting designers with an optimizer in the loop. To reach this goal, 1) we extend a predictive model of user performance to account for expectations as to item groupings; 2) we adapt an ant colony optimizer that has been proven efficient for this class of problems; and 3) we present MenuOptimizer, a set of interactions integrated into a real interface design tool (QtDesigner). MenuOptimizer supports designers' abilities to cope with uncertainty and recognize good solutions. It allows designers to delegate combinatorial problems to the optimizer, which should solve them quickly enough without disrupting the design process. We show evidence that satisfactory menu designs can be produced for complex problems in minutes.
The Auckland layout editor: an improved GUI layout specification process BIBAFull-Text 343-352
  Clemens Zeidler; Christof Lutteroth; Wolfgang Sturzlinger; Gerald Weber
Layout managers are used to control the placement of widgets in graphical user interfaces (GUIs). Constraint-based layout managers are among the most powerful. However, they are also more complex and their layouts are prone to problems such as over-constrained specifications and widget overlap. This poses challenges for GUI builder tools, which ideally should address these issues automatically. We present a new GUI builder -- the Auckland Layout Editor (ALE) -- that addresses these challenges by enabling GUI designers to specify constraint-based layouts using simple, mouse-based operations. We give a detailed description of ALE's edit operations, which do not require direct constraint editing. ALE guarantees that all edit operations lead to sound specifications, ensuring solvable and non-overlapping layouts. To achieve that, we present a new algorithm that automatically generates the constraints necessary to keep a layout non-overlapping. Furthermore, we discuss how our innovations can be combined with manual constraint editing in a sound way. Finally, to aid designers in creating layouts with good resize behavior, we propose a novel automatic layout preview. This displays the layout at its minimum and in an enlarged size, which allows visualizing potential resize issues directly. All these features permit GUI developers to focus more on the overall UI design.
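As a small, generic illustration of what such a constraint-based specification looks like (not ALE's internal encoding): placing widgets A and B side by side in a container of width W can be written as hard linear constraints plus soft size preferences,

    x^{A}_{L} = 0,\quad x^{A}_{R} = x^{B}_{L},\quad x^{B}_{R} = W,\quad
    x^{A}_{R} - x^{A}_{L} \ge \min_A,\quad x^{B}_{R} - x^{B}_{L} \ge \min_B

    \text{soft:}\quad x^{A}_{R} - x^{A}_{L} \approx \mathrm{pref}_A,\qquad
    x^{B}_{R} - x^{B}_{L} \approx \mathrm{pref}_B

ALE's contribution is that its mouse-based edit operations generate constraints of this kind automatically, keeping the resulting system solvable and the widgets non-overlapping without the designer editing constraints directly.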
SeeSS: seeing what I broke -- visualizing change impact of cascading style sheets (CSS) BIBAFull-Text 353-356
  Hsiang-Sheng Liang; Kuan-Hung Kuo; Po-Wei Lee; Yu-Chien Chan; Yu-Chin Lin; Mike Y. Chen
Cascading Style Sheets (CSS) is a fundamental web language for describing the presentation of web pages. CSS rules are often reused across multiple parts of a page and across multiple pages throughout a site to reduce repetition and to provide a consistent look and feel. When a CSS rule is modified, developers currently have to manually track and visually inspect all possible parts of the site that may be impacted by that change. We present SeeSS, a system that automatically tracks CSS change impact across a site and enables developers to easily visualize all affected page fragments. The impacted page fragments are sorted by severity, and the differences before and after the change are highlighted using animation.

Applications and games

Capturing on-site laser annotations with smartphones to document construction work BIBAFull-Text 357-362
  Jörg Schweitzer; Ralf Dörner
In the process of construction work, taking notes on real-world objects such as walls, pipes, and cables is an important task. The ad hoc capturing of small pieces of information about such objects on site can be challenging when no specialized technology is available. Handwritten or hand-drawn notes on paper are good for textual information like measurements, whereas images are better for capturing the physical state of objects; without a proper combination, however, the benefit is limited. In this paper we present an interaction system for taking ad hoc notes on real-world objects using a combination of a smartphone and a laser pointer as the input device. Our interface enables the user to directly annotate objects by drawing on them and to store these annotations for later review. The deictic gestures of the user are then replayed on a stitched image of the scene. The user's voice input is captured and analyzed to integrate additional information. The user can mark positions and record hand-taken measurements by pointing at objects and speaking the corresponding voice commands.
Crowd-scale interactive formal reasoning and analytics BIBAFull-Text 363-372
  Ethan Fast; Colleen Lee; Alex Aiken; Michael S. Bernstein; Daphne Koller; Eric Smith
Large online courses often assign problems that are easy to grade because they have a fixed set of solutions (such as multiple choice), but grading and guiding students is more difficult in problem domains that have an unbounded number of correct answers. One such domain is derivations: sequences of logical steps commonly used in assignments for technical, mathematical and scientific subjects. We present DeduceIt, a system for creating, grading, and analyzing derivation assignments in any formal domain. DeduceIt supports assignments in any logical formalism, provides students with incremental feedback, and aggregates student paths through each proof to produce instructor analytics. DeduceIt benefits from checking thousands of derivations on the web: it introduces a proof cache, a novel data structure which leverages a crowd of students to decrease the cost of checking derivations and providing real-time, constructive feedback. We evaluate DeduceIt with 990 students in an online compilers course, finding students take advantage of its incremental feedback and instructors benefit from its structured insights into course topics. Our work suggests that automated reasoning can extend online assignments and large-scale education to many new domains.
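The proof cache can be thought of as memoizing individual derivation-step checks across the whole class, populated by the crowd of students' submitted derivations, so a step already verified once never has to be re-checked for another student. A minimal sketch of that memoization idea follows; the key structure and the toy modus ponens checker are hypothetical stand-ins, not DeduceIt's actual representation.

    # Minimal proof-cache sketch: memoize derivation-step checks across students.
    # `check_step` stands in for a possibly expensive back-end verifier.

    cache = {}

    def check_step_cached(premises, rule, conclusion, check_step):
        """Memoize step checks so a step verified once is never re-checked."""
        key = (frozenset(premises), rule, conclusion)
        if key not in cache:
            cache[key] = check_step(premises, rule, conclusion)
        return cache[key]

    # Toy verifier for a single modus ponens step, used only for illustration.
    def toy_checker(premises, rule, conclusion):
        if rule == "modus_ponens":
            return any(p == ("implies", q, conclusion) and q in premises
                       for p in premises for q in premises)
        return False

    step = ({"A", ("implies", "A", "B")}, "modus_ponens", "B")
    print(check_step_cached(*step, toy_checker))   # computed: True
    print(check_step_cached(*step, toy_checker))   # served from the cache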
A tongue training system for children with down syndrome BIBAFull-Text 373-376
  Masato Miyauchi; Takashi Kimura; Takuya Nojima
Children with Down syndrome have a variety of symptoms, including speech and swallowing disorders. To improve these symptoms, tongue training is thought to be beneficial. However, inducing children with Down syndrome to do such training is not easy, because tongue training can be an unpleasant experience for children. In addition, with no supporting technology for such training, teachers and families around such children must make efforts to induce them to undergo the training. In this research, we develop an interactive tongue training system especially for children with Down syndrome using the SITA (Simple Interface for Tongue motion Acquisition) system. In this paper, we describe in detail our preliminary evaluations of SITA and present the results of user tests.
A mixed-initiative tool for designing level progressions in games BIBAFull-Text 377-386
  Eric Butler; Adam M. Smith; Yun-En Liu; Zoran Popovic
Creating game content requires balancing design considerations at multiple scales: each level requires effort and iteration to produce, and broad-scale constraints such as the order in which game concepts are introduced must be respected. Game designers currently create informal plans for how the game's levels will fit together, but they rarely keep these plans up-to-date when levels change during iteration and testing. This leads to violations of constraints and makes changing the high-level plans expensive. To address these problems, we explore the creation of mixed-initiative game progression authoring tools which explicitly model broad-scale design considerations. These tools let the designer specify constraints on progressions, and keep the plan synchronized when levels are edited. This enables the designer to move between broad and narrow-scale editing and allows for automatic detection of problems caused by edits to levels. We further leverage advances in procedural content generation to help the designer rapidly explore and test game progressions. We present a prototype implementation of such a tool for our actively-developed educational game, Refraction. We also describe how this system could be extended for use in other games and domains, specifically for the domains of math problem sets and interactive programming tutorials.
BodyAvatar: creating freeform 3D avatars using first-person body gestures BIBAFull-Text 387-396
  Yupeng Zhang; Teng Han; Zhimin Ren; Nobuyuki Umetani; Xin Tong; Yang Liu; Takaaki Shiratori; Xiang Cao
BodyAvatar is a Kinect-based interactive system that allows users without professional skills to create freeform 3D avatars using body gestures. Unlike existing gesture-based 3D modeling tools, BodyAvatar centers around a first-person "you're the avatar" metaphor, where the user treats their own body as a physical proxy of the virtual avatar. Based on an intuitive body-centric mapping, the user performs gestures to their own body as if wanting to modify it, which in turn results in corresponding modifications to the avatar. BodyAvatar provides an intuitive, immersive, and playful creation experience for the user. We present a formative study that leads to the design of BodyAvatar, the system's interactions and underlying algorithms, and results from initial user trials.
ViziCal: accurate energy expenditure prediction for playing exergames BIBAFull-Text 397-404
  Miran Kim; Jeff Angermann; George Bebis; Eelke Folmer
In recent years, exercise games have been criticized for not being able to engage their players in levels of physical activity that are high enough to yield health benefits. A major challenge in the design of exergames, however, is that it is difficult to assess the amount of physical activity an exergame yields, due to the limitations of existing techniques for assessing the energy expenditure of exergaming activities. With recent advances in commercial depth-sensing technology to accurately track players' motions in 3D, we present a technique called ViziCal that uses a non-linear regression approach to accurately predict energy expenditure in real time. ViziCal may allow for creating exergames that can report energy expenditure while playing, and whose intensity can be adjusted in real time to stimulate larger health benefits.
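In outline, such a predictor maps motion features extracted from the depth sensor's skeletal tracking (e.g., per-joint speeds over a short window) to measured energy expenditure via a non-linear regression model. The sketch below shows that pipeline with generic features and a generic regressor; the feature set, model, and placeholder data are assumptions, not ViziCal's published ones.

    # Generic sketch: non-linear regression from skeletal-motion features to
    # energy expenditure. Features/model are illustrative, not ViziCal's own.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def motion_features(joint_positions):
        """joint_positions: (frames, joints, 3) window of tracked 3D joints."""
        speeds = np.linalg.norm(np.diff(joint_positions, axis=0), axis=2)
        return np.concatenate([speeds.mean(axis=0), speeds.std(axis=0)])

    # Placeholder training data: motion windows paired with energy expenditure
    # measured by a reference method such as indirect calorimetry.
    rng = np.random.default_rng(1)
    windows = rng.normal(size=(200, 90, 20, 3))    # 200 windows, 90 frames, 20 joints
    kcal_per_min = rng.uniform(2.0, 10.0, size=200)

    X = np.array([motion_features(w) for w in windows])
    model = RandomForestRegressor(n_estimators=100).fit(X, kcal_per_min)
    print(model.predict(X[:1]))                    # prediction for a new window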
Imaginary reality gaming: ball games without a ball BIBAFull-Text 405-410
  Patrick Baudisch; Henning Pohl; Stefanie Reinicke; Emilia Wittmers; Patrick Lühne; Marius Knaust; Sven Köhler; Patrick Schmidt; Christian Holz
We present imaginary reality games, i.e., games that mimic the respective real world sport, such as basketball or soccer, except that there is no visible ball. The ball is virtual and players learn about its position only from watching each other act and a small amount of occasional auditory feedback, e.g., when a person is receiving the ball. Imaginary reality games maintain many of the properties of physical sports, such as unencumbered play, physical exertion, and immediate social interaction between players. At the same time, they allow introducing game elements from video games, such as power-ups, non-realistic physics, and player balancing. Most importantly, they create a new game dynamic around the notion of the invisible ball. To allow players to successfully interact with the invisible ball, we have created a physics engine that evaluates all plausible ball trajectories in parallel, allowing the game engine to select the trajectory that leads to the most enjoyable game play while still favoring skillful play.

Tangible and fabrication

MagGetz: customizable passive tangible controllers on and around conventional mobile devices BIBAFull-Text 411-416
  Sungjae Hwang; Myungwook Ahn; Kwang-yun Wohn
This paper proposes user-customizable passive control widgets, called MagGetz, which enable tangible interaction on and around mobile devices without requiring power or wireless connections. This is achieved by tracking and analyzing the magnetic field generated by controllers attached on and around the device through a single magnetometer, which is commonly integrated in smartphones today. The proposed method provides users with a broader interaction area, customizable input layouts, richer physical clues, and higher input expressiveness without the need for hardware modifications. We present a software toolkit and several applications using MagGetz.
inFORM: dynamic physical affordances and constraints through shape and object actuation BIBAFull-Text 417-426
  Sean Follmer; Daniel Leithinger; Alex Olwal; Akimitsu Hogge; Hiroshi Ishii
Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate by providing dynamic physical affordances through shape change, to restrict by guiding users with dynamic physical constraints, and to manipulate by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides for variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints and object actuation can create novel interaction possibilities.
Traxion: a tactile interaction device with virtual force sensation BIBAFull-Text 427-432
  Jun Rekimoto
This paper introduces a new mechanism to induce a virtual force based on human illusory sensations. An asymmetric signal is applied to a tactile actuator consisting of an electromagnetic coil, a metal weight, and a spring, such that the user feels that the device is being pulled (or pushed) in a particular direction, although it is not supported by any mechanical connection to other objects or the ground. The proposed tactile device is smaller (35.0 mm x 5.0 mm x 7.5 mm) and lighter (5.2 g) than any previous force-feedback devices, which have to be connected to the ground with mechanical links. This small form factor allows the device to be implemented in several novel interactive applications, such as a pedestrian navigation system that includes a finger-mounted tactile device or an (untethered) input device that features virtual force. Our experimental results indicate that this illusory sensation actually exists and the proposed device can switch the virtual force direction within a short period. We combined this new technology with visible light transmission via a digital micromirror device (DMD) projector and developed a position guiding input device with force perception.
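The pulling illusion relies on an asymmetric vibration: the actuator drives the weight sharply in one direction and returns it slowly, so the brief strong force is perceived while the weak sustained return force is not. A generic sketch of such a drive waveform is below; the frequency, duty cycle, and amplitudes are illustrative, not the paper's values.

    # Generic asymmetric drive waveform for a "virtual force" tactile actuator.
    # Each cycle: short strong pulse one way, long weak pulse back (zero mean).
    import numpy as np

    def asymmetric_waveform(freq_hz=40, duty=0.25, sample_rate=8000, seconds=0.5):
        t = np.arange(int(sample_rate * seconds)) / sample_rate
        phase = (t * freq_hz) % 1.0
        strong = 1.0                      # amplitude of the short pulse
        weak = -duty / (1 - duty)         # chosen so each cycle integrates to zero
        return np.where(phase < duty, strong, weak)

    signal = asymmetric_waveform()
    # Flipping the sign of the waveform flips the perceived pull direction.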
Human-computer interaction for hybrid carving BIBAFull-Text 433-440
  Amit Zoran; Roy Shilkrot; Joseph Paradiso
In this paper we explore human-computer interaction for carving, building upon our previous work with the FreeD digital sculpting device. We contribute a new tool design (FreeD V2), with a novel set of interaction techniques for the fabrication of static models: personalized tool paths, manual overriding, and physical merging of virtual models. We also present techniques for fabricating dynamic models, which may be altered directly or parametrically during fabrication. We demonstrate a semi-autonomous operation and evaluate the performance of the tool. We end by discussing synergistic cooperation between human and machine to ensure accuracy while preserving the expressiveness of manual practice.
PacCAM: material capture and interactive 2D packing for efficient material usage on CNC cutting machines BIBAFull-Text 441-446
  Daniel Saakes; Thomas Cambazard; Jun Mitani; Takeo Igarashi
The availability of low-cost digital fabrication devices enables new groups of users to participate in the design and fabrication of things. However, software that assists in the transition from design to actual fabrication is often overlooked. In this paper, we introduce PacCAM, a system for packing 2D parts within a given source material for fabrication with 2D cutting machines. Our solution combines computer vision for capturing the shape of the source material with a user interface that incorporates 2D rigid-body simulation and snapping. A user study demonstrated that participants could produce layouts faster with our system than with traditional drafting tools. PacCAM caters to a variety of 2D fabrication applications and can help reduce material waste.
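As a rough illustration of the snapping component (a hypothetical helper, not PacCAM's actual code), the sketch below snaps a dragged rectangular part's x-position so that it abuts the nearest edge of an already-placed part whenever the gap falls within a small tolerance.

    def snap_x(part_x, part_width, placed_parts, tolerance=5.0):
        """Snap a dragged part's left edge so that it abuts the nearest edge of a
        placed part if the gap is within `tolerance` (units are arbitrary).
        placed_parts: list of (x, width) tuples for parts already laid out."""
        candidates = []
        for other_x, other_w in placed_parts:
            candidates.append(other_x + other_w)        # left edge meets other's right edge
            candidates.append(other_x - part_width)     # right edge meets other's left edge
        best = min(candidates, key=lambda c: abs(c - part_x), default=part_x)
        return best if abs(best - part_x) <= tolerance else part_x

    print(snap_x(103.0, 40.0, [(50.0, 50.0)]))   # -> 100.0, abutting the placed part
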
Sauron: embedded single-camera sensing of printed physical user interfaces BIBAFull-Text 447-456
  Valkyrie Savage; Colin Chang; Björn Hartmann
3D printers enable designers and makers to rapidly produce physical models of future products. Today these physical prototypes are mostly passive. Our research goal is to enable users to turn models produced on commodity 3D printers into interactive objects with a minimum of required assembly or instrumentation. We present Sauron, an embedded machine vision-based system for sensing human input on physical controls like buttons, sliders, and joysticks. With Sauron, designers attach a single camera with integrated ring light to a printed prototype. This camera observes the interior portions of input components to determine their state. In many prototypes, input components may be occluded or outside the viewing frustum of a single camera. We introduce algorithms that generate internal geometry and calculate mirror placements to redirect input motion into the visible camera area. To investigate the space of designs that can be built with Sauron along with its limitations, we built prototype devices, evaluated the suitability of existing models for vision sensing, and performed an informal study with three CAD users. While our approach imposes some constraints on device design, results suggest that it is expressive and accessible enough to enable constructing a useful variety of devices.
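A toy version of the single-camera sensing idea, assuming OpenCV and a made-up region of interest and threshold: watch the interior face of one printed button and report it as pressed when the mean brightness inside that region crosses the threshold.

    import cv2
    import numpy as np

    # Hypothetical region of interest where the underside of one printed button is
    # visible to the embedded camera, and a brightness threshold calibrated for
    # the ring light. Both values are placeholders.
    BUTTON_ROI = (120, 80, 40, 40)      # x, y, width, height in pixels
    PRESSED_THRESHOLD = 90.0            # mean grey level when the plunger moves closer

    def button_pressed(frame):
        x, y, w, h = BUTTON_ROI
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        roi = grey[y:y + h, x:x + w]
        return float(np.mean(roi)) > PRESSED_THRESHOLD

    cap = cv2.VideoCapture(0)           # the camera embedded in the printed prototype
    ok, frame = cap.read()
    if ok:
        print("pressed" if button_pressed(frame) else "released")
    cap.release()
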
PAPILLON: designing curved display surfaces with printed optics BIBAFull-Text 457-462
  Eric Brockmeyer; Ivan Poupyrev; Scott Hudson
We present a technology for designing curved display surfaces that can both display information and sense two dimensions of human touch. It is based on 3D printed optics, where the surface of the display is constructed as a bundle of printed light pipes that direct images from an arbitrary planar image source to the surface of the display. This effectively decouples the display surface from the image source, allowing designers to iterate on the design of displays without requiring changes to the complex electronics and optics of the device. In addition, the same optical elements direct light from the surface of the display back to an image sensor, allowing touch input and proximity detection of a hand relative to the display surface. The resulting technology is effective for designing compact, efficient displays at small sizes; it has been applied to the design of interactive animated eyes.

Development

The dog programming language BIBAFull-Text 463-472
  Salman Ahmad; Sepandar Kamvar
Today, most popular software applications are deployed in the cloud, interact with many users, and run on multiple platforms, from Web browsers to mobile operating systems. While these applications confer a number of benefits to their users, building them brings many challenges: manually managing state between asynchronous user actions, creating and maintaining separate code bases for each desired client platform, and gracefully scaling to handle a large number of concurrent users. Dog is a new programming language that addresses these challenges and others through a unique runtime model that allows developers to express scalable, cross-client applications as imperative control flow, simplifying many development tasks. In this paper we describe the key features of Dog and show its utility through several applications that are difficult and time-consuming to write in existing languages but simple to express in Dog in a few lines of code.
Interactive record/replay for web application debugging BIBAFull-Text 473-484
  Brian Burg; Richard Bailey; Andrew J. Ko; Michael D. Ernst
During debugging, a developer must repeatedly and manually reproduce faulty behavior in order to inspect different facets of the program's execution. Existing tools for reproducing such behaviors prevent the use of debugging aids such as breakpoints and logging, and are not designed for interactive, random-access exploration of recorded behavior. This paper presents Timelapse, a tool for quickly recording, reproducing, and debugging interactive behaviors in web applications. Developers can use Timelapse to browse, visualize, and seek within recorded program executions while simultaneously using familiar debugging tools such as breakpoints and logging. Testers and end-users can use Timelapse to demonstrate failures in situ and share recorded behaviors with developers, improving bug report quality by obviating the need for detailed reproduction steps. Timelapse is built on Dolos, a novel record/replay infrastructure that ensures deterministic execution by capturing and reusing program inputs both from the user and from external sources such as the network. Dolos introduces negligible overhead and does not interfere with breakpoints and logging. In a small user evaluation, participants used Timelapse to accelerate existing reproduction activities, but were not significantly faster or more successful in completing the larger tasks at hand. Together, the Dolos infrastructure and Timelapse developer tool support systematic bug reporting and debugging practices.
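The record/replay principle behind such infrastructures can be sketched generically (this is not Dolos itself): wrap each nondeterministic input source so that its return values are logged during recording and fed back verbatim during replay, which makes the rest of the program deterministic.

    import random
    import time

    class RecordReplay:
        """Record the return values of nondeterministic calls, then replay them
        in the same order so a later run observes identical inputs."""
        def __init__(self, mode, log=None):
            self.mode = mode                  # "record" or "replay"
            self.log = log if log is not None else []
            self._cursor = 0

        def call(self, fn, *args, **kwargs):
            if self.mode == "record":
                value = fn(*args, **kwargs)   # perform the real call and log it
                self.log.append(value)
                return value
            value = self.log[self._cursor]    # replay: reuse the recorded value
            self._cursor += 1
            return value

    # Recording run: real randomness and real time are captured.
    rec = RecordReplay("record")
    a = rec.call(random.random)
    b = rec.call(time.time)

    # Replay run: the same values come back, so downstream behaviour repeats exactly.
    rep = RecordReplay("replay", log=rec.log)
    assert rep.call(random.random) == a
    assert rep.call(time.time) == b
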
Authoring multi-stage code examples with editable code histories BIBAFull-Text 485-494
  Shiry Ginosar; Luis Fernando De Pombo; Maneesh Agrawala; Björn Hartmann
Multi-stage code examples present multiple versions of a program, where each stage increases the overall complexity of the code. To learn strategies for constructing programs in a new language or API, programmers consult multi-stage code examples in books, tutorials, and online videos. Authoring multi-stage code examples is currently a tedious process, as it involves keeping several stages of code synchronized in the face of edits and error corrections. We document these difficulties with a formative study examining how programmers author multi-stage code examples. We then present an IDE extension that helps authors create multi-stage code examples by propagating changes (insertions, deletions, and modifications) to multiple saved versions of their code. Our system adapts revision-control algorithms to the specific task of evolving example code. An informal evaluation finds that taking snapshots of a program as it is being developed, and editing these snapshots in hindsight, helps users create multi-stage code examples.
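A simplified sketch of change propagation across saved stages (the paper adapts revision-control algorithms; this toy version merely replaces a corrected line in every later snapshot that still contains the original line):

    def propagate_fix(stages, old_line, new_line, from_stage=0):
        """stages: list of code snapshots, each a list of lines, ordered by stage.
        Replace `old_line` with `new_line` in the given stage and every later
        stage that still contains the unmodified line."""
        for snapshot in stages[from_stage:]:
            for i, line in enumerate(snapshot):
                if line == old_line:
                    snapshot[i] = new_line
        return stages

    stages = [
        ["def area(r):", "    return 3.14 * r * r"],
        ["def area(r):", "    return 3.14 * r * r", "print(area(2))"],
    ]
    propagate_fix(stages, "    return 3.14 * r * r", "    return 3.14159 * r * r")
    print(stages[1][1])   # the fix shows up in the later stage too
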
A colorful approach to text processing by example BIBAFull-Text 495-504
  Kuat Yessenov; Shubham Tulsiani; Aditya Menon; Robert C. Miller; Sumit Gulwani; Butler Lampson; Adam Kalai
Text processing, tedious and error-prone even for programmers, remains one of the most alluring targets of Programming by Example. An examination of real-world text processing tasks found on help forums reveals that many such tasks, beyond simple string manipulation, involve latent hierarchical structures.
   We present STEPS, a programming system for processing structured and semi-structured text by example. STEPS users create and manipulate hierarchical structure by example. In a between-subjects user study with fourteen computer scientists, STEPS compares favorably to traditional programming.

Haptics

UltraHaptics: multi-point mid-air haptic feedback for touch surfaces BIBAFull-Text 505-514
  Tom Carter; Sue Ann Seah; Benjamin Long; Bruce Drinkwater; Sriram Subramanian
We introduce UltraHaptics, a system designed to provide multi-point haptic feedback above an interactive surface. UltraHaptics employs focused ultrasound to project discrete points of haptic feedback through the display and directly on to users' unadorned hands. We investigate the desirable properties of an acoustically transparent display and demonstrate that the system is capable of creating multiple localised points of feedback in mid-air. Through psychophysical experiments we show that feedback points with different tactile properties can be identified at smaller separations. We also show that users are able to distinguish between different vibration frequencies of non-contact points with training. Finally, we explore a number of exciting new interaction possibilities that UltraHaptics provides.
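For intuition about mid-air focusing with a phased array, the following textbook-style calculation (not the authors' control code) drives each transducer with a phase offset proportional to its extra path length to the focal point, so the emitted waves arrive in phase there; the array geometry and frequency are assumed values.

    import numpy as np

    SPEED_OF_SOUND = 343.0        # m/s in air
    FREQ = 40_000.0               # Hz, a typical ultrasonic transducer frequency

    def focus_phases(transducer_positions, focal_point):
        """Return the phase offset (radians) for each transducer so that all
        emissions arrive in phase at focal_point.
        transducer_positions: (N, 3) array of x, y, z in metres."""
        positions = np.asarray(transducer_positions, dtype=float)
        distances = np.linalg.norm(positions - np.asarray(focal_point, dtype=float), axis=1)
        wavelength = SPEED_OF_SOUND / FREQ
        # Delay each element by its path difference relative to the farthest one,
        # so every wavefront reaches the focal point with the same phase.
        extra_path = distances.max() - distances
        return (2.0 * np.pi * extra_path / wavelength) % (2.0 * np.pi)

    # A hypothetical 4 x 4 array with 10 mm pitch, focusing 20 cm above its centre.
    xs, ys = np.meshgrid(np.arange(4) * 0.01, np.arange(4) * 0.01)
    array_positions = np.column_stack([xs.ravel() - 0.015, ys.ravel() - 0.015, np.zeros(16)])
    print(focus_phases(array_positions, [0.0, 0.0, 0.2]))
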
Good vibrations: an evaluation of vibrotactile impedance matching for low power wearable applications BIBAFull-Text 515-520
  Jack I. C. Lindsay; Iris Jiang; Eric Larson; Richard Adams; Shwetak N. Patel; Blake Hannaford
Vibrotactile devices suffer from poor energy efficiency, arising from a mismatch between the impedance of the actuator and the impedance of the human skin. This results in over-sized actuators and excessive power consumption, and prevents the development of more sophisticated, miniaturized, low-power mobile tactile devices. In this paper, we present an experimental evaluation of a vibrotactile system designed to match the impedance of the skin to the impedance of the actuator. The system is able to quadruple the motion of the skin without increasing power consumption, and to produce sensations equivalent to those of a standard system while consuming half the power. By greatly relaxing the size and power constraints on vibrotactile actuators, this technology offers a means to realize smaller, more sophisticated haptic devices for the user interface community.
The skweezee system: enabling the design and the programming of squeeze interactions BIBAFull-Text 521-530
  Karen Vanderloock; Vero Vanden Abeele; Johan A. K. Suykens; Luc Geurts
The Skweezee System is an easy, flexible, and open system for designing and developing squeeze-based gestural interactions. It consists of Skweezees: soft objects, filled with conductive padding, that can be deformed or squeezed by applying pressure. These objects contain a number of electrodes dispersed over the shape. The electrodes sense the shape shifting of the conductive filling by measuring the changing resistance between every possible pair of electrodes. In addition, the Skweezee System contains user-friendly software that allows end users to define and record their own squeeze gestures. These gestures are distinguished using a Support Vector Machine (SVM) classifier. In this paper we introduce the concept and the underlying technology of the Skweezee System and demonstrate the robustness of the SVM-based classifier in two experimental user studies. The results of these studies show accuracies ranging from 81% (8 user-defined gestures) to 97% (3 user-defined gestures), with an accuracy of 90% for 7 pre-defined gestures.
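The classification pipeline can be sketched schematically with scikit-learn (the electrode count, training data, and gesture labels are placeholders): each squeeze sample is the vector of resistances between every pair of electrodes, and an SVM is trained on user-recorded examples.

    from itertools import combinations
    import numpy as np
    from sklearn.svm import SVC

    N_ELECTRODES = 8
    PAIRS = list(combinations(range(N_ELECTRODES), 2))   # 28 pairwise resistance channels

    def to_feature_vector(resistance_matrix):
        """Flatten an N x N matrix of pairwise resistances into the feature
        vector used for classification (a real matrix would be symmetric)."""
        return np.array([resistance_matrix[i][j] for i, j in PAIRS])

    # Placeholder training data: one row of pairwise resistances per recorded squeeze.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(40, len(PAIRS)))
    y_train = ["pinch"] * 20 + ["full_squeeze"] * 20

    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X_train, y_train)

    new_sample = to_feature_vector(rng.normal(size=(N_ELECTRODES, N_ELECTRODES)))
    print(clf.predict([new_sample]))
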
Tactile rendering of 3D features on touch surfaces BIBAFull-Text 531-538
  Seung-Chan Kim; Ali Israr; Ivan Poupyrev
We present a tactile-rendering algorithm for simulating 3D geometric features, such as bumps, on touch-screen surfaces. This is achieved by modulating the friction forces between the user's finger and the touch screen, instead of physically moving the touch surface. We propose that the percept of a 3D bump is created when local gradients of the rendered virtual surface are mapped to lateral friction forces. To validate this approach, we first establish a psychophysical model that relates the perceived friction force to the controlled voltage applied to the tactile feedback device. We then use this model to show that participants are three times more likely to prefer gradient force profiles over other commonly used rendering profiles. Finally, we present a generalized algorithm and conclude the paper with a set of applications that use our tactile rendering technology.
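A compact sketch of the gradient-to-friction mapping described above; the voltage mapping is an assumed placeholder rather than the paper's fitted psychophysical model.

    import numpy as np

    # Virtual surface: a Gaussian bump rendered on a 100 x 100 grid.
    x, y = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
    height = np.exp(-(x**2 + y**2) / 0.1)
    dh_dy, dh_dx = np.gradient(height)          # local surface gradients per grid cell

    def friction_command(finger_col, finger_row, gain=1.0):
        """Map the local gradient under the finger to a target lateral friction
        force; the finger feels resistance going 'uphill' on the virtual bump."""
        gx = dh_dx[finger_row, finger_col]
        gy = dh_dy[finger_row, finger_col]
        return gain * np.hypot(gx, gy)          # magnitude of desired friction force

    def drive_voltage(force, v_max=120.0):
        """Placeholder inverse of a psychophysical model: assume perceived friction
        grows roughly with the square of the applied voltage."""
        return v_max * np.sqrt(np.clip(force, 0.0, 1.0))

    print(drive_voltage(friction_command(30, 50)))
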
SenSkin: adapting skin as a soft interface BIBAFull-Text 539-544
  Masa Ogata; Yuta Sugiura; Yasutoshi Makino; Masahiko Inami; Michita Imai
We present a sensing technology and input method that uses skin deformation, estimated through a thin band-type device attached to the human body whose appearance is socially acceptable in daily life. An input interface usually requires feedback. SenSkin provides tactile feedback that lets users know which part of the skin they are touching in order to issue commands; having found an acceptable area before beginning the input operation, the user can continue to input commands without receiving explicit feedback. We developed an experimental device with two armbands that senses the three-dimensional pressure applied to the skin. Sensing tangential force on uncovered skin without haptic obstacles has not previously been achieved. SenSkin is also novel in that it quantitatively measures the tangential force applied to the skin, such as that of the forearm or fingers. An infrared (IR) reflective sensor is used because its durability and low cost make it suitable for everyday human-sensing purposes. The multiple sensors located on the two armbands allow both the tangential and normal forces applied to the skin to be sensed. Input commands are learned and recognized using a Support Vector Machine (SVM). Finally, we show an application in which this input method is implemented.