
Proceedings of the 2015 ACM Symposium on User Interface Software and Technology

Fullname: UIST'15: Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology
Editors: Celine Latulipe; Bjoern Hartmann; Tovi Grossman
Location: Charlotte, North Carolina
Dates: 2015-Nov-08 to 2015-Nov-11
Volume: 1
Publisher: ACM
Standard No: ISBN: 978-1-4503-3779-3; ACM DL: Table of Contents; hcibib: UIST15-1
Papers: 72
Pages: 665
Links: Conference Website

Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology

Fullname: UIST'15: Adjunct Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology
Editors: Celine Latulipe; Bjoern Hartmann; Tovi Grossman
Location: Charlotte, North Carolina
Dates: 2015-Nov-08 to 2015-Nov-11
Volume: 2
Publisher: ACM
Standard No: ISBN: 978-1-4503-3780-9; ACM DL: Table of Contents; hcibib: UIST15-2
Papers: 44
Pages: 106
Links: Conference Website
  1. UIST 2015-11-05 Volume 1
    1. Opening Keynote Address
    2. Session 1A: Tactile Feedback
    3. Session 1B: Large Displays, Large Movements
    4. Session 2A: Fabrication 1 -- Augmentation
    5. Session 2B: 3D & Augmented Reality
    6. Session 3A: Sensing Techniques
    7. Session 3B: Intelligent Information Interfaces
    8. Session 4A: Fabrication 2 -- Flexible and Printed Electronics
    9. Session 4B: Tools for Programmers
    10. Session 5A: Touch Input
    11. Session 5B: Tangibles
    12. Session 6A: Gaze
    13. Session 6B: Pushing Virtual and Physical Envelopes
    14. Session 7A: Wearable and Mobile Interactions
    15. Session 7B: Neurons, Affect, Ambiguity
    16. Session 8A: Hands and Fingers
    17. Session 8B: Fabrication 3 -- Complex Shapes and Properties
    18. Session 9A: Online Education
    19. Session 9B: Pens, Mice and Sensor Strips
    20. Closing Keynote Address
  2. UIST 2015-11-08 Volume 2
    1. Doctoral Symposium
    2. Demonstrations
    3. Posters

UIST 2015-11-05 Volume 1

Opening Keynote Address

Extreme Computational Photography BIBAKFull-Text 1
  Ramesh Raskar
The Camera Culture Group at the MIT Media Lab aims to create a new class of imaging platforms. This talk will discuss three tracks of research: femto photography, retinal imaging, and 3D displays. Femto photography consists of femtosecond laser illumination, picosecond-accurate detectors, and mathematical reconstruction techniques that allow researchers to visualize the propagation of light. Direct recording of reflected or scattered light at such a frame rate with sufficient brightness is nearly impossible. Using an indirect 'stroboscopic' method that records millions of repeated measurements by careful scanning in time and viewpoints, we can rearrange the data to create a 'movie' of a nanosecond-long event. Femto photography and a new generation of nano-photography (using ToF cameras) allow powerful inference with computer vision in the presence of scattering. EyeNetra is a mobile phone attachment that allows users to test their own eyesight. The device reveals corrective measures, thus bringing vision to billions of people who would not have had access otherwise. Another project, eyeMITRA, is a mobile retinal imaging solution that brings retinal exams into the realm of routine care by lowering the cost of the imaging device to a tenth of its current cost and integrating the device with image analysis software and predictive analytics. This provides early detection of diabetic retinopathy, which can change the arc of growth of the world's largest cause of blindness. Finally, the talk will describe novel lightfield cameras and lightfield displays that require a compressive optical architecture to deal with the high bandwidth requirements of 4D signals.
Keywords: femto photography, retinal imaging, 3D displays, lightfield

Session 1A: Tactile Feedback

GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel BIBAFull-Text 3-10
  Viktor Miruchna; Robert Walter; David Lindlbauer; Maren Lehmann; Regine von Klitzing; Jörg Müller
We present GelTouch, a gel-based layer that can selectively transition between soft and stiff to provide tactile multi-touch feedback. It is flexible, transparent when not activated, and contains no mechanical, electromagnetic, or hydraulic components, resulting in a compact form factor (a 2mm thin touchscreen layer for our prototype). The activated areas can be morphed freely and continuously, without being limited to fixed, predefined shapes. GelTouch consists of a poly(N-isopropylacrylamide) gel layer which alters its viscoelasticity when activated by applying heat (>32°C). We present three different activation techniques: 1) Indium Tin Oxide (ITO) as a heating element that enables tactile feedback through individually addressable taxels; 2) predefined tactile areas of engraved ITO, that can be layered and combined; 3) complex arrangements of resistance wire that create thin tactile edges. We present a tablet with 6x4 tactile areas, enabling a tactile numpad, slider, and thumbstick. We show that the gel is up to 25 times stiffer when activated and that users detect tactile features reliably (94.8%).
Impacto: Simulating Physical Impact by Combining Tactile Stimulation with Electrical Muscle Stimulation BIBAFull-Text 11-19
  Pedro Lopes; Alexandra Ion; Patrick Baudisch
We present impacto, a device designed to render the haptic sensation of hitting or being hit in virtual reality. The key idea that allows the small and light impacto device to simulate a strong hit is that it decomposes the stimulus: it renders the tactile aspect of being hit by tapping the skin using a solenoid; it adds impact to the hit by thrusting the user's arm backwards using electrical muscle stimulation. The device is self-contained, wireless, and small enough for wearable use, thus leaving the user unencumbered and able to walk around freely in a virtual environment. The device is of generic shape, allowing it to also be worn on legs, so as to enhance the experience of kicking, or merged into props, such as a baseball bat. We demonstrate how to assemble multiple impacto units into a simple haptic suit. Participants of our study rated impact simulated using impacto's combination of solenoid hit and electrical muscle stimulation as more realistic than either technique in isolation.
Tactile Animation by Direct Manipulation of Grid Displays BIBAFull-Text 21-30
  Oliver S. Schneider; Ali Israr; Karon E. MacLean
Chairs, wearables, and handhelds have become popular sites for spatial tactile display. Visual animators, already expert in using time and space to portray motion, could readily transfer their skills to produce rich haptic sensations if given the right tools. We introduce the tactile animation object, a directly manipulated phantom tactile sensation. This abstraction has two key benefits: 1) efficient, creative, iterative control of spatiotemporal sensations, and 2) the potential to support a variety of tactile grids, including sparse displays. We present Mango, an editing tool for animators, including its rendering pipeline and perceptually-optimized interpolation algorithm for sparse vibrotactile grids. In our evaluation, professional animators found it easy to create a variety of vibrotactile patterns, with both experts and novices preferring the tactile animation object over controlling actuators individually.
Improving Haptic Feedback on Wearable Devices through Accelerometer Measurements BIBAFull-Text 31-36
  Jeffrey R. Blum; Ilja Frissen; Jeremy R. Cooperstock
Many variables have been shown to impact whether a vibration stimulus will be perceived. We present a user study that takes into account not only previously investigated predictors such as vibration intensity and duration along with the age of the person receiving the stimulus, but also the amount of motion, as measured by an accelerometer, at the site of vibration immediately preceding the stimulus. This is a more specific measure than in previous studies showing an effect on perception due to gross conditions such as walking. We show that a logistic regression model including prior acceleration is significantly better at predicting vibration perception than a model including only vibration intensity, duration and participant age. In addition to the overall regression, we discuss individual participant differences and measures of classification performance for real-world applications. Our expectation is that haptic interface designers will be able to use such results to design better vibrations that are perceivable under the user's current activity conditions, without being annoyingly loud or jarring, eventually approaching "perceptually equivalent" feedback independent of motion.
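As a rough illustration of the model described above, the sketch below fits a logistic regression predicting vibration perception from intensity, duration, age, and prior acceleration; the data are synthetic and the feature scales are assumptions for illustration, not the study's measurements.

```python
# Sketch: compare logistic-regression models of vibration perception with and
# without prior acceleration as a predictor. Feature scales and data are
# synthetic stand-ins; the paper's dataset and model details differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
intensity = rng.uniform(0.1, 1.0, n)      # normalized vibration amplitude
duration = rng.uniform(50, 400, n)        # ms
age = rng.uniform(20, 70, n)              # years
prior_accel = rng.exponential(0.3, n)     # motion at the site just before the pulse (g)

# Synthetic ground truth: strong/long pulses are felt; prior motion masks them.
logit = 4 * intensity + 0.005 * duration - 0.02 * age - 3 * prior_accel
perceived = (logit + rng.normal(0, 1, n) > 0).astype(int)

X_base = np.column_stack([intensity, duration, age])
X_full = np.column_stack([intensity, duration, age, prior_accel])
clf = LogisticRegression(max_iter=1000)
print("without prior acceleration:", cross_val_score(clf, X_base, perceived, cv=5).mean())
print("with prior acceleration:   ", cross_val_score(clf, X_full, perceived, cv=5).mean())
```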

Session 1B: Large Displays, Large Movements

cLuster: Smart Clustering of Free-Hand Sketches on Large Interactive Surfaces BIBAFull-Text 37-46
  Florian Perteneder; Martin Bresler; Eva-Maria Grossauer; Joanne Leong; Michael Haller
Structuring and rearranging free-hand sketches on large interactive surfaces typically requires making multiple stroke selections. This can be both time-consuming and fatiguing in the absence of well-designed selection tools. Investigating the concept of automated clustering, we conducted a background study which highlighted the fact that people have varying perspectives on how elements in sketches can and should be grouped. In response to these diverse user expectations, we present cLuster, a flexible, domain-independent clustering approach for free-hand sketches. Our approach is designed to accept an initial user selection, which is then used to calculate a linear combination of pre-trained perspectives in real-time. The remaining elements are then clustered. An initial evaluation revealed that in many cases, only a few corrections were necessary to achieve the desired clustering results. Finally, we demonstrate the utility of our approach in a variety of application scenarios.
GravitySpot: Guiding Users in Front of Public Displays Using On-Screen Visual Cues BIBAFull-Text 47-56
  Florian Alt; Andreas Bulling; Gino Gravanis; Daniel Buschek
Users tend to position themselves in front of interactive public displays in such a way as to best perceive its content. Currently, this sweet spot is implicitly defined by display properties, content, the input modality, as well as space constraints in front of the display. We present GravitySpot -- an approach that makes sweet spots flexible by actively guiding users to arbitrary target positions in front of displays using visual cues. Such guidance is beneficial, for example, if a particular input technology only works at a specific distance or if users should be guided towards a non-crowded area of a large display. In two controlled lab studies (n=29) we evaluate different visual cues based on color, shape, and motion, as well as position-to-cue mapping functions. We show that both the visual cues and mapping functions allow for fine-grained control over positioning speed and accuracy. Findings are complemented by observations from a 3-month real-world deployment.
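The following toy function illustrates the general idea of a position-to-cue mapping: the user's distance from the target sweet spot is turned into the intensity of an on-screen cue. The linear and quadratic mappings are illustrative stand-ins, not the cues or mapping functions evaluated in the paper.

```python
# Sketch: map a user's distance from the target "sweet spot" to the intensity of an
# on-screen visual cue. The two mappings are illustrative assumptions only.
def cue_intensity(user_pos, target_pos, max_dist=3.0, mapping="linear"):
    """Return a cue intensity in [0, 1]; 0 means the user stands on the sweet spot."""
    dx = user_pos[0] - target_pos[0]
    dy = user_pos[1] - target_pos[1]
    d = min((dx * dx + dy * dy) ** 0.5 / max_dist, 1.0)   # normalized distance
    return d if mapping == "linear" else d ** 2            # quadratic eases off near the target

print(cue_intensity((1.5, 0.5), (0.0, 0.0)))               # partly off the spot
print(cue_intensity((0.0, 0.0), (0.0, 0.0)))               # on the spot -> 0.0
```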
Tiltcasting: 3D Interaction on Large Displays using a Mobile Device BIBAFull-Text 57-62
  Krzysztof Pietroszek; James R. Wallace; Edward Lank
We develop and formally evaluate a metaphor for smartphone interaction with 3D environments: Tiltcasting. Under the Tiltcasting metaphor, users interact within a rotatable 2D plane that is "cast" from their phone's interactive display into 3D space. Through an empirical validation, we show that Tiltcasting supports efficient pointing, interaction with occluded objects, disambiguation between nearby objects, and object selection and manipulation in fully addressable 3D space. Our technique outperforms existing target-agnostic pointing implementations, and approaches the performance of physical pointing with an off-the-shelf smartphone.
Gunslinger: Subtle Arms-down Mid-air Interaction BIBAFull-Text 63-71
  Mingyu Liu; Mathieu Nancel; Daniel Vogel
We describe Gunslinger, a mid-air interaction technique using barehand postures and gestures. Unlike past work, we explore a relaxed arms-down position with both hands interacting at the sides of the body. It features "hand-cursor" feedback to communicate recognized hand posture, command mode and tracking quality; and a simple, but flexible hand posture recognizer. Although Gunslinger is suitable for many usage contexts, we focus on integrating mid-air gestures with large display touch input. We show how the Gunslinger form factor enables an interaction language that is equivalent, coherent, and compatible with large display touch input. A four-part study evaluates Midas Touch, posture recognition feedback, pointing and clicking, and general usability.

Session 2A: Fabrication 1 -- Augmentation

Encore: 3D Printed Augmentation of Everyday Objects with Printed-Over, Affixed and Interlocked Attachments BIBAFull-Text 73-82
  Xiang 'Anthony' Chen; Stelian Coros; Jennifer Mankoff; Scott E. Hudson
One powerful aspect of 3D printing is its ability to extend, repair, or more generally modify everyday objects. However, nearly all existing work implicitly assumes that whole objects are to be printed from scratch. Designing objects as extensions or enhancements of existing ones is a laborious process in most of today's 3D authoring tools. This paper presents a framework for 3D printing to augment existing objects that covers a wide range of attachment options. We illustrate the framework through three exemplar attachment techniques -- print-over, print-to-affix and print-through, implemented in Encore, a design tool that supports a set of analysis metrics relating to viability, durability and usability that are visualized for the user to explore design options and tradeoffs. Encore also generates 3D models for production, addressing issues such as support jigs and contact geometry between the attached part and the original object. Our validation helps to illustrate the strengths and weaknesses of each technique. For example, print-over is stronger than print-to-affix with adhesives, and all the techniques' strengths are affected by surface curvature.
Patching Physical Objects BIBAFull-Text 83-91
  Alexander Teibrich; Stefanie Mueller; François Guimbretière; Robert Kovacs; Stefan Neubert; Patrick Baudisch
Personal fabrication is currently a one-way process: Once an object has been fabricated with a 3D printer, it cannot be changed anymore; any change requires printing a new version from scratch. The problem is that this approach ignores the nature of design iteration, i.e. that in subsequent iterations large parts of an object stay the same and only small parts change. This makes fabricating from scratch feel unnecessary and wasteful.
   In this paper, we propose a different approach: instead of re-printing the entire object from scratch, we suggest patching the existing object to reflect the next design iteration. We built a system on top of a 3D printer that accomplishes this: Users mount the existing object into the 3D printer, then load both the original and the modified 3D model into our software, which in turn calculates how to patch the object. After identifying which parts to remove and what to add, our system locates the existing object in the printer using the system's built-in 3D scanner. After calibrating the orientation, a mill first removes the outdated geometry, then a print head prints the new geometry in place.
   Since only a fraction of the entire object is refabricated, our approach reduces material consumption and plastic waste (for our example objects by 82% and 93% respectively).
ReForm: Integrating Physical and Digital Design through Bidirectional Fabrication BIBAFull-Text 93-102
  Christian Weichel; John Hardy; Jason Alexander; Hans Gellersen
Digital fabrication machines such as 3D printers and laser-cutters allow users to produce physical objects based on virtual models. The creation process is currently unidirectional: once an object is fabricated it is separated from its originating virtual model. Consequently, users are tied into digital modeling tools, the virtual design must be completed before fabrication, and once fabricated, re-shaping the physical object no longer influences the digital model. To provide a more flexible design process that allows objects to iteratively evolve through both digital and physical input, we introduce bidirectional fabrication. To demonstrate the concept, we built ReForm, a system that integrates digital modeling with shape input, shape output, annotation for machine commands, and visual output. By continually synchronizing the physical object and digital model it supports object versioning to allow physical changes to be undone. Through application examples, we demonstrate the benefits of ReForm to the digital fabrication process.
Makers' Marks: Physical Markup for Designing and Fabricating Functional Objects BIBAFull-Text 103-108
  Valkyrie Savage; Sean Follmer; Jingyi Li; Björn Hartmann
To fabricate functional objects, designers create assemblies combining existing parts (e.g., mechanical hinges, electronic components) with custom-designed geometry (e.g., enclosures). Modeling complex assemblies is outside the reach of the growing number of novice "makers" with access to digital fabrication tools. We aim to allow makers to design and 3D print functional mechanical and electronic assemblies. Based on a formative exploration, we created Makers' Marks, a system based on physically authoring assemblies with sculpting materials and annotation stickers. Makers physically sculpt the shape of an object and attach stickers to place existing parts or high-level features (such as parting lines). Our tool extracts the 3D pose of these annotations from a scan of the design, then synthesizes the geometry needed to support integrating desired parts using a library of clearance and mounting constraints. The resulting designs can then be easily 3D printed and assembled. Our approach enables easy creation of complex objects such as TUIs, and leverages physical materials for tangible manipulation and understanding scale. We validate our tool through several design examples: a custom game controller, an animated toy figure, a friendly baby monitor, and a hinged box with integrated alarm.

Session 2B: 3D & Augmented Reality

Procedural Modeling Using Autoencoder Networks BIBAFull-Text 109-118
  Mehmet Ersin Yumer; Paul Asente; Radomir Mech; Levent Burak Kara
Procedural modeling systems allow users to create high-quality content through parametric, conditional or stochastic rule sets. While such approaches create an abstraction layer by freeing the user from direct geometry editing, the nonlinear nature and the high number of parameters associated with such design spaces result in arduous modeling experiences for non-expert users. We propose a method to enable intuitive exploration of such high dimensional procedural modeling spaces within a lower dimensional space learned through autoencoder network training. Our method automatically generates a representative training dataset from the procedural modeling rule set based on shape similarity features. We then leverage the samples in this dataset to train an autoencoder neural network, while also structuring the learned lower dimensional space for continuous exploration with respect to shape features. We demonstrate the efficacy of our method with user studies in which designers create content more than 10 times faster with our system than with the classic procedural modeling interface.
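A minimal sketch of the underlying idea, assuming PyTorch and synthetic 10-dimensional parameter vectors: an autoencoder compresses procedural parameters into a 2-D latent space that can then be explored continuously. The paper's dataset generation, network, and shape-feature structuring are considerably more elaborate.

```python
# Sketch: compress procedural-modeling parameters into a small latent space with an
# autoencoder. Synthetic data; the paper's network and features are more elaborate.
import torch
import torch.nn as nn

d_param, d_latent = 10, 2
model = nn.Sequential(                       # encoder followed by decoder
    nn.Linear(d_param, 16), nn.ReLU(),
    nn.Linear(16, d_latent),                 # low-dimensional exploration space
    nn.Linear(d_latent, 16), nn.ReLU(),
    nn.Linear(16, d_param),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
X = torch.rand(1024, d_param)                # stand-in for sampled rule-set parameters

for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), X)
    loss.backward()
    opt.step()

encoder = model[:3]                          # slice out the encoder half
z = encoder(X[:1])                           # a point in the learned low-dimensional space
print("latent code:", z.detach().numpy(), "reconstruction error:", loss.item())
```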
SHOCam: A 3D Orbiting Algorithm BIBAFull-Text 119-128
  Michael Ortega; Wolfgang Stuerzlinger; Doug Scheurich
In this paper we describe a new orbiting algorithm, called SHOCam, which enables simple, safe and visually attractive control of a camera moving around 3D objects. Compared with existing methods, SHOCam provides a more consistent mapping between the user's interaction and the path of the camera by substantially reducing variability in both camera motion and look direction. Also, we present a new orbiting method that prevents the camera from penetrating object(s), making the visual feedback -- and with it the user experience -- more pleasing and also less error prone. Finally, we present new solutions for orbiting around multiple objects and multi-scale environments.
FoveAR: Combining an Optically See-Through Near-Eye Display with Projector-Based Spatial Augmented Reality BIBAFull-Text 129-135
  Hrvoje Benko; Eyal Ofek; Feng Zheng; Andrew D. Wilson
Optically see-through (OST) augmented reality glasses can overlay spatially-registered computer-generated content onto the real world. However, current optical designs and weight considerations limit their diagonal field of view to less than 40 degrees, making it difficult to create a sense of immersion or give the viewer an overview of the augmented reality space. We combine OST glasses with a projection-based spatial augmented reality display to achieve a novel display hybrid, called FoveAR, capable of greater than 100 degrees field of view, view dependent graphics, extended brightness and color, as well as interesting combinations of public and personal data display. We contribute details of our prototype implementation and an analysis of the interactive design space that our system enables. We also contribute four prototype experiences showcasing the capabilities of FoveAR as well as preliminary user feedback providing insights for enhancing future FoveAR experiences.
Projectibles: Optimizing Surface Color For Projection BIBAFull-Text 137-146
  Brett R. Jones; Rajinder Sodhi; Pulkit Budhiraja; Kevin Karsch; Brian Bailey; David Forsyth
Typically, video projectors display images onto white screens, which can result in a washed-out image. Projectibles algorithmically control the display surface color to increase the contrast and resolution. By combining a printed image with projected light, we can create animated, high resolution, high dynamic range visual experiences for video sequences. We present two algorithms for separating an input video sequence into a printed component and projected component, maximizing the combined contrast and resolution while minimizing any visual artifacts introduced from the decomposition. We present empirical measurements of real-world results of six example video sequences, subjective viewer feedback ratings, and we discuss the benefits and limitations of Projectibles. This is the first approach to combine a static display with a dynamic display for the display of video, and the first to optimize surface color for projection of video.

Session 3A: Sensing Techniques

Tracko: Ad-hoc Mobile 3D Tracking Using Bluetooth Low Energy and Inaudible Signals for Cross-Device Interaction BIBAFull-Text 147-156
  Haojian Jin; Christian Holz; Kasper Hornbæk
While current mobile devices detect the presence of surrounding devices, they lack a truly spatial awareness to bring them into the user's natural 3D space. We present Tracko, a 3D tracking system between two or more commodity devices without added components or device synchronization. Tracko achieves this by fusing three signal types. 1) Tracko infers the presence of and rough distance to other devices from the strength of Bluetooth low energy signals. 2) Tracko exchanges a series of inaudible stereo sounds and derives a set of accurate distances between devices from the difference in their arrival times. A Kalman filter integrates both signal cues to place collocated devices in a shared 3D space, combining the robustness of Bluetooth with the accuracy of audio signals for relative 3D tracking. 3) Tracko incorporates inertial sensors to refine 3D estimates and support quick interactions. Tracko robustly tracks devices in 3D with a mean error of 6.5 cm within 0.5 m and a 15.3 cm error within 1 m, which validates Tracko's suitability for cross-device interactions.
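A minimal sketch of the fusion step, with illustrative noise values only: a one-dimensional Kalman filter combines a coarse BLE-derived distance with a more precise acoustic (arrival-time-difference) distance.

```python
# Sketch: fuse a coarse BLE-RSSI distance estimate with a more accurate acoustic
# (time-difference) distance using a 1-D Kalman filter. Noise values and the
# constant-distance motion model are illustrative assumptions, not Tracko's.
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update: state x, variance p, measurement z, noise r."""
    k = p / (p + r)              # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 1.0, 1.0                  # initial distance estimate (m) and its variance
q = 0.01                         # process noise: devices may drift between updates
for ble_d, audio_d in [(0.9, 0.62), (0.7, 0.60), (0.8, 0.58)]:
    p += q                                       # predict (distance roughly constant)
    x, p = kalman_update(x, p, ble_d, r=0.25)    # coarse BLE measurement, high noise
    x, p = kalman_update(x, p, audio_d, r=0.01)  # accurate acoustic measurement, low noise
    print(f"fused distance: {x:.2f} m (variance {p:.3f})")
```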
EM-Sense: Touch Recognition of Uninstrumented, Electrical and Electromechanical Objects BIBAFull-Text 157-166
  Gierad Laput; Chouchang Yang; Robert Xiao; Alanson Sample; Chris Harrison
Most everyday electrical and electromechanical objects emit small amounts of electromagnetic (EM) noise during regular operation. When a user makes physical contact with such an object, this EM signal propagates through the user, owing to the conductivity of the human body. By modifying a small, low-cost, software-defined radio, we can detect and classify these signals in real-time, enabling robust on-touch object detection. Unlike prior work, our approach requires no instrumentation of objects or the environment; our sensor is self-contained and can be worn unobtrusively on the body. We call our technique EM-Sense and built a proof-of-concept smartwatch implementation. Our studies show that discrimination between dozens of objects is feasible, independent of wearer, time and local environment.
Tomo: Wearable, Low-Cost Electrical Impedance Tomography for Hand Gesture Recognition BIBAFull-Text 167-173
  Yang Zhang; Chris Harrison
We present Tomo, a wearable, low-cost system using Electrical Impedance Tomography (EIT) to recover the interior impedance geometry of a user's arm. This is achieved by measuring the cross-sectional impedances between all pairs of eight electrodes resting on a user's skin. Our approach is sufficiently compact and low-powered that we integrated the technology into a prototype wrist- and armband, which can monitor and classify gestures in real-time. We conducted a user study that evaluated two gesture sets, one focused on gross hand gestures and another using thumb-to-finger pinches. Our wrist location achieved 97% and 87% accuracies on these gesture sets respectively, while our arm location achieved 93% and 81%. We ultimately envision this technique being integrated into future smartwatches, allowing hand gestures and direct touch manipulation to work synergistically to support interactive tasks on small screens.
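As a sketch of the recognition stage, the code below treats the impedances of all 28 electrode pairs as a feature vector and trains an off-the-shelf SVM on synthetic frames; the paper's hardware, features, and classifier details of course differ.

```python
# Sketch: classify hand gestures from pairwise electrode impedances with an SVM.
# Data are synthetic stand-ins for EIT measurement frames.
from itertools import combinations
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

n_electrodes = 8
pairs = list(combinations(range(n_electrodes), 2))   # 28 cross-sectional measurements
rng = np.random.default_rng(1)

def fake_frame(gesture):
    """Stand-in for one measurement frame; each gesture shifts the pair impedances."""
    return rng.normal(loc=gesture, scale=0.5, size=len(pairs))

gestures = [0, 1, 2, 3]                              # e.g., fist, open hand, pinch, point
X = np.array([fake_frame(g) for g in gestures for _ in range(50)])
y = np.array([g for g in gestures for _ in range(50)])
print("cross-validated accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```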
Corona: Positioning Adjacent Device with Asymmetric Bluetooth Low Energy RSSI Distributions BIBAFull-Text 175-179
  Haojian Jin; Cheng Xu; Kent Lyons
We introduce Corona, a novel spatial sensing technique that implicitly locates adjacent mobile devices in the same plane by examining asymmetric Bluetooth Low Energy RSSI distributions. The underlying phenomenon is that the off-center BLE antenna and asymmetric radio frequency topology create a characteristic Bluetooth RSSI distribution around the device. By comparing the real-time RSSI readings against a RSSI distribution model, each device can derive the relative position of the other adjacent device. Our experiments using an iPhone and iPad Mini show that Corona yields position estimation at 50% accuracy within a 2cm range, or 85% for the best two candidates. We developed an application to combine Corona with accelerometer readings to mitigate ambiguity and enable cross-device interactions on adjacent devices.
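A toy version of the matching step: live RSSI readings are scored against a per-position Gaussian model and the most likely relative position is returned. The model values are invented for illustration; Corona builds its distribution model empirically.

```python
# Sketch: pick the most likely relative position of a nearby device by scoring live
# RSSI readings against a pre-measured per-position model (mean/std of RSSI).
import math

model = {                      # relative position -> (mean RSSI in dBm, std); invented values
    "left":  (-52.0, 3.0),
    "right": (-58.0, 3.5),
    "above": (-63.0, 4.0),
    "below": (-66.0, 4.0),
}

def log_likelihood(readings, mean, std):
    return sum(-((r - mean) ** 2) / (2 * std * std) - math.log(std) for r in readings)

def estimate_position(readings):
    return max(model, key=lambda pos: log_likelihood(readings, *model[pos]))

print(estimate_position([-51, -54, -53, -52]))   # -> "left"
```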

Session 3B: Intelligent Information Interfaces

SceneSkim: Searching and Browsing Movies Using Synchronized Captions, Scripts and Plot Summaries BIBAFull-Text 181-190
  Amy Pavel; Dan B. Goldman; Björn Hartmann; Maneesh Agrawala
Searching for scenes in movies is a time-consuming but crucial task for film studies scholars, film professionals, and new media artists. In pilot interviews we have found that such users search for a wide variety of clips -- e.g., actions, props, dialogue phrases, character performances, locations -- and they return to particular scenes they have seen in the past. Today, these users find relevant clips by watching the entire movie, scrubbing the video timeline, or navigating via DVD chapter menus. Increasingly, users can also index films through transcripts -- however, dialogue often lacks visual context, character names, and high level event descriptions. We introduce SceneSkim, a tool for searching and browsing movies using synchronized captions, scripts and plot summaries. Our interface integrates information from such sources to allow expressive search at several levels of granularity: Captions provide access to accurate dialogue, scripts describe shot-by-shot actions and settings, and plot summaries contain high-level event descriptions. We propose new algorithms for finding word-level caption to script alignments, parsing text scripts, and aligning plot summaries to scripts. Film studies graduate students evaluating SceneSkim expressed enthusiasm about the usability of the proposed system for their research and teaching.
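A toy illustration of word-level caption-to-script alignment, using Python's difflib as a stand-in for the paper's alignment algorithm; the caption and script snippets are invented.

```python
# Sketch: align caption words to script dialogue words so a caption timestamp can be
# mapped to its surrounding script context. difflib stands in for the paper's algorithm.
from difflib import SequenceMatcher

captions = "i have a bad feeling about this".split()
script   = "HAN I have a very bad feeling about this he ducks".split()

matcher = SequenceMatcher(a=[w.lower() for w in captions],
                          b=[w.lower() for w in script])
for block in matcher.get_matching_blocks():
    if block.size:
        print(f"caption[{block.a}:{block.a + block.size}] ~ "
              f"script[{block.b}:{block.b + block.size}]:",
              " ".join(captions[block.a:block.a + block.size]))
```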
Capture-Time Feedback for Recording Scripted Narration BIBAFull-Text 191-199
  Steve Rubin; Floraine Berthouzoz; Gautham J. Mysore; Maneesh Agrawala
Well-performed audio narrations are a hallmark of captivating podcasts, explainer videos, radio stories, and movie trailers. To record these narrations, professional voiceover actors follow guidelines that describe how to use low-level vocal components -- volume, pitch, timbre, and tempo -- to deliver performances that emphasize important words while maintaining variety, flow and diction. Yet, these techniques are not well-known outside the professional voiceover community, especially among hobbyist producers looking to create their own narrations. We present Narration Coach, an interface that assists novice users in recording scripted narrations. As a user records her narration, our system synchronizes the takes to her script, provides text feedback about how well she is meeting the expert voiceover guidelines, and resynthesizes her recordings to help her hear how she can speak better.
Improving Automated Email Tagging with Implicit Feedback BIBAFull-Text 201-211
  Mohammad S. Sorower; Michael Slater; Thomas G. Dietterich
Tagging email is an important tactic for managing information overload. Machine learning methods can help the user with this task by predicting tags for incoming email messages. The natural user interface displays the predicted tags on the email message, and the user doesn't need to do anything unless those predictions are wrong (in which case, the user can delete the incorrect tags and add the missing tags). From a machine learning perspective, this means that the learning algorithm never receives confirmation that its predictions are correct -- it only receives feedback when it makes a mistake. This can lead to slower learning, particularly when the predictions were not very confident, and hence, the learning algorithm would benefit from positive feedback. One could assume that if the user never changes any tag, then the predictions are correct, but users sometimes forget to correct the tags, presumably because they are focused on the content of the email messages and fail to notice incorrect and missing tags. The aim of this paper is to determine whether implicit feedback can provide useful additional training examples to the email prediction subsystem of TaskTracer, known as EP2 (Email Predictor 2). Our hypothesis is that the more time a user spends working on an email message, the more likely it is that the user will notice tag errors and correct them. If no corrections are made, then perhaps it is safe for the learning system to treat the predicted tags as being correct and train accordingly. This paper proposes three algorithms (and two baselines) for incorporating implicit feedback into the EP2 tag predictor. These algorithms are then evaluated using email interaction and tag correction events collected from 14 user-study participants as they performed email-directed tasks while using TaskTracer EP2. The results show that implicit feedback produces important increases in training feedback, and hence, significant reductions in subsequent prediction errors despite the fact that the implicit feedback is not perfect. We conclude that implicit feedback mechanisms can provide a useful performance boost for email tagging systems.
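A minimal sketch of the implicit-feedback rule, with an assumed 30-second dwell threshold and hypothetical data structures: predicted tags on messages the user worked on long enough, and never corrected, are treated as confirmed training examples.

```python
# Sketch of the implicit-feedback rule: if a user dwells on a message long enough
# without correcting its predicted tags, treat those tags as confirmed positives.
# The threshold and record format are assumptions, not TaskTracer's.
DWELL_THRESHOLD_S = 30.0

def harvest_feedback(messages):
    """Yield (message_id, tag, label) training examples from interaction logs."""
    for m in messages:
        if m["corrected_tags"] is not None:              # explicit feedback wins
            for tag in m["predicted_tags"]:
                yield m["id"], tag, tag in m["corrected_tags"]
        elif m["dwell_seconds"] >= DWELL_THRESHOLD_S:     # implicit confirmation
            for tag in m["predicted_tags"]:
                yield m["id"], tag, True

log = [
    {"id": 1, "predicted_tags": ["grants"], "corrected_tags": None, "dwell_seconds": 45},
    {"id": 2, "predicted_tags": ["teaching"], "corrected_tags": ["admin"], "dwell_seconds": 5},
]
print(list(harvest_feedback(log)))
```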
Codo: Fundraising with Conditional Donations BIBAFull-Text 213-222
  Juan Felipe Beltran; Aysha Siddique; Azza Abouzied; Jay Chen
Crowdfunding websites like Kickstarter and Indiegogo offer project organizers the ability to market, fund, and build a community around their campaign. While offering support and flexibility for organizers, crowdfunding sites provide very little control to donors. In this paper, we investigate the idea of empowering donors by allowing them to specify conditions for their crowdfunding contributions. We introduce a crowdfunding system, Codo, that allows donors to specify conditional donations. Codo allows donors to contribute to a campaign but hold off on their contribution until certain specific conditions are met (e.g., specific members or groups contribute a certain amount). We begin with a micro study to assess several specific conditional donations based on their comprehensibility and usage likelihood. Based on this study, we formalize conditional donations into a general grammar that captures a broad set of useful conditions. We demonstrate the feasibility of resolving conditions in our grammar by elegantly transforming conditional donations into a system of linear inequalities that are efficiently resolved using off-the-shelf linear program solvers. Finally, we designed a user-friendly crowdfunding interface that supports conditional donations for an actual fundraising campaign and assessed the potential of conditional donations through this campaign. We find preliminary evidence that roughly 1 in 3 donors make conditional donations and that conditional donors donate more than direct donors.
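As a simplified stand-in for the paper's linear-programming formulation, the sketch below resolves one condition type only ("I give amount a once the campaign total reaches c") by iterating to a fixed point; it is a naive greedy resolution, not Codo's grammar or solver.

```python
# Sketch: resolve threshold-style conditional donations by iterating to a fixed point.
# This greedy stand-in handles only one condition type and is far simpler than the
# paper's transformation into linear inequalities solved by an LP solver.
def resolve(pledges):
    """pledges: list of (amount, threshold); threshold 0 means unconditional."""
    active = [True] * len(pledges)
    while True:
        total = sum(a for (a, _), on in zip(pledges, active) if on)
        changed = False
        for i, (amount, threshold) in enumerate(pledges):
            if active[i] and total < threshold:      # condition not met: deactivate
                active[i] = False
                changed = True
        if not changed:                              # stable: conditions are consistent
            return total, active

pledges = [(100, 0), (50, 200), (80, 150), (40, 400)]
print(resolve(pledges))   # -> (230, [True, True, True, False])
```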

Session 4A: Fabrication 2 -- Flexible and Printed Electronics

Foldio: Digital Fabrication of Interactive and Shape-Changing Objects With Foldable Printed Electronics BIBAFull-Text 223-232
  Simon Olberding; Sergio Soto Ortega; Klaus Hildebrandt; Jürgen Steimle
Foldios are foldable interactive objects with embedded input sensing and output capabilities. Foldios combine the advantages of folding for thin, lightweight and shape-changing objects with the strengths of thin-film printed electronics for embedded sensing and output. To enable designers and end-users to create highly custom interactive foldable objects, we contribute a new design and fabrication approach. It makes it possible to design the foldable object in a standard 3D environment and to easily add interactive high-level controls, eliminating the need to manually design a fold pattern and low-level circuits for printed electronics. Second, we contribute a set of printable user interface controls for touch input and display output on folded objects. Moreover, we contribute controls for sensing and actuation of shape-changeable objects. We demonstrate the versatility of the approach with a variety of interactive objects that have been fabricated with this framework.
uniMorph: Fabricating Thin Film Composites for Shape-Changing Interfaces BIBAFull-Text 233-242
  Felix Heibeck; Basheer Tome; Clark Della Silva; Hiroshi Ishii
Researchers have been investigating shape-changing interfaces; however, technologies for thin, reversible shape change remain complicated to fabricate. uniMorph is an enabling technology for rapid digital fabrication of customized thin-film shape-changing interfaces. By combining the thermoelectric characteristics of copper with the high thermal expansion rate of ultra-high molecular weight polyethylene, we are able to actuate the shape of flexible circuit composites directly. The shape-changing actuation is enabled by a temperature-driven mechanism and reduces the complexity of fabrication for thin shape-changing interfaces. In this paper, we describe how to design and fabricate thin uniMorph composites. We present composites that are actuated by either environmental temperature changes or active heating of embedded structures and provide a systematic overview of shape-changing primitives. Finally, we present different sensing techniques that leverage the existing copper structures or can be seamlessly embedded into the uniMorph composite. To demonstrate the wide applicability of uniMorph, we present several applications in ubiquitous and mobile computing.
Printem: Instant Printed Circuit Boards with Standard Office Printers & Inks BIBAFull-Text 243-251
  Varun Perumal C; Daniel Wigdor
Printem film, a novel method for the fabrication of Printed Circuit Boards (PCBs) for small batch/prototyping use, is presented. Printem film enables a standard office inkjet or laser printer, using standard inks, to produce a PCB: the user prints a negative of the PCB onto the film, exposes it to UV or sunlight, and then tears away the unneeded portion of the film, leaving behind a copper PCB. PCBs produced with Printem film are as conductive as PCBs created using standard industrial methods. Herein, the composition of Printem film is described, and the advantages of various materials are discussed. Sample applications are also described, each of which demonstrates some unique advantage of Printem film over current prototyping methods: conductivity, flexibility, the ability to be cut with a pair of scissors, and the ability to be mounted to a rigid backplane.
   NOTE: publication of full-text held until November 9, 2015.
Capricate: A Fabrication Pipeline to Design and 3D Print Capacitive Touch Sensors for Interactive Objects BIBAFull-Text 253-258
  Martin Schmitz; Mohammadreza Khalilbeigi; Matthias Balwierz; Roman Lissermann; Max Mühlhäuser; Jürgen Steimle
3D printing is widely used to physically prototype the look and feel of 3D objects. Interaction possibilities of these prototypes, however, are often limited to mechanical parts or post-assembled electronics. In this paper, we present Capricate, a fabrication pipeline that enables users to easily design and 3D print highly customized objects that feature embedded capacitive multi-touch sensing. The object is printed in a single pass using a commodity multi-material 3D printer. To enable touch input on a wide variety of 3D printable surfaces, we contribute two techniques for designing and printing embedded sensors of custom shape. The fabrication pipeline is technically validated by a series of experiments and practically validated by a set of example applications. They demonstrate the wide applicability of Capricate for interactive objects.

Session 4B: Tools for Programmers

Explaining Visual Changes in Web Interfaces BIBAFull-Text 259-268
  Brian Burg; Andrew J. Ko; Michael D. Ernst
Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior.
   The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.
Unravel: Rapid Web Application Reverse Engineering via Interaction Recording, Source Tracing, and Library Detection BIBAFull-Text 270-279
  Joshua Hibschman; Haoqi Zhang
Professional websites with complex UI features provide real world examples for developers to learn from. Yet despite the availability of source code, it is still difficult to understand how these features are implemented. Existing tools such as the Chrome Developer Tools and Firebug offer debugging and inspection, but reverse engineering is still a time consuming task. We thus present Unravel, an extension of the Chrome Developer Tools for quickly tracking and visualizing HTML changes, JavaScript method calls, and JavaScript libraries. Unravel injects an observation agent into websites to monitor DOM interactions in real-time without functional interference or external dependencies. To manage potentially large observations of events, the Unravel UI provides affordances to reduce, sort, and scope observations. Testing Unravel with 13 web developers on 5 large-scale websites, we found a 53% decrease in time to discovering the first key source behind a UI feature and a 32% decrease in time to understanding how to fully recreate a feature.
Webstrates: Shareable Dynamic Media BIBAFull-Text 280-290
  Clemens N. Klokmose; James R. Eagan; Siemen Baader; Wendy Mackay; Michel Beaudouin-Lafon
We revisit Alan Kay's early vision of dynamic media that blurs the distinction between documents and applications. We introduce shareable dynamic media that are malleable by users, who may appropriate them in idiosyncratic ways; shareable among users, who collaborate on multiple aspects of the media; and distributable across diverse devices and platforms. We present Webstrates, an environment for exploring shareable dynamic media. Webstrates augment web technology with real-time sharing. They turn web pages into substrates, i.e. software entities that act as applications or documents depending upon use. We illustrate Webstrates with two implemented case studies: users collaboratively author an article with functionally and visually different editors that they can personalize and extend at run-time; and they orchestrate its presentation and audience participation with multiple devices. We demonstrate the simplicity and generative power of Webstrates with three additional prototypes and evaluate it from a systems perspective.
User Interaction Models for Disambiguation in Programming by Example BIBAFull-Text 291-301
  Mikaël Mayer; Gustavo Soares; Maxim Grechkin; Vu Le; Mark Marron; Oleksandr Polozov; Rishabh Singh; Benjamin Zorn; Sumit Gulwani
Programming by Examples (PBE) has the potential to revolutionize end-user programming by enabling end users, most of whom are non-programmers, to create small scripts for automating repetitive tasks. However, examples, though often easy to provide, are an ambiguous specification of the user's intent. Because of that, a key impediment to the adoption of PBE systems is the lack of user confidence in the correctness of the program that was synthesized by the system. We present two novel user interaction models that communicate actionable information to the user to help resolve ambiguity in the examples. One of these models allows the user to effectively navigate the huge set of programs that are consistent with the examples provided by the user. The other model uses active learning to ask directed example-based questions to the user on the test input data over which the user intends to run the synthesized program. Our user studies show that each of these models significantly reduces the number of errors in the performed task without any difference in completion time. Moreover, both models are perceived as useful, and the proactive active-learning-based model has a slightly higher preference regarding the users' confidence in the result.

Session 5A: Touch Input

Biometric Touch Sensing: Seamlessly Augmenting Each Touch with Continuous Authentication BIBAFull-Text 303-312
  Christian Holz; Marius Knaust
Current touch devices separate user authentication from regular interaction, for example by displaying modal login screens before device usage or prompting for in-app passwords, which interrupts the interaction flow. We propose biometric touch sensing, a new approach to representing touch events that enables commodity devices to seamlessly integrate authentication into interaction: From each touch, the touchscreen senses the 2D input coordinates and at the same time obtains biometric features that identify the user. Our approach makes authentication during interaction transparent to the user, yet ensures secure interaction at all times. To implement this on today's devices, our watch prototype Bioamp senses the impedance profile of the user's wrist and modulates a signal onto the user's body through skin using a periodic electric signal. This signal affects the capacitive values touchscreens measure upon touch, allowing devices to identify users on each touch. We integrate our approach into Windows 8 and discuss and demonstrate it in the context of various use cases, including access permissions and protecting private screen contents on personal and shared devices.
Push-Push: A Drag-like Operation Overlapped with a Page Transition Operation on Touch Interfaces BIBAFull-Text 313-322
  Jaehyun Han; Geehyuk Lee
A page transition operation on touch interfaces is a common and frequent subtask performed alongside drag-like operations such as selecting text or dragging an icon. Traditional page transition gestures such as scrolling and flicking, however, cannot be performed during a drag-like operation because the two conflict. We propose Push-Push, a new drag-like operation that does not conflict with page transition operations, so page transitions can be performed while Push-Push is in progress. To design Push-Push, we utilized the hover and pressed states as additional input states of touch interfaces. The results of two experiments showed that Push-Push improves performance and users' qualitative ratings while reducing subjective workload.
Pin-and-Cross: A Unimanual Multitouch Technique Combining Static Touches with Crossing Selection BIBAFull-Text 323-332
  Yuexing Luo; Daniel Vogel
We define, explore, and demonstrate a new multitouch interaction space called "pin-and-cross." It combines one or more static touches ("pins") with another touch to cross a radial target, all performed with one hand. A formative study reveals pin-and-cross kinematic characteristics and evaluates fundamental performance and preference for target angles. These results are used to form design guidelines and recognition heuristics for pin-and-cross menus invoked with one and two pin fingers on first touch or after a drag. These guidelines are used to implement different pin-and-cross techniques. A controlled experiment compares a one finger pin-and-cross contextual menu to a Marking Menu and partial Pie Menu: pin-and-cross is just as accurate and 27% faster when invoked on a draggable object. A photo app demonstrates more pin-and-cross variations for extending two-finger scrolling, selecting modes while drawing, constraining two-finger transformations, and combining pin-and-cross with a Marking Menu.

Session 5B: Tangibles

LineFORM: Actuated Curve Interfaces for Display, Interaction, and Constraint BIBAFull-Text 333-339
  Ken Nakagaki; Sean Follmer; Hiroshi Ishii
In this paper we explore the design space of actuated curve interfaces, a novel class of shape-changing interfaces. Physical curves have several interesting characteristics from the perspective of interaction design: they have a variety of inherent affordances; they can easily represent abstract data; and they can act as constraints, boundaries, or borderlines. By utilizing such aspects of lines and curves, together with the added capability of shape change, new possibilities for display, interaction and body constraint emerge. In order to investigate these possibilities, we have implemented two actuated curve interfaces at different scales. LineFORM, our implementation, is inspired by serpentine robotics and comprises a serial chain of 1-DOF servo motors with integrated sensors for direct manipulation. To motivate this work, we present various applications such as shape-changing cords, mobiles, body constraints, and data manipulation tools.
Kinetic Blocks: Actuated Constructive Assembly for Interaction and Display BIBAFull-Text 341-349
  Philipp Schoessler; Daniel Windham; Daniel Leithinger; Sean Follmer; Hiroshi Ishii
Pin-based shape displays not only give physical form to digital information, they have the inherent ability to accurately move and manipulate objects placed on top of them. In this paper we focus on such object manipulation: we present ideas and techniques that use the underlying shape change to give kinetic ability to otherwise inanimate objects. First, we describe the shape display's ability to assemble, disassemble, and reassemble structures from simple passive building blocks through stacking, scaffolding, and catapulting. A technical evaluation demonstrates the reliability of the presented techniques. Second, we introduce special kinematic blocks that are actuated and sensed through the underlying pins. These blocks translate vertical pin movements into other degrees of freedom like rotation or horizontal movement. This interplay of the shape display with objects on its surface allows us to render otherwise inaccessible forms, like overhangs, and enables richer input and output.
PERCs: Persistently Trackable Tangibles on Capacitive Multi-Touch Displays BIBAFull-Text 351-356
  Simon Voelker; Christian Cherek; Jan Thar; Thorsten Karrer; Christian Thoresen; Kjell Ivar Øvergård; Jan Borchers
Tangible objects on capacitive multi-touch surfaces are usually only detected while the user is touching them. When the user lets go of such a tangible, the system cannot distinguish whether the user just released the tangible, or picked it up and removed it from the surface. We introduce PERCs, persistent capacitive tangibles that "know" whether they are currently on a capacitive touch surface or not. This is achieved by adding a small field sensor to the tangible to detect the touch screen's own, weak electromagnetic touch detection probing signal. Thus, unlike previous designs, PERCs do not get filtered out over time by the adaptive signal filters of the touch screen. We provide a technical overview of the theory behind PERCs and our prototype construction, and we evaluate detection rates, timing performance, and positional and angular accuracy for PERCs on a variety of unmodified, commercially available multi-touch devices. Through their affordable circuitry and high accuracy, PERCs open up the potential for a variety of new applications that use tangibles on today's ubiquitous multi-touch devices.
SmartTokens: Embedding Motion and Grip Sensing in Small Tangible Objects BIBAFull-Text 357-362
  Mathieu Le Goc; Pierre Dragicevic; Samuel Huron; Jeremy Boy; Jean-Daniel Fekete
SmartTokens are small-sized tangible tokens that can sense multiple types of motion, multiple types of touch/grip, and send input events wirelessly as state-machine transitions. By providing an open platform for embedding basic sensing capabilities within small form-factors, SmartTokens extend the design space of tangible user interfaces. We describe the design and implementation of SmartTokens and illustrate how they can be used in practice by introducing a novel TUI design for event notification and personal task management.

Session 6A: Gaze

Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency BIBAFull-Text 363-372
  Yusuke Sugano; Andreas Bulling
Head-mounted eye tracking has significant potential for gaze-based applications such as life logging, mental health monitoring, or the quantified self. A neglected challenge for the long-term recordings required by these applications is that drift in the initial person-specific eye tracker calibration, for example caused by physical activity, can severely impact gaze estimation accuracy and thus system performance and user experience. We first analyse calibration drift on a new dataset of natural gaze data recorded using synchronised video-based and Electrooculography-based eye trackers of 20 users performing everyday activities in a mobile setting. Based on this analysis we present a method to automatically self-calibrate head-mounted eye trackers based on a computational model of bottom-up visual saliency. Through evaluations on the dataset we show that our method 1) is effective in reducing calibration drift in calibrated eye trackers and 2) given sufficient data, can achieve gaze estimation accuracy competitive with that of a calibrated eye tracker, without any manual calibration.
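A minimal sketch of the recalibration idea, using synthetic data: pairs of pupil positions and assumed gaze targets (here simple stand-ins for saliency-map peaks) are used to refit an affine gaze mapping by least squares. The paper's method is built on a computational model of bottom-up visual saliency and is considerably more involved.

```python
# Sketch: refit an eye-tracker calibration from "free" correspondences between pupil
# positions and the most salient scene point at the same moment. Mapping form and
# data are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(2)
pupil = rng.uniform(-1, 1, size=(200, 2))                       # pupil-center features
true_A = np.array([[520.0, 15.0], [-10.0, 380.0]])              # unknown drifted mapping
true_b = np.array([640.0, 360.0])
saliency_peaks = pupil @ true_A + true_b + rng.normal(0, 8, (200, 2))  # assumed gaze targets

X = np.hstack([pupil, np.ones((200, 1))])                       # affine design matrix
coeffs, *_ = np.linalg.lstsq(X, saliency_peaks, rcond=None)     # least-squares recalibration
print("recovered mapping:\n", np.round(coeffs, 1))
```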
Gaze-Shifting: Direct-Indirect Input with Pen and Touch Modulated by Gaze BIBAFull-Text 373-383
  Ken Pfeuffer; Jason Alexander; Ming Ki Chong; Yanxia Zhang; Hans Gellersen
Modalities such as pen and touch are associated with direct input but can also be used for indirect input. We propose to combine the two modes for direct-indirect input modulated by gaze. We introduce gaze-shifting as a novel mechanism for switching the input mode based on the alignment of manual input and the user's visual attention. Input in the user's area of attention results in direct manipulation whereas input offset from the user's gaze is redirected to the visual target. The technique is generic and can be used in the same manner with different input modalities. We show how gaze-shifting enables novel direct-indirect techniques with pen, touch, and combinations of pen and touch input.
Gaze vs. Mouse: A Fast and Accurate Gaze-Only Click Alternative BIBAFull-Text 385-394
  Christof Lutteroth; Moiz Penkar; Gerald Weber
Eye gaze tracking is a promising input method which is gradually finding its way into the mainstream. An obvious question to arise is whether it can be used for point-and-click tasks, as an alternative for mouse or touch. Pointing with gaze is both fast and natural, although its accuracy is limited. There are still technical challenges with gaze tracking, as well as inherent physiological limitations. Furthermore, providing an alternative to clicking is challenging.
   We are considering use cases where input based purely on gaze is desired, and the click targets are discrete user interface (UI) elements which are too small to be reliably resolved by gaze alone, e.g., links in hypertext. We present Actigaze, a new gaze-only click alternative which is fast and accurate for this scenario. A clickable user interface element is selected by dwelling on one of a set of confirm buttons, based on two main design contributions: First, the confirm buttons stay on fixed positions with easily distinguishable visual identifiers such as colors, enabling procedural learning of the confirm button position. Secondly, UI elements are associated with confirm buttons through the visual identifiers in a way which minimizes the likelihood of inadvertent clicks. We evaluate two variants of the proposed click alternative, comparing them against the mouse and another gaze-only click alternative.
GazeProjector: Accurate Gaze Estimation and Seamless Gaze Interaction Across Multiple Displays BIBAFull-Text 395-404
  Christian Lander; Sven Gehring; Antonio Krüger; Sebastian Boring; Andreas Bulling
Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy in such situations remains a significant challenge. In this paper, we present GazeProjector, a system that combines (1) natural feature tracking on displays to determine the mobile eye tracker's position relative to a display with (2) accurate point-of-gaze estimation. GazeProjector allows for seamless gaze estimation and interaction on multiple displays of arbitrary sizes independently of the user's position and orientation to the display. In a user study with 12 participants we compare GazeProjector to established methods (here: visual on-screen markers and a state-of-the-art video-based motion capture system). We show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without recalibration for each variation. Our system represents an important step towards the vision of pervasive gaze-based interfaces.
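A minimal sketch of the natural-feature-tracking idea, assuming OpenCV and placeholder image paths: features are matched between the scene-camera frame and the current display content, a homography is estimated, and the point of gaze is projected into display coordinates. The paper's full pipeline is more sophisticated.

```python
# Sketch: estimate where the scene camera sees the display via feature matching and a
# homography, then map the point of gaze into display coordinates. Image paths and the
# gaze point are placeholders.
import cv2
import numpy as np

scene = cv2.imread("scene_camera_frame.png", cv2.IMREAD_GRAYSCALE)
screen = cv2.imread("display_content.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
kp_s, des_s = orb.detectAndCompute(scene, None)
kp_d, des_d = orb.detectAndCompute(screen, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_s, des_d), key=lambda m: m.distance)[:100]

src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_d[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

gaze_in_scene = np.float32([[[420.0, 310.0]]])          # point of gaze in the camera frame
gaze_on_display = cv2.perspectiveTransform(gaze_in_scene, H)
print("gaze in display coordinates:", gaze_on_display.ravel())
```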

Session 6B: Pushing Virtual and Physical Envelopes

Virtual Replicas for Remote Assistance in Virtual and Augmented Reality BIBAFull-Text 405-415
  Ohan Oda; Carmine Elvezio; Mengu Sukan; Steven Feiner; Barbara Tversky
In many complex tasks, a remote subject-matter expert may need to assist a local user to guide actions on objects in the local user's environment. However, effective spatial referencing and action demonstration in a remote physical environment can be challenging. We introduce two approaches that use Virtual Reality (VR) or Augmented Reality (AR) for the remote expert, and AR for the local user, each wearing a stereo head-worn display. Both approaches allow the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. This can be especially useful for parts that are occluded or difficult to access. In one approach, the expert points in 3D to portions of virtual replicas to annotate them. In another approach, the expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains, comparing both approaches to an approach in which the expert uses a 2D tablet-based drawing system similar to ones developed for prior work on remote assistance. The study showed the 3D demonstration approach to be faster than the others. In addition, the 3D pointing approach was faster than the 2D tablet in the case of a highly trained expert.
TurkDeck: Physical Virtual Reality Based on People BIBAFull-Text 417-426
  Lung-Pan Cheng; Thijs Roumen; Hannes Rantzsch; Sven Köhler; Patrick Schmidt; Robert Kovacs; Johannes Jasper; Jonas Kemper; Patrick Baudisch
TurkDeck is an immersive virtual reality system that reproduces not only what users see and hear, but also what users feel. TurkDeck produces haptic sensations using props, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. Unlike previous work on prop-based virtual reality, however, TurkDeck allows creating arbitrarily large virtual worlds in finite space and with a finite set of physical props. The key idea behind TurkDeck is that it creates these physical representations on the fly by having a group of human workers present and operate the props only when and where the user can actually reach them. TurkDeck manages these so-called "human actuators" by displaying visual instructions that tell them when and where to place props and how to actuate them. We demonstrate TurkDeck with an immersive 300 m² experience in a 25 m² physical space. We show how to simulate a wide range of physical objects and effects, including walls, doors, ledges, steps, beams, switches, stompers, portals, zip lines, and wind. In a user study, participants rated the realism/immersion of TurkDeck higher than a traditional prop-less baseline condition (4.9 vs. 3.6 on a 7-point Likert scale).
Protopiper: Physically Sketching Room-Sized Objects at Actual Scale BIBAFull-Text 427-436
  Harshit Agrawal; Udayan Umapathi; Robert Kovacs; Johannes Frohnhofen; Hsiang-Ting Chen; Stefanie Mueller; Patrick Baudisch
Physical sketching of 3D wireframe models, using a hand-held plastic extruder, allows users to explore the design space of 3D models efficiently. Unfortunately, the scale of these devices limits users' design explorations to small-scale objects. We present protopiper, a computer-aided, hand-held fabrication device that allows users to sketch room-sized objects at actual scale. The key idea behind protopiper is that it forms adhesive tape into tubes as its main building material, rather than extruded plastic or photopolymer lines. Since the resulting tubes are hollow they offer an excellent strength-to-weight ratio, and thus scale well to large structures. Since the tape is pre-coated with adhesive it allows connecting tubes quickly, unlike extruded plastic, which would require heating and cooling in the kilowatt range. We demonstrate protopiper's use through several demo objects, ranging from more constructive objects, such as furniture, to more decorative objects, such as statues. In our exploratory user study, 16 participants created objects based on their own ideas. They rated the device as "useful for creative exploration" and "fun to use", and noted that its ability to sketch at actual scale helped them judge fit.
RevoMaker: Enabling Multi-directional and Functionally-embedded 3D printing using a Rotational Cuboidal Platform BIBAFull-Text 437-446
  Wei Gao; Yunbo Zhang; Diogo C. Nazzetta; Karthik Ramani; Raymond J. Cipra
In recent years, 3D printing has gained significant attention from the maker community, academia, and industry to support low-cost and iterative prototyping of designs. Current unidirectional extrusion systems require printing sacrificial material to support printed features such as overhangs. Furthermore, integrating functions such as sensing and actuation into these parts requires additional steps and processes to create "functional enclosures", since design functionality cannot be easily embedded into prototype printing. All of these factors result in relatively high design iteration times. We present "RevoMaker", a self-contained 3D printer that creates direct out-of-the-printer functional prototypes, using less build material and with substantially less reliance on support structures. By modifying a standard low-cost FDM printer with a revolving cuboidal platform and printing partitioned geometries around cuboidal facets, we achieve a multidirectional additive prototyping process to reduce the print and support material use. Our optimization framework considers various orientations and sizes for the cuboidal base. The mechanical, electronic, and sensory components are preassembled on the flattened laser-cut facets and enclosed inside the cuboid when closed. We demonstrate RevoMaker directly printing a variety of customized and fully-functional product prototypes, such as computer mice and toys, thus illustrating the new affordances of 3D printing for functional product design.

Session 7A: Wearable and Mobile Interactions

NanoStylus: Enhancing Input on Ultra-Small Displays with a Finger-Mounted Stylus BIBAFull-Text 447-456
  Haijun Xia; Tovi Grossman; George Fitzmaurice
Due to their limited input area, ultra-small devices, such as smartwatches, are even more prone to occlusion and the fat finger problem than their larger counterparts, such as smartphones, tablets, and tabletop displays. We present NanoStylus -- a finger-mounted fine-tip stylus that enables fast and accurate pointing on a smartwatch with almost no occlusion. The NanoStylus is built from the circuitry of an active capacitive stylus, and mounted within a custom 3D-printed thimble-shaped housing unit. A sensor strip is mounted on each side of the device to enable additional gestures. A user study shows that NanoStylus reduces error rate by 80% compared to traditional touch interaction and by 45% compared to a traditional stylus. This high precision pointing capability, coupled with the implemented gesture sensing, gives us the opportunity to explore a rich set of interactive applications on a smartwatch form factor.
Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements BIBAFull-Text 457-466
  Augusto Esteves; Eduardo Velloso; Andreas Bulling; Hans Gellersen
We introduce Orbits, a novel gaze interaction technique that enables hands-free input on smart watches. The technique relies on moving controls to leverage the smooth pursuit movements of the eyes and to detect whether, and at which control, the user is looking. In Orbits, controls include targets that move in a circular trajectory on the face of the watch, and can be selected by following the desired one with the eyes for a small amount of time. We conducted two user studies to assess the technique's recognition and robustness, which demonstrated how Orbits is robust against false positives triggered by natural eye movements and how it presents a hands-free, high accuracy way of interacting with smart watches using off-the-shelf devices. Finally, we developed three example interfaces built with Orbits: a music player, a notifications face plate and a missed call menu. Despite relying on moving controls -- very unusual in current HCI interfaces -- these were generally well received by participants in a third and final study.
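The abstract does not detail how gaze is matched to a moving control; smooth-pursuit selection techniques of this kind commonly correlate the gaze trajectory with each target's trajectory over a sliding window. The sketch below assumes that correlation-based approach; the threshold, window length and min-of-both-axes criterion are illustrative choices, not necessarily Orbits' detector.

```python
import numpy as np

def pursuit_match(gaze_xy, targets_xy, threshold=0.8):
    """Return the index of the orbiting target whose trajectory best
    correlates with the gaze trajectory over the current window, or None.

    gaze_xy:    (N, 2) array of gaze samples
    targets_xy: list of (N, 2) arrays, one per moving control
    The Pearson-correlation criterion is an assumption made here for
    illustration; the paper may use a different detector."""
    best, best_score = None, threshold
    for i, t in enumerate(targets_xy):
        rx = np.corrcoef(gaze_xy[:, 0], t[:, 0])[0, 1]
        ry = np.corrcoef(gaze_xy[:, 1], t[:, 1])[0, 1]
        score = min(rx, ry)          # gaze must follow the target on both axes
        if score > best_score:
            best, best_score = i, score
    return best

# Example: noisy gaze following target 0 on a circular trajectory.
phase = np.linspace(0, 2 * np.pi, 60)
target0 = np.stack([np.cos(phase), np.sin(phase)], axis=1)
target1 = np.stack([np.cos(phase + np.pi), np.sin(phase + np.pi)], axis=1)
gaze = target0 + np.random.normal(0, 0.05, target0.shape)
print(pursuit_match(gaze, [target0, target1]))   # -> 0
```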
Candid Interaction: Revealing Hidden Mobile and Wearable Computing Activities BIBAFull-Text 467-476
  Barrett Ens; Tovi Grossman; Fraser Anderson; Justin Matejka; George Fitzmaurice
The growth of mobile and wearable technologies has often made it difficult to understand what people in our surroundings are doing with their technology. In this paper, we introduce the concept of candid interaction: techniques for providing awareness about our mobile and wearable device usage to others in the vicinity. We motivate and ground this exploration through a survey on current attitudes toward device usage during interpersonal encounters. We then explore a design space for candid interaction through seven prototypes that leverage a wide range of technological enhancements, such as Augmented Reality, shape memory muscle wire, and wearable projection. Preliminary user feedback on our prototypes highlights the trade-offs between the benefits of sharing device activity and the need to protect user privacy.
Sensing Tablet Grasp + Micro-mobility for Active Reading BIBAFull-Text 477-487
  Dongwook Yoon; Ken Hinckley; Hrvoje Benko; François Guimbretière; Pourang Irani; Michel Pahud; Marcel Gavriliu
The orientation and repositioning of physical artefacts (such as paper documents) to afford shared viewing of content, or to steer the attention of others to specific details, is known as micro-mobility. But the role of grasp in micro-mobility has rarely been considered, much less sensed by devices. We therefore employ capacitive grip sensing and inertial motion to explore the design space of combined grasp + micro-mobility by considering three classes of technique in the context of active reading. Single user, single device techniques support grip-influenced behaviors such as bookmarking a page with a finger, but combine this with physical embodiment to allow flipping back to a previous location. Multiple user, single device techniques, such as passing a tablet to another user or working side-by-side on a single device, add fresh nuances of expression to co-located collaboration. And single user, multiple device techniques afford facile cross-referencing of content across devices. Founded on observations of grasp and micro-mobility, these techniques open up new possibilities for both individual and collaborative interaction with electronic documents.

Session 7B: Neurons, Affect, Ambiguity

DataTone: Managing Ambiguity in Natural Language Interfaces for Data Visualization BIBAFull-Text 489-500
  Tong Gao; Mira Dontcheva; Eytan Adar; Zhicheng Liu; Karrie G. Karahalios
Answering questions with data is a difficult and time-consuming process. Visual dashboards and templates make it easy to get started, but asking more sophisticated questions often requires learning a tool designed for expert analysts. Natural language interaction allows users to ask questions directly in complex programs without having to learn how to use an interface. However, natural language is often ambiguous. In this work we propose a mixed-initiative approach to managing ambiguity in natural language interfaces for data visualization. We model ambiguity throughout the process of turning a natural language query into a visualization and use algorithmic disambiguation coupled with interactive ambiguity widgets. These widgets allow the user to resolve ambiguities by surfacing system decisions at the point where the ambiguity matters. Corrections are stored as constraints and influence subsequent queries. We have implemented these ideas in a system, DataTone. In a comparative study, we find that DataTone is easy to learn and lets users ask questions without worrying about syntax and proper question form.
On Sounder Ground: CAAT, a Viable Widget for Affective Reaction Assessment BIBAFull-Text 501-510
  Bruno Cardoso; Osvaldo Santos; Teresa Romão
The reliable assessment of affective reactions to stimuli is paramount in a variety of scientific fields, including HCI (Human-Computer Interaction). Because emotional states vary over time, however, quick measurements of emotion are needed, and new tools for the quick assessment of affective states have been developed to address this need. In this work, we explore the CAAT (Circumplex Affective Assessment Tool), an instrument with a unique design in the scope of affect assessment -- a graphical control element -- that makes it amenable to seamless integration in user interfaces. We briefly describe the CAAT and present a multi-dimensional evaluation that evidences the tool's viability. We have assessed its test-retest reliability, construct validity and quickness of use, by collecting data through an unsupervised, web-based user study. Results show high test-retest reliability, evidence the tool's construct validity and confirm its quickness of use, making it a good fit for longitudinal studies and systems requiring quick assessments of emotional reactions.
Anger-based BCI Using fNIRS Neurofeedback BIBAFull-Text 511-521
  Gabor Aranyi; Fred Charles; Marc Cavazza
Functional near-infrared spectroscopy (fNIRS) holds increasing potential for Brain-Computer Interfaces (BCI) due to its portability, ease of application, robustness to movement artifacts, and relatively low cost. The use of fNIRS to support the development of affective BCI has received comparatively less attention, despite the role played by the prefrontal cortex in affective control, and the appropriateness of fNIRS to measure prefrontal activity. We present an active, fNIRS-based neurofeedback (NF) interface, which uses differential changes in oxygenation between the left and right sides of the dorsolateral prefrontal cortex to operationalize BCI input. The system is activated by users generating a state of anger, which has been previously linked to increased left prefrontal asymmetry. We have incorporated this NF interface into an experimental platform adapted from a virtual 3D narrative, in which users can express anger at a virtual character perceived as evil, causing the character to disappear progressively. Eleven subjects used the system and were able to successfully perform NF despite minimal training. Extensive analysis confirms that success was associated with the intent to express anger. This has positive implications for the design of affective BCI based on prefrontal asymmetry.
Leveraging Dual-Observable Input for Fine-Grained Thumb Interaction Using Forearm EMG BIBAFull-Text 523-528
  Donny Huang; Xiaoyi Zhang; T. Scott Saponas; James Fogarty; Shyamnath Gollakota
We introduce the first forearm-based EMG input system that can recognize fine-grained thumb gestures, including left swipes, right swipes, taps, long presses, and more complex thumb motions. EMG signals for thumb motions sensed from the forearm are quite weak and require significant training data to classify. We therefore also introduce a novel approach for minimally-intrusive collection of labeled training data for always-available input devices. Our dual-observable input approach is based on the insight that interaction observed by multiple devices allows recognition by a primary device (e.g., phone recognition of a left swipe gesture) to create labeled training examples for another (e.g., forearm-based EMG data labeled as a left swipe). We implement a wearable prototype with dry EMG electrodes, train with labeled demonstrations from participants using their own phones, and show that our prototype can recognize common fine-grained thumb gestures and user-defined complex gestures.
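A minimal sketch of the dual-observable labeling idea: a gesture recognized on the phone is paired with the EMG samples recorded around the same moment to form a labeled training example. The window length, alignment and mean/std features are assumptions made for illustration only.

```python
import numpy as np

def label_emg_from_phone(emg_stream, emg_times, phone_events, window=0.6):
    """Pair each phone-recognized gesture with the EMG samples recorded
    around the same moment, producing (features, label) training examples.

    emg_stream:   (N, channels) raw EMG samples
    emg_times:    (N,) sample timestamps in seconds
    phone_events: list of (timestamp, gesture_label) recognized by the phone
    """
    examples = []
    for t, label in phone_events:
        # Take the EMG samples falling inside a fixed window centered on the event.
        mask = (emg_times >= t - window / 2) & (emg_times <= t + window / 2)
        segment = emg_stream[mask]
        if len(segment) == 0:
            continue
        # Simple per-channel mean/std features; real features would differ.
        features = np.concatenate([segment.mean(axis=0), segment.std(axis=0)])
        examples.append((features, label))
    return examples
```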

Session 8A: Hands and Fingers

Improving Virtual Keyboards When All Finger Positions Are Known BIBAFull-Text 529-538
  Daewoong Choi; Hyeonjoong Cho; Joono Cheong
Current virtual keyboards are known to be slower and less convenient than physical QWERTY keyboards because they simply imitate the traditional QWERTY keyboards on touchscreens. In order to improve virtual keyboards, we consider two reasonable assumptions based on observation of skilled typists. First, the keys are already assigned to each finger for typing. Based on this assumption, we suggest restricting each finger to entering its pre-allocated keys only. Second, non-touching fingers move in correlation with the touching finger because of the intrinsic structure of human hands. To verify our assumptions, we conducted two experiments with skilled typists. In the first experiment, we statistically verified the second assumption. We then suggest a novel virtual keyboard based on our observations. In the second experiment, we show that our suggested keyboard outperforms existing virtual keyboards.
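The first assumption (keys are pre-allocated to fingers) can be expressed as a simple lookup that restricts each finger to its own keys. The mapping below follows conventional touch-typing practice and is an illustrative subset, not the keyboard evaluated in the paper.

```python
# Conventional touch-typing finger-to-key allocation (illustrative subset).
FINGER_KEYS = {
    "L_pinky": set("qaz"), "L_ring": set("wsx"), "L_middle": set("edc"),
    "L_index": set("rfvtgb"),
    "R_index": set("yhnujm"), "R_middle": set("ik"), "R_ring": set("ol"),
    "R_pinky": set("p"),
}

def allowed_keys(finger):
    """Restrict the candidate keys to those pre-allocated to the touching
    finger, as in the paper's first assumption."""
    return FINGER_KEYS.get(finger, set())

print(allowed_keys("L_index"))   # {'r', 'f', 'v', 't', 'g', 'b'}
```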
ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data BIBAFull-Text 539-548
  Xin Yi; Chun Yu; Mingrui Zhang; Sida Gao; Ke Sun; Yuanchun Shi
Ten-finger freehand mid-air typing is a potential solution for post-desktop interaction. However, the absence of tactile feedback, as well as the inability to accurately distinguish the tapping finger or target keys, is the major challenge for mid-air typing. In this paper, we present ATK, a novel interaction technique that enables freehand ten-finger typing in the air based on 3D hand tracking data. Our hypothesis is that expert typists are able to transfer their typing ability from physical keyboards to mid-air typing. We followed an iterative approach in designing ATK. We first empirically investigated users' mid-air typing behavior, and examined fingertip kinematics during tapping, correlated movement among fingers, and the 3D distribution of tapping endpoints. Based on the findings, we proposed a probabilistic tap detection algorithm, and augmented Goodman's input correction model to account for the ambiguity in distinguishing the tapping finger. We finally evaluated the performance of ATK with a 4-block study. Participants typed 23.0 WPM with an uncorrected word-level error rate of 0.3% in the first block, and later achieved 29.2 WPM in the last block without sacrificing accuracy.
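The abstract names a probabilistic tap-detection and input-correction pipeline without giving its form; the sketch below shows only the generic Bayesian step such decoders typically share, combining a per-key endpoint likelihood with a prior over keys. The Gaussian model, priors and coordinates are placeholders, not ATK's actual parameters.

```python
import numpy as np

# Hypothetical per-key 3D tap-endpoint models: mean position and isotropic sigma (meters).
KEY_MODELS = {
    "f": (np.array([0.00, 0.00, 0.02]), 0.01),
    "g": (np.array([0.02, 0.00, 0.02]), 0.01),
}
KEY_PRIOR = {"f": 0.6, "g": 0.4}   # e.g. from a language/correction model

def key_posterior(endpoint):
    """P(key | tap endpoint) ∝ P(endpoint | key) * P(key), using an isotropic
    Gaussian likelihood. This shows only the Bayesian combination step; the
    actual ATK models are richer."""
    scores = {}
    for key, (mu, sigma) in KEY_MODELS.items():
        d2 = np.sum((endpoint - mu) ** 2)
        scores[key] = np.exp(-d2 / (2 * sigma ** 2)) * KEY_PRIOR[key]
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

print(key_posterior(np.array([0.012, 0.0, 0.02])))  # ambiguous tap between 'f' and 'g'
```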
CyclopsRing: Enabling Whole-Hand and Context-Aware Interactions Through a Fisheye Ring BIBAFull-Text 549-556
  Liwei Chan; Yi-Ling Chen; Chi-Hao Hsieh; Rong-Hao Liang; Bing-Yu Chen
This paper presents CyclopsRing, a ring-style fisheye imaging wearable device that can be worn on hand webbings to enable whole-hand and context-aware interactions. Observing from a central position of the hand through a fisheye perspective, CyclopsRing sees not only the operating hand but also the environmental context involved in the hand-based interactions. Since CyclopsRing is a finger-worn device, it also allows users to fully preserve the skin feedback of their hands. This paper demonstrates a proof-of-concept device, reports its performance in hand-gesture recognition using a random decision forest (RDF) method, and, building on the gesture recognizer, presents a set of interaction techniques including on-finger pinch-and-slide input, in-air pinch-and-motion input, palm-writing input, and their interactions with the environmental context. The experiment achieved an 84.75% recognition rate of hand gesture input on a database of seven hand gestures collected from 15 participants. To our knowledge, CyclopsRing is the first ring-wearable device that supports whole-hand and context-aware interactions.
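Training a random decision forest on pre-extracted image features, as the abstract mentions, might look like the following scikit-learn sketch; the feature vectors, their dimensionality and the forest size are placeholders, and the actual CyclopsRing feature pipeline is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: feature vectors extracted from fisheye frames and one of
# seven gesture labels. Real features would come from the camera pipeline.
X = np.random.rand(700, 64)
y = np.repeat(np.arange(7), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```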
BackHand: Sensing Hand Gestures via Back of the Hand BIBAFull-Text 557-564
  Jhe-Wei Lin; Chiuan Wang; Yi Yao Huang; Kuan-Ting Chou; Hsuan-Yu Chen; Wei-Luan Tseng; Mike Y. Chen
In this paper, we explore using the back of the hand for sensing hand gestures, which interferes less than glove-based approaches and provides better recognition than sensing at the wrist or forearm. Our prototype, BackHand, uses an array of strain gauge sensors affixed to the back of the hand, and applies machine learning techniques to recognize a variety of hand gestures. We conducted a user study with 10 participants to better understand gesture recognition accuracy and the effects of sensing locations. Results showed that sensor reading patterns differ significantly across users, but are consistent for the same user. The leave-one-user-out accuracy is low at an average of 27.4%, but reaches 95.8% average accuracy for 16 popular hand gestures when personalized for each participant. The most promising location spans the 1/8~1/4 area between the metacarpophalangeal joints (MCP, the knuckles between the hand and fingers) and the head of the ulna (the tip of the wrist).
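The reported gap between leave-one-user-out and personalized accuracy corresponds to two evaluation protocols that can be reproduced with scikit-learn's LeaveOneGroupOut splitter; the sketch below uses synthetic data with assumed shapes purely to show the protocol difference.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

# Placeholder strain-gauge features: 10 users x 16 gestures x 5 repetitions, 8 sensors.
rng = np.random.default_rng(0)
X = rng.normal(size=(10 * 16 * 5, 8))
y = np.tile(np.repeat(np.arange(16), 5), 10)       # gesture labels
groups = np.repeat(np.arange(10), 16 * 5)          # which user produced each sample

# Leave-one-user-out: train on 9 users, test on the held-out user.
louo = cross_val_score(SVC(), X, y, groups=groups, cv=LeaveOneGroupOut())
print("leave-one-user-out accuracy:", louo.mean())

# Personalized: train and test within a single user's own data.
user0 = groups == 0
print("within-user accuracy:", cross_val_score(SVC(), X[user0], y[user0], cv=5).mean())
```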

Session 8B: Fabrication 3 -- Complex Shapes and Properties

MoveableMaker: Facilitating the Design, Generation, and Assembly of Moveable Papercraft BIBAFull-Text 565-574
  Michelle Annett; Tovi Grossman; Daniel Wigdor; George Fitzmaurice
In this work, we explore moveables, i.e., interactive papercraft that harness user interaction to generate visual effects. First, we present a survey of children's books that captured the state of the art of moveables. The results of this survey were synthesized into a moveable taxonomy and informed MoveableMaker, a new tool to assist users in designing, generating, and assembling moveable papercraft. MoveableMaker supports the creation and customization of a number of moveable effects and employs moveable-specific features including animated tooltips, automatic instruction generation, constraint-based rendering, techniques to reduce material waste, and so on. To understand how MoveableMaker encourages creativity and enhances the workflow when creating moveables, a series of exploratory workshops were conducted. The results of these explorations, including the content participants created and their impressions, are discussed, along with avenues for future research involving moveables.
LaserStacker: Fabricating 3D Objects by Laser Cutting and Welding BIBAFull-Text 575-582
  Udayan Umapathi; Hsiang-Ting Chen; Stefanie Mueller; Ludwig Wall; Anna Seufert; Patrick Baudisch
Laser cutters are useful for rapid prototyping because they are fast. However, they only produce planar 2D geometry. One approach to creating non-planar objects is to cut the object in horizontal slices and to stack and glue them. This approach, however, requires manual effort for the assembly and time for the glue to set, defeating the purpose of using a fast fabrication tool. We propose eliminating the assembly step with our system LaserStacker. The key idea is to use the laser cutter not only to cut but also to weld. Users place not one acrylic sheet, but a stack of acrylic sheets into their cutter. In a single process, LaserStacker cuts each individual layer to shape (through all layers above it), welds layers by melting material at their interface, and heals undesired cuts in higher layers. When users take the object out of the laser cutter, it is already assembled. To allow users to model stacked objects efficiently, we built an extension to a commercial 3D editor (SketchUp) that provides tools for defining which parts should be connected and which remain loose. When users hit the export button, LaserStacker converts the 3D model into cutting, welding, and healing instructions for the laser cutter. We show how LaserStacker allows making not only static objects, such as architectural models, but also objects with moving parts and simple mechanisms, such as scissors, a simple pinball machine, and a mechanical toy with gears.
HapticPrint: Designing Feel Aesthetics for Digital Fabrication BIBAFull-Text 583-591
  Cesar Torres; Tim Campbell; Neil Kumar; Eric Paulos
Digital fabrication has enabled massive creativity in hobbyist communities and professional product design. These emerging technologies excel at realizing an arbitrary shape or form; however, these objects are often rigid and lack the feel desired by designers. We aim to enable physical haptic design in passive 3D printed objects. This paper identifies two core areas for extending physical design into digital fabrication: designing the external and internal haptic characteristics of an object. We present HapticPrint as a pair of design tools to easily modify the feel of a 3D model. Our external tool maps textures and UI elements onto arbitrary shapes, and our internal tool modifies the internal geometry of models for novel compliance and weight characteristics. We demonstrate the value of HapticPrint with a range of applications that expand the aesthetics of feel, usability, and interactivity in 3D artifacts.
3D Printed Hair: Fused Deposition Modeling of Soft Strands, Fibers, and Bristles BIBAFull-Text 593-597
  Gierad Laput; Xiang 'Anthony' Chen; Chris Harrison
We introduce a technique for furbricating 3D printed hair, fibers and bristles, by exploiting the stringing phenomena inherent in 3D printers using fused deposition modeling. Our approach offers a range of design parameters for controlling the properties of single strands and also of hair bundles. We further detail a list of post-processing techniques for refining the behavior and appearance of printed strands. We provide several examples of output, demonstrating the immediate feasibility of our approach using a low cost, commodity printer. Overall, this technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware.

Session 9A: Online Education

Codeopticon: Real-Time, One-To-Many Human Tutoring for Computer Programming BIBAFull-Text 599-608
  Philip J. Guo
One-on-one tutoring from a human expert is an effective way for novices to overcome learning barriers in complex domains such as computer programming. But there are usually far fewer experts than learners. To enable a single expert to help more learners at once, we built Codeopticon, an interface that enables a programming tutor to monitor and chat with dozens of learners in real time. Each learner codes in a workspace that consists of an editor, compiler, and visual debugger. The tutor sees a real-time view of each learner's actions on a dashboard, with each learner's workspace summarized in a tile. At a glance, the tutor can see how learners are editing and debugging their code, and what errors they are encountering. The dashboard automatically reshuffles tiles so that the most active learners are always in the tutor's main field of view. When the tutor sees that a particular learner needs help, they can open an embedded chat window to start a one-on-one conversation. A user study showed that 8 first-time Codeopticon users successfully tutored anonymous learners from 54 countries in a naturalistic online setting. On average, in a 30-minute session, each tutor monitored 226 learners, started 12 conversations, exchanged 47 chats, and helped 2.4 learners.
Foobaz: Variable Name Feedback for Student Code at Scale BIBAFull-Text 609-617
  Elena L. Glassman; Lyla Fischer; Jeremy Scott; Robert C. Miller
Traditional feedback methods, such as hand-grading student code for substance and style, are labor intensive and do not scale. We created a user interface that addresses feedback at scale for a particular and important aspect of code quality: variable names. We built this user interface on top of an existing back-end that distinguishes variables by their behavior in the program. Our interface therefore allows teachers to comment not only on poor variable names but also on names that mislead the reader about the variable's role in the program. We ran two user studies in which 10 teachers and 6 students created and received feedback, respectively. The interface helped teachers give personalized variable name feedback on thousands of student solutions from an edX introductory programming MOOC. In the second study, students composed solutions to the same programming assignments and immediately received personalized quizzes composed by teachers in the previous user study.
These Aren't the Commands You're Looking For: Addressing False Feedforward in Feature-Rich Software BIBAFull-Text 619-628
  Benjamin Lafreniere; Parmit K. Chilana; Adam Fourney; Michael A. Terry
The names, icons, and tooltips of commands in feature-rich software are an important source of guidance when locating and selecting amongst commands. Unfortunately, these cues can mislead users into believing that a command is appropriate for a given task when another command would be more appropriate, resulting in wasted time and frustration. In this paper, we present command disambiguation techniques that inform the user of alternative commands before, during, and after an incorrect command has been executed. To inform the design of these techniques, we define categories of false-feedforward errors caused by misleading interface cues, and identify causes for each. Our techniques are the first designed explicitly to solve this problem in feature-rich software. A user study showed enthusiasm for the techniques, and revealed their potential to play a key role in the learning of feature-rich software.

Session 9B: Pens, Mice and Sensor Strips

Looking through the Eye of the Mouse: A Simple Method for Measuring End-to-end Latency using an Optical Mouse BIBAFull-Text 629-636
  Géry Casiez; Stéphane Conversy; Matthieu Falce; Stéphane Huot; Nicolas Roussel
We present a simple method for measuring end-to-end latency in graphical user interfaces. The method works with most optical mice and allows accurate, real-time latency measurements up to 5 times per second. In addition, the technique allows easy insertion of probes at different places in the system -- i.e., mouse event listeners -- to investigate the sources of latency. After presenting the measurement method and our methodology, we detail the measures we performed on different systems, toolkits and applications. Results show that latency is affected by the operating system and system load. Substantial differences are found between C++/GLUT and C++/Qt or Java/Swing implementations, as well as between web browsers.
Joint 5D Pen Input for Light Field Displays BIBAFull-Text 637-647
  James Tompkin; Samuel Muff; James McCann; Hanspeter Pfister; Jan Kautz; Marc Alexa; Wojciech Matusik
Light field displays allow viewers to see view-dependent 3D content as if looking through a window; however, existing work on light field display interaction is limited. Yet, they have the potential to parallel 2D pen and touch screen systems, which present a joint input and display surface for natural interaction. We propose a 4D display and interaction space using a dual-purpose lenslet array, which combines light field display and light field pen sensing, and allows us to estimate the 3D position and 2D orientation of the pen. This method is simple, fast (150Hz), with position accuracy of 2-3mm and precision of 0.2-0.6mm from 0-350mm away from the lenslet array, and orientation accuracy of 2 degrees and precision of 0.2-0.3 degrees within a 45 degree field of view. Further, we 3D print the lenslet array with embedded baffles to reduce out-of-bounds cross-talk, and use an optical relay to allow interaction behind the focal plane. We demonstrate our joint display/sensing system with interactive light field painting.
SensorTape: Modular and Programmable 3D-Aware Dense Sensor Network on a Tape BIBAFull-Text 649-658
  Artem Dementyev; Hsin-Liu (Cindy) Kao; Joseph A. Paradiso
SensorTape is a modular and dense sensor network in the form factor of a tape. SensorTape is composed of interconnected and programmable sensor nodes on a flexible electronics substrate. Each node can sense its orientation with an inertial measurement unit, allowing deformation self-sensing of the whole tape. Nodes also sense proximity using time-of-flight infrared. We developed a network architecture to automatically determine the location of each sensor node as SensorTape is cut and rejoined, and an intuitive graphical interface to program the tape. Our user study suggested that SensorTape enables users with different skill sets to intuitively create and program large sensor network arrays. We developed diverse applications ranging from wearables to home sensing, demonstrating the low deployment effort required of the user. We showed how SensorTape could be produced at scale using current technologies and built a 2.3-meter-long prototype.
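The abstract mentions a network architecture that re-determines each node's location after the tape is cut and rejoined; one common way to do this on a daisy-chained bus is sequential enumeration along the chain, sketched below. This scheme is an assumption for illustration, not necessarily SensorTape's actual protocol.

```python
class Node:
    """A sensor node with a link to the next node along the tape."""
    def __init__(self, node_id, next=None):
        self.node_id, self.next, self.assigned_position = node_id, next, None

def enumerate_chain(first_node):
    """Walk the daisy chain of nodes, assigning each one an index that doubles
    as its position along the tape. Re-running this after every cut or rejoin
    keeps positions consistent with the current physical chain."""
    positions = {}
    node, index = first_node, 0
    while node is not None:
        positions[node.node_id] = index
        node.assigned_position = index
        node, index = node.next, index + 1
    return positions

# Example: a three-node tape after a cut-and-rejoin.
tape = Node("a", Node("b", Node("c")))
print(enumerate_chain(tape))   # {'a': 0, 'b': 1, 'c': 2}
```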
FlexiBend: Enabling Interactivity of Multi-Part, Deformable Fabrications Using Single Shape-Sensing Strip BIBAFull-Text 659-663
  Chin-yu Chien; Rong-Hao Liang; Long-Fei Lin; Liwei Chan; Bing-Yu Chen
This paper presents FlexiBend, an easily installable shape-sensing strip that enables interactivity of multi-part, deformable fabrications. The flexible sensor strip is composed of a dense linear array of strain gauges, therefore it has shape sensing capability. After installation, FlexiBend can simultaneously sense user inputs in different parts of a fabrication or even capture the geometry of a deformable fabrication.

Closing Keynote Address

Machine Intelligence and Human Intelligence BIBAFull-Text 665
  Blaise Aguera y Arcas
There has been a stellar rise in computational power since 2006, thanks in part to GPUs; yet today, as an intelligent species, we are essentially singular. There are of course some other brainy species, like chimpanzees, dolphins, crows and octopuses, but if anything they only emphasize our unique position on Earth -- as animals richly gifted with self-awareness, language, abstract thought, art, mathematical capability, science, technology and so on. Many of us have staked our entire self-concept on the idea that to be human is to have a mind, and that minds are the unique province of humans. For those of us who are not religious, this could be interpreted as the last bastion of dualism. Our economic, legal and ethical systems are also implicitly built around this idea. Now, we're well along the road to really understanding the fundamental principles of how a mind can be built, and Moore's Law will put brain-scale computing within reach this decade. (We need to put some asterisks next to Moore's Law, since we are already running up against certain limits in computational scale using our present-day approaches, but I'll stand behind the broader statement.) In this talk I will discuss the relationships between engineered neurally inspired systems and brains today, between humans and machines tomorrow, and how these relationships will alter user interfaces, software and technology.

UIST 2015-11-08 Volume 2

Doctoral Symposium

Responsive Facilitation of Experiential Learning Through Access to Attentional State BIBAFull-Text 1-4
  Scott W. Greenwald
The planned thesis presents a vision of the future of learning, where learners explore environments, physical and virtual, in a curiosity-driven or intrinsically motivated way, and receive contextual information from a companion facilitator or teacher. Learners are instrumented with sensors that convey their cognitive and attentional state to the companion, who can then accurately judge what is interesting or relevant, and when is a good moment to jump in. I provide a broad definition of the possible types of sensor input as well as the modalities of intervention, and then present a specific proof-of-concept system that uses gaze behavior as a means of communication between the learner and a human companion.
Reconfiguring and Fabricating Special-Purpose Tangible Controls BIBAFull-Text 5-8
  Raf Ramakers
Unlike regular interfaces on touch screens or desktop computers, tangible user interfaces allow for more physically rich interactions that better use the capacity of our motor system. On the flip side, the physicality of tangibles comes with rigidity. This makes it hard to (1) use tangibles in systems that require a variety of controls and interaction styles, and (2) make changes to physical interfaces once manufactured. In my research, I explore techniques that allow users to reconfigure and fabricate tangible interfaces in order to mitigate these issues.
Supporting Collaborative Innovation at Scale BIBAFull-Text 9-12
  Pao Siangliulue
Emerging online innovation platforms have enabled large groups of people to collaborate and generate ideas together in ways that were not possible before. However, these platforms also introduce new challenges in finding inspiration from a large number of ideas, and coordinating the collective effort. In my dissertation, I address the challenges of large scale idea generation platforms by developing methods and systems for helping people make effective use of each other's ideas, and for orchestrating collective effort to reduce redundancy and increase the quality and breadth of generated ideas.
Wait-Learning: Leveraging Wait Time for Education BIBAFull-Text 13-16
  Carrie J. Cai
Competing priorities in daily life make it difficult for those with a casual interest in learning to set aside time for regular practice. Yet, learning often requires significant time and effort, with repeated exposures to learning material on a recurring basis. Despite the struggle to find time for learning, there are numerous times in a day that are wasted due to micro-waiting. In my research, I develop systems for wait-learning, leveraging wait time for education. Combining wait time with productive work opens up a new class of software systems that overcomes the problem of limited time while addressing the frustration often associated with waiting. My research tackles several challenges in learning and task management, such as identifying which waiting moments to leverage; how to encourage learning unobtrusively; how to integrate learning across a diversity of waiting moments; and how to extend wait-learning to more complex domains. In the development process, I hope to understand how to manage these waiting moments, and describe essential design principles for wait-learning systems.
From Papercraft to Paper Mechatronics: Exploring a New Medium and Developing a Computational Design Tool BIBAFull-Text 17-20
  Hyunjoo Oh
Paper Mechatronics is a novel interdisciplinary design medium, enabled by recent advances in craft technologies: the term refers to a reappraisal of traditional papercraft in combination with accessible mechanical, electronic, and computational elements. I am investigating the design space of paper mechatronics as a new hands-on medium by developing a series of examples and building a computational tool, FoldMecha, to support non-experts in designing and constructing their own paper mechatronics models. This paper describes how I used the tool to create two kinds of paper mechatronics models -- walkers and flowers -- and discusses next steps.
Enriching Online Classroom Communication with Collaborative Multi-Modal Annotations BIBAFull-Text 21-24
  Dongwook Yoon
In massive open online courses, peer discussion is a scalable solution for offering interactive and engaging learning experiences to a large number of students. On the other hand, the quality of communication mediated through online discussion tools, such as discussion forums, is far less expressive than that of face-to-face communication. As a solution, I present RichReview, a multi-modal annotation system through which distant students can exchange ideas using versatile combinations of voice, text, and pointing gestures. A series of lab and deployment studies of RichReview suggested that the expressive multimedia mixture and the lightweight audio browsing feature help students better understand commentators' intentions. For a large-scale deployment, I redesigned RichReview as a web applet in edX's courseware framework. By deploying the system at scale, I will investigate (1) the optimal group assignment scheme that maximizes the overall diversity of group members, (2) educational data mining applications based on user-generated rich discussion data, and (3) the impact of rich discussion on students' retention of knowledge. Throughout these studies, I will argue that a multi-modal anchored digital document annotation system enables rich online peer discussion at scale.
Using Personal Devices to Facilitate Multi-user Interaction with Large Display Walls BIBAFull-Text 25-28
  Ulrich von Zadow
Large display walls and personal devices such as smartphones have complementary characteristics. While large displays are well-suited to multi-user interaction (potentially with complex data), they are inherently public and generally cannot present an interface adapted to the individual user. However, effective multi-user interaction in many cases depends on the ability to tailor the interface, to interact without interfering with others, and to access and possibly share private data. The combination with personal devices facilitates exactly this. Multi-device interaction concepts enable data transfer and include moving parts of UIs to the personal device. In addition, hand-held devices can be used to present personal views to the user. Our work will focus on using personal devices for true multi-user interaction with interactive display walls. It will cover appropriate interaction techniques as well as the technical foundation, and will be validated with corresponding application cases.
Graphical Passwords for Older Computer Users BIBAFull-Text 29-32
  Nancy J. Carter
Computers and the internet have been challenging for many computer users over the age of 60. We conducted a survey of older users which revealed that the creation, management and recall of strong text passwords were among the more challenging aspects of modern technology. In practice, this user group based passwords on familiar facts such as family member names, pets, phone numbers and important personal dates. Graphical passwords formed from abstract graphical symbols or anonymous facial images are feasible, but harder for older computer users to grasp and recall. In this paper we describe initial results for our graphical password system based on recognition of culturally familiar facial images that are age-relevant to the life experiences of older users. Our goals are to design an easy-to-memorize graphical password system intended specifically for older users, and to achieve a level of password entropy comparable to traditional PINs and text passwords. We are also conducting a user study to demonstrate our technique and capture performance and recall metrics for comparison with traditional password systems.

Demonstrations

Scope+: A Stereoscopic Video See-Through Augmented Reality Microscope BIBAFull-Text 33-34
  Yu-Hsuan Huang; Tzu-Chieh Yu; Pei-Hsuan Tsai; Yu-Xiang Wang; Wan-ling Yang; Ming Ouhyoung
When using a conventional stereo microscope, users need to repeatedly move their head away from the eyepieces to access additional information, such as anatomical structures from an atlas. The same happens during microsurgery when surgeons want to check the patient's data again, and such disruptions can cause them to lose their target and their concentration. To solve this critical problem and to improve the user experience of the stereo microscope, we present Scope+, a stereoscopic video see-through augmented reality system. Scope+ is designed for biological procedures, education and surgical training. While performing biological procedures, for example dissection of a frog, an anatomical atlas is shown inside the head-mounted display (HMD), overlaid onto the magnified images. For education purposes, specimens are no longer silent under Scope+: when their body parts are pointed at with a marked stick, related animations or transparent-background videos merge with the real object and interact with observers. Surgeons who want to improve their microsurgery techniques can practice with Scope+, which provides complete foot-pedal control functions identical to a standard surgical microscope. Moreover, in cooperation with specially designed phantom models, the augmented reality system guides users through key steps of an operation, such as Continuous Curvilinear Capsulorhexis in cataract surgery. Because Scope+ adopts video see-through rather than optical see-through technology, remote observation via another Scope+ or a web application is possible. This feature can not only assist teachers during experiment classes, but also help researchers keep their eyes on their observables after work. An array mode, powered by a motor-driven stage plate, allows users to load multiple samples at the same time and to compare them quickly by switching between them with the foot pedal.
Creating a Mobile Head-mounted Display with Proprietary Controllers for Interactive Virtual Reality Content BIBAFull-Text 35-36
  Kunihiro Kato; Homei Miyashita
A method to create a mobile head-mounted display (HMD) with a proprietary controller for interactive virtual reality (VR) content is proposed. The proposed method uses an interface cartridge printed with a conductive pattern, which allows the user to operate a smartphone by touching the face of the mobile HMD. In addition, the user can easily create a mobile HMD and interface cartridge using a laser cutter and an inkjet printer. Changing the form of the conductive pattern allows the user to create a variety of controllers. The proposed method can realize an environment that delivers a variety of interactions with VR content.
Spotlights: Facilitating Skim Reading with Attention-Optimized Highlights BIBAFull-Text 37-38
  Byungjoo Lee; Antti Oulasvirta
This demo presents Spotlights, a technique to facilitate skim reading, i.e., the activity of rapidly comprehending long documents such as webpages or PDFs. Users mainly use continuous rate-based scrolling to skim. However, visual attention fails when scrolling rapidly due to the excessive number of objects and the brief exposure per object. Spotlights supports continuous scrolling at high speeds. It selects a small number of objects and raises them to transparent overlays (spotlights) in the viewer. Spotlights stay static for a prolonged time and then fade away. The technical contribution is a novel method for "brokering" the user's attentional resources in a way that guarantees sufficient attentional resources for some objects, even at very high scrolling rates. It facilitates visual attention by (1) decreasing the number of objects competing for divided attention and (2) ensuring sufficient processing time per object.
WearWrite: Orchestrating the Crowd to Complete Complex Tasks from Wearables BIBAFull-Text 39-40
  Michael Nebeling; Anhong Guo; Alexandra To; Steven Dow; Jaime Teevan; Jeffrey Bigham
Smartwatches are becoming increasingly powerful, but limited input makes completing complex tasks impractical. Our WearWrite system introduces a new paradigm for enabling a watch user to contribute to complex tasks, not through new hardware or input methods, but by directing a crowd to work on their behalf from their wearable device. WearWrite lets authors give writing instructions and provide bits of expertise and big picture directions from their smartwatch, while crowd workers actually write the document on more powerful devices. We used this approach to write three academic papers, and found it was effective at producing reasonable drafts.
Zensei: Augmenting Objects with Effortless User Recognition Capabilities through Bioimpedance Sensing BIBAFull-Text 41-42
  Munehiko Sato; Rohan S. Puri; Alex Olwal; Deepak Chandra; Ivan Poupyrev; Ramesh Raskar
As interactions with everyday handheld devices and objects become increasingly common, a more seamless and effortless identification and personalization technique will be essential to an uninterrupted user experience. In this paper, we present Zensei, a user identification and customization system using human body bioimpedance sensing through multiple electrodes embedded into everyday objects. Zensei provides an uninterrupted user-device personalization experience that is difficult to forge because it uses both the unique physiological and behavioral characteristics of the user. We demonstrate our measurement system in three exemplary device configurations that showcase different levels of constraint via environment-based, whole-body-based, and handheld-based identification scenarios. We evaluated Zensei's classification accuracy among 12 subjects on each configuration over 22 days of collected data and report our promising results.
Form Follows Function(): An IDE to Create Laser-cut Interfaces and Microcontroller Programs from Single Code Base BIBAFull-Text 43-44
  Jun Kato; Masataka Goto
During the development of physical computing devices, physical object models and programs for microcontrollers are usually created with separate tools and distinct files. As a result, it is difficult to track changes in hardware and software without discrepancies. Moreover, the software cannot directly access hardware metrics, and the hardware interface design cannot benefit from source code information either. This demonstration proposes a browser-based IDE named f3.js that enables development of both from a single JavaScript code base. The demonstration allows audiences to play with the f3.js IDE and showcases example applications such as laser-cut interfaces generated from the same code but with different parameters. Programmers can experience the full feature set, and designers can interact with preset projects with a mouse or touch to customize laser-cut interfaces. More information is available at http://f3js.org.
RFlow: User Interaction Beyond Walls BIBAFull-Text 45-46
  Hisham Bedri; Otkrist Gupta; Andrew Temme; Micha Feigin; Gregory Charvat; Ramesh Raskar
Current user interaction with optical gesture-tracking technologies suffers from occlusion, limiting the functionality to direct line-of-sight. We introduce RFlow, a compact, medium-range interface based on Radio Frequency (RF) that enables camera-free tracking of the position of a moving hand through drywall and other occluders. Our system uses Time of Flight (TOF) RF sensors and speed-based segmentation to localize the hand of a single user with 5 cm accuracy (as measured to the closest ground-truth point), enabling an interface which is not restricted to a training set.
MetaSpace: Full-body Tracking for Immersive Multiperson Virtual Reality BIBAFull-Text 47-48
  Misha Sra; Chris Schmandt
Most current virtual reality (VR) interactions are mediated by hand-held input devices or hand gestures, and they usually display only a partial representation of the user in the synthetic environment. We believe that representing the user as a full avatar controlled by the natural movements of the person in the real world will lead to a greater sense of presence in VR. Possible applications exist in various domains such as entertainment, therapy, travel, real estate, education, social interaction and professional assistance. In this demo, we present MetaSpace, a virtual reality system that allows co-located users to explore a VR world together by walking around in physical space. Each user's body is represented by an avatar that is dynamically controlled by their body movements. We achieve this by tracking each user's body with a Kinect device such that their physical movements are mirrored in the virtual world. Users can see their own avatar and the other person's avatar, allowing them to perceive and act intuitively in the virtual environment.
GaussStarter: Prototyping Analog Hall-Sensor Grids with Breadboards BIBAFull-Text 49-50
  Rong-Hao Liang; Han-Chih Kuo; Bing-Yu Chen
This work presents GaussStarter, a pluggable and tileable analog Hall-sensor grid module for easy and scalable breadboard prototyping. In terms of ease of use, the graspable units allow users to easily plug them onto or remove them from a breadboard. In terms of scalability, tiling the units on the breadboard easily expands the sensing area. A software development kit is also provided for designing applications based on this hardware module.
Enhanced Motion Robustness from ToF-based Depth Sensing Cameras BIBAFull-Text 51-52
  Wataru Yamada; Hiroyuki Manabe; Hiroshi Inamura
Depth sensing cameras that can acquire RGB and depth information are being widely used. They can expand and enhance various camera-based applications and are cheap yet powerful tools for human-computer interaction. RGB and depth sensing cameras have quite different key parameters, such as exposure time. We focus on the differences in their motion robustness: the RGB camera has relatively long exposure times, while those of a ToF (time-of-flight) based depth sensing camera are relatively short. An experiment on visual tag reading, one typical application, shows that depth sensing cameras can robustly decode moving tags. The proposed technique will yield robust tag reading, indoor localization, and color image stabilization while walking, jogging, or even glancing momentarily, without requiring any special additional devices.
Workload Assessment with eye Movement Monitoring Aided by Non-invasive and Unobtrusive Micro-fabricated Optical Sensors BIBAFull-Text 53-54
  Carlos C. Cortes Torres; Kota Sampei; Munehiko Sato; Ramesh Raskar; Norihisa Miki
The mental state or workload of a person is highly relevant when the person is executing delicate tasks, such as piloting an aircraft or operating a crane, because a high level of workload could prevent the person from accomplishing the task and lead to disastrous results. Some frameworks have been developed to assess workload and determine whether the person is capable of executing a new task. However, such methodologies are applied only after the operator has finished the task, and they are based on paper-and-pencil tests. Therefore, human-friendly devices that can assess workload in real time are in high demand. In this paper, we report a wearable device that correlates physical eye behavior with mental state for workload assessment.
Multi-Modal Peer Discussion with RichReview on edX BIBAFull-Text 55-56
  Dongwook Yoon; Piotr Mitros
In this demo, we present RichReview, a multi-modal peer discussion system, implemented as an XBlock in the edX courseware platform. The system brings richness similar to face-to-face communication into online learning at scale. With this demonstration, we discuss the system's scalable back-end architecture, semantic voice editing user interface, and a future research plan for the profile based group-assignment scheme.
BitDrones: Towards Levitating Programmable Matter Using Interactive 3D Quadcopter Displays BIBAFull-Text 57-58
  Calvin Rubens; Sean Braley; Antonio Gomes; Daniel Goc; Xujing Zhang; Juan Pablo Carrascal; Roel Vertegaal
In this paper, we present BitDrones, a platform for the construction of interactive 3D displays that utilize nano quadcopters as self-levitating tangible building blocks. Our prototype is a first step towards supporting interactive mid-air, tangible experiences with physical interaction techniques through multiple building blocks capable of physically representing interactive 3D data.
Methods of 3D Printing Micro-pillar Structures on Surfaces BIBAFull-Text 59-60
  Jifei Ou; Chin-Yi Cheng; Liang Zhou; Gershon Dublon; Hiroshi Ishii
This work presents a method of 3D printing hair-like structures on both flat and curved surfaces. It allows a user to design and fabricate hair geometry that is smaller than 100 microns. We built a software platform to let one quickly define a hair's angle, thickness, density, and height. The ability to fabricate customized hair-like structures expands the library of 3D-printable shapes. We then present several applications to show how the 3D-printed hair can be used for designing toy objects.
Dranimate: Rapid Real-time Gestural Rigging and Control of Animation BIBAFull-Text 61-62
  Ali Momeni; Zachary Rispoli
Dranimate is an interactive animation system that allows users to rapidly and intuitively rig and control animations based on a still image or drawing, using hand gestures. Dranimate combines two complementary methods of shape manipulation: bone-joint-based physics simulation, and the as-rigid-as-possible deformation algorithm. Dranimate also introduces a number of designed interactions that focus the user's attention on the animated content, as opposed to the computer keyboard or mouse.
Elastic Cursor and Elastic Edge: Applying Simulated Resistance to Interface Elements for Seamless Edge-scroll BIBAFull-Text 63-64
  Jinha Lee; Seungcheon Baek
We present elastic cursor and elastic edge, new interaction techniques for seamless edge-scroll. Through the use of light-weight physical simulations of elastic behavior on interface elements, we can improve precision, usability, and cueing on the use of edge-scroll in scrollable windows or screens, and make experiences more playful and easier to learn.

Posters

Hand Biometrics Using Capacitive Touchscreens BIBAFull-Text 67-68
  Robert Tartz; Ted Gooding
Biometric methods for authentication on mobile devices are becoming popular. Some methods such as face and voice biometrics are problematic in noisy mobile environments, while others such as fingerprint require specialized hardware to operate. We present a novel biometric authentication method that uses raw touch capacitance data captured from the hand touching a display. Performance results using a moderate sample size (N = 40) yielded an equal error rate (EER) of 2.5%, while a 1-month longitudinal study using a smaller sample (N = 10) yielded an EER of 2.3%. Overall, our results provide evidence for biometric uniqueness, permanence and user acceptance.
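The equal error rate (EER) reported here is the operating point where the false accept and false reject rates coincide; given genuine and impostor match scores it can be computed as in the following sketch (the synthetic score distributions are placeholders).

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return the rate at the point where the
    false reject rate (genuine scores below threshold) is closest to the
    false accept rate (impostor scores at or above threshold)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = 1.0, None
    for t in thresholds:
        frr = np.mean(genuine < t)
        far = np.mean(impostor >= t)
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer

# Synthetic scores: genuine comparisons score higher than impostor ones.
rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.1, 500)
impostor = rng.normal(0.5, 0.1, 500)
print("EER ≈", equal_error_rate(genuine, impostor))
```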
A Study on Grasp Recognition Independent of Users' Situations Using Built-in Sensors of Smartphones BIBAFull-Text 69-70
  Chanho Park; Takefumi Ogawa
A smartphone can be held in many hand postures depending on the user's situation. In order to provide an appropriate interface, it is important to know the user's hand posture. To recognize grasp postures independently of the user's situation, we use the smartphone's touchscreen together with its built-in gyroscope and accelerometer, and classify postures with a support vector machine (SVM). To evaluate our system, we describe the results of experiments in which users used the devices in a room and on a train. We found that our system could be made feasible as a personal-use system by improving the information obtained from the accelerometer. We also collected data while users were sitting in a room. Results showed that grasp recognition accuracies for 5 and 4 hand postures were 87.7% and 92.4%, respectively, when training and testing on 6 users.
TMotion: Embedded 3D Mobile Input using Magnetic Sensing Technique BIBAFull-Text 71-72
  Sang Ho Yoon; Ke Huo; Karthik Ramani
We present TMotion, a self-contained 3D input device that enables spatial interactions around a mobile device using a magnetic sensing technique. Using the mobile device's single magnetometer together with an inertial measurement unit (IMU), we track the 3D position of a permanent magnet embedded in the prototype. By numerically solving non-linear magnetic field equations with the orientation known from the IMU, we attain a tracking rate greater than 30Hz based solely on the mobile device's computation. We describe the working principle of TMotion and example applications illustrating its capability.
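   The abstract describes numerically solving the magnetic field equations given the magnet's orientation from the IMU. A simplified illustration under a point-dipole assumption (not the authors' solver; the known dipole moment and initial guess are assumptions) could use nonlinear least squares:

     import numpy as np
     from scipy.optimize import least_squares

     MU0 = 4e-7 * np.pi   # vacuum permeability (T*m/A)

     def dipole_field_at_sensor(p, m_vec):
         """Field at the magnetometer (origin) from a dipole at position p with moment m_vec."""
         r = -p                              # vector from dipole to sensor
         r_norm = np.linalg.norm(r)
         r_hat = r / r_norm
         return MU0 / (4 * np.pi * r_norm**3) * (3 * r_hat * np.dot(m_vec, r_hat) - m_vec)

     def solve_magnet_position(b_measured, m_vec, p_initial):
         """Estimate the magnet's 3D position from one field reading, with the dipole
         moment direction taken from the IMU and its magnitude calibrated offline."""
         residual = lambda p: dipole_field_at_sensor(p, m_vec) - b_measured
         return least_squares(residual, p_initial).x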
EMG Sensor-based Two-Hand Smart Watch Interaction BIBAFull-Text 73-74
  Yoonsik Yang; Seungho Chae; Jinwook Shim; Tack-Don Han
Smart watches have recently drawn increasing attention, and many products have been launched (the Samsung Gear series, Apple Watch, etc.). Since a smart watch is worn on the wrist, the device must be small and unobtrusive. As a result, its display is small and interaction is limited, and many studies have sought to overcome this limitation. In this paper, we propose a two-hand interaction technique that obtains hand-posture information from an electromyography (EMG) sensor attached to the arm and varies the smart watch's input interaction depending on the posture. The EMG sensor recognizes the user's hand posture, and the non-dominant hand is used for smart watch input, so a different function is executed for each posture. In this way, a smart watch with limited input methods gains a variety of interaction functions.
Investigating the "Wisdom of Crowds" at Scale BIBAFull-Text 75-76
  Alok Shankar Mysore; Vikas S. Yaligar; Imanol Arrieta Ibarra; Camelia Simoiu; Sharad Goel; Ramesh Arvind; Chiraag Sumanth; Arvind Srikantan; Bhargav HS; Mayank Pahadia; Tushar Dobha; Atif Ahmed; Mani Shankar; Himani Agarwal; Rajat Agarwal; Sai Anirudh-Kondaveeti; Shashank Arun-Gokhale; Aayush Attri; Arpita Chandra; Yogitha Chilukur; Sharath Dharmaji; Deepak Garg; Naman Gupta; Paras Gupta; Glincy Mary Jacob; Siddharth Jain; Shashank Joshi; Tarun Khajuria; Sameeksha Khillan; Sandeep Konam; Praveen Kumar-Kolla; Sahil Loomba; Rachit Madan; Akshansh Maharaja; Vidit Mathur; Bharat Munshi; Mohammed Nawazish; Venkata Neehar-Kurukunda; Venkat Nirmal-Gavarraju; Sonali Parashar; Harsh Parikh; Avinash Paritala; Amit Patil; Rahul Phatak; Mandar Pradhan; Abhilasha Ravichander; Krishna Sangeeth; Sreecharan Sankaranarayanan; Vibhor Sehgal; Ashrith Sheshan; Suprajha Shibiraj; Aditya Singh; Anjali Singh; Prashant Sinha; Pushkin Soni; Bipin Thomas; Kasyap Varma-Dattada; Sukanya Venkataraman; Pulkit Verma; Ishan Yelurwar
In a variety of problem domains, it has been observed that the aggregate opinions of groups are often more accurate than those of the constituent individuals, a phenomenon that has been termed the "wisdom of the crowd." Yet, perhaps surprisingly, there is still little consensus on how generally the phenomenon holds, how best to aggregate crowd judgements, and how social influence affects estimates. We investigate these questions by taking a meta wisdom of crowds approach. With a distributed team of over 100 student researchers across 17 institutions in the United States and India, we develop a large-scale online experiment to systematically study the wisdom of crowds effect for 1,000 different tasks in 50 subject domains. These tasks involve various types of knowledge (e.g., explicit knowledge, tacit knowledge, and prediction), question formats (e.g., multiple choice and point estimation), and inputs (e.g., text, audio, and video). To examine the effect of social influence, participants are randomly assigned to one of three different experiment conditions in which they see varying degrees of information on the responses of others. In this ongoing project, we are now preparing to recruit participants via Amazon's Mechanical Turk.
Effective Interactions for Personalizing Spatial Visualizations of Collections BIBAFull-Text 77-78
  Kenneth C. Arnold; Krzysztof Z. Gajos
Interactive spatial visualizations powered by machine learning will help us explore and understand large collections in meaningful ways, but little is yet known about the design space of interactions. We ran a pilot user study to compare two different interaction techniques: a "grouping" interaction adapted from interactive clustering, and an existing "positioning" interaction. We identified three important dimensions of the interaction design space that inform future design of more intuitive and expressive interactions.
Fix and Slide: Caret Navigation with Movable Background BIBAFull-Text 79-80
  Kenji Suzuki; Kazumasa Okabe; Ryuuki Sakamoto; Daisuke Sakamoto
We present a "Fix and Slide" technique, which is a concept to use a movable background to place a caret insertion point and to select text on a mobile device. Standard approach to select text on the mobile devices is touching to the text where a user wants to select, and sometimes pop-up menu is displayed and s/he choose "select" mode and then start to specify an area to be selected. A big problem is that the user's finger hides the area to select; this is called a "fat finger problem." We use the movable background to navigate a caret. First a user places a caret by tapping on a screen and then moves the background by touching and dragging on a screen. In this situation, the caret is fixed on the screen so that the user can move the background to navigate the caret where the user wants to move the caret. We implement the Fix and Slide technique on iOS device (iPhone) to demonstrate the impact of this text selection technique on small mobile devices.
LegionTools: A Toolkit + UI for Recruiting and Routing Crowds to Synchronous Real-Time Tasks BIBAFull-Text 81-82
  Mitchell Gordon; Jeffrey P. Bigham; Walter S. Lasecki
We introduce LegionTools, a toolkit and interface for managing large, synchronous crowds of online workers for experiments. This poster contributes the design and implementation of a state-of-the-art crowd management tool, along with a publicly-available, open-source toolkit that future system builders can use to coordinate synchronous crowds of online workers for their systems and studies.
   We describe the toolkit itself, along with the underlying design rationale, in order to make it clear to the community of system builders at UIST when and how this tool may be beneficial to their project. We also describe initial deployments of the system in which workers were synchronously recruited to support real-time crowdsourcing systems, including the largest synchronous recruitment and routing of workers from Mechanical Turk that we are aware of. While the version of LegionTools discussed here focuses on Amazon's Mechanical Turk platform, it can be easily extended to other platforms as APIs become available.
KickSoul: A Wearable System for Feet Interactions with Digital Devices BIBAFull-Text 83-84
  Xavier Benavides; Chang Long Zhu Jin; Pattie Maes; Joseph Paradiso
In this paper we present a wearable device that maps natural foot movements into inputs for digital devices. KickSoul consists of an insole with embedded sensors that tracks foot movements and triggers actions in surrounding devices. We present a novel approach to using the feet as input devices in mobile situations when the hands are busy, analyzing natural foot movements and their meaning before activating an action. This paper discusses different applications for this technology as well as the implementation of our prototype.
Capacitive Blocks: A Block System that Connects the Physical with the Virtual using Changes of Capacitance BIBAFull-Text 85-86
  Arika Yoshida; Buntarou Shizuki; Jiro Tanaka
We propose a block-stacking system based on capacitance. The system, called Capacitive Blocks, allows users to build 3D models in a virtual space by stacking physical blocks. The construction of the block-stacking system is simple, and its fundamental components, including the physical blocks, can be made with a 3D printer. Each block is a capacitor consisting of two layers of conductive plastic filament with a layer of non-conductive plastic filament between them. In this paper, we present a prototype of the block-stacking system and the mechanism that detects the height of a stack (i.e., the number of stacked blocks) by measuring its capacitance, which changes with the number of blocks.
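   Identical blocks stacked in a column behave roughly like capacitors in series, so the total capacitance falls as C/n for n stacked blocks. A minimal sketch of inferring stack height from a reading (the per-block capacitance value is illustrative, not taken from the paper):

     def stacked_capacitance(n_blocks, c_block=100e-12):
         """Series combination of n identical block capacitors (simplified model)."""
         return c_block / n_blocks

     def estimate_block_count(c_measured, c_block=100e-12, max_blocks=20):
         """Return the block count whose predicted capacitance best matches the measurement."""
         return min(range(1, max_blocks + 1),
                    key=lambda n: abs(stacked_capacitance(n, c_block) - c_measured))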
Haptic-enabled Active Bone-Conducted Sound Sensing BIBAFull-Text 87-88
  Yuya Okawa; Kentaro Takemura
In this study, we propose active bone-conducted sound sensing for estimating the joint angle of a finger while simultaneously serving as a haptic interface. To estimate the joint angle, an imperceptible vibration is input to the finger, and a perceptible vibration is additionally input to provide haptic feedback. The joint angle is estimated by switching the estimation model depending on the haptic feedback, and the average estimation error is within about seven degrees.
Perspective-dependent Indirect Touch Input for 3D Polygon Extrusion BIBAFull-Text 89-90
  Henri Palleis; Julie Wagner; Heinrich Hussmann
We present a two-handed indirect touch interaction technique for the extrusion of polygons within a 3D modeling tool that we have built for a horizontal/vertical dual touch screen setup. In particular, we introduce perspective-dependent touch gestures: using several graphical input areas on the horizontal display, the non-dominant hand navigates the virtual camera and thus continuously updates the spatial frame of reference within which the dominant hand performs extrusions with dragging gestures.
FoldMecha: Design for Linkage-Based Paper Toys BIBAFull-Text 91-92
  Hyunjoo Oh; Mark D. Gross; Michael Eisenberg
We present FoldMecha, a computational tool to help non-experts design and build paper mechanical toys. By customizing templates, a user can experiment with basic mechanisms, design their own model, and print and cut out a folding net to construct the toy. We used the tool to build two kinds of paper automata models: walkers and flowers.
Juggling the Effects of Latency: Software Approaches to Minimizing Latency in Dynamic Projector-Camera Systems BIBAFull-Text 93-94
  Jarrod Knibbe; Hrvoje Benko; Andrew D. Wilson
Projector-camera (pro-cam) systems afford a wide range of interactive possibilities, combining both natural and mixed-reality 3D interaction. However, the latency inherent in these systems can cause the projection to 'slip' off any moving target, so pro-cam systems have typically shied away from truly dynamic scenarios. We explore software-only techniques to reduce latency, considering the best results achievable with widely adopted commodity devices (e.g., 30Hz depth cameras and 60Hz projectors). We achieve 50% projection alignment on objects in free flight (a 34% improvement) and 69% alignment on dynamic human movement (a 40% improvement).
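   One common software-only mitigation in this setting, named here as a generic technique rather than the paper's specific method, is to extrapolate the tracked target's pose forward by the measured end-to-end latency before rendering the projection:

     def predict_position(p_prev, p_curr, frame_dt, latency):
         """Constant-velocity prediction of where the target will be when the
         projected frame actually reaches the display.

         p_prev, p_curr -- target positions from the last two camera frames (numpy arrays)
         frame_dt       -- time between those frames, e.g. 1/30 s for a 30Hz depth camera
         latency        -- estimated capture-to-display latency in seconds
         """
         velocity = (p_curr - p_prev) / frame_dt
         return p_curr + velocity * latency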
Color Sommelier: Interactive Color Recommendation System Based on Community-Generated Color Palettes BIBAFull-Text 95-96
  KyoungHee Son; Seo Young Oh; Yongkwan Kim; Hayan Choi; Seok-Hyung Bae; Ganguk Hwang
We present Color Sommelier, an interactive color recommendation system based on community-generated color palettes that helps users choose harmonious colors on the fly. We use an item-based collaborative filtering technique with Adobe Color CC palettes to take advantage of their ratings, which reflect the general public's color harmony preferences. Each time a user chooses one or more colors, Color Sommelier calculates how harmonious each of the remaining colors is with the chosen ones. This interactive recommendation enables users to choose colors iteratively until they are satisfied. To illustrate the usefulness of the algorithm, we implemented a coloring application with a specially designed color chooser. With the chooser, users can intuitively recognize the harmony score of each color from its bubble size and use the recommendations at their discretion. The Color Sommelier algorithm is flexible enough to be applicable to any color chooser in any software package and is easy to implement.
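   The abstract does not give the scoring formula; as a hedged illustration of item-based collaborative filtering (the matrix layout and cosine-similarity choice are assumptions), candidate colors could be ranked by their similarity to the colors already chosen:

     import numpy as np

     def item_similarity(ratings):
         """Cosine similarity between colors, given a palette-by-color rating matrix
         built from community palettes (rows = palettes, columns = colors)."""
         unit = ratings / (np.linalg.norm(ratings, axis=0, keepdims=True) + 1e-9)
         return unit.T @ unit

     def recommend_colors(chosen_ids, ratings, top_k=5):
         """Rank the remaining colors by their summed similarity to the chosen colors."""
         scores = item_similarity(ratings)[:, chosen_ids].sum(axis=1)
         scores[chosen_ids] = -np.inf          # never re-recommend an already-chosen color
         return np.argsort(scores)[::-1][:top_k]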
AirFlip-Undo: Quick Undo using a Double Crossing In-Air Gesture in Hover Zone BIBAFull-Text 97-98
  Keigo Shima; Ryosuke Takada; Kazusa Onishi; Takuya Adachi; Buntarou Shizuki; Jiro Tanaka
In this work, we use AirFlip to undo text input on mobile touchscreen devices. AirFlip is a quick double-crossing in-air gesture across the boundary surface of the hover zone on devices with hover-sensing capability. To evaluate the effectiveness of undoing text input with AirFlip, we implemented two QWERTY soft keyboards (an AirFlip keyboard and a typical keyboard). We conducted a user study with these keyboards to investigate users' workload and collect subjective opinions. The results show no significant difference in workload between the two keyboards.
Remot-IO: a System for Reaching into the Environment of a Remote Collaborator BIBAFull-Text 99-100
  Xavier Benavides; Judith Amores; Pattie Maes
In this paper we present Remot-IO, a system for mobile collaboration and remote assistance around Internet-connected devices. The system uses two head-mounted displays, cameras, and depth sensors to enable a remote expert to be immersed in a local user's point of view and control devices in that user's environment. The remote expert can provide guidance through hand gestures that appear in real time in the local user's field of view as superimposed 3D hands. In addition, the remote expert can operate devices in the novice's environment and bring about physical changes by using the same hand gestures the novice would use. We describe a smart radio whose knobs can be controlled by the local and remote user alike. Moreover, the user can visualize, interact with, and modify properties of sound waves in real time using intuitive hand gestures.
Daemo: A Self-Governed Crowdsourcing Marketplace BIBAFull-Text 101-102
  Snehal (Neil) Gaikwad; Durim Morina; Rohit Nistala; Megha Agarwal; Alison Cossette; Radhika Bhanu; Saiph Savage; Vishwajeet Narwal; Karan Rajpal; Jeff Regino; Aditi Mithal; Adam Ginzberg; Aditi Nath; Karolina R. Ziulkoski; Trygve Cossette; Dilrukshi Gamage; Angela Richmond-Fuller; Ryo Suzuki; Jeerel Herrejón; Kevin Le; Claudia Flores-Saviaga; Haritha Thilakarathne; Kajal Gupta; William Dai; Ankita Sastry; Shirish Goyal; Thejan Rajapakshe; Niki Abolhassani; Angela Xie; Abigail Reyes; Surabhi Ingle; Verónica Jaramillo; Martin Godínez; Walter Ángel; Carlos Toxtli; Juan Flores; Asmita Gupta; Vineet Sethia; Diana Padilla; Kristy Milland; Kristiono Setyadi; Nuwan Wajirasena; Muthitha Batagoda; Rolando Cruz; James Damon; Divya Nekkanti; Tejas Sarma; Mohamed Saleh; Gabriela Gongora-Svartzman; Soroosh Bateni; Gema Toledo Barrera; Alex Peña; Ryan Compton; Deen Aariff; Luis Palacios; Manuela Paula Ritter; A Nisha K.K.; Alan Kay; Jana Uhrmeister; Srivalli Nistala; Milad Esfahani; Elsa Bakiu; Christopher Diemert; Luca Matsumoto; Manik Singh; Krupa Patel; Ranjay Krishna; Geza Kovacs; Rajan Vaish; Michael Bernstein
Crowdsourcing marketplaces provide opportunities for autonomous and collaborative professional work as well as social engagement. However, in these marketplaces, workers feel disrespected due to unreasonable rejections and low payments, whereas requesters do not trust the results they receive. The lack of trust and the uneven distribution of power between workers and requesters have raised serious concerns about the sustainability of these marketplaces. To address the challenges of trust and power, this paper introduces Daemo, a self-governed crowdsourcing marketplace. We propose a prototype task to improve work quality and an open-governance model to achieve equitable representation. We envisage that Daemo will enable workers to build sustainable careers and provide requesters with timely, quality labor for their businesses.
MagPad: A Near Surface Augmented Reading System for Physical Paper and Smartphone Coupling BIBAFull-Text 103-104
  Ding Xu; Ali Momeni; Eric Brockmeyer
In this paper, we present a novel near-surface augmented reading system that brings digital content to physical paper. Our system allows a collocated mobile phone to provide augmented content based on its position on top of the paper. It uses the smartphone's built-in magnetometer together with six constantly spinning magnets that generate designed patterns of magnetic flux to detect the 2D location of the phone and render dynamic interactive content on its screen. The proposed technique could be implemented on most mobile platforms without external sensing hardware.
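   As a rough sketch only (the signature-matching scheme and data layout below are assumptions, not the authors' implementation), the phone's 2D location could be estimated by correlating the live magnetometer signal against flux signatures pre-recorded at each paper location:

     import numpy as np

     def locate_phone(window, signatures):
         """Return the grid cell whose recorded flux signature best matches the
         current magnetometer window.

         window     -- recent magnetometer samples, shape (T, 3)
         signatures -- dict mapping (row, col) -> reference window of shape (T, 3),
                       recorded offline for each location on the paper
         """
         def normalize(x):
             x = x.ravel().astype(float)
             return (x - x.mean()) / (x.std() + 1e-9)

         live = normalize(window)
         scores = {cell: float(np.dot(live, normalize(ref))) / live.size
                   for cell, ref in signatures.items()}
         return max(scores, key=scores.get)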
Adding Body Motion and Intonation to Instant Messaging with Animation BIBAFull-Text 105-106
  Weston Gaylord; Vivian Hare; Ashley Ngu
Digital text communication (DTC) has transformed the way people communicate. Static typographical cues like emoticons, punctuation, letter case, and word lengthening (e.g., "Hellooo") are regularly employed to convey intonation and affect. However, DTC platforms like instant messaging still suffer from a lack of nonverbal communication cues. This paper introduces an Animated Text Instant Messenger (ATIM), which uses text animations to add another distinct layer of cues to existing plain text. ATIM builds upon previous research on kinetic typography in communication. This paper describes the design principles and features of ATIM and discusses how animated text can add more nuanced communication cues of intonation and body motion.