
Proceedings of the 2011 ACM Symposium on User Interface Software and Technology

Fullname: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology
Editors: Jeff Pierce; Maneesh Agrawala; Scott Klemmer
Location: Santa Barbara, California
Dates: 2011-Oct-16 to 2011-Oct-19
Volume: 1
Publisher: ACM
Standard No: ISBN 1-4503-0716-7, 978-1-4503-0716-1; hcibib: UIST11-1
Papers: 69
Pages: 636
Links: Conference Home Page
  1. UIST 2011-10-16 Volume 1
    1. Crowdsourcing
    2. Social information
    3. Social learning
    4. With a little help
    5. Keynote address
    6. Development
    7. Tactile/blind
    8. Tangible
    9. Sensing form and rhythm
    10. Keynote address 2
    11. Mobile
    12. Sensing
    13. 3D
    14. Pointing

UIST 2011-10-16 Volume 1

Crowdsourcing

PlateMate: crowdsourcing nutritional analysis from food photographs BIBAFull-Text 1-12
  Jon Noronha; Eric Hysen; Haoqi Zhang; Krzysztof Z. Gajos
We introduce PlateMate, a system that allows users to take photos of their meals and receive estimates of food intake and composition. Accurate awareness of this information can help people monitor their progress towards dieting goals, but current methods for food logging via self-reporting, expert observation, or algorithmic analysis are time-consuming, expensive, or inaccurate. PlateMate crowdsources nutritional analysis from photographs using Amazon Mechanical Turk, automatically coordinating untrained workers to estimate a meal's calories, fat, carbohydrates, and protein. We present the Management framework for crowdsourcing complex tasks, which supports PlateMate's nutrition analysis workflow. Results of our evaluations show that PlateMate is nearly as accurate as a trained dietitian and easier to use for most users than traditional self-reporting.
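The staged decomposition described above lends itself to a simple workflow skeleton. The sketch below is only illustrative: the stage functions, the median-based aggregation, and the data shapes are assumptions, not PlateMate's actual Management framework.

```python
# Illustrative sketch of a staged crowd workflow in the spirit of PlateMate's
# tag -> identify -> measure decomposition. The *_step callables stand in for
# posting tasks to crowd workers and are assumptions, not the paper's API.
from statistics import median

def aggregate_measurements(worker_estimates):
    """Combine independent worker estimates for one food item by taking medians."""
    fields = ("calories", "fat_g", "carbs_g", "protein_g")
    return {f: median(e[f] for e in worker_estimates) for f in fields}

def analyze_photo(photo, tag_step, identify_step, measure_step, n_workers=3):
    """Run the three crowd stages over one meal photo."""
    boxes = tag_step(photo)                      # stage 1: outline each food item
    meal = []
    for box in boxes:
        food = identify_step(photo, box)         # stage 2: name the food in the box
        estimates = [measure_step(photo, box, food) for _ in range(n_workers)]
        meal.append({"food": food, **aggregate_measurements(estimates)})
    return meal
```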
Instrumenting the crowd: using implicit behavioral measures to predict task performance BIBAFull-Text 13-22
  Jeffrey M. Rzeszotarski; Aniket Kittur
Detecting and correcting low-quality submissions in crowdsourcing tasks is an important challenge. Prior work has primarily focused on worker outcomes or reputation, using approaches such as agreement across workers or with a gold standard to evaluate quality. We propose an alternative and complementary technique that focuses on the way workers work rather than the products they produce. Our technique captures behavioral traces from online crowd workers and uses them to predict outcome measures such as quality, errors, and the likelihood of cheating. We evaluate the effectiveness of the approach across three contexts including classification, generation, and comprehension tasks. The results indicate that we can build predictive models of task performance based on behavioral traces alone, and that these models generalize to related tasks. Finally, we discuss limitations and extensions of the approach.
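As an illustration of the general approach, the sketch below trains a simple classifier on behavioral-trace features; the feature set, the synthetic data, and the use of logistic regression are assumptions for illustration, not the authors' model.

```python
# Minimal sketch: predict submission quality from a worker's behavioral trace.
import numpy as np
from sklearn.linear_model import LogisticRegression

def trace_features(events):
    """Summarize an event log (dicts with 't' and 'type') into simple features."""
    times = [e["t"] for e in events]
    return [
        max(times) - min(times) if times else 0.0,       # total time on task
        sum(e["type"] == "key" for e in events),          # keypress count
        sum(e["type"] == "scroll" for e in events),       # scroll count
        sum(e["type"] == "focus_lost" for e in events),   # window/tab switches
    ]

# Hypothetical training data: one feature row per submission, 1 = acceptable work.
X = np.array([[120.0, 40, 12, 0], [8.0, 2, 0, 3], [95.0, 33, 9, 1], [5.0, 1, 0, 4]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

new_trace = [{"t": 0.0, "type": "key"}, {"t": 4.0, "type": "scroll"},
             {"t": 9.5, "type": "focus_lost"}]
print(model.predict_proba([trace_features(new_trace)])[0][1])  # P(acceptable)
```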
Real-time crowd control of existing interfaces BIBAFull-Text 23-32
  Walter S. Lasecki; Kyle I. Murray; Samuel White; Robert C. Miller; Jeffrey P. Bigham
Crowdsourcing has been shown to be an effective approach for solving difficult problems, but current crowdsourcing systems suffer two main limitations: (i) tasks must be repackaged for proper display to crowd workers, which generally requires substantial one-off programming effort and support infrastructure, and (ii) crowd workers generally lack a tight feedback loop with their task. In this paper, we introduce Legion, a system that allows end users to easily capture existing GUIs and outsource them for collaborative, real-time control by the crowd. We present mediation strategies for integrating the input of multiple crowd workers in real-time, evaluate these mediation strategies across several applications, and further validate Legion by exploring the space of novel applications that it enables.
Crowds in two seconds: enabling realtime crowd-powered interfaces BIBAFull-Text 33-42
  Michael S. Bernstein; Joel Brandt; Robert C. Miller; David R. Karger
Interactive systems must respond to user input within seconds. Therefore, to create realtime crowd-powered interfaces, we need to dramatically lower crowd latency. In this paper, we introduce the use of synchronous crowds for on-demand, realtime crowdsourcing. With synchronous crowds, systems can dynamically adapt tasks by leveraging the fact that workers are present at the same time. We develop techniques that recruit synchronous crowds in two seconds and use them to execute complex search tasks in ten seconds. The first technique, the retainer model, pays workers a small wage to wait and respond quickly when asked. We offer empirically derived guidelines for a retainer system that is low-cost and produces on-demand crowds in two seconds. Our second technique, rapid refinement, observes early signs of agreement in synchronous crowds and dynamically narrows the search space to focus on promising directions. This approach produces results that, on average, are of more reliable quality and arrive faster than the fastest crowd member working alone. To explore benefits and limitations of these techniques for interaction, we present three applications: Adrenaline, a crowd-powered camera where workers quickly filter a short video down to the best single moment for a photo; and Puppeteer and A|B, which examine creative generation tasks, communication with workers, and low-latency voting.
CrowdForge: crowdsourcing complex work BIBAFull-Text 43-52
  Aniket Kittur; Boris Smus; Susheel Khamkar; Robert E. Kraut
Micro-task markets such as Amazon's Mechanical Turk represent a new paradigm for accomplishing work, in which employers can tap into a large population of workers around the globe to accomplish tasks in a fraction of the time and money of more traditional methods. However, such markets have been primarily used for simple, independent tasks, such as labeling an image or judging the relevance of a search result. Here we present a general purpose framework for accomplishing complex and interdependent tasks using micro-task markets. We describe our framework, a web-based prototype, and case studies on article writing, decision making, and science journalism that demonstrate the benefits and limitations of the approach.
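The partition/map/reduce pattern at the heart of CrowdForge can be sketched in a few lines; the callables standing in for crowd workers and the article-writing example are illustrative assumptions, not the authors' web-based prototype.

```python
# Sketch of the partition -> map -> reduce pattern applied to crowd work.
def crowdforge(task, worker_partition, worker_map, worker_reduce):
    outline = worker_partition(task)                 # e.g., one worker drafts an outline
    drafts = {section: worker_map(task, section)     # e.g., workers draft each section
              for section in outline}
    return worker_reduce(task, drafts)               # e.g., a worker merges the sections

# Usage with trivial stand-in "workers":
article = crowdforge(
    "Write an encyclopedia entry about New York City",
    worker_partition=lambda t: ["History", "Geography", "Culture"],
    worker_map=lambda t, s: f"[crowd-written paragraph on {s}]",
    worker_reduce=lambda t, d: "\n\n".join(d[s] for s in d),
)
print(article)
```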
The Jabberwocky programming environment for structured social computing BIBAFull-Text 53-64
  Salman Ahmad; Alexis Battle; Zahan Malkani; Sepander Kamvar
We present Jabberwocky, a social computing stack that consists of three components: a human and machine resource management system called Dormouse, a parallel programming framework for human and machine computation called ManReduce, and a high-level programming language on top of ManReduce called Dog. Dormouse is designed to enable cross-platform programming languages for social computation, so, for example, programs written for Mechanical Turk can also run on other crowdsourcing platforms. Dormouse also enables a programmer to easily combine crowdsourcing platforms or create new ones. Further, machines and people are both first-class citizens in Dormouse, allowing for natural parallelization and control flows for a broad range of data-intensive applications. And finally and importantly, Dormouse includes notions of real identity, heterogeneity, and social structure. We show that the unique properties of Dormouse enable elegant programming models for complex and useful problems, and we propose two such frameworks. ManReduce is a framework for combining human and machine computation into an intuitive parallel data flow that goes beyond existing frameworks in several important ways, such as enabling functions on arbitrary communication graphs between human and machine clusters. And Dog is a high-level procedural language written on top of ManReduce that focuses on expressivity and reuse. We explore two applications written in Dog: bootstrapping product recommendations without purchase data, and expert labeling of medical images.

Social information

Proactive wrangling: mixed-initiative end-user programming of data transformation scripts BIBAFull-Text 65-74
  Philip J. Guo; Sean Kandel; Joseph M. Hellerstein; Jeffrey Heer
Analysts regularly wrangle data into a form suitable for computational tools through a tedious process that delays more substantive analysis. While interactive tools can assist data transformation, analysts must still conceptualize the desired output state, formulate a transformation strategy, and specify complex transforms. We present a model to proactively suggest data transforms which map input data to a relational format expected by analysis tools. To guide search through the space of transforms, we propose a metric that scores tables according to type homogeneity, sparsity and the presence of delimiters. When compared to "ideal" hand-crafted transformations, our model suggests over half of the needed steps; in these cases the top-ranked suggestion is preferred 77% of the time. User study results indicate that suggestions produced by our model can assist analysts' transformation tasks, but that users do not always value proactive assistance, instead preferring to maintain the initiative. We discuss some implications of these results for mixed-initiative interfaces.
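A toy version of such a table-scoring metric is sketched below; the specific formulas and the equal weighting of type homogeneity, sparsity, and delimiter presence are assumptions, not the published model.

```python
# Illustrative table-quality score: reward homogeneous column types, penalize
# empty cells and stray delimiters left inside values.
def column_type(value):
    if value is None or value == "":
        return "empty"
    try:
        float(value)
        return "number"
    except ValueError:
        return "text"

def table_score(rows, delimiters=(",", ";", "|", "\t")):
    """Higher is better; rows is a list of lists of cell strings."""
    if not rows:
        return 0.0
    n_cols = max(len(r) for r in rows)
    homogeneity = sparsity = delim_penalty = 0.0
    for c in range(n_cols):
        col = [r[c] if c < len(r) else "" for r in rows]
        types = [column_type(v) for v in col]
        homogeneity += max(types.count(t) for t in set(types)) / len(col)
        sparsity += types.count("empty") / len(col)
        delim_penalty += sum(any(d in str(v) for d in delimiters) for v in col) / len(col)
    return (homogeneity - sparsity - delim_penalty) / n_cols
```

In the paper's setting, candidate transforms whose output tables score higher would be ranked as better suggestions.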
MUSE: reviving memories using email archives BIBAFull-Text 75-84
  Sudheendra Hangal; Monica S. Lam; Jeffrey Heer
Email archives silently record our actions and thoughts over the years, forming a passively acquired and detailed life-log that contains rich material for reminiscing on our lives. However, exploratory browsing of archives containing thousands of messages is tedious without effective ways to guide the user towards interesting events and messages. We present Muse (Memories USing Email), a system that combines data mining techniques and an interactive interface to help users browse a long-term email archive. Muse analyzes the contents of the archive and generates a set of cues that help to spark users' memories: communication activity with inferred social groups, a summary of recurring named entities, occurrence of sentimental words, and image attachments. These cues serve as salient entry points into a browsing interface that enables faceted navigation and rapid skimming of email messages. In our user studies, we found that users generally enjoyed browsing their archives with Muse, and extracted a range of benefits, from summarizing work progress to renewing friendships and making serendipitous discoveries.
A design space analysis of availability-sharing systems BIBAFull-Text 85-96
  Juan David Hincapié-Ramos; Stephen Voida; Gloria Mark
Workplace collaboration often requires interruptions, which can happen at inopportune times. Designing a successful availability-sharing system requires finding the right balance to optimize the benefits and reduce costs for both the interrupter and interruptee. In this paper, we examine the design space of availability-sharing systems and identify six relevant design dimensions: abstraction, presentation, information delivery, symmetry, obtrusiveness and temporal gradient. We describe these dimensions in terms of the tensions between interrupters and interruptees revealed in previous studies of workplace collaboration and deployments of awareness systems. As a demonstration of the utility of our design space, we introduce InterruptMe, a novel availability-sharing system that represents a previously unexplored point in the design space and that balances the tensions between interrupters and interruptees. InterruptMe differs from previous systems in that it displays availability information only when needed by monitoring implicit inputs from the system's users, implements a traceable asymmetry structure, and introduces the notion of per-communications channel availability.
Injured person information management during second triage BIBAFull-Text 97-106
  Yuki Takahashi; Hiroaki Kojima; Ken-ichi Okada
In a large-scale disaster in which many persons are injured at the same time, triage has been introduced. Triage is a method that temporarily delays the treatment of people with mild to moderate injuries and symptoms and gives priority to those in a critical condition. In multi-stage triage, more specific information is needed in the second triage than in the first to accurately prioritize each person's injuries and state. To address this problem we proposed and constructed a touch-based interface for managing information entered during second triage. A touch-based tablet interface is introduced to specify wound areas and gestures for wound types. The information is shared wirelessly with all emergency personnel, giving medics shared, data-centric visibility of overall triage status for the first time. An evaluation experiment shows that the proposed system reduces input errors, speeds up injured-person care, and facilitates efficient information sharing between medics. As a result, we believe that many more injured persons can and will be saved.
Yelling in the hall: using sidetone to address a problem with mobile remote presence systems BIBAFull-Text 107-116
  Andreas Paepcke; Bianca Soto; Leila Takayama; Frank Koenig; Blaise Gassend
In our field deployments of mobile remote presence (MRP) systems in offices, we observed that remote operators of MRPs often unintentionally spoke too loudly. This disrupted their local co-workers, who happened to be within earshot of the MRP system. To address this issue, we prototyped and empirically evaluated the effect of sidetone to help operators self-regulate their speaking loudness. Sidetone is the intentional, attenuated feedback of speakers' voices to their ears while they are using a telecommunication device. In a 3-level (no sidetone vs. low sidetone vs. high sidetone) within-participants pair of experiments, people interacted with a confederate through an MRP system. The first experiment involved MRP operators using headsets with boom microphones (N=20). The second experiment involved MRP operators using loudspeakers and desktop microphones (N=14). While we detected the effects of the sidetone manipulation in our audio-visual context, the effect was attenuated in comparison to earlier audio-only studies. We hypothesize that the strong visual component of our MRP system interferes with the sidetone effect. We also found that engaging in more social tasks (e.g., a getting-to-know-you activity) and more intellectually demanding tasks (e.g., a creativity exercise) influenced how loudly people spoke. This suggests that testing such sidetone effects in the typical read-aloud setting is insufficient for generalizing to more interactive communication tasks. We conclude that MRP application support must reach beyond the time-honored audio-only technologies to solve the problem of excessive speaker loudness.
A tongue input device for creating conversations BIBAFull-Text 117-126
  Ronit Slyper; Jill Lehman; Jodi Forlizzi; Jessica Hodgins
We present a new tongue input device, the tongue joystick, for use by an actor inside an articulated-head character costume. Using our device, the actor can maneuver through a dialogue tree, selecting clips of prerecorded audio to hold a conversation in the voice of the character. The device is constructed of silicone sewn with conductive thread, a unique method for creating rugged, soft, low-actuation force devices. This method has application for entertainment and assistive technology. We compare our device against other portable mouth input devices, showing it to be the fastest and most accurate in tasks mimicking our target application. Finally, we show early results of an actor inside an articulated-head costume using the tongue joystick to interact with a child.

Social learning

ShowMeHow: translating user interface instructions between applications BIBAFull-Text 127-134
  Vidya Ramesh; Charlie Hsu; Maneesh Agrawala; Björn Hartmann
Many people learn how to use complex authoring applications through tutorials. However, user interfaces for authoring tools differ between versions, platforms, and competing products, limiting the utility of tutorials. Our goal is to make tutorials more useful by enabling users to repurpose tutorials between similar applications. We introduce UI translation interfaces which enable users to locate commands in one application using the interface language of another application. Our end-user tool, ShowMeHow, demonstrates two interaction techniques to accomplish translations: 1) direct manipulation of interface facades and 2) text search for commands using the vocabulary of another application. We discuss tools needed to construct the translation maps that enable these techniques. An initial study (n=12) shows that users can locate unfamiliar commands twice as fast with interface facades. A second study showed that users can work through tutorials written for one application in another application.
Pause-and-play: automatically linking screencast video tutorials with applications BIBAFull-Text 135-144
  Suporn Pongnumkul; Mira Dontcheva; Wilmot Li; Jue Wang; Lubomir Bourdev; Shai Avidan; Michael F. Cohen
Video tutorials provide a convenient means for novices to learn new software applications. Unfortunately, staying in sync with a video while trying to use the target application at the same time requires users to repeatedly switch from the application to the video to pause or scrub backwards to replay missed steps. We present Pause-and-Play, a system that helps users work along with existing video tutorials. Pause-and-Play detects important events in the video and links them with corresponding events in the target application as the user tries to replicate the depicted procedure. This linking allows our system to automatically pause and play the video to stay in sync with the user. Pause-and-Play also supports convenient video navigation controls that are accessible from within the target application and allow the user to easily replay portions of the video without switching focus out of the application. Finally, since our system uses computer vision to detect events in existing videos and leverages application scripting APIs to obtain real time usage traces, our approach is largely independent of the specific target application and does not require access or modifications to application source code. We have implemented Pause-and-Play for two target applications, Google SketchUp and Adobe Photoshop, and we report on a user study that shows our system improves the user experience of working with video tutorials.
Creating contextual help for GUIs using screenshots BIBAFull-Text 145-154
  Tom Yeh; Tsung-Hsiang Chang; Bo Xie; Greg Walsh; Ivan Watkins; Krist Wongsuphasawat; Man Huang; Larry S. Davis; Benjamin B. Bederson
Contextual help is effective for learning how to use GUIs by showing instructions and highlights on the actual interface rather than in a separate viewer. However, end-users and third-party tech support typically cannot create contextual help to assist other users because it requires programming skill and source code access. We present a creation tool for contextual help that allows users to apply common computer skills: taking screenshots and writing simple scripts. We perform pixel analysis on screenshots to make this tool applicable to a wide range of applications and platforms without source code access. We evaluated the tool's usability with three groups of participants: developers, instructors, and tech support. We further validated the applicability of our tool with 60 real tasks supported by the tech support of a university campus.
Real-time collaborative coding in a web IDE BIBAFull-Text 155-164
  Max Goldman; Greg Little; Robert C. Miller
This paper describes Collabode, a web-based Java integrated development environment designed to support close, synchronous collaboration between programmers. We examine the problem of collaborative coding in the face of program compilation errors introduced by other users which make collaboration more difficult, and describe an algorithm for error-mediated integration of program code. Concurrent editors see the text of changes made by collaborators, but the errors reported in their view are based only on their own changes. Editors may run the program at any time, using only error-free edits supplied so far, and ignoring incomplete or otherwise error-generating changes. We evaluate this algorithm and interface on recorded data from previous pilot experiments with Collabode, and via a user study with student and professional programmers. We conclude that it offers appreciable benefits over naive continuous synchronization without regard to errors and over manual version control.
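The error-mediated integration idea can be illustrated with a toy, whole-buffer version; using Python's ast module as the stand-in "compiler" is an assumption for the sketch, whereas Collabode integrates fine-grained Java edits via Eclipse.

```python
# Sketch of error-mediated integration: a collaborator's edit is shared into the
# runnable copy only if the resulting program is still error-free.
import ast

def compiles(source):
    """Stand-in compile check: here, Python syntax checking via ast.parse."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

def integrate(run_copy, pending_edits):
    """Apply each pending edit only if it keeps the runnable copy error-free."""
    deferred = []
    for new_source in pending_edits:
        if compiles(new_source):
            run_copy = new_source          # error-free: include it in runs
        else:
            deferred.append(new_source)    # error-generating: keep it local for now
    return run_copy, deferred
```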
d.tour: style-based exploration of design example galleries BIBAFull-Text 165-174
  Daniel Ritchie; Ankita Arvind Kejriwal; Scott R. Klemmer
In design, people often seek examples for inspiration. However, current example-finding practices suffer many drawbacks: templates present designs without a usage context; search engines can only examine the text on a page. This paper introduces exploratory techniques for finding relevant and inspiring design examples. These novel techniques include searching by stylistic similarity to a known example design and searching by stylistic keyword. These interactions are manifest in d.tour, a style-based design exploration tool. d.tour presents a curated database of Web pages as an explorable design gallery. It extracts and analyzes design features of these pages, allowing it to process style-based queries and recommend designs to the user. d.tour's gallery interface decreases the gulfs of execution and evaluation for design example-finding.

With a little help

IP-QAT: in-product questions, answers, & tips BIBAFull-Text 175-184
  Justin Matejka; Tovi Grossman; George Fitzmaurice
We present IP-QAT, a new community-based question and answer system for software users. Unlike most community forums, IP-QAT is integrated into the actual software application, allowing users to easily post questions, answers and tips without having to leave the application. Our in-product implementation is context-aware and shows relevant posts based on a user's recent activity. It is also designed with minimal transaction costs to encourage users to easily post, include annotated images and file attachments, as well as tag their posts with relevant UI components. We describe a robust cloud-based system implementation, which allowed us to release IP-QAT to 37 users for a 2 week field study. Our study showed that IP-QAT increased user contributions, and subjectively, users found our system more useful and easier to use, in comparison to the existing commercial discussion board.
TwitApp: in-product micro-blogging for design sharing BIBAFull-Text 185-194
  Wei Li; Tovi Grossman; Justin Matejka; George Fitzmaurice
We describe TwitApp, an enhanced micro-blogging system integrated within AutoCAD for design sharing. TwitApp integrates rich content and still keeps the sharing transaction cost low. In TwitApp, tweets are organized by their project, and users can follow or unfollow each individual project. We introduce the concept of automatic tweet drafting and other novel features such as enhanced real-time search and integrated live video streaming. The TwitApp system leverages the existing Twitter micro-blogging system. We also contribute a study which provides insights on these concepts and associated designs, and demonstrates potential user excitement about such tools.
Searching for software learning resources using application context BIBAFull-Text 195-204
  Michael Ekstrand; Wei Li; Tovi Grossman; Justin Matejka; George Fitzmaurice
Users of complex software applications frequently need to consult documentation, tutorials, and support resources to learn how to use the software and further their understanding of its capabilities. Existing online help systems provide limited context awareness through "what's this?" and similar techniques. We examine the possibility of making more use of the user's current context in a particular application to provide useful help resources. We provide an analysis and taxonomy of various aspects of application context and how they may be used in retrieving software help artifacts with web browsers, present the design of a context-aware augmented web search system, and describe a prototype implementation and initial user study of this system. We conclude with a discussion of open issues and an agenda for further research.

Keynote address

Breaking barriers with sound BIBAFull-Text 205-206
  Ge Wang
The computer, in its many shapes and sizes, is evolving rapidly and pervading our everyday lives like never before. Mobile computing devices have become much more than simply "mobile", increasingly serving as personal and "natural" extensions of us. Therein lies immense potential to reshape the way we think and interact, and especially in how we engage one another creatively, expressively, and socially. This talk explores interaction and social design for music through the computer, told through laptop orchestras, mobile phone orchestras, an audio programming language, designing the iPhone's Ocarina, ecosystems for crowd-sourcing musical creation, and an emerging social dimension where computer, music, and people interact.

Development

Query-feature graphs: bridging user vocabulary and system functionality BIBAFull-Text 207-216
  Adam Fourney; Richard Mann; Michael Terry
This paper introduces query-feature graphs, or QF-graphs. QF-graphs encode associations between high-level descriptions of user goals (articulated as natural language search queries) and the specific features of an interactive system relevant to achieving those goals. For example, a QF-graph for the GIMP graphics manipulation software links the query "GIMP black and white" to the commands "desaturate" and "grayscale." We demonstrate how QF-graphs can be constructed using search query logs, search engine results, web page content, and localization data from interactive systems. An analysis of QF-graphs shows that the associations produced by our approach exhibit levels of accuracy that make them eminently usable in a range of real-world applications. Finally, we present three hypothetical user interface mechanisms that illustrate the potential of QF-graphs: search-driven interaction, dynamic tooltips, and app-to-app analogy search.
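A toy construction of a query-feature graph is sketched below; the substring-matching rule and the sample data are assumptions, whereas the paper mines real query logs, search results, page content, and localization data.

```python
# Toy QF-graph: link search queries to system commands whose names appear in
# documents retrieved for the query.
from collections import defaultdict

def build_qf_graph(query_to_pages, command_names):
    """Return {query: {command: weight}} counting pages that mention each command."""
    graph = defaultdict(lambda: defaultdict(int))
    for query, pages in query_to_pages.items():
        for page_text in pages:
            text = page_text.lower()
            for command in command_names:
                if command.lower() in text:
                    graph[query][command] += 1
    return graph

qf = build_qf_graph(
    {"gimp black and white": ["Use Colors > Desaturate ...",
                              "Convert the image mode to Grayscale ..."]},
    ["Desaturate", "Grayscale", "Posterize"],
)
print(dict(qf["gimp black and white"]))  # {'Desaturate': 1, 'Grayscale': 1}
```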
Stacksplorer: call graph navigation helps increasing code maintenance efficiency BIBAFull-Text 217-224
  Thorsten Karrer; Jan-Peter Krämer; Jonathan Diehl; Björn Hartmann; Jan Borchers
We present Stacksplorer, a new tool to support source code navigation and comprehension. Stacksplorer computes the call graph of a given piece of code, visualizes relevant parts of it, and allows developers to interactively traverse it. This augments the traditional code editor by offering an additional layer of navigation. Stacksplorer is particularly useful to understand and edit unknown source code because branches of the call graph can be explored and backtracked easily. Visualizing the callers of a method reduces the risk of introducing unintended side effects. In a quantitative study, programmers using Stacksplorer performed three of four software maintenance tasks significantly faster and with higher success rates, and Stacksplorer received a System Usability Scale rating of 85.4 from participants.
Cracking the cocoa nut: user interface programming at runtime BIBAFull-Text 225-234
  James R. Eagan; Michel Beaudouin-Lafon; Wendy E. Mackay
This article introduces runtime toolkit overloading, a novel approach to help third-party developers modify the interaction and behavior of existing software applications without access to their underlying source code. We describe the abstractions provided by this approach as well as the mechanisms for implementing them in existing environments. We describe Scotty, a prototype implementation for Mac OS X Cocoa that enables developers to modify existing applications at runtime, and we demonstrate a collection of interaction and functional transformations on existing off-the-shelf applications. We show how Scotty helps a developer make sense of unfamiliar software, even without access to its source code. We further discuss what features of future environments would facilitate this kind of runtime software development.
Monte Carlo methods for managing interactive state, action and feedback under uncertainty BIBAFull-Text 235-244
  Julia Schwarz; Jennifer Mankoff; Scott Hudson
Current input handling systems provide effective techniques for modeling, tracking, interpreting, and acting on user input. However, new interaction technologies violate the standard assumption that input is certain. Touch, speech recognition, gestural input, and sensors for context often produce uncertain estimates of user inputs. Current systems tend to remove uncertainty early on. However, information available in the user interface and application can help to resolve uncertainty more appropriately for the end user. This paper presents a set of techniques for tracking the state of interactive objects in the presence of uncertain inputs. These techniques use a Monte Carlo approach to maintain a probabilistically accurate description of the user interface that can be used to make informed choices about actions. Samples are used to approximate the distribution of possible inputs, possible interactor states that result from inputs, and possible actions (callbacks and feedback) interactors may execute. Because each sample is certain, the developer can specify most of the behavior of interactors in a familiar, non-probabilistic fashion. This approach retains all the advantages of maintaining information about uncertainty while minimizing the need for the developer to work in probabilistic terms. We present a working implementation of our framework and illustrate the power of these techniques within a paint program that includes three different kinds of uncertain input.
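The core Monte Carlo idea, representing one uncertain input as many certain samples and dispatching each with ordinary code, can be sketched briefly; the Gaussian touch model, the Button class, and the commit threshold below are illustrative assumptions, not the paper's framework.

```python
# Sketch: hit-test many certain samples of one uncertain touch, then act only
# when one interpretation is probable enough.
import random
from collections import Counter

class Button:
    def __init__(self, name, x, y, w, h):
        self.name, self.x, self.y, self.w, self.h = name, x, y, w, h
    def hit(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def dispatch_uncertain_touch(buttons, mean_xy, sigma=8.0, n=500, commit=0.9):
    samples = [(random.gauss(mean_xy[0], sigma), random.gauss(mean_xy[1], sigma))
               for _ in range(n)]
    votes = Counter()
    for px, py in samples:                 # each sample is certain: ordinary code runs
        for b in buttons:
            if b.hit(px, py):
                votes[b.name] += 1
                break
    target, count = votes.most_common(1)[0] if votes else (None, 0)
    return target if count / n >= commit else None   # else keep feedback tentative

buttons = [Button("save", 0, 0, 40, 40), Button("delete", 45, 0, 40, 40)]
print(dispatch_uncertain_touch(buttons, (20, 20)))    # almost always 'save'
```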
Associating the visual representation of user interfaces with their internal structures and metadata BIBAFull-Text 245-256
  Tsung-Hsiang Chang; Tom Yeh; Rob Miller
Pixel-based methods are emerging as a new and promising way to develop new interaction techniques on top of existing user interfaces. However, in order to maintain platform independence, other available low-level information about GUI widgets, such as accessibility metadata, was neglected intentionally. In this paper, we present a hybrid framework, PAX, which associates the visual representation of user interfaces (i.e. the pixels) and their internal hierarchical metadata (i.e. the content, role, and value). We identify challenges to building such a framework. We also develop and evaluate two new algorithms for detecting text at arbitrary places on the screen, and for segmenting a text image into individual word blobs. Finally, we validate our framework in implementations of three applications. We enhance an existing pixel-based system, Sikuli Script, and preserve the readability of its script code at the same time. Further, we create two novel applications, Screen Search and Screen Copy, to demonstrate how PAX can be applied to development of desktop-level interactive systems.
Animating from markup code to rendered documents and vice versa BIBAFull-Text 257-262
  Pierre Dragicevic; Stéphane Huot; Fanny Chevalier
We present a quick preview technique that smoothly transitions between document markup code and its visual rendering. This technique allows users to regularly check the code they are editing in-place, without leaving the text editor. This method can complement classical preview windows by offering rapid overviews of code-to-document mappings and leaving more screen real-estate. We discuss the design and implementation of our technique.

Tactile/blind

RhythmLink: securely pairing I/O-constrained devices by tapping BIBAFull-Text 263-272
  Felix Xiaozhu Lin; Daniel Ashbrook; Sean White
We present RhythmLink, a system that improves the wireless pairing user experience. Users can link devices such as phones and headsets together by tapping a known rhythm on each device. In contrast to current solutions, RhythmLink does not require user interaction with the host device during the pairing process; and it only requires binary input on the peripheral, making it appropriate for small devices with minimal physical affordances. We describe the challenges in enabling this user experience and our solution, an algorithm that allows two devices to compare imprecisely-entered tap sequences while maintaining the secrecy of those sequences. We also discuss our prototype implementation of RhythmLink and review the results of initial user tests.
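A toy comparison of imprecisely entered tap rhythms is sketched below; it deliberately ignores RhythmLink's secrecy requirement (the paper compares sequences without revealing them) and only illustrates tolerance to timing imprecision.

```python
# Compare two tapped rhythms by normalizing their inter-tap intervals.
def intervals(tap_times):
    gaps = [b - a for a, b in zip(tap_times, tap_times[1:])]
    total = sum(gaps)
    return [g / total for g in gaps] if total else []

def rhythms_match(taps_a, taps_b, tolerance=0.1):
    ia, ib = intervals(taps_a), intervals(taps_b)
    return len(ia) == len(ib) and all(abs(x - y) <= tolerance for x, y in zip(ia, ib))

print(rhythms_match([0.0, 0.4, 0.6, 1.2], [0.0, 0.38, 0.61, 1.18]))  # True
```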
Access overlays: improving non-visual access to large touch screens for blind users BIBAFull-Text 273-282
  Shaun K. Kane; Meredith Ringel Morris; Annuska Z. Perkins; Daniel Wigdor; Richard E. Ladner; Jacob O. Wobbrock
Many touch screens remain inaccessible to blind users, and those approaches to providing access that do exist offer minimal support for interacting with large touch screens or spatial data. In this paper, we introduce a set of three software-based access overlays intended to improve the accessibility of large touch screen interfaces, specifically interactive tabletops. Our access overlays are called edge projection, neighborhood browsing, and touch-and-speak. In a user study, 14 blind users compared access overlays to an implementation of Apple's VoiceOver screen reader. Our results show that two of our techniques were faster than VoiceOver, that participants correctly answered more questions about the screen's layout using our techniques, and that participants overwhelmingly preferred our techniques. We developed several applications demonstrating the use of access overlays, including an accessible map kiosk and an accessible board game.
Imaginary phone: learning imaginary interfaces by transferring spatial memory from a familiar device BIBAFull-Text 283-292
  Sean Gustafson; Christian Holz; Patrick Baudisch
We propose a method for learning how to use an imaginary interface (i.e., a spatial non-visual interface) that we call "transfer learning". By using a physical device (e.g. an iPhone) a user inadvertently learns the interface and can then transfer that knowledge to an imaginary interface. We illustrate this concept with our Imaginary Phone prototype. With it users interact by mimicking the use of a physical iPhone by tapping and sliding on their empty non-dominant hand without visual feedback. Pointing on the hand is tracked using a depth camera and touch events are sent wirelessly to an actual iPhone, where they invoke the corresponding actions. Our prototype allows the user to perform everyday tasks such as picking up a phone call or launching the timer app and setting an alarm. Imaginary Phone thereby serves as a shortcut that frees users from the necessity of retrieving the actual physical device. We present two user studies that validate the three assumptions underlying the transfer learning method. (1) Users build up spatial memory automatically while using a physical device: participants knew the correct location of 68% of their own iPhone home screen apps by heart. (2) Spatial memory transfers from a physical to an imaginary interface: participants recalled 61% of their home screen apps when recalling app location on the palm of their hand. (3) Palm interaction is precise enough to operate a typical mobile phone: participants could reliably acquire 0.95cm wide iPhone targets on their palm, sufficiently large to operate any iPhone standard widget.
NaviRadar: a novel tactile information display for pedestrian navigation BIBAFull-Text 293-302
  Sonja Rümelin; Enrico Rukzio; Robert Hardy
We introduce NaviRadar: an interaction technique for mobile phones that uses a radar metaphor in order to communicate the user's correct direction for crossings along a desired route. A radar sweep rotates clockwise and tactile feedback is provided where each sweep distinctly conveys the user's current direction and the direction in which the user must travel. In a first study, we evaluated the overall concept and tested five different tactile patterns to communicate the two different directions via a single tactor. The results show that people are able to easily understand the NaviRadar concept and can identify the correct direction with a mean deviation of 37° out of the full 360° provided. A second study shows that NaviRadar achieves similar results in terms of perceived usability and navigation performance when compared with spoken instructions. By using only tactile feedback, NaviRadar provides distinct advantages over current systems. In particular, no visual attention is required to navigate; thus, it can be spent on providing greater awareness of one's surroundings. Moreover, the lack of audio attention enables it to be used in noisy environments or this attention can be better spent on talking with others during navigation.
PocketTouch: through-fabric capacitive touch input BIBAFull-Text 303-308
  T. Scott Saponas; Chris Harrison; Hrvoje Benko
PocketTouch is a capacitive sensing prototype that enables eyes-free multitouch input on a handheld device without having to remove the device from the pocket of one's pants, shirt, bag, or purse. PocketTouch enables a rich set of gesture interactions, ranging from simple touch strokes to full alphanumeric text entry. Our prototype device consists of a custom multitouch capacitive sensor mounted on the back of a smartphone. Similar capabilities could be enabled on most existing capacitive touchscreens through low-level access to the capacitive sensor. We demonstrate how touch strokes can be used to initialize the device for interaction and how strokes can be processed to enable text recognition of characters written over the same physical area. We also contribute a comparative study that empirically measures how different fabrics attenuate touch inputs, providing insight for future investigations. Our results suggest that PocketTouch will work reliably with a wide variety of fabrics used in today's garments, and is a viable input method for quick eyes-free operation of devices in pockets.
Tap control for headphones without sensors BIBAFull-Text 309-314
  Hiroyuki Manabe; Masaaki Fukumoto
A tap control technique for headphones is proposed. A simple circuit is used to detect tapping of the headphone shell by using the speaker unit in the headphone as a tap sensor. No additional devices are required in the headphone shell or cable, so the user can use their favorite headphones as a controller while listening to music. A prototype is implemented with several calibration processes to compensate for the differences in headphones and users' tapping actions. Tests confirm that the user can control a music player by tapping regular headphones.

Tangible

The proximity toolkit: prototyping proxemic interactions in ubiquitous computing ecologies BIBAFull-Text 315-326
  Nicolai Marquardt; Robert Diaz-Marino; Sebastian Boring; Saul Greenberg
People naturally understand and use proxemic relationships (e.g., their distance and orientation towards others) in everyday situations. However, only a few ubiquitous computing (ubicomp) systems interpret such proxemic relationships to mediate interaction (proxemic interaction). A technical problem is that developers find it challenging and tedious to access proxemic information from sensors. Our Proximity Toolkit solves this problem. It simplifies the exploration of interaction techniques by supplying fine-grained proxemic information between people, portable devices, large interactive surfaces, and other non-digital objects in a room-sized environment. The toolkit offers three key features. (1) It facilitates rapid prototyping of proxemic-aware systems by supplying developers with the orientation, distance, motion, identity, and location information between entities. (2) It includes various tools, such as a visual monitoring tool, that allows developers to visually observe, record and explore proxemic relationships in 3D space. (3) Its flexible architecture separates sensing hardware from the proxemic data model derived from these sensors, which means that a variety of sensing technologies can be substituted or combined to derive proxemic information. We illustrate the versatility of the toolkit with proxemic-aware systems built by students.
ZeroN: mid-air tangible interaction enabled by computer controlled magnetic levitation BIBAFull-Text 327-336
  Jinha Lee; Rehmi Post; Hiroshi Ishii
This paper presents ZeroN, a new tangible interface element that can be levitated and moved freely by computer in a three dimensional space. ZeroN serves as a tangible representation of a 3D coordinate of the virtual world through which users can see, feel, and control computation. To accomplish this, we developed a magnetic control system that can levitate and actuate a permanent magnet in a predefined 3D volume. This is combined with an optical tracking and display system that projects images on the levitating object. We present applications that explore this new interaction modality. Users are invited to place or move the ZeroN object just as they can place objects on surfaces. For example, users can place the sun above physical objects to cast digital shadows, or place a planet that will start revolving based on simulated physical conditions. We describe the technology and interaction scenarios, discuss initial observations, and outline future development.
Medusa: a proximity-aware multi-touch tabletop BIBAFull-Text 337-346
  Michelle Annett; Tovi Grossman; Daniel Wigdor; George Fitzmaurice
We present Medusa, a proximity-aware multi-touch tabletop. Medusa uses 138 inexpensive proximity sensors to: detect a user's presence and location, determine body and arm locations, distinguish between the right and left arms, and map touch points to specific users and specific hands. Our tracking algorithms and hardware designs are described. Exploring this unique design, we develop and report on a collection of interactions enabled by Medusa in support of multi-user collaborative design, specifically within the context of Proxi-Sketch, a multi-user UI prototyping tool. We discuss design issues, system implementation, limitations, and generalizable concepts throughout the paper.
Portico: tangible interaction on and around a tablet BIBAFull-Text 347-356
  Daniel Avrahami; Jacob O. Wobbrock; Shahram Izadi
We present Portico, a portable system for enabling tangible interaction on and around tablet computers. Two cameras on small foldable arms are positioned above the display to recognize a variety of physical objects placed on or around the tablet. These cameras have a larger field-of-view than the screen, allowing Portico to extend interaction significantly beyond the tablet itself. Our prototype, which uses a 12" tablet, delivers an interaction space six times the size of the tablet screen. Portico thus allows tablets to extend both their sensing capabilities and interaction space without sacrificing portability. We describe the design of our system and present a number of applications that demonstrate Portico's unique capability to track objects. We focus on a number of fun applications that demonstrate how such a device can be used as a low-cost way to create personal surface computing experiences. Finally, we discuss the challenges in supporting tangible interaction beyond the screen and describe possible mechanisms for overcoming them.
Conté: multimodal input inspired by an artist's crayon BIBAFull-Text 357-366
  Daniel Vogel; Géry Casiez
Conté is a small input device inspired by the way artists manipulate a real Conté crayon. By changing which corner, edge, end, or side is contacting the display, the operator can switch interaction modes using a single hand. Conté's rectangular prism shape enables both precise pen-like input and tangible handle interaction. Conté also has a natural compatibility with multi-touch input: it can be tucked in the palm to interleave same-hand touch input, or used to expand the vocabulary of bimanual touch. Inspired by informal interviews with artists, we catalogue Conté's characteristics, and use these to outline a design space. We describe a prototype device using common materials and simple electronics. With this device, we demonstrate interaction techniques in a test-bed drawing application. Finally, we discuss alternate hardware designs and future human factors research to study this new class of input.
Clip-on gadgets: expanding multi-touch interaction area with unpowered tactile controls BIBAFull-Text 367-372
  Neng-Hao Yu; Sung-Sheng Tsai; I-Chun Hsiao; Dian-Je Tsai; Meng-Han Lee; Mike Y. Chen; Yi-Ping Hung
Virtual keyboards and controls, commonly used on mobile multi-touch devices, occlude content of interest and do not provide tactile feedback. Clip-on Gadgets solve these issues by extending the interaction area of multi-touch devices with physical controllers. Clip-on Gadgets use only conductive materials to map user input on the controllers to touch points on the edges of screens; therefore, they are battery-free, lightweight, and low-cost. In addition, they can be used in combination with multi-touch gestures. We present several hardware designs and a software toolkit, which enable users to simply attach Clip-on Gadgets to an edge of a device and start interacting with it.

Sensing form and rhythm

Sketch-sketch revolution: an engaging tutorial system for guided sketching and application learning BIBAFull-Text 373-382
  Jennifer Fernquist; Tovi Grossman; George Fitzmaurice
We describe Sketch-Sketch Revolution, a new tutorial system that allows any user to experience the success of drawing content previously created by an expert artist. Sketch-Sketch Revolution not only guides users through the application user interface, it also provides assistance with the actual sketching. In addition, the system offers an authoring tool that enables artists to create content and then automatically generates a tutorial from their recorded workflow history. Sketch-Sketch Revolution is a unique hybrid tutorial system that combines in-product, content-centric and reactive tutorial methods to provide an engaging learning experience. A qualitative user study showed that our system successfully taught users how to interact with a drawing application user interface, gave users confidence they could recreate expert content, and was uniformly considered useful and easy to use.
Elasticurves: exploiting stroke dynamics and inertia for the real-time neatening of sketched 2D curves BIBAFull-Text 383-392
  Yannick Thiel; Karan Singh; Ravin Balakrishnan
Elasticurves present a novel approach to neaten sketches in real-time, resulting in curves that combine smoothness with user-intended detail. Inspired by natural variations in stroke speed when drawing quickly or with precision, we exploit stroke dynamics to distinguish intentional fine detail from stroke noise. Combining inertia and stroke dynamics, elasticurves can be imagined as the trace of a pen attached to the user by an oscillation-free elastic band. Sketched quickly, the elasticurve spatially lags behind the stroke, smoothing over stroke detail, but catches up and matches the input stroke at slower speeds. Connectors, such as lines or circular-arcs link the evolving elasticurve to the next input point, growing the curve by a responsiveness fraction along the connector. Responsiveness is calibrated, to reflect drawing skill or device noise. Elasticurves are theoretically sound and robust to variations in stroke sampling. Practically, they neaten digital strokes in real-time while retaining the modeless and visceral feel of pen on paper.
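The update rule described in the abstract, growing the curve by a responsiveness fraction along a connector toward each new input point, can be sketched with straight-line connectors only; the default responsiveness value is an assumption, and the paper also supports circular-arc connectors and calibration.

```python
# Sketch of the elasticurve update: the curve endpoint moves a fixed
# "responsiveness" fraction toward each new input point, so fast (widely
# spaced) samples are smoothed while slow, careful strokes are followed closely.
def elasticurve(stroke_points, responsiveness=0.3):
    """stroke_points: list of (x, y) input samples; returns the neatened polyline."""
    if not stroke_points:
        return []
    curve = [stroke_points[0]]
    for (tx, ty) in stroke_points[1:]:
        cx, cy = curve[-1]
        curve.append((cx + responsiveness * (tx - cx),
                      cy + responsiveness * (ty - cy)))
    return curve
```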
ReVision: automated classification, analysis and redesign of chart images BIBAFull-Text 393-402
  Manolis Savva; Nicholas Kong; Arti Chhajta; Li Fei-Fei; Maneesh Agrawala; Jeffrey Heer
Poorly designed charts are prevalent in reports, magazines, books and on the Web. Most of these charts are only available as bitmap images; without access to the underlying data it is prohibitively difficult for viewers to create more effective visual representations. In response we present ReVision, a system that automatically redesigns visualizations to improve graphical perception. Given a bitmap image of a chart as input, ReVision applies computer vision and machine learning techniques to identify the chart type (e.g., pie chart, bar chart, scatterplot, etc.). It then extracts the graphical marks and infers the underlying data. Using a corpus of images drawn from the web, ReVision achieves image classification accuracy of 96% across ten chart categories. It also accurately extracts marks from 79% of bar charts and 62% of pie charts, and from these charts it successfully extracts data from 71% of bar charts and 64% of pie charts. ReVision then applies perceptually-based design principles to populate an interactive gallery of redesigned charts. With this interface, users can view alternative chart designs and retarget content to different visual styles.
Calibration games: making calibration tasks enjoyable by adding motivating game elements BIBAFull-Text 403-412
  David R. Flatla; Carl Gutwin; Lennart E. Nacke; Scott Bateman; Regan L. Mandryk
Interactive systems often require calibration to ensure that input and output are optimally configured. Without calibration, user performance can degrade (e.g., if an input device is not adjusted for the user's abilities), errors can increase (e.g., if color spaces are not matched), and some interactions may not be possible (e.g., use of an eye tracker). The value of calibration is often lost, however, because many calibration processes are tedious and unenjoyable, and many users avoid them altogether. To address this problem, we propose calibration games that gather calibration data in an engaging and entertaining manner. To facilitate the creation of calibration games, we present design guidelines that map common types of calibration to core tasks, and then to well-known game mechanics. To evaluate the approach, we developed three calibration games and compared them to standard procedures. Users found the game versions significantly more enjoyable than regular calibration procedures, without compromising the quality of the data. Calibration games are a novel way to motivate users to carry out calibrations, thereby improving the performance and accuracy of many human-computer systems.
onNote: playing printed music scores as a musical instrument BIBAFull-Text 413-422
  Yusuke Yamamoto; Hideaki Uchiyama; Yasuaki Kakehi
This paper presents a novel musical performance system named onNote that directly utilizes printed music scores as a musical instrument. This system can make users believe that sound is indeed embedded on the music notes in the scores. The users can play music simply by placing, moving and touching the scores under a desk lamp equipped with a camera and a small projector. By varying the movement, the users can control the playing sound and the tempo of the music. To develop this system, we propose an image processing based framework for retrieving music from a music database by capturing printed music scores. From a captured image, we identify the scores by matching them with the reference music scores, and compute the position and pose of the scores with respect to the camera. By using this framework, we can develop novel types of musical interactions.
Peripheral paced respiration: influencing user physiology during information work BIBAFull-Text 423-428
  Neema Moraveji; Ben Olson; Truc Nguyen; Mahmoud Saadat; Yaser Khalighi; Roy Pea; Jeffrey Heer
We present the design and evaluation of a technique for influencing user respiration by integrating respiration-pacing methods into the desktop operating system in a peripheral manner. Peripheral paced respiration differs from prior techniques in that it does not require the user's full attention. We conducted a within-subjects study to evaluate the efficacy of peripheral paced respiration, as compared to no feedback, in an ecologically valid environment. Participant respiration decreased significantly in the pacing condition. Upon further analysis, we attribute this difference to a significant decrease in breath rate while the intermittent pacing feedback is active, rather than a persistent change in respiratory pattern. The results have implications for researchers in physiological computing, biofeedback designers, and human-computer interaction researchers concerned with user stress and affect.

Keynote address 2

Sex, food, and words: the hidden meanings behind everyday language BIBAFull-Text 429-430
  Dan Jurafsky
Language is a subtle and powerful tool for communication. But the words we use also provide a rich mine of information for the social scientist. The history of words like "ketchup", "ceviche", or "dessert" tells us about the relationships between the superpowers who dominated the globe 500 or 1000 years ago. The words on the back of potato chip packages can demonstrate popular attitudes toward social class. And the names we give ice cream flavors may be an evolutionary reflex of the attempt by early mammals to appear larger than their competitors. The language of dating is just as informative as the language of food. In experiments with speed dating, work in our lab shows that we can detect flirtation or other stances in men and women on dates, just by looking at linguistic features like their pitch, their use of negative words like "can't" or "don't", or how often they use hedges like "sort of" or "kind of". The language of these two popular topics of conversation, food and dating, can teach us a lot about history, culture, and psychology.

Mobile

SideBySide: ad-hoc multi-user interaction with handheld projectors BIBAFull-Text 431-440
  Karl D. D. Willis; Ivan Poupyrev; Scott E. Hudson; Moshe Mahler
We introduce SideBySide, a system designed for ad-hoc multi-user interaction with handheld projectors. SideBySide uses device-mounted cameras and hybrid visible/infrared light projectors to track multiple independent projected images in relation to one another. This is accomplished by projecting invisible fiducial markers in the near-infrared spectrum. Our system is completely self-contained and can be deployed as a handheld device without instrumentation of the environment. We present the design and implementation of our system including a hybrid handheld projector to project visible and infrared light, and techniques for tracking projected fiducial markers that move and overlap. We introduce a range of example applications that demonstrate the applicability of our system to real-world scenarios such as mobile content exchange, gaming, and education.
OmniTouch: wearable multitouch interaction everywhere BIBAFull-Text 441-450
  Chris Harrison; Hrvoje Benko; Andrew D. Wilson
OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces. Beyond the shoulder-worn system, there is no instrumentation of the user or environment. Foremost, the system allows the wearer to use their hands, arms and legs as graphical, interactive surfaces. Users can also transiently appropriate surfaces from the environment to expand the interactive area (e.g., books, walls, tables). On such surfaces -- without any calibration -- OmniTouch provides capabilities similar to that of a mouse or touchscreen: X and Y location in 2D interfaces and whether fingers are "clicked" or hovering, enabling a wide variety of interactions. Reliable operation on the hands, for example, requires buttons to be 2.3cm in diameter. Thus, it is now conceivable that anything one can do on today's mobile devices, they could do in the palm of their hand.
Visual separation in mobile multi-display environments BIBAFull-Text 451-460
  Jessica R. Cauchard; Markus Löchtefeld; Pourang Irani; Johannes Schoening; Antonio Krüger; Mike Fraser; Sriram Subramanian
Projector phones, handheld game consoles and many other mobile devices increasingly include more than one display, and therefore present a new breed of mobile Multi-Display Environments (MDEs) to users. Existing studies illustrate the effects of visual separation between displays in MDEs and suggest interaction techniques that mitigate these effects. Currently, mobile devices with heterogeneous displays such as projector phones are often designed without reference to visual separation issues; therefore it is critical to establish whether concerns and opportunities raised in the existing MDE literature apply to the emerging category of Mobile MDEs (MMDEs). This paper investigates the effects of visual separation in the context of MMDEs and contrasts these with fixed MDE results, and explores design factors for Mobile MDEs. Our study uses a novel eye-tracking methodology for measuring switches in visual context between displays and identifies that MMDEs offer increased design flexibility over traditional MDEs in terms of visual separation. We discuss these results and identify several design implications.
The 1line keyboard: a QWERTY layout in a single line BIBAFull-Text 461-470
  Frank Chun Yat Li; Richard T. Guy; Koji Yatani; Khai N. Truong
Current soft QWERTY keyboards often consume a large portion of the screen space on portable touchscreens. This space consumption can diminish the overall user experience on these devices. In this paper, we present the 1Line keyboard, a soft QWERTY keyboard that is 140 pixels tall (in landscape mode) and 40% of the height of the native iPad QWERTY keyboard. Our keyboard condenses the three rows of keys in the normal QWERTY layout into a single line with eight keys. The sizing of the eight keys is based on users' mental layout of a QWERTY keyboard on an iPad. The system disambiguates the word the user types based on the sequence of keys pressed. The user can use flick gestures to perform backspace and enter, and tap on the bezel below the keyboard to input a space. Through an evaluation, we show that participants are able to quickly learn how to use the 1Line keyboard and type at a rate of over 30 WPM after just five 20-minute typing sessions. Using a keystroke level model, we predict the peak expert text entry rate with the 1Line keyboard to be 66-68 WPM.
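To make the sequence-based disambiguation concrete, here is a minimal Python sketch: each letter maps to one of eight keys and candidate words are looked up by their key sequence. The eight-letter grouping and the tiny dictionary are illustrative assumptions, not the layout or language model from the paper.
  # Minimal sketch of sequence-based word disambiguation. The eight-key grouping
  # below is a hypothetical QWERTY-column split, not the paper's actual layout.
  KEY_GROUPS = ["qaz", "wsx", "edc", "rfv", "tgb", "yhn", "ujm", "iklop"]
  KEY_OF = {ch: i for i, group in enumerate(KEY_GROUPS) for ch in group}

  def key_sequence(word):
      """Encode a word as the sequence of key indices the user would tap."""
      return tuple(KEY_OF[ch] for ch in word.lower())

  def build_index(dictionary):
      """Group dictionary words by key sequence so typed sequences resolve to candidates."""
      index = {}
      for word in dictionary:
          index.setdefault(key_sequence(word), []).append(word)
      return index

  index = build_index(["hello", "world", "uist", "keyboard"])
  print(index.get(key_sequence("hello"), []))   # -> ['hello']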
1 thumb, 4 buttons, 20 words per minute: design and evaluation of H4-writer BIBAFull-Text 471-480
  I. Scott MacKenzie; R. William Soukoreff; Joanna Helga
We present what we believe is the most efficient and quickest four-key text entry method available. H4-Writer uses Huffman coding to assign minimized key sequences to letters, with full access to error correction, punctuation, digits, modes, etc. The key sequences are learned quickly, and support eyes-free entry. With KSPC = 2.321, the effort to enter text is comparable to multitap on a mobile phone keypad; yet multitap requires nine keys. In a longitudinal study with six participants, an average text entry speed of 20.4 wpm was observed in the 10th session. Error rates were under 1%. To improve external validity, an extended session was included that required input of punctuation and other symbols. Entry speed dropped only by about 3 wpm, suggesting participants quickly leveraged their acquired skill with H4-Writer to access advanced features.
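The core idea, assigning each symbol a short sequence over four keys via Huffman coding, can be sketched as follows; the letter frequencies in the usage line are made up, and this toy omits the paper's full symbol set and mode structure. KSPC is then simply the frequency-weighted average code length.
  import heapq

  def huffman_4ary(freqs):
      """Assign each symbol a sequence over four keys (labelled 0-3) using
      4-ary Huffman coding, so frequent symbols get shorter key sequences."""
      heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
      # Pad with zero-frequency dummies so every merge can combine exactly 4 nodes.
      while (len(heap) - 1) % 3 != 0:
          heap.append((0.0, len(heap), [None]))
      heapq.heapify(heap)
      codes = {s: "" for s in freqs}
      counter = len(heap)
      while len(heap) > 1:
          group = [heapq.heappop(heap) for _ in range(4)]
          merged, total = [], 0.0
          for key, (f, _, symbols) in enumerate(group):
              total += f
              for s in symbols:
                  if s is not None:
                      codes[s] = str(key) + codes[s]   # prepend this level's key
              merged.extend(symbols)
          heapq.heappush(heap, (total, counter, merged))
          counter += 1
      return codes

  # Toy usage with made-up frequencies (the paper uses English letter statistics):
  print(huffman_4ary({"space": 0.18, "e": 0.13, "t": 0.09, "a": 0.08, "q": 0.001}))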
Pub -- point upon body: exploring eyes-free interaction and methods on an arm BIBAFull-Text 481-488
  Shu-Yang Lin; Chao-Huai Su; Kai-Yin Cheng; Rong-Hao Liang; Tzu-Hao Kuo; Bing-Yu Chen
This paper presents a novel interaction system, PUB (Point Upon Body), that explores eyes-free interaction in a personal space by letting users tap on their own arms and receive haptic feedback from their own skin. Two user studies examine how precisely users can interact with their forearms and how they behave when operating in their arm space. The results show that, with iterative practice, users can divide the space between wrist and elbow into at most six distinct points, and that each user's division pattern is unique. Based on design principles drawn from these observations, the PUB system demonstrates how interaction design can benefit from these findings. Two scenarios, remote display control and mobile device control, are demonstrated using an ultrasonic device attached to the user's wrist to detect tapped positions.

Sensing

SpeckleSense: fast, precise, low-cost and compact motion sensing using laser speckle BIBAFull-Text 489-498
  Jan Zizka; Alex Olwal; Ramesh Raskar
Motion sensing is of fundamental importance for user interfaces and input devices. In applications where optical sensing is preferred, traditional camera-based approaches can be prohibitive due to limited resolution, low frame rates, and the computational power required for image processing. We introduce a novel set of motion-sensing configurations based on laser speckle sensing that are particularly suitable for human-computer interaction. The underlying principles allow these configurations to be fast, precise, extremely compact, and low cost. We provide an overview and design guidelines for laser speckle sensing for user interaction and introduce four general speckle projector/sensor configurations. We describe a set of prototypes and applications that demonstrate the versatility of our laser speckle sensing techniques.
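As a rough illustration of how speckle motion can be turned into displacement, the sketch below correlates two successive 1D sensor frames and returns the shift with the highest normalized correlation; the frame size, search range, and synthetic data are assumptions for illustration, not the paper's sensing pipeline.
  import numpy as np

  def estimate_shift(prev_frame, cur_frame, max_shift=32):
      """Estimate lateral displacement (in samples) between two 1D speckle frames
      by picking the shift with the highest normalized cross-correlation."""
      best_shift, best_score = 0, -np.inf
      for s in range(-max_shift, max_shift + 1):
          a = prev_frame[max(0, s): len(prev_frame) + min(0, s)]
          b = cur_frame[max(0, -s): len(cur_frame) + min(0, -s)]
          score = np.dot(a - a.mean(), b - b.mean()) / (a.std() * b.std() * len(a) + 1e-9)
          if score > best_score:
              best_shift, best_score = s, score
      return best_shift

  # Synthetic check: the second frame is the first one shifted by 5 samples.
  speckle = np.random.default_rng(0).random(256)
  print(estimate_shift(speckle[:200], speckle[5:205]))   # -> 5 under this sign convention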
Area-based photo-plethysmographic sensing method for the surfaces of handheld devices BIBAFull-Text 499-508
  Hiroshi Chigira; Atsuhiko Maeda; Minoru Kobayashi
Capturing the user's vital signs is an urgent goal in the HCI community. Photo-plethysmography (PPG) is one approach; it can collect data from the fingertips that reflect the state of the user's autonomic nervous system (ANS), offering new possibilities such as mental stress measurement and drowsiness detection. Our goal is to place PPG sensors on the surfaces of ordinary devices such as mice, smartphones, and steering wheels, enabling unobtrusive monitoring without the burden of additional wearable sensors. Unfortunately, current PPG sensors have a very small sensing area, so even when a sensor is attached to the surface of a device, the user is forced to align and hold a finger on the sensor point, which degrades device usability. To solve this problem, we propose an area-based sensing method that relaxes the alignment requirement. The proposed method uses two thin acrylic plates, a diffuser plate and a detection plate, as an IR waveguide, and yields very thin sensing surfaces that can also be gently curved. An experiment compares the proposed method to a conventional point sensor in terms of LF/HF discrimination performance with participants in the resting state; the proposed method offers comparable sensing performance with superior usability.
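For readers unfamiliar with LF/HF analysis, the sketch below shows one standard way to compute the ratio from detected heartbeat times: interpolate the inter-beat intervals to a uniform series and integrate spectral power in the low- and high-frequency bands. The 4 Hz resampling rate and band limits follow common HRV practice and are assumptions here, not necessarily the exact pipeline used in the paper.
  import numpy as np

  def lf_hf_ratio(beat_times):
      """LF/HF from heartbeat timestamps (seconds): interpolate inter-beat intervals
      to a uniform 4 Hz series and integrate spectral power in the standard bands."""
      ibis = np.diff(beat_times)                              # inter-beat intervals
      grid = np.arange(beat_times[1], beat_times[-1], 0.25)   # uniform 4 Hz time grid
      series = np.interp(grid, beat_times[1:], ibis)
      series = series - series.mean()
      freqs = np.fft.rfftfreq(len(series), d=0.25)
      power = np.abs(np.fft.rfft(series)) ** 2
      lf = power[(freqs >= 0.04) & (freqs < 0.15)].sum()      # low-frequency band
      hf = power[(freqs >= 0.15) & (freqs < 0.40)].sum()      # high-frequency band
      return lf / (hf + 1e-12)

  beats = np.cumsum(0.8 + 0.05 * np.sin(0.3 * np.arange(200)))   # synthetic beat times
  print(lf_hf_ratio(beats))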
Detecting shape deformation of soft objects using directional photoreflectivity measurement BIBAFull-Text 509-516
  Yuta Sugiura; Gota Kakehi; Anusha Withana; Calista Lee; Daisuke Sakamoto; Maki Sugimoto; Masahiko Inami; Takeo Igarashi
We present the FuwaFuwa sensor module, a round, hand-size, wireless device for measuring the shape deformations of soft objects such as cushions and plush toys. It can be embedded in typical soft objects in the household without complex installation procedures and without spoiling the softness of the object because it requires no physical connection. Six LEDs in the module emit IR light in six orthogonal directions, and six corresponding photosensors measure the reflected light energy. One can easily convert almost any soft object into a touch-input device that can detect both touch position and surface displacement by embedding multiple FuwaFuwa sensor modules in the object. A variety of example applications illustrate the utility of the FuwaFuwa sensor module. An evaluation of the proposed deformation measurement technique confirms its effectiveness.
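A very simple way to combine several embedded modules into a touch-position estimate is a deviation-weighted centroid over the modules' known positions, sketched below; the module positions, readings, and the centroid heuristic itself are illustrative assumptions rather than the paper's calibrated model.
  import numpy as np

  def estimate_touch(module_positions, readings, baseline):
      """Deviation-weighted centroid over embedded sensor modules: modules whose
      reflectivity reading departs most from the undeformed baseline pull the
      estimate toward their known position. A heuristic, not the paper's model."""
      weights = np.abs(np.asarray(readings, float) - np.asarray(baseline, float))
      weights = weights / (weights.sum() + 1e-9)
      return weights @ np.asarray(module_positions, float)

  print(estimate_touch([[0, 0], [10, 0], [5, 8]],      # module positions (cm)
                       [0.9, 0.2, 0.3],                # current readings
                       [0.5, 0.2, 0.3]))               # undeformed baseline -> near module 0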
Modular and deformable touch-sensitive surfaces based on time domain reflectometry BIBAFull-Text 517-526
  Raphael Wimmer; Patrick Baudisch
Time domain reflectometry, a technique originally used for diagnosing cable faults, can also locate where a cable is being touched. In this paper, we explore how to extend time domain reflectometry in order to touch-enable thin, modular, and deformable surfaces and devices. We demonstrate how to use this approach to make smart clothing and to rapidly prototype touch-sensitive objects of arbitrary shape. To accomplish this, we extend time domain reflectometry in three ways: (1) Thin: we demonstrate how to run time domain reflectometry on a single wire, which allows us to touch-enable thin metal objects such as guitar strings. (2) Modular: we present a two-pin connector system that allows users to daisy-chain touch-sensitive segments. (3) Deformable: we create deformable touch devices by mounting stretchable wire patterns onto elastic tape and meshes. We illustrate these enhancements with 13 prototypes and a series of performance measurements.
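The underlying physics is compact: a pulse launched down the wire partially reflects at the impedance change a touch creates, and the reflection's round-trip delay gives the touch location. A back-of-the-envelope version, assuming a typical cable velocity factor (an assumption, not a value measured in the paper):
  # Back-of-the-envelope TDR localization (illustrative velocity factor, not measured).
  C = 299_792_458.0        # speed of light in vacuum, m/s
  VELOCITY_FACTOR = 0.66   # assumed propagation velocity relative to c

  def touch_distance(reflection_delay_s):
      """Distance along the wire to the impedance change caused by a touch.
      The pulse travels to the touch point and back, hence the factor of 2."""
      return C * VELOCITY_FACTOR * reflection_delay_s / 2.0

  print(touch_distance(5e-9))   # a 5 ns round trip -> roughly 0.49 m along the wire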
deForm: an interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch BIBAFull-Text 527-536
  Sean Follmer; Micah Johnson; Edward Adelson; Hiroshi Ishii
We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured light scanning of a malleable surface of interaction. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools, through capacitive sensing on top of the input surface. Finally we motivate our device through a number of sample applications.
A new angle on cheap LCDs: making positive use of optical distortion BIBAFull-Text 537-540
  Chris Harrison; Scott E. Hudson
Most LCD screens exhibit color distortions when viewed at oblique angles. Engineers have invested significant time and resources to alleviate this effect. However, the massive manufacturing base, as well as millions of in-the-wild monitors, means this effect will be common for many years to come. We take an opposite stance, embracing these optical peculiarities, and consider how they can be used in productive ways. This paper discusses how a special palette of colors can yield visual elements that are invisible when viewed straight-on, but visible at oblique angles. In essence, this allows conventional, unmodified LCD screens to output two images simultaneously -- a feature normally only available in far more complex setups. We enumerate several applications that could take advantage of this ability.
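Conceptually, exploiting the effect only requires composing an image from a pair of colors that the target display renders identically head-on but differently off-axis. A minimal sketch, assuming such a pair has already been found by per-display calibration (the RGB values below are purely illustrative, not a measured palette):
  from PIL import Image

  BASE = (128, 128, 128)
  HIDDEN = (132, 126, 129)   # hypothetical "angle-dependent metamer" of BASE

  def embed(mask_img):
      """mask_img: a 1-bit PIL image; nonzero pixels carry the hidden content."""
      out = Image.new("RGB", mask_img.size, BASE)
      out_px, mask_px = out.load(), mask_img.load()
      for y in range(mask_img.height):
          for x in range(mask_img.width):
              if mask_px[x, y]:
                  out_px[x, y] = HIDDEN
      return out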

3D

Direct and gestural interaction with relief: a 2.5D shape display BIBAFull-Text 541-548
  Daniel Leithinger; David Lakatos; Anthony DeVincenzi; Matthew Blackshaw; Hiroshi Ishii
Actuated shape output provides novel opportunities for experiencing, creating and manipulating 3D content in the physical world. While various shape displays have been proposed, a common approach utilizes an array of linear actuators to form 2.5D surfaces. Through identifying a set of common interactions for viewing and manipulating content on shape displays, we argue why input modalities beyond direct touch are required. The combination of freehand gestures and direct touch provides additional degrees of freedom and resolves input ambiguities, while keeping the locus of interaction on the shape output. To demonstrate the proposed combination of input modalities and explore applications for 2.5D shape displays, two example scenarios are implemented on a prototype system.
6D hands: markerless hand-tracking for computer aided design BIBAFull-Text 549-558
  Robert Wang; Sylvain Paris; Jovan Popović
Computer Aided Design (CAD) typically involves tasks such as adjusting the camera perspective and assembling pieces in free space that require specifying 6 degrees of freedom (DOF). The standard approach is to factor these DOFs into 2D subspaces that are mapped to the x and y axes of a mouse. This metaphor is inherently modal because one needs to switch between subspaces, and disconnects the input space from the modeling space. In this paper, we propose a bimanual hand tracking system that provides physically-motivated 6-DOF control for 3D assembly. First, we discuss a set of principles that guide the design of our precise, easy-to-use, and comfortable-to-use system. Based on these guidelines, we describe a 3D input metaphor that supports constraint specification classically used in CAD software, is based on only a few simple gestures, lets users rest their elbows on their desk, and works alongside the keyboard and mouse. Our approach uses two consumer-grade webcams to observe the user's hands. We solve the pose estimation problem with efficient queries of a precomputed database that relates hand silhouettes to their 3D configuration. We demonstrate efficient 3D mechanical assembly of several CAD models using our hand-tracking system.
KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera BIBAFull-Text 559-568
  Shahram Izadi; David Kim; Otmar Hilliges; David Molyneaux; Richard Newcombe; Pushmeet Kohli; Jamie Shotton; Steve Hodges; Dustin Freeman; Andrew Davison; Andrew Fitzgibbon
KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct geometrically precise 3D models of the physical scene in real time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline, are described in full. Uses of the core system for low-cost handheld scanning, and for geometry-aware augmented reality and physics-based interactions, are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.
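The volumetric fusion at the heart of KinectFusion-style pipelines is well documented: each depth frame is integrated into a truncated signed distance function (TSDF) volume by projecting every voxel into the frame and averaging the observed signed distances. The unoptimized sketch below assumes the volume origin sits at the world origin and omits the paper's GPU implementation and ICP-based pose tracking.
  import numpy as np

  def integrate_tsdf(tsdf, weights, depth, K, T_cam, voxel_size, trunc=0.03):
      """Fuse one depth frame (metres) into a truncated signed distance volume.
      tsdf/weights: (X, Y, Z) arrays; K: 3x3 intrinsics; T_cam: 4x4 world-to-camera
      pose. Volume origin is assumed at the world origin. CPU sketch only."""
      X, Y, Z = tsdf.shape
      ix, iy, iz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
      pts = np.stack([ix, iy, iz], -1).reshape(-1, 3) * voxel_size     # voxel centres, world coords
      cam = T_cam[:3, :3] @ pts.T + T_cam[:3, 3:4]                     # into the camera frame
      z = np.where(cam[2] > 1e-6, cam[2], np.inf)                      # guard the projection
      u = np.round(K[0, 0] * cam[0] / z + K[0, 2]).astype(int)
      v = np.round(K[1, 1] * cam[1] / z + K[1, 2]).astype(int)
      valid = (cam[2] > 1e-6) & (u >= 0) & (u < depth.shape[1]) & (v >= 0) & (v < depth.shape[0])
      sdf = np.full(pts.shape[0], -np.inf)
      sdf[valid] = depth[v[valid], u[valid]] - cam[2][valid]           # signed distance along the ray
      update = valid & (sdf > -trunc)                                  # skip voxels far behind the surface
      new = np.clip(sdf / trunc, -1.0, 1.0)
      t, w = tsdf.reshape(-1), weights.reshape(-1)                     # flat views (arrays must be contiguous)
      t[update] = (t[update] * w[update] + new[update]) / (w[update] + 1)
      w[update] += 1                                                   # running weighted average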
Vermeer: direct interaction with a 360° viewable 3D display BIBAFull-Text 569-576
  Alex Butler; Otmar Hilliges; Shahram Izadi; Steve Hodges; David Molyneaux; David Kim; Danny Kong
We present Vermeer, a novel interactive 360° viewable 3D display. Like prior systems in this area, Vermeer provides viewpoint-corrected, stereoscopic 3D graphics to simultaneous users, 360° around the display, without the need for eyewear or other user instrumentation. Our goal is to overcome an issue inherent in these prior systems, which -- typically due to moving parts -- restrict interactions to outside the display volume. Our system leverages a known optical illusion to demonstrate, for the first time, how users can reach into and directly touch 3D objects inside the display volume. Vermeer is intended to be a new enabling technology for interaction, and we therefore describe our hardware implementation in full, focusing on the challenges of combining this optical configuration with an existing approach for creating a 360° viewable 3D display. Initially we demonstrate direct in-volume interaction by sensing user input with a Kinect camera placed above the display. However, by exploiting the properties of the optical configuration, we also demonstrate novel prototypes for fully integrated input sensing alongside simultaneous display. We conclude by discussing limitations, implications for interaction, and ideas for future work.
IrCube tracker: an optical 6-DOF tracker based on LED directivity BIBAFull-Text 577-586
  Seongkook Heo; Jaehyun Han; Sangwon Choi; Seunghwan Lee; Geehyuk Lee; Hyong-Euk Lee; SangHyun Kim; Won-Chul Bang; DoKyoon Kim; ChangYeong Kim
Six-degrees-of-freedom (6-DOF) trackers, once used mainly in professional computer applications, are now in demand for everyday consumer applications. With the requirements of consumer electronics in mind, we designed an optical 6-DOF tracker in which a few photo-sensors track the position and orientation of an LED cluster. The operating principle of the tracker is essentially source localization by solving an inverse problem. We implemented a prototype system for a TV viewing environment, verified the feasibility of the operating principle, and evaluated the basic performance of the prototype system in terms of accuracy and speed. We also examined its applicability to other environments, such as a tabletop computer, a tablet computer, and a mobile spatial interaction environment.
Toucheo: multitouch and stereo combined in a seamless workspace BIBAFull-Text 587-592
  Martin Hachet; Benoit Bossavit; Aurélie Cohé; Jean-Baptiste de la Rivière
We propose a new system that efficiently combines direct multitouch interaction with co-located 3D stereoscopic visualization. In our approach, users benefit from well-known 2D metaphors and widgets displayed on a monoscopic touchscreen, while visualizing occlusion-free 3D objects floating above the surface at an optically correct distance. Technically, a horizontal semi-transparent mirror is used to reflect 3D images produced by a stereoscopic screen, while the user's hand as well as a multitouch screen located below this mirror remain visible. By registering the 3D virtual space and the physical space, we produce a rich and unified workspace where users benefit simultaneously from the advantages of both direct and indirect interaction, and from 2D and 3D visualizations. A pilot usability study shows that this combination of technology provides a good user experience.

Pointing

Harpoon selection: efficient selections for ungrouped content on large pen-based surfaces BIBAFull-Text 593-602
  Jakob Leitner; Michael Haller
In this paper, we present the Harpoon selection tool, a novel selection technique specifically designed for interactive whiteboards. The tool combines area cursors and crossing to perform complex selections amongst a large number of unsorted, ungrouped items. It is optimized for large-scale pen-based surfaces and works well in both dense and sparse surroundings. We describe a list of key features relevant to the design of the tool and provide a detailed description of both the mechanics as well as the feedback of the tool. The results of a user study are described and analyzed to confirm our design. The study shows that the Harpoon tool performs significantly faster than Tapping and Lassoing.
No more bricolage!: methods and tools to characterize, replicate and compare pointing transfer functions BIBAFull-Text 603-614
  Géry Casiez; Nicolas Roussel
Transfer functions are the only pointing facilitation technique actually used in modern graphical interfaces involving the indirect control of an on-screen cursor. But despite their general use, very little is known about them. We present EchoMouse, a device we created to characterize the transfer functions of any system, and libpointing, a toolkit that we developed to replicate and compare the ones used by Windows, OS X, and Xorg. We describe these functions and report on an experiment that compared the default functions of the three systems. Our results show that these default functions improve performance by up to 24% compared to a unitless constant CD gain. We also found significant differences between them: the OS X function improves performance for small target widths but reduces it by up to 9% for larger widths compared to Windows and Xorg. These results notably suggest replacing the constant CD gain function commonly used by HCI researchers with the default functions of the systems under study.
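To make the comparison concrete, a pointing transfer function simply maps device speed to a CD gain that multiplies the motion. The toy acceleration curve below is a made-up illustration for contrast with a constant gain, not the Windows, OS X, or Xorg function measured in the paper.
  def constant_gain(device_speed, gain=2.0):
      """Unitless constant CD gain: cursor speed is a fixed multiple of device speed."""
      return gain * device_speed

  def accelerated(device_speed, low=1.0, high=6.0, knee=0.10):
      """Toy acceleration curve (not any OS's actual function): low gain for slow,
      precise movements, ramping to a higher gain once speed exceeds `knee` m/s."""
      gain = low + (high - low) * min(device_speed / knee, 1.0)
      return gain * device_speed

  for v in (0.01, 0.05, 0.2):                     # device speeds in metres per second
      print(v, constant_gain(v), accelerated(v))  # resulting cursor speeds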
FingerFlux: near-surface haptic feedback on tabletops BIBAFull-Text 615-620
  Malte Weiss; Chat Wacharamanotham; Simon Voelker; Jan Borchers
We introduce FingerFlux, an output technique to generate near-surface haptic feedback on interactive tabletops. Our system combines electromagnetic actuation with permanent magnets attached to the user's hand. FingerFlux lets users feel the interface before touching, and can create both attracting and repelling forces. This enables applications such as reducing drifting, adding physical constraints to virtual controls, and guiding the user without visual output. We show that users can feel vibration patterns up to 35 mm above our table, and that FingerFlux can significantly reduce drifting when operating on-screen buttons without looking.
Force gestures: augmenting touch screen gestures with normal and tangential forces BIBAFull-Text 621-626
  Seongkook Heo; Geehyuk Lee
Force gestures are touch screen gestures augmented by the normal and tangential forces applied to the screen. To study the feasibility of force gestures on a mobile touch screen, we implemented a prototype touch screen device that can sense the normal and tangential forces of a touch gesture. We also designed two example applications, a web browser and an e-book reader, that use force gestures for their primary actions. We conducted a user study with the prototype and the applications to examine the characteristics of force gestures and the effectiveness of their mapping to primary actions. The study also revealed interesting usability issues and yielded useful user feedback about force gestures and their mapping to GUI actions.
TapSense: enhancing finger interaction on touch surfaces BIBAFull-Text 627-636
  Chris Harrison; Julia Schwarz; Scott E. Hudson
We present TapSense, an enhancement to touch interaction that allows conventional surfaces to identify the type of object being used for input. This is achieved by segmenting and classifying sounds resulting from an object's impact. For example, the diverse anatomy of a human finger allows different parts to be recognized including the tip, pad, nail and knuckle -- without having to instrument the user. This opens several new and powerful interaction opportunities for touch input, especially in mobile devices, where input is extremely constrained. Our system can also identify different sets of passive tools. We conclude with a comprehensive investigation of classification accuracy and training implications. Results show our proof-of-concept system can support sets with four input types at around 95% accuracy. Small, but useful input sets of two (e.g., pen and finger discrimination) can operate in excess of 99% accuracy.
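A rough flavor of such acoustic classification: summarize each impact snippet with coarse spectral features and compare against per-class templates. The feature choice and the tiny nearest-centroid stand-in below are assumptions for illustration; the paper's actual features and classifier are not reproduced here.
  import numpy as np

  def spectral_features(snippet, n_bands=16):
      """Summarize a short impact sound (1D numpy array) as log energy in coarse bands."""
      spectrum = np.abs(np.fft.rfft(snippet * np.hanning(len(snippet)))) ** 2
      return np.log1p([band.sum() for band in np.array_split(spectrum, n_bands)])

  class NearestCentroid:
      """Tiny stand-in classifier; the paper's actual features and classifier differ."""
      def fit(self, X, y):
          self.centroids = {label: np.mean([x for x, l in zip(X, y) if l == label], axis=0)
                            for label in set(y)}
          return self
      def predict(self, x):
          return min(self.centroids, key=lambda label: np.linalg.norm(x - self.centroids[label]))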