
Proceedings of the 2010 ACM Symposium on User Interface Software and Technology

Fullname: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology
Editors: Ken Perlin; Mary Czerwinski; Rob Miller
Location: New York City
Dates: 2010-Oct-03 to 2010-Oct-06
Publisher: ACM
Standard No: ISBN 1-4503-0271-8, 978-1-4503-0271-5; ACM DL: 1866218; hcibib: UIST10
Papers: 90
Pages: 462
Links: Conference Home Page
  1. Keynote
  2. Freeform input
  3. AI and toolkits
  4. Input
  5. Frameworks
  6. Space and time
  7. Artist talk
  8. Feet or TOE CHI
  9. Intelligence
  10. Surface
  11. Social
  12. Keynote
  13. Doctoral consortium
  14. Demonstrations
  15. Posters

Keynote

Intimacy versus privacy, pp. 1-2
  Marvin Minsky
When you talk to a person, it's safe to assume that you both share large bodies of "common sense knowledge." But when you converse with a programmed computer, neither of you is likely to know much about what the other one knows.
   Indeed, in some respects this is desirable -- as when we're concerned with our privacy. We don't want strangers to know our most personal goals, or all the resources that we may control.
   However, when we turn to our computers for help, we'll want that relationship to change -- because now it is in our interest for those systems to understand our aims and goals, as well as our fears and phobias. Indeed, the help they can offer will depend on the extent to which those processes "know us as individuals".
   Issues like these will always arise whenever we need a new interface -- and as one of my teachers wrote long ago, "The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."
   Indeed, the '60s and '70s saw substantial advances towards this, but it seems to me that progress then slowed down. If so, perhaps this was partly because the AI community moved from semantic and heuristic methods towards more formal (but less flexible) statistical schemes. So now I'd like to see more researchers remedy this by developing systems that use more commonsense knowledge.
Keywords: privacy

Freeform input

Imaginary interfaces: spatial interaction with empty hands and without visual feedback, pp. 3-12
  Sean Gustafson; Daniel Bierwirth; Patrick Baudisch
Screen-less wearable devices allow for the smallest form factor and thus the maximum mobility. However, current screen-less devices support only buttons and gestures; pointing is not supported because users have nothing to point at. We challenge the notion that spatial interaction requires a screen and propose a method for bringing spatial interaction to screen-less devices.
   We present Imaginary Interfaces, screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback. Unlike projection-based solutions, such as Sixth Sense, all visual "feedback" takes place in the user's imagination. Users define the origin of an imaginary space by forming an L-shaped coordinate cross with their non-dominant hand. Users then point and draw with their dominant hand in the resulting space.
   With three user studies we investigate the question: To what extent can users interact spatially with a user interface that exists only in their imagination? Participants created simple drawings, annotated existing drawings, and pointed at locations described in imaginary space. Our findings suggest that users' visual short-term memory can, in part, replace the feedback conventionally displayed on a screen.
Keywords: bimanual, computer vision, gesture, memory, mobile, screen-less, spatial, wearable
PhoneTouch: a technique for direct phone interaction on surfaces, pp. 13-16
  Dominik Schmidt; Fadi Chehimi; Enrico Rukzio; Hans Gellersen
PhoneTouch is a novel technique for integration of mobile phones and interactive surfaces. The technique enables use of phones to select targets on the surface by direct touch, facilitating for instance pick&drop-style transfer of objects between phone and surface. The technique is based on separate detection of phone touch events by the surface, which determines location of the touch, and by the phone, which contributes device identity. The device-level observations are merged based on correlation in time. We describe a proof-of-concept implementation of the technique, using vision for touch detection on the surface (including discrimination of finger versus phone touch) and acceleration features for detection by the phone.
Keywords: interaction techniques, interactive tabletops, mobile phones, personal devices, surface computing
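
To make the merging step concrete, here is a minimal Python sketch of the time-correlation idea; it is not the authors' implementation, and the event fields and the 50 ms matching window are assumptions for illustration.

    from dataclasses import dataclass

    WINDOW_S = 0.05  # assumed matching window; the paper merges events by time

    @dataclass
    class SurfaceTouch:
        t: float         # timestamp from the surface's vision-based tracker
        x: float
        y: float
        is_finger: bool  # the surface discriminates finger vs. phone contacts

    @dataclass
    class PhoneBump:
        t: float         # timestamp of an accelerometer spike on the phone
        device_id: str   # identity contributed by the phone

    def merge(surface_events, phone_events):
        """Pair each phone-contact touch with the closest-in-time phone bump."""
        merged = []
        for s in surface_events:
            if s.is_finger:
                continue  # ordinary multi-touch input, handled separately
            near = [b for b in phone_events if abs(b.t - s.t) <= WINDOW_S]
            if near:
                bump = min(near, key=lambda b: abs(b.t - s.t))
                merged.append((s.x, s.y, bump.device_id))  # located + identified
        return merged
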
Hands-on math: a page-based multi-touch and pen desktop for technical work and problem solving, pp. 17-26
  Robert Zeleznik; Andrew Bragdon; Ferdi Adeputra; Hsu-Sheng Ko
Students, scientists and engineers have to choose between the flexible, free-form input of pencil and paper and the computational power of Computer Algebra Systems (CAS) when solving mathematical problems. Hands-On Math is a multi-touch and pen-based system which attempts to unify these approaches by providing virtual paper that is enhanced to recognize mathematical notations as a means of providing in situ access to CAS functionality. Pages can be created and organized on a large pannable desktop, and mathematical expressions can be computed, graphed and manipulated using a set of uni- and bi-manual interactions which facilitate rapid exploration by eliminating tedious and error-prone transcription tasks. Analysis of a qualitative pilot evaluation indicates the potential of our approach and highlights usability issues with the novel techniques used.
Keywords: gestures, math, multi-touch, pages, paper, stylus
Pen + touch = new tools, pp. 27-36
  Ken Hinckley; Koji Yatani; Michel Pahud; Nicole Coddington; Jenny Rodenhouse; Andy Wilson; Hrvoje Benko; Bill Buxton
We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.
Keywords: bimanual, gestures, pen, systems, tabletop, tablets, touch

AI and toolkits

Gestalt: integrated support for implementation and analysis in machine learning, pp. 37-46
  Kayur Patel; Naomi Bancroft; Steven M. Drucker; James Fogarty; Andrew J. Ko; James Landay
We present Gestalt, a development environment designed to support the process of applying machine learning. While traditional programming environments focus on source code, we explicitly support both code and data. Gestalt allows developers to implement a classification pipeline, analyze data as it moves through that pipeline, and easily transition between implementation and analysis. An experiment shows this significantly improves the ability of developers to find and fix bugs in machine learning systems. Our discussion of Gestalt and our experimental observations provide new insight into general-purpose support for the machine learning process.
Keywords: gestalt, machine learning, software development
A framework for robust and flexible handling of inputs with uncertainty, pp. 47-56
  Julia Schwarz; Scott Hudson; Jennifer Mankoff; Andrew D. Wilson
New input technologies (such as touch), recognition based input (such as pen gestures) and next-generation interactions (such as inexact interaction) all hold the promise of more natural user interfaces. However, these techniques all create inputs with some uncertainty. Unfortunately, conventional infrastructure lacks a method for easily handling uncertainty, and as a result input produced by these technologies is often converted to conventional events as quickly as possible, leading to a stunted interactive experience. We present a framework for handling input with uncertainty in a systematic, extensible, and easy-to-manipulate fashion. To illustrate this framework, we present several traditional interactors which have been extended to provide feedback about uncertain inputs and to allow for the possibility that the input will, in the end, be judged wrong (or end up going to a different interactor). Our six demonstrations include tiny buttons that are manipulable using touch input, a text box that can handle multiple interpretations of spoken input, a scrollbar that can respond to inexactly placed input, and buttons which are easier to click for people with motor impairments. Our framework supports all of these interactions by carrying uncertainty forward all the way through selection of possible target interactors, interpretation by interactors, generation of (uncertain) candidate actions to take, and a mediation process that decides (in a lazy fashion) which actions should become final.
Keywords: ambiguity, input handling, recognition
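
The following toy sketch illustrates the general pattern the abstract describes -- weighted interpretations, per-interactor likelihoods, and lazy mediation. All class and method names are invented, and the dominance test is an arbitrary placeholder, not the paper's mediation policy.

    def dispatch(touch_samples, interactors):
        """touch_samples: [(x, y, weight)] -- several weighted interpretations
        of one uncertain touch, instead of a single (x, y) point."""
        # Let every interactor score how likely it is the intended target.
        candidates = []
        for it in interactors:
            likelihood = sum(w for (x, y, w) in touch_samples if it.contains(x, y))
            if likelihood > 0:
                candidates.append((likelihood, it.propose_action()))

        # Lazy mediation: commit an action only when one candidate clearly
        # dominates; otherwise keep the ambiguity alive (e.g., show feedback).
        candidates.sort(key=lambda c: c[0], reverse=True)
        if candidates and (len(candidates) == 1
                           or candidates[0][0] > 2 * candidates[1][0]):
            candidates[0][1].execute()
            return []
        return candidates  # unresolved alternatives, to be mediated later
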
TurKit: human computation algorithms on mechanical turk, pp. 57-66
  Greg Little; Lydia B. Chilton; Max Goldman; Robert C. Miller
Mechanical Turk (MTurk) provides an on-demand source of human computation. This provides a tremendous opportunity to explore algorithms which incorporate human computation as a function call. However, various systems challenges make this difficult in practice, and most uses of MTurk post large numbers of independent tasks. TurKit is a toolkit for prototyping and exploring algorithmic human computation, while maintaining a straight-forward imperative programming style. We present the crash-and-rerun programming model that makes TurKit possible, along with a variety of applications for human computation algorithms. We also present case studies of TurKit used for real experiments across different fields.
Keywords: human computation, mturk, toolkit
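
A toy reconstruction of the crash-and-rerun idea in Python (TurKit itself is JavaScript): costly or nondeterministic steps are memoized to a log, so the entire script can safely be re-executed from the top after a crash without repeating them. The log format and helper names are assumptions.

    import json, os

    LOG = "memo.log"  # assumed on-disk trace of completed steps
    _log = [json.loads(l) for l in open(LOG)] if os.path.exists(LOG) else []
    _pos = 0

    def once(step):
        """Run `step` at most once ever; on re-runs, replay its logged result."""
        global _pos
        if _pos < len(_log):
            result = _log[_pos]        # replayed from an earlier execution
        else:
            result = step()            # e.g., post a HIT and await answers
            with open(LOG, "a") as f:
                f.write(json.dumps(result) + "\n")
        _pos += 1
        return result

    # Imperative-looking human computation, stable across crashes and reruns:
    # text = once(lambda: ask_turkers("Describe this image"))
    # vote = once(lambda: ask_turkers("Is this accurate? " + text))
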
Mixture model based label association techniques for web accessibility, pp. 67-76
  Muhammad Asiful Islam; Yevgen Borodin; I. V. Ramakrishnan
An important aspect of making the Web accessible to blind users is ensuring that all important web page elements such as links, clickable buttons, and form fields have explicitly assigned labels. Properly labeled content is then correctly read out by screen readers, a dominant assistive technology used by blind users. In particular, improperly labeled form fields can critically impede online transactions such as shopping, paying bills, etc. with screen readers. Very often labels are not associated with form fields or are missing altogether, making form filling a challenge for blind users. Algorithms for associating a form element with one of several candidate labels in its vicinity must cope with the variability of the element's features, including the label's location relative to the element, its distance to the element, etc. Probabilistic models provide a natural machinery to reason with such uncertainties. In this paper we present a Finite Mixture Model (FMM) formulation of the label association problem. The variability of feature values is captured in the FMM by a mixture of random variables that are drawn from parameterized distributions. Then, the most likely label to be paired with a form element is computed by maximizing the log-likelihood of the feature data using the Expectation-Maximization algorithm. We also adapt the FMM approach for two related problems: assigning labels (from an external Knowledge Base) to form elements that have no candidate labels in their vicinity, and quickly identifying clickable elements such as add-to-cart, checkout, etc., used in online transactions, even when these elements do not have textual captions (e.g., image buttons without alternative text). We provide a quantitative evaluation of our techniques, as well as a user study with two blind subjects who used an aural web browser implementing our approach.
Keywords: aural web browser, blind user, context, mixture models, screen reader, web accessibility, web forms
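 
As a rough illustration of the decision rule, the sketch below scores each candidate label by its log-likelihood under a trained mixture and picks the argmax. The per-feature Gaussian form, the feature set, and the max-over-components shortcut are simplifications of the paper's EM-trained model.

    import math

    def log_gauss(x, mu, var):
        return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

    def best_label(field, candidates, components):
        """components: [(weight, {feature: (mu, var)})] learned via EM."""
        def loglik(label):
            feats = {"dx": label.x - field.x,   # position relative to the field
                     "dy": label.y - field.y,
                     "dist": math.hypot(label.x - field.x, label.y - field.y)}
            # max over components approximates the mixture's log-sum-exp
            return max(math.log(w) + sum(log_gauss(feats[f], *params[f])
                                         for f in feats)
                       for w, params in components)
        return max(candidates, key=loglik)
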

Input

Performance optimizations of virtual keyboards for stroke-based text entry on a touch-based tabletop, pp. 77-86
  Jochen Rick
Efficiently entering text on interactive surfaces, such as touch-based tabletops, is an important concern. One novel solution is shape writing -- the user strokes through all the letters in the word on a virtual keyboard without lifting his or her finger. While this technique can be used with any keyboard layout, the layout does impact the expected performance. In this paper, I investigate the influence of keyboard layout on expert text-entry performance for stroke-based text entry. Based on empirical data, I create a model of stroking through a series of points based on Fitts's law. I then use that model to evaluate various keyboard layouts for both tapping and stroking input. While the stroke-based technique seems promising by itself (i.e., there is a predicted gain of 17.3% for a Qwerty layout), significant additional gains can be made by using a more-suitable keyboard layout (e.g., the OPTI II layout is predicted to be 29.5% faster than Qwerty).
Keywords: Fitts's law, interactive tabletops, keyboard layout, shape writing, touch input, virtual keyboard
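
The modeling step can be pictured in a few lines of Python: sum Fitts's-law movement times over successive key-to-key segments of a word. The constants and the same-key correction below are placeholders, not the parameters the paper fits from its empirical stroking data.

    import math

    A, B = 0.083, 0.127  # assumed Fitts constants (s); the paper fits its own
    KEY_W = 1.0          # key width, in key units

    def fitts_time(dist, width=KEY_W):
        return A + B * math.log2(dist / width + 1)

    def word_time(word, layout):
        """layout maps each letter to an (x, y) key center, in key units."""
        keys = [layout[c] for c in word]
        return sum(fitts_time(math.dist(p, q) or 0.5)  # 0.5: same-key repeat
                   for p, q in zip(keys, keys[1:]))

    # Comparing layouts then amounts to averaging word_time over a
    # frequency-weighted corpus for Qwerty, OPTI II, and so on.
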
Gesture search: a tool for fast mobile data access, pp. 87-96
  Yang Li
Modern mobile phones can store a large amount of data, such as contacts, applications and music. However, it is difficult to access specific data items via existing mobile user interfaces. In this paper, we present Gesture Search, a tool that allows a user to quickly access various data items on a mobile phone by drawing gestures on its touch screen. Gesture Search contributes a unique way of combining gesture-based interaction and search for fast mobile data access. It also demonstrates a novel approach for coupling gestures with standard GUI interaction. A real world deployment with mobile phone users showed that Gesture Search enabled fast, easy access to mobile data in their day-to-day lives. Gesture Search has been released to the public and is currently in use by hundreds of thousands of mobile users. It was rated positively by users, with a mean of 4.5 out of 5 for over 5000 ratings.
Keywords: gesture-based interaction, hidden Markov models, mobile computing, search, shortcuts
MAI painting brush: an interactive device that realizes the feeling of real painting, pp. 97-100
  Mai Otsuki; Kenji Sugihara; Asako Kimura; Fumihisa Shibata; Hideyuki Tamura
Many digital painting systems have been proposed and their quality is improving. In these systems, graphics tablets are widely used as input devices. However, because of its rigid nib and indirect manipulation, the operational feeling of a graphics tablet is different from that of a real paint brush. We solved this problem by developing the MR-based Artistic Interactive (MAI) Painting Brush, which imitates a real paint brush, and constructed a mixed reality (MR) painting system that enables direct painting on physical objects in the real world.
Keywords: brush model, brush stroke, input device, mixed reality, paint brush, painting system
SqueezeBlock: using virtual springs in mobile devices for eyes-free interaction, pp. 101-104
  Sidhant Gupta; Tim Campbell; Jeffrey R. Hightower; Shwetak N. Patel
Haptic feedback provides an additional interaction channel when auditory and visual feedback may not be appropriate. We present a novel haptic feedback system that changes its elasticity to convey information for eyes-free interaction. SqueezeBlock is an electro-mechanical system that can realize a virtual spring having a programmatically controlled spring constant. It also allows for additional haptic modalities, such as non-linear springs, size changes, and variations in spring length (range of motion), by altering the Hooke's-law linear-elastic force-displacement relationship. This ability to program arbitrary spring constants also allows for "click" and button-like feedback. We present several potential applications along with results from a study showing how well participants can distinguish between several levels of stiffness, size, and range of motion. We conclude with implications for interaction design.
Keywords: eyes free interaction, haptics, springs
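
A sketch of the control idea as we read it: render a virtual spring by commanding counter-force as a function of measured displacement, so changing the function changes the felt stiffness, travel, or detents. The hardware calls in the commented loop are hypothetical.

    def make_spring(k, max_x, detents=()):
        """Return f(x) -> counter-force for displacement x, with stiffness k."""
        def force(x):
            if x >= max_x:           # hard stop: a shortened range of motion
                return 10 * k * max_x
            f = k * x                # Hooke's law: F = k * x
            for d in detents:        # superimpose click-like force ripples
                if abs(x - d) < 0.05:
                    f += 0.5 * k * (x - d)
            return f
        return force

    # Control loop, pseudocode-level (motor/encoder API is invented):
    # spring = make_spring(k=200.0, max_x=0.01, detents=(0.005,))
    # while True:
    #     motor.set_force(spring(encoder.read_displacement()))
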

Frameworks

Bringing the field into the lab: supporting capture and replay of contextual data for the design of context-aware applications, pp. 105-108
  Mark W. Newman; Mark S. Ackerman; Jungwoo Kim; Atul Prakash; Zhenan Hong; Jacob Mandel; Tao Dong
When designing context-aware applications, it is difficult for designers in the studio or lab to envision the contextual conditions that will be encountered at runtime. Designers need a tool that can create/re-create naturalistic contextual states and transitions, so that they can evaluate an application under expected contexts. We have designed and developed RePlay: a system for capturing and playing back sensor traces representing scenarios of use. RePlay contributes to research on ubicomp design tools by embodying a structured approach to the capture and playback of contextual data. In particular, RePlay supports: capturing naturalistic data through Capture Probes, encapsulating scenarios of use through Episodes, and supporting exploratory manipulation of scenarios through Transforms. Our experiences using RePlay in internal design projects illustrate its potential benefits for ubicomp design.
Keywords: context-aware, data capture, design tools
Eden: supporting home network management through interactive visual tools, pp. 109-118
  Jeonghwa Yang; W. Keith Edwards; David Haslem
As networking moves into the home, home users are increasingly being faced with complex network management chores. Previous research, however, has demonstrated the difficulty many users have in managing their networks. This difficulty is compounded by the fact that advanced network management tools -- such as those developed for the enterprise -- are generally too complex for home users, do not support the common tasks they face, and are not a good fit for the technical peculiarities of the home. This paper presents Eden, an interactive, direct manipulation home network management system aimed at end users. Eden supports a range of common tasks, and provides a simple conceptual model that can help users understand key aspects of networking better. The system leverages a novel home network router that acts as a "drop-in" replacement for users' current router. We demonstrate that Eden not only improves the user experience of networking, but also aids users in forming workable conceptual models of how the network works.
Keywords: home network, home-network interaction
TwinSpace: an infrastructure for cross-reality team spaces, pp. 119-128
  Derek F. Reilly; Hafez Rouzati; Andy Wu; Jee Yeon Hwang; Jeremy Brudvik; W. Keith Edwards
We introduce TwinSpace, a flexible software infrastructure for combining interactive workspaces and collaborative virtual worlds. Its design is grounded in the need to support deep connectivity and flexible mappings between virtual and real spaces to effectively support collaboration. This is achieved through a robust connectivity layer linking heterogeneous collections of physical and virtual devices and services, and a centralized service to manage and control mappings between physical and virtual. In this paper we motivate and present the architecture of TwinSpace, discuss our experiences and lessons learned in building a generic framework for collaborative cross-reality, and illustrate the architecture using two implemented examples that highlight its flexibility and range, and its support for rapid prototyping.
Keywords: collaborative virtual environment, cross-reality, interactive room, ontology, RDF, smart room, tuplespace, virtual world
D-Macs: building multi-device user interfaces by demonstrating, sharing and replaying design actions, pp. 129-138
  Jan Meskens; Kris Luyten; Karin Coninx
Multi-device user interface design mostly implies creating a suitable interface for each targeted device, using a diverse set of design tools and toolkits. This is a time-consuming activity involving many repetitive design actions, with no support for reusing this effort in later designs. In this paper, we propose D-Macs: a design tool that allows designers to record their design actions across devices, to share these actions with other designers, and to replay their own design actions and those of others. D-Macs lowers the burden of multi-device user interface design and can reduce the necessity for manually repeating design actions.
Keywords: design tools, multi-device UI design

Space and time

Content-aware dynamic timeline for video browsing, pp. 139-142
  Suporn Pongnumkul; Jue Wang; Gonzalo Ramos; Michael Cohen
When browsing a long video using a traditional timeline slider control, effectiveness and precision degrade as the video's length grows. When browsing videos with more frames than pixels in the slider, aside from some frames being inaccessible, scrolling actions cause sudden jumps in a video's continuity and make video frames flash by too fast for one to assess the content. We propose a content-aware dynamic timeline control that is designed to overcome these limitations. Our timeline control decouples video speed and playback speed, and leverages video content analysis to allow salient shots to be presented at an intelligible speed. Our control also takes advantage of previous work on elastic sliders, which allows us to produce an accurate navigation control.
Keywords: dynamic video skims, slider, timeline
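
One way to picture the decoupling of video speed from playback speed is the toy scrubbing loop below, where frames advance at a rate inversely related to a per-frame salience score. The rate constants and the salience source are assumptions, not the paper's model.

    def scrub(frames, salience, base_rate=240.0, min_rate=12.0):
        """Yield a frame index per 60 Hz UI tick; salience[i] is in [0, 1]."""
        i, last = 0.0, len(frames) - 1
        while i < last:
            # high salience -> slow, watchable advance; low -> fast skim
            rate = max(min_rate, base_rate * (1.0 - salience[int(i)]))
            i = min(i + rate / 60.0, last)
            yield int(i)
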
Chronicle: capture, exploration, and playback of document workflow histories, pp. 143-152
  Tovi Grossman; Justin Matejka; George Fitzmaurice
We describe Chronicle, a new system that allows users to explore document workflow histories. Chronicle captures the entire video history of a graphical document, and provides links between the content and the relevant areas of the history. Users can indicate specific content of interest, and see the workflows, tools, and settings needed to reproduce the associated results, or to better understand how it was constructed to allow for informed modification. Thus, by storing the rich information regarding the document's workflow history, Chronicle makes any working document a potentially powerful learning tool. We outline some of the challenges surrounding the development of such a system, and then describe our implementation within an image editing application. A qualitative user study produced extremely encouraging results, as users unanimously found the system both useful and easy to use.
Keywords: chronicle, history, timeline, video, workflow
Enhanced area cursors: reducing fine pointing demands for people with motor impairments, pp. 153-162
  Leah Findlater; Alex Jansen; Kristen Shinohara; Morgan Dixon; Peter Kamb; Joshua Rakita; Jacob O. Wobbrock
Computer users with motor impairments face major challenges with conventional mouse pointing. These challenges are mostly due to fine pointing corrections at the final stages of target acquisition. To reduce the need for correction-phase pointing and to lessen the effects of small target size on acquisition difficulty, we introduce four enhanced area cursors, two of which rely on magnification and two of which use goal crossing. In a study with motor-impaired and able-bodied users, we compared the new designs to the point and Bubble cursors, the latter of which had not been evaluated for users with motor impairments. Two enhanced area cursors, the Visual-Motor-Magnifier and Click-and-Cross, were the most successful new designs for users with motor impairments, reducing selection time for small targets by 19%, corrective submovements by 45%, and error rate by up to 82% compared to the point cursor. Although the Bubble cursor also improved performance, participants with motor impairments unanimously preferred the enhanced area cursors.
Keywords: accessibility, area cursors, bubble cursor, goal crossing, magnification, motor space, visual space
The satellite cursor: achieving MAGIC pointing without gaze tracking using multiple cursors, pp. 163-172
  Chun Yu; Yuanchun Shi; Ravin Balakrishnan; Xiangliang Meng; Yue Suo; Mingming Fan; Yongqiang Qin
We present the satellite cursor -- a novel technique that uses multiple cursors to improve pointing performance by reducing input movement. The satellite cursor associates every target with a separate cursor in its vicinity for pointing, which realizes the MAGIC (manual and gaze input cascade) pointing method without gaze tracking. We discuss the problem of visual clutter caused by multiple cursors and propose several designs to mitigate it. Two controlled experiments were conducted to evaluate satellite cursor performance in a simple reciprocal pointing task and a complex task with multiple targets of varying layout densities. Results show the satellite cursor can save significant mouse movement and consequently pointing time, especially for sparse target layouts, and that satellite cursor performance can be accurately modeled by Fitts' Law.
Keywords: magic pointing, multiple cursor, reducing mouse movement
UIMarks: quick graphical interaction with specific targets, pp. 173-182
  Olivier Chapuis; Nicolas Roussel
This paper reports on the design and evaluation of UIMarks, a system that lets users specify on-screen targets and associated actions by means of a graphical marking language. UIMarks supplements traditional pointing by providing an alternative mode in which users can quickly activate these marks. Associated actions can range from basic pointing facilitation to complex sequences possibly involving user interaction: one can leave a mark on a palette to make it more reachable, but the mark can also be configured to wait for a click and then automatically move the pointer back to its original location, for example. The system has been implemented on two different platforms, Metisse and OS X. We compared it to traditional pointing on a set of elementary and composite tasks in an abstract setting. Although pure pointing was not improved, the programmable automation supported by the system proved very effective.
Keywords: direct manipulation, macros, pointing

Artist talk

Connected environments, pp. 183-184
  Natalie Jeremijenko
Can new interfaces contribute to social and environmental improvement? For all the care, wit and brilliance that UIST innovations can contribute, can they actually make things better -- better in the sense of public good -- not merely lead to easier to use or more efficient consumer goods? This talk will explore the impact of interface technology on society and the environment, and examine engineered systems that invite participation, document change over time, and suggest alternative courses of action that are ethical and sustainable, drawing on examples from a diverse series of experimental designs and site-specific work Natalie has created throughout her career.
Keywords: ethical and sustainable interfaces, social and environmental improvement

Feet or TOE CHI

Gilded gait: reshaping the urban experience with augmented footsteps, pp. 185-188
  Yuichiro Takeuchi
In this paper we describe Gilded Gait, a system that changes the perceived physical texture of the ground, as felt through the soles of users' feet. Ground texture, in spite of its potential as an effective channel of peripheral information display, has so far been paid little attention in HCI research. The system is designed as a pair of insoles with embedded actuators, and utilizes vibrotactile feedback to simulate the perceptions of a range of different ground textures. The discreet, low-key nature of the interface makes it particularly suited for outdoor use, and its capacity to alter how people experience the built environment may open new possibilities in urban design.
Keywords: augmented reality, ground texture, haptic interface, urban navigation, vibrotactile feedback
Jogging over a distance between Europe and Australia, pp. 189-198
  Florian Mueller; Frank Vetere; Martin R. Gibbs; Darren Edge; Stefan Agamanolis; Jennifer G. Sheridan
Exertion activities, such as jogging, require users to invest intense physical effort and are associated with physical and social health benefits. Despite the benefits, our understanding of exertion activities is limited, especially when it comes to social experiences. In order to begin understanding how to design for technologically augmented social exertion experiences, we present "Jogging over a Distance", a system in which spatialized audio based on heart rate allowed runners as far apart as Europe and Australia to run together. Our analysis revealed how certain aspects of the design facilitated a social experience, and consequently we describe a framework for designing augmented exertion activities. We make recommendations as to how designers could use this framework to aid the development of future social systems that aim to utilize the benefits of exertion.
Keywords: audio, exergame, exergaming, exertion interface, heart rate, mobile phone, physiological data, running, spatialization, sports, whole-body interaction
Sensing foot gestures from the pocket, pp. 199-208
  Jeremy Scott; David Dearman; Koji Yatani; Khai N. Truong
Visually demanding interfaces on a mobile phone can diminish the user experience by monopolizing the user's attention when they are focusing on another task, and can impede accessibility for visually impaired users. Because mobile devices are often located in pockets when users are mobile, explicit foot movements can be defined as eyes-and-hands-free input gestures for interacting with the device. In this work, we study the human capability associated with performing foot-based interactions which involve lifting and rotation of the foot when pivoting on the toe and heel. Building upon these results, we then developed a system to learn and recognize foot gestures using a single commodity mobile phone placed in the user's pocket or in a holster on their hip. Our system uses acceleration data recorded by a built-in accelerometer on the mobile device and a machine learning approach to recognizing gestures. Through a lab study, we demonstrate that our system can classify ten different foot gestures at approximately 86% accuracy.
Keywords: eyes-free interaction, foot-based gestures, hands-free interaction, mobile devices
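
The recognition pipeline has a familiar shape, sketched below with generic windowed accelerometer features and an off-the-shelf SVM; the paper's actual feature set and learner may differ.

    import numpy as np
    from sklearn.svm import SVC

    def features(window):            # window: (n_samples, 3) accelerometer data
        mag = np.linalg.norm(window, axis=1)
        return np.concatenate([window.mean(axis=0), window.std(axis=0),
                               [mag.mean(), mag.std(), mag.max() - mag.min()]])

    def train(windows, labels):      # labels: one of the ten foot gestures
        clf = SVC(kernel="rbf")
        clf.fit(np.stack([features(w) for w in windows]), labels)
        return clf

    # predicted = train(windows, labels).predict([features(new_window)])
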
Multitoe: high-precision interaction with back-projected floors based on high-resolution multi-touch input, pp. 209-218
  Thomas Augsten; Konstantin Kaefer; René Meusel; Caroline Fetzer; Dorian Kanitz; Thomas Stoff; Torsten Becker; Christian Holz; Patrick Baudisch
Tabletop applications cannot display more than a few dozen on-screen objects. The reason is their limited size: tables cannot become larger than arm's length without giving up direct touch. We propose creating direct touch surfaces that are orders of magnitude larger. We approach this challenge by integrating high-resolution multitouch input into a back-projected floor. At the same time, we maintain the purpose and interaction concepts of tabletop computers, namely direct manipulation.
   We base our hardware design on frustrated total internal reflection. Its ability to sense per-pixel pressure allows the floor to locate and analyze users' soles. We demonstrate how this allows the floor to recognize foot postures and identify users. These two functions form the basis of our system. They allow the floor to ignore users unless they interact explicitly, identify and track users based on their shoes, enable high-precision interaction, invoke menus, track heads, and allow users to control high-degree-of-freedom interactions using their feet. While we base our designs on a series of simple user studies, the primary contribution of this paper is in the engineering domain.
Keywords: direct manipulation, front diffuse illumination, FTIR, interactive floor, multi-touch, projection, tabletop

Intelligence

Cosaliency: where people look when comparing images, pp. 219-228
  David E. Jacobs; Dan B. Goldman; Eli Shechtman
Image triage is a common task in digital photography. Determining which photos are worth processing for sharing with friends and family and which should be deleted to make room for new ones can be a challenge, especially on a device with a small screen like a mobile phone or camera. In this work we explore the importance of local structure changes (e.g., human pose, appearance changes, object orientation, etc.) to the photographic triage task. We perform a user study in which subjects are asked to mark regions of image pairs most useful in making triage decisions. From this data, we train a model for image saliency in the context of other images that we call cosaliency. This allows us to create collection-aware crops that can augment the information provided by existing thumbnailing techniques for the image triage task.
Keywords: automated thumbnailing, collection-aware cropping, cosaliency, saliency
A conversational interface to web automation, pp. 229-238
  Tessa Lau; Julian Cerruti; Guillermo Manzato; Mateo Bengualid; Jeffrey P. Bigham; Jeffrey Nichols
This paper presents CoCo, a system that automates web tasks on a user's behalf through an interactive conversational interface. Given a short command such as "get road conditions for highway 88," CoCo synthesizes a plan to accomplish the task, executes it on the web, extracts an informative response, and returns the result to the user as a snippet of text. A novel aspect of our approach is that we leverage a repository of previously recorded web scripts and the user's personal web browsing history to determine how to complete each requested task. This paper describes the design and implementation of our system, along with the results of a brief user study that evaluates how likely users are to understand what CoCo does for them.
Keywords: automation, intelligent assistants, natural language interfaces
Designing adaptive feedback for improving data entry accuracy, pp. 239-248
  Kuang Chen; Joseph M. Hellerstein; Tapan S. Parikh
Data quality is critical for many information-intensive applications. One of the best opportunities to improve data quality is during entry. Usher provides a theoretical, data-driven foundation for improving data quality during entry. Based on prior data, Usher learns a probabilistic model of the dependencies between form questions and values. Using this information, Usher maximizes information gain. By asking the most unpredictable questions first, Usher is better able to predict answers for the remaining questions. In this paper, we use Usher's predictive ability to design a number of intelligent user interface adaptations that improve data entry accuracy and efficiency. Based on an underlying cognitive model of data entry, we apply these modifications before, during and after committing an answer. We evaluated these mechanisms with professional data entry clerks working with real patient data from six clinics in rural Uganda. The results show that our adaptations have the potential to reduce error (by up to 78%), with limited effect on entry time (varying between -14% and +6%). We believe this approach has wide applicability for improving the quality and availability of data, which is increasingly important for decision-making and resource allocation.
Keywords: adaptive interface, data entry, data quality, form design, repetitive task
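
The "most unpredictable question first" policy reduces to an entropy argmax over the model's predictive distributions, as in this sketch; `model.predict_dist` is a hypothetical stand-in for Usher's learned model, not its real API.

    import math

    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    def next_question(model, unanswered, answers_so_far):
        """Ask the question whose answer the model is currently least sure of."""
        return max(unanswered,
                   key=lambda q: entropy(model.predict_dist(q, answers_so_far)))
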
Creating collections with automatic suggestions and example-based refinement, pp. 249-258
  Adrian Secord; Holger Winnemoeller; Wilmot Li; Mira Dontcheva
To create collections, like music playlists from personal media libraries, users today typically do one of two things. They either manually select items one-by-one, which can be time consuming, or they use an example-based recommendation system to automatically generate a collection. While such automatic engines are convenient, they offer the user limited control over how items are selected. Based on prior research and our own observations of existing practices, we propose a semi-automatic interface for creating collections that combines automatic suggestions with manual refinement tools. Our system includes a keyword query interface for specifying high-level collection preferences (e.g., "some rock, no Madonna, lots of U2") as well as three example-based collection refinement techniques: 1) a suggestion widget for adding new items in-place in the context of the collection; 2) a mechanism for exploring alternatives for one or more collection items; and 3) a two-pane linked interface that helps users browse their libraries based on any selected collection item. We demonstrate our approach with two applications. SongSelect helps users create music playlists, and PhotoSelect helps users select photos for sharing. Initial user feedback is positive and confirms the need for semi-automated tools that give users control over automatically created collections.
Keywords: collections, constraint solver, keyword search

Surface

The IR ring: authenticating users' touches on a multi-touch display, pp. 259-262
  Volker Roth; Philipp Schmidt; Benjamin Güldenring
Multi-touch displays are particularly attractive for collaborative work because multiple users can interact with applications simultaneously. However, unfettered access can lead to loss of data confidentiality and integrity. For example, one user can open or alter files of a second user, or impersonate the second user, while the second user is absent or not looking. Towards preventing these attacks, we explore means to associate the touches of a user with the user's identity in a fashion that is cryptographically sound as well as easy to use. We describe our current solution, which relies on a ring-like device that transmits a continuous pseudorandom bit sequence in the form of infrared light pulses. The multi-touch display receives and localizes the sequence, and verifies its authenticity. Each sequence is bound to a particular user, and all touches in the direct vicinity of the location of the sequence on the display are associated with that user.
Keywords: authentication, multi-touch
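
As a sketch of how touches could be bound to identities under stated assumptions: each ring emits a keyed pseudorandom bit sequence, and the display regenerates each registered user's expected bits and matches them against bits decoded at the touch location. The HMAC construction, frame counter, and names below are illustrative, not the authors' protocol; a real design also needs synchronization and replay defenses.

    import hmac, hashlib

    def expected_bits(user_key: bytes, frame: int, n: int = 32) -> str:
        digest = hmac.new(user_key, frame.to_bytes(8, "big"),
                          hashlib.sha256).digest()
        return "".join(f"{b:08b}" for b in digest)[:n]

    def identify_touch(decoded_bits, registered_keys, frame):
        """Map bits decoded near a touch location to a registered user."""
        for user, key in registered_keys.items():
            if decoded_bits == expected_bits(key, frame, len(decoded_bits)):
                return user
        return None  # unauthenticated touch: reject or treat as anonymous
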
Enabling beyond-surface interactions for interactive surface with an invisible projection, pp. 263-272
  Li-Wei Chan; Hsiang-Tao Wu; Hui-Shan Kao; Ju-Chun Ko; Home-Ru Lin; Mike Y. Chen; Jane Hsu; Yi-Ping Hung
This paper presents a programmable infrared (IR) technique that utilizes invisible, programmable markers to support interaction beyond the surface of a diffused-illumination (DI) multi-touch system. We combine an IR projector and a standard color projector to simultaneously project visible content and invisible markers. Mobile devices outfitted with IR cameras can compute their 3D positions based on the markers perceived. Markers are selectively turned off to support multi-touch and direct on-surface tangible input. The proposed techniques enable a collaborative multi-display multi-touch tabletop system. We also present three interactive tools: i-m-View, i-m-Lamp, and i-m-Flashlight, which consist of a mobile tablet and projectors that users can freely interact with beyond the main display surface. Early user feedback shows that these interactive devices, combined with a large interactive display, allow more intuitive navigation and are reportedly enjoyable to use.
Keywords: beyond-surface, infra-red projection, invisible marker, multi-display, multi-resolution, multi-touch, pico-projector, tabletop
Combining multiple depth cameras and projectors for interactions on, above and between surfaces, pp. 273-282
  Andrew D. Wilson; Hrvoje Benko
Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may "pick up" the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and "drop" the object onto the wall by touching it with their other hand. We detail the interactions and algorithms unique to LightSpace, discuss some initial observations of use and suggest future directions.
Keywords: augmented reality, depth cameras, interactive spaces, surface computing, ubiquitous computing
TeslaTouch: electrovibration for touch surfaces, pp. 283-292
  Olivier Bau; Ivan Poupyrev; Ali Israr; Chris Harrison
We present a new technology for enhancing touch interfaces with tactile feedback. The proposed technology is based on the electrovibration principle, does not use any moving parts and provides a wide range of tactile feedback sensations to fingers moving across a touch surface. When combined with an interactive display and touch input, it enables the design of a wide variety of interfaces that allow the user to feel virtual elements through touch. We present the principles of operation and an implementation of the technology. We also report the results of three controlled psychophysical experiments and a subjective user evaluation that describe and characterize users' perception of this technology. We conclude with an exploration of the design space of tactile touch screens using two comparable setups, one based on electrovibration and another on mechanical vibrotactile actuation.
Keywords: multitouch, tactile feedback, touch screens
Madgets: actuating widgets on interactive tabletops, pp. 293-302
  Malte Weiss; Florian Schwarz; Simon Jakubowski; Jan Borchers
We present a system for the actuation of tangible magnetic widgets (Madgets) on interactive tabletops. Our system combines electromagnetic actuation with fiber optic tracking to move and operate physical controls. The presented mechanism supports actuating complex tangibles that consist of multiple parts. A grid of optical fibers transmits marker positions past our actuation hardware to cameras below the table. We introduce a visual tracking algorithm that is able to detect objects and touches from the strongly sub-sampled video input of that grid. Six sample Madgets illustrate the capabilities of our approach, ranging from tangential movement and height actuation to inductive power transfer. Madgets combine the benefits of passive, untethered, and translucent tangibles with the ability to actuate them with multiple degrees of freedom.
Keywords: actuation, multi-touch, tabletop interaction, tangible user interfaces, widgets

Social

Eddi: interactive topic-based browsing of social status streams, pp. 303-312
  Michael S. Bernstein; Bongwon Suh; Lichan Hong; Jilin Chen; Sanjay Kairam; Ed H. Chi
Twitter streams are on overload: active users receive hundreds of items per day, and existing interfaces force us to march through a chronologically-ordered morass to find tweets of interest. We present an approach to organizing a user's own feed into coherently clustered trending topics for more directed exploration. Our Twitter client, called Eddi, groups tweets in a user's feed into topics mentioned explicitly or implicitly, which users can then browse for items of interest. To implement this topic clustering, we have developed a novel algorithm for discovering topics in short status updates powered by linguistic syntactic transformation and callouts to a search engine. An algorithm evaluation reveals that search engine callouts outperform other approaches when they employ simple syntactic transformation and backoff strategies. Active Twitter users evaluated Eddi and found it to be a more efficient and enjoyable way to browse an overwhelming status update feed than the standard chronological interface.
Keywords: social streams, topic clustering, twitter
Soylent: a word processor with a crowd inside, pp. 313-322
  Michael S. Bernstein; Greg Little; Robert C. Miller; Björn Hartmann; Mark S. Ackerman; David R. Karger; David Crowell; Katrina Panovich
This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.
Keywords: crowdsourcing, mechanical turk, outsourcing
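
The Find-Fix-Verify pattern splits an open-ended edit into three crowd stages; the sketch below shows that shape, with `post_tasks` standing in for a real crowd-platform call and the thresholds chosen arbitrarily for illustration.

    def find_fix_verify(paragraph, post_tasks, k=10, agree=0.2, votes=5):
        # FIND: k workers independently flag problem spans; keep spans that
        # enough workers agree on, limiting lazy or overeager outliers.
        flags = post_tasks("find", paragraph, n=k)
        spans = [s for s in set(flags) if flags.count(s) >= agree * k]

        result = paragraph
        for span in spans:
            # FIX: a fresh set of workers proposes rewrites of just this span.
            rewrites = post_tasks("fix", span, n=votes)
            # VERIFY: a third group votes on the rewrites' quality.
            ballots = post_tasks("verify", (span, rewrites), n=votes)
            best = max(rewrites, key=lambda r: ballots.count(r))
            result = result.replace(span, best)
        return result
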
Tag expression: tagging with feeling, pp. 323-332
  Jesse Vig; Matthew Soukup; Shilad Sen; John Riedl
In this paper we introduce tag expression, a novel form of preference elicitation that combines elements from tagging and rating systems. Tag expression enables users to apply affect to tags to indicate whether the tag describes a reason they like, dislike, or are neutral about a particular item. We present a user interface for applying affect to tags, as well as a technique for visualizing the overall community's affect. By analyzing 27,773 tag expressions from 553 users entered in a 3-month period, we empirically evaluate our design choices. We also present results of a survey of 97 users that explores users' motivations in tagging and measures user satisfaction with tag expression.
Keywords: community, ratings, tagging, user preference
VizWiz: nearly real-time answers to visual questions, pp. 333-342
  Jeffrey P. Bigham; Chandrika Jayant; Hanjie Ji; Greg Little; Andrew Miller; Robert C. Miller; Robin Miller; Aubrey Tatarowicz; Brandyn White; Samual White; Tom Yeh
The lack of access to visual information like text labels, icons, and colors can cause frustration and decrease independence for blind people. Current access technology uses automatic approaches to address some problems in this space, but the technology is error-prone, limited in scope, and quite expensive. In this paper, we introduce VizWiz, a talking application for mobile phones that offers a new alternative to answering visual questions in nearly real-time -- asking multiple people on the web. To support answering questions quickly, we introduce a general approach for intelligently recruiting human workers in advance called quikTurkit so that workers are available when new questions arrive. A field deployment with 11 blind participants illustrates that blind people can effectively use VizWiz to cheaply answer questions in their everyday lives, highlighting issues that automatic approaches will need to address to be useful. Finally, we illustrate the potential of using VizWiz as part of the participatory design of advanced tools by using it to build and evaluate VizWiz::LocateIt, an interactive mobile tool that helps blind people solve general visual search problems.
Keywords: blind users, non-visual interfaces, real-time human computation

Keynote

The engineering of personhood, pp. 343-346
  Jaron Lanier
Any subset of reality can potentially be interpreted as a computer, so when we speak about a particular computer, we are merely speaking about a portion of reality we can understand computationally. That means that computation is only identifiable through the human experience of it. User interface is ultimately the only grounding for the abstractions of computation, in the same way that the measurement of physical phenomena provides the only legitimate basis for physics. But user interface also changes humans. As computation is perceived, the natures of self and personhood are transformed. This process, when designers are aware of it, can be understood as an emerging form of applied philosophy or even applied spirituality.
Keywords: applied philosophy, applied spirituality, reality

Doctoral consortium

Crowd-powered interfaces, pp. 347-350
  Michael S. Bernstein
We investigate crowd-powered interfaces: interfaces that embed human activity to support high-level conceptual activities such as writing, editing and question-answering. For example, a crowd-powered interface using paid crowd workers can compute a series of textual cuts and edits to a paragraph, then provide the user with an interface to condense his or her writing. We map out the design space of interfaces that depend on outsourced, friendsourced, and data mined resources, and report on designs for each of these. We discuss technical and motivational challenges inherent in human-powered interfaces.
Keywords: crowdsourcing, outsourcing, social computing
Supporting self-expression for informal communication, pp. 351-354
  Lisa G. Cowan
Mobile phones are becoming the central tools for communicating and can help us keep in touch with friends and family on-the-go. However, they can also place high demands on attention and constrain interaction. My research concerns how to design communication mechanisms that mitigate these problems to support self-expression for informal communication on mobile phones. I will study how people communicate with camera-phone photos, paper-based sketches, and projected information and how this communication impacts social practices.
Keywords: communication, mobile, self-expression
Lowering the barrier to applying machine learning, pp. 355-358
  Kayur Patel
Machine learning algorithms are key components in many cutting edge applications of computation. However, the full potential of machine learning has not been realized because using machine learning is hard, even for otherwise tech-savvy developers. This is because developing with machine learning is different than normal programming. My thesis is that developers applying machine learning need new general-purpose tools that provide structure for common processes and common pipelines while remaining flexible to account for variability in problems. In this paper, I describe my efforts to understand the difficulties that developers face when applying machine learning. I then describe Gestalt, a general-purpose integrated development environment designed to support the application of machine learning. Finally, I describe work on developing a pattern language for building machine learning systems and creating new techniques that help developers understand the interaction between their data and learning algorithms.
Keywords: gestalt, integrated development environments, machine learning
User interface models for the cloud, pp. 359-362
  Hubert Pham
The current desktop metaphor is unsuitable for the coming age of cloud-based applications. The desktop was developed in an era that was focused on local resources, and consequently its gestures, semantics, and security model reflect heavy reliance on hierarchy and physical locations. This paper proposes a new user interface model that accounts for cloud applications, incorporating representations of people and new gestures for sharing and access, while minimizing the prominence of location. The model's key feature is a lightweight mechanism to group objects for resource organization, sharing, and access control, towards the goal of providing simple semantics for a wide range of tasks, while also achieving security through greater usability.
Keywords: cloud, desktop, groups, model
Towards personalized surface computing, pp. 363-366
  Dominik Schmidt
With recent progress in the field of surface computing it becomes foreseeable that interactive surfaces will turn into a commodity in the future, ubiquitously integrated into our everyday environments. At the same time, we can observe a trend towards personal data and whole applications being accessible over the Internet, anytime from anywhere. We envision a future where interactive surfaces surrounding us serve as powerful portals to access these kinds of data and services. In this paper, we contribute two novel interaction techniques supporting parts of this vision: First, HandsDown, a biometric user identification approach based on hand contours and, second, PhoneTouch, a novel technique for using mobile phones in conjunction with interactive surfaces.
Keywords: mobile devices, surface computing, user identification
Towards a unified framework for modeling, dispatching, and interpreting uncertain input, pp. 367-370
  Julia Schwarz
Many new input technologies (such as touch and voice) hold the promise of more natural user interfaces. However, many of these technologies create inputs with some uncertainty. Unfortunately, conventional infrastructure lacks a method for easily handling uncertainty, and as a result input produced by these technologies is often converted to conventional events as quickly as possible, leading to a stunted interactive experience. Our ongoing work aims to design a unified framework for modeling uncertain input and dispatching it to interactors. This should allow developers to easily create interactors which can interpret uncertain input, give the user appropriate feedback, and accurately resolve any ambiguity. This abstract presents an overview of the design of a framework for handling input with uncertainty and describes topics we hope to pursue in future work. We also give an example of how we built highly accurate touch buttons using our framework. For examples of the interactors that can be built and a more detailed description of our framework, we refer the reader to [8].
Keywords: ambiguity, input handling, recognition
Intelligent tagging interfaces: beyond folksonomy, pp. 371-374
  Jesse Vig
This paper summarizes our work on using tags to broaden the dialog between a recommender system and its users. We present two tagging applications that enrich this dialog: tagsplanations are tag-based explanations of recommendations provided by a system to its users, and Movie Tuner is a conversational recommender system that enables users to provide feedback on movie recommendations using tags. We discuss the design of both systems and the experimental methodology used to evaluate the design choices.
Keywords: conversational recommenders, explanations, recommender systems, tagging
Bringing everyday applications to interactive surfaces, pp. 375-378
  Malte Weiss
This paper presents ongoing work that intends to simplify the introduction of everyday applications to interactive tabletops. SLAP Widgets bring tangible general-purpose widgets to tabletops while providing the flexibility of on-screen controls. Madgets maintain consistency between physical controls and their digital state. BendDesk represents our vision of a multi-touch enabled office environment. Our pattern language captures knowledge for the design of interactive tabletops. For each project, we describe its technical background, present the current state of research, and discuss future work.
Keywords: actuation, applications, curved surface, haptic feedback, interactive tabletops, tangible user interfaces

Demonstrations

OnObject: gestural play with tagged everyday objects BIBAKFull-Text 379-380
  Keywon Chung; Michael Shilman; Chris Merrill; Hiroshi Ishii
Many Tangible User Interface (TUI) systems employ sensor-equipped physical objects. However, they do not easily scale to users' actual environments: most everyday objects lack the necessary hardware, and modification requires hardware and software development by skilled individuals. This limits TUI creation by end users, resulting in inflexible interfaces in which the mapping of sensor input to output events cannot easily be modified to reflect the end user's wishes and circumstances. We introduce OnObject, a small device worn on the hand, which can program physical objects to respond to a set of gestural triggers. Users attach RFID tags to situated objects, grab them by the tag, and program their responses to grab, release, shake, swing, and thrust gestures using a built-in button and a microphone. In this paper, we demonstrate how novice end users, including preschool children, can instantly create engaging gestural object interfaces with sound feedback from toys, drawings, or clay.
Keywords: end user programming, gestural object interfaces, tangible interfaces, ubiquitous computing
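   OnObject's bind-then-trigger model can be pictured as a small event-dispatch table. The sketch below is a hypothetical reading of that idea, not the authors' implementation; the tag IDs, clip paths, and the simpleaudio playback call are illustrative assumptions.

      import simpleaudio  # assumed audio backend; any clip player would do

      GESTURES = {"grab", "release", "shake", "swing", "thrust"}
      bindings = {}  # (tag_id, gesture) -> path to a recorded sound clip

      def program(tag_id, gesture, clip_path):
          # The end user "programs" an object: one recorded response per gesture.
          assert gesture in GESTURES
          bindings[(tag_id, gesture)] = clip_path

      def on_gesture(tag_id, gesture):
          # Called by the hand-worn device when a tagged object is gestured with.
          clip = bindings.get((tag_id, gesture))
          if clip is not None:
              simpleaudio.WaveObject.from_wave_file(clip).play()

      program("tag-42", "shake", "rattle.wav")  # bind a toy's shake response
      on_gesture("tag-42", "shake")             # plays rattle.wav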
CopyCAD: remixing physical objects with copy and paste from the real world BIBAKFull-Text 381-382
  Sean Follmer; David Carr; Emily Lovell; Hiroshi Ishii
This paper introduces a novel technique for integrating geometry from physical objects into computer-aided design (CAD) software. We allow users to copy arbitrary real-world object geometry into 2D CAD designs at scale through the use of a camera/projector system. This paper also introduces a system, CopyCAD, that uses this technique and augments a computer numerically controlled (CNC) milling machine. CopyCAD gathers input from physical objects, sketches, and interactions directly on a milling machine, allowing novice users to copy parts of real-world objects, modify them, and then create a new physical part.
Keywords: design tools, fabrication, prototyping, TUI
Reflective haptics: haptic augmentation of GUIs through frictional actuation of stylus-based interactions BIBAKFull-Text 383-384
  Fabian Hemmert; Alexander Müller; Ron Jagodzinski; Götz Wintergerst; Gesche Joost
In this paper, we present a novel system for stylus-based GUI interactions: simulated physics through actuated frictional properties of a touch-screen stylus. We present a prototype that implements a series of principles we propose for the design of frictionally augmented GUIs. We discuss how such actuation could add value to stylus-controlled GUIs by enabling prioritized content, allowing for inherent confirmation, and leveraging manual dexterity.
Keywords: friction, haptic display, physicality, stylus, touch screen
MudPad: localized tactile feedback on touch surfaces BIBAKFull-Text 385-386
  Yvonne Jansen; Thorsten Karrer; Jan Borchers
We present MudPad, a system that is capable of localized active haptic feedback on multitouch surfaces. An array of electromagnets locally actuates a tablet-sized overlay containing magnetorheological (MR) fluid. The reaction time of the fluid is fast enough for realtime feedback ranging from static levels of surface softness to a broad set of dynamically changeable textures. As each area can be addressed individually, the entire visual interface can be enriched with a multi-touch haptic layer that conveys semantic information as the appropriate counterpart to multi-touch input.
Keywords: haptic i/o, multitouch, tactile feedback
Surfboard: keyboard with microphone as a low-cost interactive surface BIBAKFull-Text 387-388
  Jun Kato; Daisuke Sakamoto; Takeo Igarashi
We introduce a technique to detect simple gestures of "surfing" (moving a hand horizontally) on a standard keyboard by analyzing sounds recorded in real time with a microphone attached close to the keyboard. This technique allows the user to maintain focus on the screen while surfing on the keyboard. Since this technique uses a standard keyboard without any modification, the user can take full advantage of the input functionality and tactile quality of his favorite keyboard supplemented with our interface.
Keywords: interactive surface, keyboard, low-cost, microphone
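   The abstract does not spell out Surfboard's sound-analysis pipeline, but the core idea of spotting a hand sliding across the keys from microphone input can be sketched as a short-time energy detector. Everything below (sample rate, frame size, threshold) is an assumption, not the authors' recognizer.

      import numpy as np
      import sounddevice as sd  # microphone capture

      RATE, FRAME = 44100, 1024  # assumed capture settings
      THRESHOLD = 0.02           # assumed; tuned empirically in practice

      def callback(indata, frames, time, status):
          # Root-mean-square energy of the current audio frame.
          rms = float(np.sqrt(np.mean(indata[:, 0] ** 2)))
          if rms > THRESHOLD:
              print("surfing candidate, rms =", round(rms, 4))

      with sd.InputStream(samplerate=RATE, blocksize=FRAME,
                          channels=1, callback=callback):
          sd.sleep(10_000)  # listen for ten seconds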
Animated paper: a moving prototyping platform BIBAKFull-Text 389-390
  Naoya Koizumi; Kentaro Yasu; Angela Liu; Maki Sugimoto; Masahiko Inami
We have developed a novel prototyping method that utilizes animated paper, a versatile platform created from paper and shape memory alloy (SMA), which is easy to control using a range of different energy sources from sunlight to lasers. We have further designed a laser point tracking system to improve the precision of the wireless control system by embedding retro-reflective material on the paper to act as light markers. It is possible to change the movement of paper prototypes by varying where to mount the SMA or how to heat it, creating a wide range of applications.
Keywords: flexible structure, organic user interfaces, paper, SMA
A support to multi-devices web application BIBAKFull-Text 391-392
  Xavier Le Pallec; Raphaël Marvie; José Rouillard; Jean-Claude Tarby
Programming an application which uses interactive devices located on different terminals is not easy. Programming such applications with standard Web technologies (HTTP, JavaScript, Web browsers) is even more difficult. However, Web applications have interesting properties, such as running on very different terminals, requiring no installation step, and allowing application code to evolve at runtime. Our demonstration presents a support for designing multi-device Web applications. After introducing the context of this work, we briefly describe some problems related to the design of multi-device Web applications. Then we present the toolkit we have implemented to help develop applications based upon distant interactive devices.
Keywords: interactive devices, toolkit, web application
Beyond: collapsible input device for direct 3D manipulation beyond the screen BIBAKFull-Text 393-394
  Jinha Lee; Surat Teerapittayanon; Hiroshi Ishii
What would it be like to reach into a screen and manipulate or design virtual objects as in the real world? We present Beyond, a collapsible input device for direct 3D manipulation. When pressed against a screen, Beyond collapses in the physical world and extends into the digital space of the screen, so that users perceive that they are inserting the tool into the virtual space. Beyond allows users to directly interact with 3D media, avoiding separation between the user's input and the displayed 3D graphics without requiring special glasses or wearables, thereby enabling users to select, draw, and sculpt in 3D virtual space unfettered. We describe detailed interaction techniques, the implementation, and application scenarios focused on 3D geometric design and prototyping.
Keywords: 3d interaction, augmented reality, input and interaction technologies, interaction design, pen and tactile input, pen-based UIs, tabletop UIs, user interface design, virtual reality
LuminAR: portable robotic augmented reality interface design and prototype BIBAKFull-Text 395-396
  Natan Linder; Pattie Maes
In this paper we introduce LuminAR: a prototype for a new portable and compact projector-camera system designed to use the traditional incandescent bulb interface as a power source, carried by a robotic desk lamp that gives it dynamic motion capabilities. We are exploring how the LuminAR system, embodied in the familiar form factor of a classic Anglepoise lamp, may evolve into a new class of robotic, digital information devices.
Keywords: actuated UI, augmented reality, gestural interfaces, human robot interaction, multi-touch interfaces, robotic lamp
Blinkbot: look at, blink and move BIBAKFull-Text 397-398
  Pranav Mistry; Kentaro Ishii; Masahiko Inami; Takeo Igarashi
In this paper we present BlinkBot -- a hands-free input interface to control and command a robot. BlinkBot explores the natural modalities of gaze and blink to direct a robot to move an object from one location to another. The paper also explains the detailed hardware and software implementation of the prototype system.
Keywords: blink aware interaction, hands free interaction, human-robot interaction, robot
RoboJockey: real-time, simultaneous, and continuous creation of robot actions for everyone BIBAKFull-Text 399-400
  Takumi Shirokura; Daisuke Sakamoto; Yuta Sugiura; Tetsuo Ono; Masahiko Inami; Takeo Igarashi
We developed RoboJockey (Robot Jockey), an interface for coordinating robot actions, such as dancing -- similar to a disc jockey or video jockey. The system enables a user to choreograph a dance for a robot to perform using a simple visual language. Users can coordinate humanoid robot actions with a combination of arm and leg movements. Every action is automatically performed to the background music and beat. RoboJockey gives end users a new entertainment experience with robots.
Keywords: creation of robot action, multi-touch interface, robot jockey interface, visual language
ARmonica: a collaborative sonic environment BIBAKFull-Text 401-402
  Mengu Sukan; Ohan Oda; Xiang Shi; Manuel Entrena; Shrenik Sadalgi; Jie Qi; Steven Feiner
ARmonica is a 3D audiovisual augmented reality environment in which players can position and edit virtual bars that play sounds when struck by virtual balls launched under the influence of physics. Players experience ARmonica through head-tracked head-worn displays and tracked hand-held ultramobile personal computers, and interact through tracked Wii remotes and touch-screen taps. The goal is for players to collaborate in the creation and editing of an evolving sonic environment. Research challenges include supporting walk-up usability without sacrificing deeper functionality.
Keywords: augmented reality, sound
IODisk: disk-type i/o interface for browsing digital contents BIBAKFull-Text 403-404
  Koji Tsukada; Keisuke Kambara
We propose a disk-type I/O interface, IODisk, which helps users browse various digital contents intuitively in their living environment. IODisk mainly consists of a force-feedback mechanism integrated into the rotation axis of a disk. Users can control the playing speed/direction of contents (e.g., videos or picture slideshows) in proportion to the rotational speed/direction of the disk. We developed a prototype system and some applications.
Keywords: disk, force feedback, i/o device, tangible interface
Enabling social interactions through real-time sketch-based communication BIBAKFull-Text 405-406
  Nadir Weibel; Lisa G. Cowan; Laura R. Pina; William G. Griswold; James D. Hollan
We present UbiSketch, a tool for ubiquitous real-time sketch-based communication. We describe the UbiSketch system, which enables people to create doodles, drawings, and notes with digital pens and paper and publish them quickly and easily via their mobile phones to social communication channels, such as Facebook, Twitter, and email. The natural paper-based social interaction enabled by UbiSketch has the potential to enrich current mobile communication practices.
Keywords: communication, digital pen, interactive paper, mobile phone, sketching, social networks
HIPerPaper: introducing pen and paper interfaces for ultra-scale wall displays BIBAKFull-Text 407-408
  Nadir Weibel; Anne Marie Piper; James D. Hollan
While recent advances in graphics, display, and computer hardware support ultra-scale visualizations of tremendous amounts of data, mechanisms for interacting with this information on large high-resolution wall displays are still under investigation. Issues of user interface, ergonomics, multi-user interaction, and system flexibility arise with ultra-scale wall displays, and none of the approaches introduced so far fully addresses them. We introduce HIPerPaper, a novel digital pen and paper interface that enables natural interaction with the HIPerSpace wall, a 31.8 by 7.5 foot tiled wall display of 268,720,000 pixels. HIPerPaper provides a flexible, portable, and inexpensive medium for interacting with large high-resolution wall displays.
Keywords: interfaces, pen and paper, wall display
EasySnap: real-time audio feedback for blind photography BIBAKFull-Text 409-410
  Samuel White; Hanjie Ji; Jeffrey P. Bigham
This demonstration presents EasySnap, an application that enables blind and low-vision users to take high-quality photos by providing real-time audio feedback as they point their existing camera phones. Users can readily follow the audio instructions to adjust their framing, zoom level and subject lighting appropriately. Real-time feedback is achieved on current hardware using computer vision in conjunction with use patterns drawn from current blind photographers.
Keywords: blind users, non-visual interfaces, photography
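   EasySnap's loop of detecting the subject, comparing it with the desired framing, and speaking a correction can be approximated with stock computer vision. The cascade file, the rule-of-thirds framing rule, and the print call standing in for speech output below are assumptions for illustration only.

      import cv2  # OpenCV face detection on live preview frames

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      def framing_hint(frame):
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          faces = cascade.detectMultiScale(gray, 1.1, 5)
          if len(faces) == 0:
              return "no subject detected"
          x, y, w, h = faces[0]
          cx, third = x + w / 2, frame.shape[1] / 3
          if cx < third:
              return "pan left"   # bring the subject toward the center
          if cx > 2 * third:
              return "pan right"
          return "hold steady and shoot"

      cap = cv2.VideoCapture(0)
      ok, frame = cap.read()
      if ok:
          print(framing_hint(frame))  # a real system would speak this aloud
      cap.release()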
ImpAct: enabling direct touch and manipulation for surface computing BIBAKFull-Text 411-412
  Anusha Withana; Makoto Kondo; Gota Kakehi; Yasutoshi Makino; Maki Sugimoto; Masahiko Inami
This paper explores direct touch and manipulation techniques for surface computing platforms using a special force-feedback stylus named ImpAct (Immersive Haptic Augmentation for Direct Touch). The proposed haptic stylus can change its length when pushed against a display surface. Correspondingly, a virtual stem is rendered inside the display area so that the user perceives the stylus as immersed in the digital space below the screen. We propose ImpAct as a tool to probe and manipulate digital objects in the shallow region beneath the display surface. ImpAct creates a direct touch interface by providing kinesthetic haptic sensations along with continuous visual contact with digital objects below the screen surface.
Keywords: 6-dof input, direct touch, force feedback, haptic display, simulated projection rendering, touch screen
The multiplayer: multi-perspective social video navigation BIBAKFull-Text 413-414
  Zihao Yu; Nicholas Diakopoulos; Mor Naaman
We present a multi-perspective video "multiplayer" designed to organize social video aggregated from online sites like YouTube. Our system automatically time-aligns videos using audio fingerprinting, thus bringing them into a unified temporal frame. The interface utilizes social metadata to visually aid navigation and cue users to more interesting portions of an event. We provide details about the visual and interaction design rationale of the multiplayer.
Keywords: multi-perspective, social media, video
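   Audio fingerprinting itself is involved, but the time alignment it enables can be illustrated with plain cross-correlation of two clips' loudness envelopes. This is a toy stand-in for fingerprint matching, with the hop size and sample rate as assumptions.

      import numpy as np

      def energy_envelope(signal, hop=512):
          # Coarse loudness curve: RMS over non-overlapping hops.
          n = len(signal) // hop
          frames = signal[:n * hop].reshape(n, hop)
          return np.sqrt((frames ** 2).mean(axis=1))

      def align_offset(sig_a, sig_b, rate=44100, hop=512):
          """Return sig_b's start time relative to sig_a, in seconds."""
          a = energy_envelope(np.asarray(sig_a, float), hop)
          b = energy_envelope(np.asarray(sig_b, float), hop)
          corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
          lag = int(np.argmax(corr)) - (len(b) - 1)
          return lag * hop / rate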

Posters

The enhancement of hearing using a combination of sound and skin sensation to the pinna BIBAKFull-Text 415-416
  Kanako Aou; Asuka Ishii; Masahiro Furukawa; Shogo Fukushima; Hiroyuki Kajimoto
Recent developments in sound technology have enabled the realistic replay of real-life sounds. Thanks to these technologies, we can experience virtual sound environments that feel real. However, there are other types of sound technologies that enhance reality, such as acoustic filters, sound effects, and background music. They are quite effective if carefully prepared, but they also alter the sound itself. Consequently, sound is simultaneously used to reconstruct realistic environments and to enhance emotions, which are actually incompatible functions.
   With this background, we focused on using the tactile modality to enhance emotions, and we propose a method that enhances the sound experience through a combination of sound and skin sensation to the pinna (outer ear). In this paper, we evaluate the effectiveness of this method.
Keywords: crossmodal displays, emotion, emotional amplification, pinna, skin sensation
What can internet search engines "suggest" about the usage and usability of popular desktop applications? BIBAKFull-Text 417-418
  Adam Fourney; Richard Mann; Michael Terry
In this paper, we show how Internet search query logs can yield rich, ecologically valid data sets describing the common tasks and issues that people encounter when using software on a day-to-day basis. These data sets can feed directly into standard usability practices. We address challenges in collecting, filtering, and summarizing queries, and show how data can be collected at very low cost, even without direct access to raw query logs.
Keywords: internet search, query log analysis
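   One low-cost collection route consistent with the paper's framing is to sample a public autocomplete service. The endpoint, parameters, and JSON shape below are assumptions modeled on common suggest APIs, not necessarily the authors' exact harness.

      import json
      from urllib.parse import urlencode
      from urllib.request import urlopen

      def suggestions(prefix):
          # Assumed suggest endpoint returning [prefix, [completions]].
          url = ("https://suggestqueries.google.com/complete/search?"
                 + urlencode({"client": "firefox", "q": prefix}))
          with urlopen(url) as resp:
              return json.loads(resp.read().decode("utf-8"))[1]

      # Enumerate how-to queries about an application, one letter at a time.
      for letter in "abc":
          for query in suggestions("firefox how to " + letter):
              print(query)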
Interacting with live preview frames: in-picture cues for a digital camera interface BIBAKFull-Text 419-420
  Steven R. Gomez
We present a new interaction paradigm for digital cameras aimed at making interactive imaging algorithms accessible on these devices. In our system, the user creates visual cues in front of the lens during the live preview frames, which are continuously processed before the snapshot is taken. These cues are recognized by the camera's image processor to control the lens or other settings. We design and analyze vision-based camera interactions, including focus and zoom controls, and argue that the vision-based paradigm offers a new level of photographer control needed for the next generation of digital cameras.
Keywords: computer vision, digital photography, interaction
HyperSource: bridging the gap between source and code-related web sites BIBAKFull-Text 421-422
  Björn Hartmann; Mark Dhillon
Programmers frequently use the Web while writing code: they search for libraries, code examples, tutorials, and documentation, and engage in discussions on Q&A forums. This link between code and visited Web pages largely remains implicit today. Connecting source code and (selective) browsing history can help programmers maintain context, reduce the cost of re-retrieving Web content, and enhance understanding when code is shared. This paper introduces HyperSource, an IDE augmentation that associates browsing histories with source code edits. HyperSource comprises a browser extension that logs visited pages; a novel source document format that maps visited pages to individual characters; and a user interface that enables interaction with these histories.
Keywords: augmented source code, browsing history
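   The novel source document format, mapping visited pages to individual characters, suggests a span-indexed structure. The classes below are a hypothetical reconstruction of that idea, not HyperSource's actual file format.

      from dataclasses import dataclass, field

      @dataclass
      class Visit:
          url: str
          title: str

      @dataclass
      class HyperDocument:
          text: str = ""
          history: list = field(default_factory=list)  # history[i]: pages behind text[i]

          def insert(self, pos, snippet, open_pages):
              # Each inserted character inherits the pages open while it was typed.
              self.text = self.text[:pos] + snippet + self.text[pos:]
              self.history[pos:pos] = [list(open_pages) for _ in snippet]

          def pages_for(self, pos):
              return self.history[pos]

      doc = HyperDocument()
      doc.insert(0, "sort(v.begin(), v.end());",
                 [Visit("https://en.cppreference.com/w/cpp/algorithm/sort",
                        "std::sort")])
      print(doc.pages_for(3)[0].url)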
Shoe-shaped i/o interface BIBAKFull-Text 423-424
  Hideaki Higuchi; Takuya Nojima
In this research, we propose a shoe-shaped I/O interface. The benefit of a wearable device is significantly reduced if the user remains aware of it; wearable devices should be wearable without requiring any attention from the user. Previous wearable systems, however, required users to be careful and conscious of wearing or carrying them. To solve this problem, we propose a shoe-shaped I/O interface: users who wear the shoes throughout the day soon cease to be conscious of them, and electromechanical devices are comparatively easy to install in shoes. This report describes the concept of a shoe-shaped I/O interface, the development of a prototype system, and possible applications.
Keywords: projectors, shoe-shaped interface, wearable devices
Development of the motion-controllable ball BIBAKFull-Text 425-426
  Takashi Ichikawa; Takuya Nojima
In this report, we propose a novel ball-type interactive interface device. Balls are among the most important pieces of equipment used in entertainment and sports: their motion guides a player's response, for example when reacting to a feint or similar movement. Many kinds of breaking-ball throws have been developed in various sports (e.g., baseball), but the skill to react appropriately to these breaking balls is hard to acquire and requires long-term training. Many researchers have focused on the ball itself and developed interactive balls with visual and acoustic feedback; however, these balls offer no motion control. In this paper, we introduce a ball-type motion-control interface device composed of a ball and an air-pressure tank that changes the ball's trajectory using gas ejection. We conducted an experiment measuring the ball's flight path during gas ejection, and the results showed that the prototype system had enough power to change the ball's trajectory in flight.
Keywords: air pressure, augmented sports, ball interface
PETALS: a visual interface for landmine detection BIBAKFull-Text 427-428
  Lahiru G. Jayatilaka; Luca F. Bertuccelli; James Staszewski; Krzysztof Z. Gajos
Post-conflict landmines have serious humanitarian repercussions: landmines cost lives, limbs and land. The primary method used to locate these buried devices relies on the inherently dangerous and difficult task of a human listening to audio feedback from a metal detector. Researchers have previously hypothesized that expert operators respond to these challenges by building mental patterns with metal detectors through the identification of object-dependent spatially distributed metallic fields. This paper presents the preliminary stages of a novel interface -- Pattern Enhancement Tool for Assisting Landmine Sensing (PETALS) -- that aims to assist with building and visualizing these patterns, rather than relying on memory alone. Simulated demining experiments show that the experimental interface decreases classification error from 23% to 5% and reduces localization error by 54%, demonstrating the potential for PETALS to improve novice deminer safety and efficiency.
Keywords: assistive visual interface, humanitarian demining, landmine detection, petals, spatial patterns representation
Pinstripe: eyes-free continuous input anywhere on interactive clothing BIBAKFull-Text 429-430
  Thorsten Karrer; Moritz Wittenhagen; Florian Heller; Jan Borchers
We present Pinstripe, a textile user interface element for eyes-free, continuous value input on smart garments, operated by pinching a fold of cloth and rolling it between the fingers. Input granularity can be controlled by the amount of cloth pinched. Pinstripe input elements are invisible and can be included across large areas of a garment. Pinstripe thus addresses several problems previously identified in the placement and operation of textile UI elements on smart clothing.
Keywords: continuous input, eyes-free interaction, smart textiles, wearable computing
Kinetic tiles: modular construction units for interactive kinetic surfaces BIBAKFull-Text 431-432
  Hyunjung Kim; Woohun Lee
We propose and demonstrate Kinetic Tiles, modular construction units for Interactive Kinetic Surfaces (IKSs). We designed Kinetic Tiles to be accessible so that users can construct IKSs easily and rapidly; the components are inexpensive and readily available. In addition, the use of magnetic force separates the surface material from the actuators, so that users interact only with the tile modules, as if constructing a tile mosaic. Kinetic Tiles can be utilized as a new design and architectural material that allows the surfaces of everyday objects and spaces to convey ambient and pleasurable kinetic expressions.
Keywords: interactive kinetic surface, kinetic design material, kinetic organic interfaces
Stacksplorer: understanding dynamic program behavior BIBAKFull-Text 433-434
  Jan-Peter Krämer; Thorsten Karrer; Jonathan Diehl; Jan Borchers
To thoroughly comprehend application behavior, programmers need to understand the interactions of objects at runtime. Today, these interactions are poorly visualized in common IDEs except during debugging. Stacksplorer visualizes potential call stacks and lets programmers traverse them even when the application is not running, showing callers and called methods in two columns next to the code editor. The relevant information is gathered from the source code automatically.
Keywords: IDE, navigation, programming
Memento: unifying content and context to aid webpage re-visitation BIBAKFull-Text 435-436
  Chinmay E. Kulkarni; Santosh Raju; Raghavendra Udupa
While users often revisit pages on the Web, tool support for such re-visitation is still lacking. Current tools (such as browser histories) only provide users with basic information such as the date of the last visit and title of the page visited. In this paper, we describe a system that provides users with descriptive topic-phrases that aid re-finding. Unlike prior work, our system considers both the content of a webpage and the context in which the page was visited. Preliminary evaluation of this system suggests users find this approach of combining content with context useful.
Keywords: browsing history, internet search, topic phrases
Interactive calibration of a multi-projector system in a video-wall multi-touch environment BIBAKFull-Text 437-438
  Alessandro Lai; Alessandro Soro; Riccardo Scateni
Wall-sized interactive displays are gaining more and more attention as a valuable tool for multi-user applications, but typically require tiled projectors. Projectors tend to display deformed images, due to lens distortion and/or imperfection, and because they are almost never perfectly aligned to the projection surface. Multi-projector video walls are typically bound to the video architecture and to the specific application to be displayed, which makes it harder to develop interactive applications in which fine-grained control of the coordinate transformations (to and from user space and model space) is required. This paper presents a solution to these issues: implementing the blending functionality at the application level allows seamless development of multi-display interactive applications with multi-touch capabilities. The multi-touch interaction itself, provided by an array of cameras along the baseline of the wall, is beyond the scope of this work, which focuses on calibration.
Keywords: multi-touch, video-walls
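   Application-level correction of a misaligned projector usually starts from a homography between wall and projector coordinates, fitted from a few calibration correspondences. A minimal sketch with OpenCV follows; the corner coordinates are invented, and a real system would add lens-distortion and edge-blending terms.

      import cv2
      import numpy as np

      # Projector corner pixels, and where they were observed to land on the wall.
      proj_corners = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
      wall_hits    = np.float32([[12, 30], [1910, 8], [1895, 1050], [25, 1068]])

      # Homography mapping wall coordinates into projector pixels.
      M, _ = cv2.findHomography(wall_hits, proj_corners)

      def prewarp(wall_image):
          # Pre-distort the ideal wall-space image so the physical
          # projection appears rectified on the wall.
          return cv2.warpPerspective(wall_image, M, (1920, 1080))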
CodeGraffiti: communication by sketching for pair programmers BIBAKFull-Text 439-440
  Leonhard Lichtschlag; Jan Borchers
In pair programming, two software developers work on their code together in front of a single workstation, one typing, the other commenting. This frequently involves pointing to code on the screen, annotating it verbally, or sketching on paper or a nearby whiteboard, little of which is captured in the source code for later reference. CodeGraffiti lets pair programmers simultaneously write their code, and annotate it with ephemeral and persistent sketches on screen using touch or pen input. We integrated CodeGraffiti into the Xcode software development environment, to study how these techniques may improve the pair programming workflow.
Keywords: code annotation, pair programming, pen input
Mouseless BIBAKFull-Text 441-442
  Pranav Mistry; Patricia Maes
In this short paper we present Mouseless -- a novel input device that provides the familiar interaction of a physical computer mouse without requiring any actual mouse hardware. The paper also briefly describes the hardware and software implementation of the prototype system and discusses the interactions supported.
Keywords: desktop computing, gestural interaction, input device, mouse, multi-touch
Anywhere touchtyping: text input on arbitrary surface using depth sensing BIBAKFull-Text 443-444
  Adiyan Mujibiya; Takashi Miyaki; Jun Rekimoto
In this paper, we propose a virtual keyboard system that enables touch typing on arbitrary surfaces using depth sensing. Keystroke events are detected by matching against a database of 3D hand appearances, combined with sensing fingertips touching the surface. Our prototype system acquires hand-posture depth maps by implementing a phase-shift algorithm with Digital Light Processing (DLP) fringe projection on an arbitrary flat surface. The system robustly detects hand postures on the sensing surface without requiring hand positions to be aligned to a virtual keyboard frame. The keystroke feedback is the physical touch to the surface, so no specific hardware must be worn. The system works in real time at an average of 20 frames per second.
Keywords: depth sensing, touch typing, virtual keyboard
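   The three-step phase-shift reconstruction used by the prototype has a standard closed form. Assuming three fringe images shifted by 2*pi/3, the wrapped phase at each pixel, which encodes surface depth after unwrapping and calibration, is recovered as below; this is the generic textbook formula, not the authors' code.

      import numpy as np

      def wrapped_phase(i1, i2, i3):
          """Standard 3-step phase shift (patterns offset by 2*pi/3).

          i1, i2, i3: captured fringe images as float arrays of equal shape.
          Returns the wrapped phase in (-pi, pi] per pixel.
          """
          return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)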
Using temporal video annotation as a navigational aid for video browsing BIBAKFull-Text 445-446
  Stefanie Müller; Gregor Miller; Sidney Fels
Video is a complex information space that requires advanced navigational aids for effective browsing. The increasing number of temporal video annotations offers new opportunities to provide video navigation according to a user's needs. We present a novel video browsing interface called TAV (Temporal Annotation Viewing) that provides the user with a visual overview of temporal video annotations. TAV enables the user to quickly determine the general content of a video, the location of scenes of interest and the type of annotations that are displayed while watching the video. An ongoing user study will evaluate our novel approach.
Keywords: video annotation, video browsing, video navigation, video search
Tweeting halo: clothing that tweets BIBAKFull-Text 447-448
  Wai Shan (Florence) Ng; Ehud Sharlin
People often like to express their unique personalities, interests, and opinions. This poster explores new ways that allow a user to express her feelings in both physical and virtual settings. With our Tweeting Halo, we demonstrate how a wearable lightweight projector can be used for self-expression, much like a hairstyle, makeup, or a T-shirt imprint. Our current prototype allows a user to post a message physically above her head and virtually on Twitter at the same time. We also explore simple ways for physical followers of a Tweeting Halo user to become virtual followers by taking a snapshot of her projected tweet with a mobile device such as a camera phone. In this extended abstract we present our current prototype and the results of a design critique we performed using it.
Keywords: microblogging, personal halo, personal projector, social networking, wearable interfaces
DoubleFlip: a motion gesture delimiter for interaction BIBAKFull-Text 449-450
  Jaime Ruiz; Yang Li
In order to use motion gestures with mobile devices it is imperative that the device be able to distinguish between input motion and everyday motion. In this abstract we present DoubleFlip, a unique motion gesture designed to act as an input delimiter for mobile motion gestures. We demonstrate that the DoubleFlip gesture is extremely resistant to false positive conditions, while still achieving high recognition accuracy. Since DoubleFlip is easy to perform and less likely to be accidentally invoked, it provides an always-active input event for mobile interaction.
Keywords: mobile interaction, motion gestures, sensors
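   A delimiter like DoubleFlip comes down to spotting two rapid half-rotations about the device's long axis. The state machine below is a hypothetical reconstruction from the abstract; the roll estimate, angle threshold, and timing window are all assumptions, not the published recognizer.

      import math

      FLIP_ANGLE = 2.4  # radians of roll counted as one flip (assumed)
      WINDOW = 1.0      # both flips must land within this many seconds (assumed)

      class DoubleFlipDetector:
          def __init__(self):
              self.ref = None   # roll reference after the last flip
              self.flips = []   # timestamps of completed flips

          def update(self, ay, az, now):
              # Roll estimated from the gravity direction in accelerometer data
              # (ignores wrap-around at +/-pi; fine for a sketch).
              roll = math.atan2(ay, az)
              if self.ref is None:
                  self.ref = roll
              if abs(roll - self.ref) > FLIP_ANGLE:
                  self.flips = (self.flips + [now])[-2:]
                  self.ref = roll
              return (len(self.flips) == 2
                      and self.flips[1] - self.flips[0] < WINDOW)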
QWIC: performance heuristics for large scale exploratory user interfaces BIBAKFull-Text 451-452
  Daniel A. Smith; Joe Lambert; mc schraefel; David Bretherton
Faceted browsers offer an effective way to explore relationships and build new knowledge across data sets. So far, however, web-based faceted browsers have been hampered by limited performance and scale. QWIC, Quick Web Interface Control, is a set of design heuristics that address performance both at the interface and at the backend, enabling operation on large-scale sources.
Keywords: faceted browsing, performance, scalability
What interfaces mean: a history and sociology of computer windows BIBAKFull-Text 453-454
  Louis-Jean Teitelbaum
This poster presents a cursory look at the history of windows in Graphical User Interfaces. It examines the controversy between tiling and overlapping window managers and explains that controversy's sociological importance: windows are control devices, enabling their users to manage their activity and attention. It then explores a few possible reasons for the relative disappearance of windowing in recent computing devices. It concludes with a recapitulative typology.
Keywords: activity, history, sociology, windows
Exploring pen and paper interaction with high-resolution wall displays BIBAKFull-Text 455-456
  Nadir Weibel; Anne Marie Piper; James D. Hollan
We introduce HIPerPaper, a novel digital pen and paper interface that enables natural interaction with a 31.8 by 7.5 foot tiled wall display of 268,720,000 pixels. HIPerPaper provides a flexible, portable, and inexpensive medium for interacting with large high-resolution wall displays. While the size and resolution of such displays allow visualization of data sets at a scale not previously possible, mechanisms for interacting with wall displays remain challenging. HIPerPaper enables multiple concurrent users to select, move, scale, and rotate objects on a high-resolution wall display.
Keywords: digital pen, paper, wall display
Enabling tangible interaction on capacitive touch panels BIBAKFull-Text 457-458
  Neng-Hao Yu; Li-Wei Chan; Lung-Pan Cheng; Mike Y. Chen; Yi-Ping Hung
We propose two approaches to sensing tangible objects on the capacitive touch screens used in off-the-shelf multi-touch devices such as Apple's iPad and iPhone and 3M's multi-touch displays, neither of which requires modifications to the panels: spatial tags and frequency tags. Spatial tags are similar to the fiducial tags used in tangible tabletop interaction and encode object IDs as multi-point geometric patterns. Frequency tags encode object IDs in the time domain, using modulation circuits embedded inside tangible objects to simulate high-speed touches at varying frequencies. We will show several demo applications. The first combines simultaneous tangible and touch input, exploring how tangible inputs (e.g., pen, eraser, etc.) and some simple gestures work together on capacitive touch panels.
Keywords: interactive surface, markers, physical interaction, tangible
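   Decoding a frequency tag amounts to estimating how fast the embedded circuit toggles its simulated touch. Given timestamps of touch-down events from one contact point, the median inter-event gap yields the frequency and hence the ID; the codebook and tolerance below are invented for illustration.

      import statistics

      CODEBOOK = {20: "pen", 30: "eraser", 40: "stamp"}  # Hz -> object ID (assumed)
      TOLERANCE = 3  # Hz

      def decode(touch_down_times):
          """touch_down_times: sorted timestamps (s) of one toggling contact."""
          gaps = [b - a for a, b in zip(touch_down_times, touch_down_times[1:])]
          if not gaps:
              return None
          freq = 1.0 / statistics.median(gaps)
          for f, obj in CODEBOOK.items():
              if abs(freq - f) <= TOLERANCE:
                  return obj
          return None

      # 40 Hz toggling -> a touch-down every 25 ms
      print(decode([0.000, 0.025, 0.050, 0.075, 0.100]))  # -> "stamp"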
MobileSurface: interaction in the air for mobile computing BIBAKFull-Text 459-460
  Ji Zhao; Hujia Liu; Chunhui Zhang; Zhengyou Zhang
We describe a virtual interactive surface technology based on a projector-camera system connected to a mobile device. This system, named MobileSurface, can project images onto any free surface and enables interaction in the air within the projection area. The projector scans a laser beam very quickly across the projection area to produce a stable image at 60 fps, and camera-projector synchronization is used to capture the image at a designated scan line. The system can thus project what is perceived as a stable image onto the display surface while simultaneously working as a structured-light 3D scanning system.
Keywords: anywhere interaction, mobile, pico-projector, projector-camera system