
Proceedings of the 2006 ACM Symposium on User Interface Software and Technology

Fullname: Proceedings of the 2006 ACM Symposium on User Interface Software and Technology
Editors: Ken Hinckley
Location: Montreux, Switzerland
Dates: 2006-Oct-15 to 2006-Oct-18
Publisher: ACM
Standard No: ISBN 1-59593-313-1; ACM Order Number 429062
Papers: 42
Pages: 338
Links: Conference Home Page
  1. Keynote
  2. Perspectives on pointing & picking
  3. Information landscapes
  4. Sensing from head to toe
  5. Browsing & scrolling
  6. Clever renditions
  7. Pen & paper
  8. Projecting the future
  9. Tables galore
  10. Keynote
  11. DANGER -- interface construction zone
  12. Handwriting & character input

Keynote

Computing versus human thinking, pp. 1-2
  Peter Naur

Perspectives on pointing & picking

The design and evaluation of selection techniques for 3D volumetric displays, pp. 3-12
  Tovi Grossman; Ravin Balakrishnan
Volumetric displays, which display imagery in true 3D space, are a promising platform for the display and manipulation of 3D data. To fully leverage their capabilities, appropriate user interfaces and interaction techniques must be designed. In this paper, we explore 3D selection techniques for volumetric displays. In a first experiment, we find a ray cursor to be superior to a 3D point cursor in a single target environment. To address the difficulties associated with dense target environments we design four new ray cursor techniques which provide disambiguation mechanisms for multiple intersected targets. Our techniques showed varied success in a second, dense target experiment. One of the new techniques, the depth ray, performed particularly well, significantly reducing movement time, error rate, and input device footprint in comparison to the 3D point cursor.
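   To make the depth ray concrete: the ray may intersect several targets, and a marker the user slides along the ray picks among them. A minimal Python sketch, assuming spherical target proxies and a scalar marker position along the ray (both illustrative assumptions, not the authors' implementation):

    import numpy as np

    def ray_sphere_hits(origin, direction, centers, radii):
        # Return (target index, ray parameter t) for every sphere hit.
        d = direction / np.linalg.norm(direction)
        hits = []
        for i, (c, r) in enumerate(zip(centers, radii)):
            oc = origin - c
            b = np.dot(oc, d)
            disc = b * b - (np.dot(oc, oc) - r * r)
            if disc >= 0.0:
                t = -b - np.sqrt(disc)  # nearest intersection along the ray
                if t > 0.0:
                    hits.append((i, t))
        return hits

    def depth_ray_select(origin, direction, centers, radii, marker_t):
        # Among all intersected targets, select the one whose hit point
        # lies closest to the depth marker the user slides along the ray.
        hits = ray_sphere_hits(origin, direction, centers, radii)
        if not hits:
            return None
        return min(hits, key=lambda h: abs(h[1] - marker_t))[0]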
ModelCraft: capturing freehand annotations and edits on physical 3D models, pp. 13-22
  Hyunyoung Song; François Guimbretière; Chang Hu; Hod Lipson
With the availability of affordable new desktop fabrication techniques such as 3D printing and laser cutting, physical models are used increasingly often during the architectural and industrial design cycle. Models can easily be annotated to capture comments, edits and other forms of feedback. Unfortunately, these annotations remain in the physical world and cannot be easily transferred back to the digital world. Here we present a simple solution to this problem based on a tracking pattern printed on the surface of each model. Our solution is inexpensive, requires no tracking infrastructure or per object calibration, and can be used in the field without a computer nearby. It lets users not only capture annotations, but also edit the model using a simple yet versatile command system. Once captured, annotations and edits are merged into the original CAD models. There they can be easily edited or further refined. We present the design of a SolidWorks plug-in implementing this concept, and report initial feedback from potential users using our prototype. We also describe how this prototype could be extended seamlessly into a fully functional system using current 3D printing technology.
A direct texture placement and editing interface, pp. 23-32
  Yotam I. Gingold; Philip L. Davidson; Jefferson Y. Han; Denis Zorin
The creation of most models used in computer animation and computer games requires the assignment of texture coordinates, texture painting, and texture editing. We present a novel approach for texture placement and editing based on direct manipulation of textures on the surface. Compared to conventional tools for surface texturing, our system combines UV-coordinate specification and texture editing into one seamless process, reducing the need for careful initial design of parameterization and providing a natural interface for working with textures directly on 3D surfaces.
   A combination of efficient techniques for interactive constrained parameterization and advanced input devices makes it possible to realize a set of natural interaction paradigms. The texture is regarded as a piece of stretchable material, which the user can position and deform on the surface, selecting arbitrary sets of constraints and mapping texture points to the surface; in addition, the multi-touch input makes it possible to specify natural handles for texture manipulation using point constraints associated with different fingers. Pressure can be used as a direct interface for texture combination operations. The 3D position of the object and its texture can be manipulated simultaneously using two-hand input.
CINCH: a cooperatively designed marking interface for 3D pathway selection, pp. 33-42
  David Akers
To disentangle and analyze neural pathways estimated from magnetic resonance imaging data, scientists need an interface to select 3D pathways. Broad adoption of such an interface requires the use of commodity input devices such as mice and pens, but these devices offer only two degrees of freedom. CINCH solves this problem by providing a marking interface for 3D pathway selection. CINCH interprets pen strokes as pathway selections in 3D using a marking language designed together with scientists. Its bimanual interface employs a pen and a trackball, allowing alternating selections and scene rotations without changes of mode. CINCH was evaluated by observing four scientists using the tool over a period of three weeks as part of their normal work activity. Event logs and interviews revealed dramatic improvements in both the speed and quality of scientists' everyday work, and a set of principles that should inform the design of future 3D marking interfaces. More broadly, CINCH demonstrates the value of the iterative, participatory design process that catalyzed its evolution.
Soap: a pointing device that works in mid-air, pp. 43-46
  Patrick Baudisch; Mike Sinclair; Andrew Wilson
Soap is a pointing device based on hardware found in a mouse, yet it works in mid-air. Soap consists of an optical sensor device moving freely inside a hull made of fabric. As the user applies pressure from the outside, the optical sensor moves independently of the hull. The optical sensor perceives this relative motion and reports it as position input. Soap offers many of the benefits of optical mice, such as high-accuracy sensing. We describe the design of a soap prototype and report our experiences with four application scenarios: a wall display, Windows Media Center, slide presentation, and interactive video games.

Information landscapes

Comparing and managing multiple versions of slide presentations, pp. 47-56
  Steven M. Drucker; Georg Petschnigg; Maneesh Agrawala
Despite the ubiquity of slide presentations, managing multiple presentations remains a challenge. Understanding how multiple versions of a presentation are related to one another, assembling new presentations from existing presentations, and collaborating to create and edit presentations are difficult tasks. In this paper, we explore techniques for comparing and managing multiple slide presentations. We propose a general comparison framework for computing similarities and differences between slides. Based on this framework we develop an interactive tool for visually comparing multiple presentations. The interactive visualization facilitates understanding how presentations have evolved over time. We show how the interactive tool can be used to assemble new presentations from a collection of older ones and to merge changes from multiple presentation authors.
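   The abstract leaves the slide-similarity measure open; as a placeholder, a toy pairwise measure over slide text shows where such a comparison framework plugs in (Jaccard word overlap is an invented stand-in, not the paper's measure):

    def slide_similarity(slide_a, slide_b):
        # Toy measure: Jaccard overlap of the words on two slides.
        wa, wb = set(slide_a.lower().split()), set(slide_b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

    def comparison_matrix(deck_a, deck_b):
        # Pairwise similarities between two decks: the raw material for
        # a visual diff, where high values suggest matching slides.
        return [[slide_similarity(a, b) for b in deck_b] for a in deck_a]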
Viz: a visual analysis suite for explaining local search behavior, pp. 57-66
  Steven Halim; Roland H. C. Yap; Hoong Chuin Lau
NP-hard combinatorial optimization problems are common in real life. Due to their intractability, local search algorithms are often used to solve such problems. Since these algorithms are heuristic-based, it is hard to understand how to improve or tune them. We propose an interactive visualization tool, VIZ, for understanding the behavior of local search. VIZ animates abstract search trajectories alongside other visualizations, all of which can be played back in a VCR-like fashion to graphically replay the algorithm's behavior. It combines generic visualizations applicable to arbitrary algorithms with algorithm- and problem-specific visualizations. We use a variety of techniques, such as alpha blending to reduce visual clutter and smooth the animation, highlights and shading, automatically generated index points for playback, and visual comparison of two algorithms. The use of multiple viewpoints can be an effective way of understanding search behavior and highlighting algorithm behavior which might otherwise be hidden.
From information visualization to direct manipulation: extending a generic visualization framework for the interactive editing of large datasets, pp. 67-76
  Thomas Baudel
Today's generic data management applications, such as accounting, CRM, or logging and tracking software, rely on form- and menu-based interfaces. These applications take only marginal advantage of current graphical user interfaces, because the data they handle does not have intrinsic visual representations upon which direct manipulation principles can be used. This article describes how we extended an Information Visualization framework with generic data manipulation functions. These new data editing capabilities are tuned to take advantage of the characteristics of each view. They enable us to generalize the direct manipulation mechanisms to address many abstract data manipulation needs. In this article we present five uses of the features we have implemented and deduce a general workflow applicable to a variety of contexts. The workflow comprises three steps and five editing actions. The steps are: adjust view, select, and edit. The editing actions are: edit a value or group of values, clone objects, remove objects, add attributes, and remove attributes. The workflow provides complete editing access to table and hierarchical data structures using particularly terse interaction methods. It defines a general data editing model that enables powerful data manipulation tasks without requiring end-user programming or scripting.
WindowScape: a task oriented window manager, pp. 77-80
  Craig Tashman
We propose WindowScape, a window manager that uses a photograph metaphor for lightweight, post hoc task management. This is the first task management windowing model to provide intuitive accessibility while allowing windows to exist simultaneously in multiple tasks. WindowScape exploits users' spatial and visual memories by providing a stable thumbnail layout in which to search for windows. A function is provided to let users search the window space while maintaining a largely consistent screen image to minimize distractions. A novel keyboard interaction technique is also presented.

Sensing from head to toe

Using a low-cost electroencephalograph for task classification in HCI research, pp. 81-90
  Johnny Chung Lee; Desney S. Tan
Modern brain sensing technologies provide a variety of methods for detecting specific forms of brain activity. In this paper, we present an initial step in exploring how these technologies may be used to perform task classification and applied in a relevant manner to HCI research. We describe two experiments showing successful classification between tasks using a low-cost off-the-shelf electroencephalograph (EEG) system. In the first study, we achieved a mean classification accuracy of 84.0% in subjects performing one of three cognitive tasks - rest, mental arithmetic, and mental rotation - while sitting in a controlled posture. In the second study, conducted in a more ecologically valid setting for HCI research, we attained a mean classification accuracy of 92.4% using three tasks that included non-cognitive features: a relaxation task, playing a PC-based game without opponents, and engaging opponents within the game. Throughout the paper, we provide lessons learned and discuss how HCI researchers may utilize these technologies in their work.
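   A sketch of the kind of pipeline such a study implies: log band power per EEG channel as features, fed to an off-the-shelf classifier. The abstract names neither the features nor the classifier, so everything below is a generic stand-in:

    import numpy as np
    from scipy.signal import welch
    from sklearn.svm import SVC

    def band_power_features(window, fs, bands=((4, 8), (8, 12), (12, 30))):
        # Log band power per channel in theta/alpha/beta bands.
        # window: EEG samples of shape (channels, samples).
        freqs, psd = welch(window, fs=fs, nperseg=min(256, window.shape[1]))
        feats = []
        for lo, hi in bands:
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(np.log(psd[:, mask].mean(axis=1)))
        return np.concatenate(feats)

    # X: one feature vector per EEG window; y: the task label per window.
    # clf = SVC().fit(X_train, y_train); accuracy = clf.score(X_test, y_test)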
Sensing from the basement: a feasibility study of unobtrusive and low-cost home activity recognition, pp. 91-100
  James Fogarty; Carolyn Au; Scott E. Hudson
The home deployment of sensor-based systems offers many opportunities, particularly in the area of using sensor-based systems to support aging in place by monitoring an elder's activities of daily living. But existing approaches to home activity recognition are typically expensive, difficult to install, or intrude into the living space. This paper considers the feasibility of a new approach that "reaches into the home" via the existing infrastructure. Specifically, we deploy a small number of low-cost sensors at critical locations in a home's water distribution infrastructure. Based on water usage patterns, we can then infer activities in the home. To examine the feasibility of this approach, we deployed real sensors into a real home for six weeks. Among other findings, we show that a model built on microphone-based sensors that are placed away from systematic noise sources can identify 100% of clothes washer usage, 95% of dishwasher usage, 94% of showers, 88% of toilet flushes, 73% of bathroom sink activity lasting ten seconds or longer, and 81% of kitchen sink activity lasting ten seconds or longer. While there are clear limits to what activities can be detected by analyzing water usage, our new approach represents a sweet spot in the tradeoff between the information collected and the cost of collecting it.
Camera phone based motion sensing: interaction techniques, applications and performance study, pp. 101-110
  Jingtao Wang; Shumin Zhai; John Canny
This paper presents TinyMotion, a pure software approach for detecting a mobile phone user's hand movement in real time by analyzing image sequences captured by the built-in camera. We present the design and implementation of TinyMotion and several interactive applications based on it. Through both an informal evaluation and a formal 17-participant user study, we found that: (1) TinyMotion can detect camera movement reliably under most background and illumination conditions; (2) target acquisition tasks based on TinyMotion follow Fitts' law, and Fitts' law parameters can be used to measure TinyMotion-based pointing performance; (3) users can enter sentences faster with Vision TiltText, a TinyMotion-enabled input method, than with MultiTap after a few minutes of practice; (4) using a camera phone as a handwriting capture device and performing large-vocabulary, multilingual, real-time handwriting recognition on the phone are feasible; (5) TinyMotion-based gaming is enjoyable and immediately available for current-generation camera phones. We also report user experiences and problems with TinyMotion-based interaction as resources for the future design and development of mobile interfaces.
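   The heart of a camera-based motion sensor like TinyMotion can be sketched as exhaustive block matching between consecutive downsampled grayscale frames; the search range and sum-of-absolute-differences cost below are illustrative choices, not TinyMotion's exact algorithm:

    import numpy as np

    def estimate_shift(prev, curr, max_shift=4):
        # Find the (dx, dy) displacement that best aligns the current
        # frame with the previous one (minimum sum of absolute
        # differences); reported as relative pointing input.
        h, w = prev.shape
        m = max_shift
        core = prev[m:h - m, m:w - m].astype(float)
        best, best_dxdy = None, (0, 0)
        for dy in range(-m, m + 1):
            for dx in range(-m, m + 1):
                shifted = curr[m + dy:h - m + dy, m + dx:w - m + dx]
                sad = np.abs(core - shifted).sum()
                if best is None or sad < best:
                    best, best_dxdy = sad, (dx, dy)
        return best_dxdy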
Mobile interaction using paperweight metaphor, pp. 111-114
  Itiro Siio; Hitomi Tsujita
Conventional scrolling methods for the small displays of PDAs and mobile phones are difficult to use when frequent switching between scrolling and editing operations is required, for example when browsing and operating large WWW pages.
   In this paper, we propose a new user-interface method that provides seamless switching between scrolling and other operations such as editing, based on a "paperweight metaphor". A sheet of paper placed on a slippery table is difficult to draw on: to write or draw on it, a person must secure the paper with his or her palm to keep it from moving. This is a good metaphor for designing the switch between scrolling and editing modes.
   We built prototype systems by placing a touch sensor under each PDA display, where the user's palm naturally rests. We developed three application programs - a map browser, a WWW browser, and a photograph browser - that switch between scrolling and other operation modes depending on the sensor output. User tests of this mode-switching method yielded favorable feedback.

Browsing & scrolling

Summarizing personal web browsing sessions, pp. 115-124
  Mira Dontcheva; Steven M. Drucker; Geraldine Wade; David Salesin; Michael F. Cohen
We describe a system, implemented as a browser extension, that enables users to quickly and easily collect, view, and share personal Web content. Our system employs a novel interaction model, which allows a user to specify webpage extraction patterns by interactively selecting webpage elements and applying these patterns to automatically collect similar content. Further, we present a technique for creating visual summaries of the collected information by combining user labeling with predefined layout templates. These summaries are interactive in nature: depending on the behaviors encoded in their templates, they may respond to mouse events, in addition to providing a visual summary. Finally, the summaries can be saved or sent to others to continue the research at another place or time. Informal evaluation shows that our approach works well for popular websites, and that users can quickly learn this interaction model for collecting content from the Web.
Enabling web browsers to augment web sites' filtering and sorting functionalities, pp. 125-134
  David F. Huynh; Robert C. Miller; David R. Karger
Existing augmentations of web pages are mostly small cosmetic changes (e.g., removing ads) and minor addition of third-party content (e.g., product prices from competing sites). None leverages the structured data presented in web pages. This paper describes Sifter, a web browser extension that can augment a well-structured web site with advanced filtering and sorting functionality. These added features work inside the site's own pages, preserving the site's presentational style and the user's context. Sifter contains an algorithm that scrapes structured data out of well-structured web pages while usually requiring no user intervention. We tested Sifter on real web sites and real users and found that people could use Sifter to perform sophisticated queries and high-level analyses on sizable data collections on the Web. We propose that web sites can be similarly augmented with other sophisticated data-centric functionality, giving users new benefits over the existing Web.
Translating keyword commands into executable code, pp. 135-144
  Greg Little; Robert C. Miller
Modern applications provide interfaces for scripting, but many users do not know how to write script commands. However, many users are familiar with the idea of entering keywords into a web search engine. Hence, if a user is familiar with the vocabulary of an application domain, we anticipate that they could write a set of keywords expressing a command in that domain. For instance, in the web browsing domain, a user might enter "click search button". We call expressions of this form keyword commands, and we present a novel approach for translating keyword commands directly into executable code. Our prototype of this system in the web browsing domain translates "click search button" into the Chickenfoot code click(findButton("search")). This code is then executed in the context of a web browser to carry out the effect. We also present an implementation of this system in the domain of Microsoft Word. A user study revealed that subjects could use keyword commands to successfully complete 90% of the web browsing tasks in our study without instructions or training. Conversely, we would expect users to complete close to 0% of the tasks if they had to guess the underlying JavaScript commands with no instructions or training.
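   A toy version of keyword-command translation, with a two-entry command registry standing in for the real system's search over an application's API and page content (the registry, scoring, and argument handling are all illustrative):

    # Hypothetical registry: code template -> keywords it should match.
    COMMANDS = {
        'click(findButton("{arg}"))': {"click", "press", "button"},
        'enter("{arg}", findTextbox())': {"enter", "type", "textbox", "field"},
    }

    def translate(command):
        # Pick the template whose vocabulary overlaps the keywords most;
        # leftover keywords become the argument.
        words = command.lower().split()
        template, vocab = max(COMMANDS.items(),
                              key=lambda item: len(set(words) & item[1]))
        args = [w for w in words if w not in vocab]
        return template.format(arg=" ".join(args))

    print(translate("click search button"))  # click(findButton("search"))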
RecipeSheet: creating, combining and controlling information processors, pp. 145-154
  Aran Lunzer; Kasper Hornbæk
Many tasks require users to extract information from diverse sources, to edit or process this information locally, and to explore how the end results are affected by changes in the information or in its processing. We present the RecipeSheet, a general-purpose tool for assisting users in such tasks. The RecipeSheet lets users create information processors, called recipes, which may take input in a variety of forms such as text, Web pages, or XML, and produce results in a similar variety of forms. The processing carried out by a recipe may be specified using a macro or query language, of which we currently support Rexx, Smalltalk and XQuery, or by capturing the behaviour of a Web application or Web service. In the RecipeSheet's spreadsheet-inspired user interface, information appears in cells, with inter-cell dependencies defined by recipes rather than formulas. Users can also intervene manually to control which information flows through the dependency connections. Through a series of examples we illustrate how tasks that would be challenging in existing environments are supported by the RecipeSheet.
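   The cell-and-recipe model can be sketched as a tiny dataflow graph: cells hold values, and recipes (arbitrary callables) recompute dependent cells whenever a source changes. This ignores cycles and the RecipeSheet's manual control over which values flow; all names are illustrative:

    class Cell:
        # A spreadsheet-like cell whose value is produced by a recipe
        # over other cells, rather than by a formula.
        def __init__(self, value=None):
            self.value = value
            self.dependents = []  # (recipe, target, sources) triples

        def set(self, value):
            self.value = value
            for recipe, target, sources in self.dependents:
                target.set(recipe(*[s.value for s in sources]))

    def connect(recipe, sources, target):
        # Wire source cells through a recipe into a target cell.
        for s in sources:
            s.dependents.append((recipe, target, sources))
        target.set(recipe(*[s.value for s in sources]))

    a, b, total = Cell(2), Cell(3), Cell()
    connect(lambda x, y: x + y, [a, b], total)
    a.set(10)  # total.value is now 13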
Content-aware scrolling, pp. 155-158
  Edward W. Ishak; Steven K. Feiner
Scrolling is used to navigate large information spaces on small screens, but is often too restrictive or cumbersome to use for particular types of content, such as multi-page, multi-column documents. To address this problem, we introduce content-aware scrolling (CAS), an approach that takes into account various characteristics of document content to determine scrolling direction, speed, and zoom. We also present the CAS widget, which supports scrolling through a content-aware path using traditional scrolling methods, demonstrating the advantages of making a traditional technique content-aware.

Clever renditions

Mnemonic rendering: an image-based approach for exposing hidden changes in dynamic displays, pp. 159-168
  Anastasia Bezerianos; Pierre Dragicevic; Ravin Balakrishnan
Managing large amounts of dynamic visual information involves understanding changes happening out of the user's sight. In this paper, we show how current software does not adequately support users in this task, and motivate the need for a more general approach. We propose an image-based storage, visualization, and implicit interaction paradigm called mnemonic rendering that provides better support for handling visual changes. Once implemented on a system, mnemonic rendering techniques can benefit all applications. We explore its rich design space and discuss its expected benefits as well as limitations based on feedback from users of a small-screen and a wall-size prototype.
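   The storage side of mnemonic rendering reduces to buffering frames for a region while it is out of sight and replaying them on reveal. A minimal sketch; the playback and summarization policies, which are the paper's real substance, are omitted:

    from collections import deque

    class MnemonicRegion:
        # Buffers rendered frames for a screen region while it is
        # hidden, then hands them back for replay when revealed.
        def __init__(self, max_frames=300):
            self.hidden = False
            self.history = deque(maxlen=max_frames)

        def hide(self):
            self.hidden = True

        def on_frame(self, image):
            if self.hidden:
                self.history.append(image)  # change happened out of sight
            return image

        def reveal(self):
            # Frames to play back (or fast-forward) before going live.
            self.hidden = False
            frames = list(self.history)
            self.history.clear()
            return frames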
Phosphor: explaining transitions in the user interface using afterglow effects, pp. 169-178
  Patrick Baudisch; Desney Tan; Maxime Collomb; Dan Robbins; Ken Hinckley; Maneesh Agrawala; Shengdong Zhao; Gonzalo Ramos
Sometimes users fail to notice a change that just took place on their display. For example, the user may have accidentally deleted an icon or a remote collaborator may have changed settings in a control panel. Animated transitions can help, but they force users to wait for the animation to complete. This can be cumbersome, especially in situations where users did not need an explanation. We propose a different approach. Phosphor objects show the outcome of their transition instantly; at the same time they explain their change in retrospect. Manipulating a phosphor slider, for example, leaves an afterglow that illustrates how the knob moved. The parallelism of instant outcome and explanation supports both types of users. Users who already understood the transition can continue interacting without delay, while those who are inexperienced or may have been distracted can take time to view the effects at their own pace. We present a framework of transition designs for widgets, icons, and objects in drawing programs. We evaluate phosphor objects in two user studies and report significant performance benefits.
Procedural haptic texture, pp. 179-186
  Jeremy Shopf; Marc Olano
We present the Haptic Shading Framework (HSF), a framework for procedurally defining haptic texture. HSF haptic texture shaders are short procedures that allow an application programmer to easily define interesting haptic surface interactions and the parameters that control the surface properties. These shaders provide the illusion of surface characteristics by altering previously calculated forces from object collision in the haptic pipeline.
   HSF can be used in an existing haptic application with few modifications. The framework consists of user-programmable modules that are dynamically loaded. This framework and all user-defined procedures are written in C++, with a provided library of useful math and geometry functions. These functions are meant to mimic RenderMan functionality, creating a familiar shading environment. As we demonstrate, many procedural shading methods and algorithms can be directly adopted for haptic shading.
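   In the spirit of an HSF shader, a procedural haptic texture is a short procedure that perturbs the already-computed collision force at the contact point. HSF itself is a C++ framework with RenderMan-like functions; this toy Python version only shows the shape of such a procedure:

    import math

    def sine_bump_shader(force, contact_xy, amplitude=0.3, frequency=40.0):
        # Modulate the force magnitude with a procedural sine-bump
        # pattern, giving the illusion of a regularly textured surface.
        x, y = contact_xy
        bump = 1.0 + amplitude * math.sin(frequency * x) * math.sin(frequency * y)
        return tuple(f * bump for f in force)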
Personalizing routes, pp. 187-190
  Kayur Patel; Mike Y. Chen; Ian Smith; James A. Landay
Navigation services (e.g., in-car navigation systems and online mapping sites) compute routes between two locations to help users navigate. However, these routes may direct users along an unfamiliar path when a familiar path exists, or, conversely, may include redundant information that the user already knows. These overly complicated directions increase the cognitive load of the user, which may lead to a dangerous driving environment. Since the level of detail is user specific and depends on their familiarity with a region, routes need to be personalized. We have developed a system, called MyRoute, that reduces route complexity by creating user specific routes based on a priori knowledge of familiar routes and landmarks. MyRoute works by compressing well known steps into a single contextualized step and rerouting users along familiar routes.
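   The step-compression idea itself can be sketched directly: collapse maximal runs of familiar steps into one contextualized instruction. How familiarity is learned from prior routes and landmarks is the system's real contribution and is simply assumed as a set here:

    def compress_route(steps, familiar):
        # Replace each maximal run of steps the user already knows with
        # a single contextualized step naming the run's endpoint.
        out, run = [], []
        for step in steps:
            if step in familiar:
                run.append(step)
                continue
            if run:
                out.append("Drive as usual to " + run[-1])
                run = []
            out.append(step)
        if run:
            out.append("Drive as usual to " + run[-1])
        return out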

Pen & paper

Quiet interfaces that help students think, pp. 191-200
  Sharon Oviatt; Alex Arthur; Julia Cohen
As technical as we have become, modern computing has not permeated many important areas of our lives, including mathematics education which still involves pencil and paper. In the present study, twenty high school geometry students varying in ability from low to high participated in a comparative assessment of math problem solving using existing pencil and paper work practice (PP), and three different interfaces: an Anoto-based digital stylus and paper interface (DP), pen tablet interface (PT), and graphical tablet interface (GT). Cognitive Load Theory correctly predicted that as interfaces departed more from familiar work practice (GT > PT > DP), students would experience greater cognitive load such that performance would deteriorate in speed, attentional focus, meta-cognitive control, correctness of problem solutions, and memory. In addition, low-performing students experienced elevated cognitive load, with the more challenging interfaces (GT, PT) disrupting their performance disproportionately more than higher performers. The present results indicate that Cognitive Load Theory provides a coherent and powerful basis for predicting the rank ordering of users' performance by type of interface. In the future, new interfaces for areas like education and mobile computing could benefit from designs that minimize users' load so performance is more adequately supported.
Pen-top feedback for paper-based interfaces, pp. 201-210
  Chunyuan Liao; François Guimbretière; Corinna E. Loeckenhoff
Current paper-based interfaces, such as PapierCraft, provide very little feedback, and this limits the scope of possible interactions. So far, there has been little systematic exploration of the structure, constraints, and contingencies of feedback mechanisms in paper-based interaction systems for paper-only environments. We identify three levels of feedback: discovery feedback (e.g., to aid with menu learning), status-indication feedback (e.g., for error detection), and task feedback (e.g., to aid in a search task). Using three modalities (visual, tactile, and auditory) which can be easily implemented on a pen-sized computer, we introduce a conceptual matrix to guide systematic research on pen-top feedback for paper-based interfaces. Using this matrix, we implemented a multimodal pen prototype demonstrating the potential of our approach. We conducted an experiment that confirmed the efficacy of our design in helping users discover a new interface and identify and correct their errors.
HybridPointing: fluid switching between absolute and relative pointing with a direct input device, pp. 211-220
  Clifton Forlines; Daniel Vogel; Ravin Balakrishnan
We present HybridPointing, a technique that lets users easily switch between absolute and relative pointing with a direct input device such as a pen. Our design includes a new graphical element, the Trailing Widget, which remains "close at hand" but does not interfere with normal cursor operation. The use of visual feedback to aid the user's understanding of input state is discussed, and several novel visual aids are presented. An experiment conducted on a large, wall-sized display validates the benefits of HybridPointing under certain conditions. We also discuss other situations in which HybridPointing may be useful. Finally, we present an extension to our technique that allows for switching between absolute and relative input in the middle of a single drag operation.
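   A sketch of the mode switch itself: the cursor either sits under the pen tip (absolute) or accumulates gained deltas (relative) after the user toggles modes, which HybridPointing does via the Trailing Widget; the gain value and the reduction of the widget to a toggle are illustrative:

    class HybridPointer:
        def __init__(self, gain=2.0):
            self.relative = False
            self.cursor = (0.0, 0.0)
            self.last_pen = None
            self.gain = gain  # control-display gain in relative mode

        def toggle_mode(self):  # e.g., tapping the Trailing Widget
            self.relative = not self.relative
            self.last_pen = None

        def on_pen(self, x, y):
            if not self.relative:
                self.cursor = (x, y)  # absolute: cursor under the pen tip
            elif self.last_pen is not None:
                dx, dy = x - self.last_pen[0], y - self.last_pen[1]
                self.cursor = (self.cursor[0] + self.gain * dx,
                               self.cursor[1] + self.gain * dy)
            self.last_pen = (x, y)
            return self.cursor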
Videotater: an approach for pen-based digital video segmentation and tagging, pp. 221-224
  Nicholas Diakopoulos; Irfan Essa
The continuous growth of media databases necessitates development of novel visualization and interaction techniques to support management of these collections. We present Videotater, an experimental tool for a Tablet PC that supports the efficient and intuitive navigation, selection, segmentation, and tagging of video. Our veridical representation immediately signals to the user where appropriate segment boundaries should be placed and allows for rapid review and refinement of manually or automatically generated segments. Finally, we explore a distribution of modalities in the interface by using multiple timeline representations, pressure sensing, and a tag painting/erasing metaphor with the pen.

Projecting the future

Interacting with dynamically defined information spaces using a handheld projector and a pen, pp. 225-234
  Xiang Cao; Ravin Balakrishnan
The recent trend towards miniaturization of projection technology indicates that handheld devices will soon have the ability to project information onto any surface, thus enabling interfaces that are not possible with current handhelds. We explore the design space of dynamically defining and interacting with multiple virtual information spaces embedded in a physical environment using a handheld projector and a passive pen tracked in 3D. We develop techniques for defining and interacting with these spaces, and explore usage scenarios.
Projector-guided painting, pp. 235-244
  Matthew Flagg; James M. Rehg
This paper presents a novel interactive system for guiding artists to paint using traditional media and tools. The enabling technology is a multi-projector display capable of controlling the appearance of an artist's canvas. This display-on-canvas guides the artist to construct the painting as a series of layers. Our process model for painting is based on classical techniques and was designed to address three main issues which are challenging to novices: (1) positioning and sizing elements on the canvas, (2) executing the brushstrokes to achieve a desired texture and (3) mixing pigments to make a target color. These challenges are addressed through a set of interaction modes. Preview and color selection modes enable the artist to focus on the current target layer by highlighting the areas of the canvas to be painted. Orientation mode displays brushstroke guidelines for the creation of desired brush texture. Color mixing mode guides the artist through the color mixing process with a user interface similar to a color wheel. These interaction modes allow a novice artist to focus on a series of manageable subtasks in executing a complex painting. Our system covers the gamut of the painting process from overall composition down to detailed brushwork. We present the results from a user study which quantify the benefit that our system can provide to a novice painter.
Interactive environment-aware display bubbles, pp. 245-254
  Daniel Cotting; Markus Gross
We present a novel display metaphor which extends traditional tabletop projections in collaborative environments by introducing freeform, environment-aware display representations and a matching set of interaction schemes. For that purpose, we map personalized widgets or ordinary computer applications that have been designed for a conventional, rectangular layout into space-efficient bubbles whose warping is performed with a potential-based physics approach. With a set of interaction operators based on laser pointer tracking, these freeform displays can be transformed and elastically deformed using focus and context visualization techniques. We also provide operations for intuitive instantiation of bubbles, cloning, cut & paste, deletion, and grouping in an interactive way, and we allow for user-drawn annotations and text entry using a projected keyboard. Additionally, an optional environment-aware adaptivity of the displays is achieved by imperceptible, real-time scanning of the projection geometry. Subsequently, collision responses of the bubbles with non-optimal surface parts are computed in a rigid body simulation. The extraction of the projection surface properties runs concurrently with the main application of the system. Our approach is entirely based on off-the-shelf, low-cost hardware including DLP projectors and FireWire cameras.
Robust computer vision-based detection of pinching for one and two-handed gesture input, pp. 255-258
  Andrew D. Wilson
We present a computer vision technique to detect when the user brings their thumb and forefinger together (a pinch gesture) for close-range and relatively controlled viewing circumstances. The technique avoids complex and fragile hand tracking algorithms by detecting the hole formed when the thumb and forefinger are touching; this hole is found by simple analysis of the connected components of the background segmented against the hand. Our Thumb and Fore-Finger Interface (TAFFI) demonstrates the technique for cursor control as well as map navigation using one and two-handed interactions.
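   The hole test can be sketched with standard connected-component analysis: segment the hand against a stored background image, then look for a background component that does not touch the image border. The thresholds are illustrative, and OpenCV here stands in for whatever implementation the authors used:

    import cv2
    import numpy as np

    def detect_pinch(frame_gray, bg_gray, thresh=40, min_area=50):
        # Hand pixels = those that differ enough from the background.
        hand = cv2.absdiff(frame_gray, bg_gray) > thresh
        background = (~hand).astype(np.uint8)
        n, labels = cv2.connectedComponents(background)
        # Background components that reach the border are not holes.
        border = set(labels[0, :]) | set(labels[-1, :]) \
               | set(labels[:, 0]) | set(labels[:, -1])
        for label in range(1, n):
            if label not in border and (labels == label).sum() >= min_area:
                return True  # enclosed background region = pinch hole
        return False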

Tables galore

Under the table interaction, pp. 259-268
  Daniel Wigdor; Darren Leigh; Clifton Forlines; Samuel Shipman; John Barnwell; Ravin Balakrishnan; Chia Shen
We explore the design space of a two-sided interactive touch table, designed to receive touch input from both the top and bottom surfaces of the table. By combining two registered touch surfaces, we are able to offer a new dimension of input for co-located collaborative groupware. This design accomplishes the goal of increasing the relative size of the input area of a touch table while maintaining its direct-touch input paradigm. We describe the interaction properties of this two-sided touch table, report the results of a controlled experiment examining the precision of user touches to the underside of the table, and describe a series of application scenarios we developed for use on inverted and two-sided tables. Finally, we present a list of design recommendations based on our experiences and observations with inverted and two-sided tables.
Multi-layer interaction for digital tables, pp. 269-272
  Sriram Subramanian; Dzmitry Aliakseyeu; Andres Lucero
Interaction on digital tables has been restricted to a single layer on the table's active work-surface. We extend the design space of digital tables to include multiple layers of interaction. We leverage 3D position information of a pointing device to support interaction in the space above the active work-surface, creating multiple layers with drift correction in which the user can interact with an application. We also illustrate through a point design that designers can use multiple layers to create a rich and clutter-free application. A subjective evaluation showed that users liked the interaction techniques and found that, because of the drift correction we use, they could control the pointer when working in any layer.
Multi-user, multi-display interaction with a single-user, single-display geospatial application, pp. 273-276
  Clifton Forlines; Alan Esenther; Chia Shen; Daniel Wigdor; Kathy Ryall
In this paper, we discuss our adaptation of a single-display, single-user commercial application for use in a multi-device, multi-user environment. We wrap Google Earth, a popular geospatial application, in a manner that allows for synchronized coordinated views among multiple instances running on different machines in the same co-located environment. The environment includes a touch-sensitive tabletop display, three vertical wall displays, and a TabletPC. A set of interaction techniques that allow a group to manage and exploit this collection of devices is presented.

Keynote

Brain-computer interaction, pp. 277-278
  José del R. Millán
The promise of Brain-Computer Interface (BCI) technology is to augment human capabilities by enabling people to interact with a computer through a conscious and spontaneous modulation of their brainwaves after a short training period. Indeed, by analyzing brain electrical activity online, several groups have designed brain-actuated systems that provide alternative channels for communication, entertainment and control. Thus, a person can write messages using a virtual keyboard on a computer screen and also browse the internet. Alternatively, subjects can operate simple computer games, or brain games, and interact with educational software. Researchers have also been able to train monkeys to move a computer cursor to desired targets and also to control a robot arm. Work with humans has shown that it is possible for them to move a cursor and even to drive a mobile robot between rooms in a house model. In this talk I will review the field of BCI, with a focus on non-invasive systems based on electroencephalogram (EEG) signals. I will also describe three brain-actuated applications we have developed: a virtual keyboard, a brain game, and a mobile robot (emulating a motorized wheelchair). Finally, I will discuss current research directions we are pursuing in order to improve the performance and robustness of our BCI system, especially for real-time control of brain-actuated robots.

DANGER -- interface construction zone

Huddle: automatically generating interfaces for systems of multiple connected appliances, pp. 279-288
  Jeffrey Nichols; Brandon Rothrock; Duen Horng Chau; Brad A. Myers
Systems of connected appliances, such as home theaters and presentation rooms, are becoming commonplace in our homes and workplaces. These systems are often difficult to use, in part because users must determine how to split the tasks they wish to perform into sub-tasks for each appliance and then find the particular functions of each appliance to complete their sub-tasks. This paper describes Huddle, a new system that automatically generates task-based interfaces for a system of multiple appliances based on models of the content flow within the multi-appliance system.
Rapid construction of functioning physical interfaces from cardboard, thumbtacks, tin foil and masking tape, pp. 289-298
  Scott E. Hudson; Jennifer Mankoff
Rapid, early, but rough system prototypes are becoming a standard and valued part of the user interface design process. Pen, paper, and tools like Flash and Director are well suited to creating such prototypes. However, in the case of physical forms with embedded technology, there is a lack of tools for developing rapid, early prototypes. Instead, the process tends to be fragmented into prototypes exploring forms that look like the intended product or explorations of functioning interactions that work like the intended product - bringing these aspects together into full design concepts only later in the design process. To help alleviate this problem, we present a simple tool for very rapidly creating functioning, rough physical prototypes early in the design process - supporting what amounts to interactive physical sketching. Our tool allows a designer to combine exploration of form and interactive function, using objects constructed from materials such as thumbtacks, foil, cardboard and masking tape, enhanced with a small electronic sensor board. By means of a simple and fluid tool for delivering events to "screen clippings," these physical sketches can then be easily connected to any existing (or new) program running on a PC to provide real or Wizard of Oz supported functionality.
Reflective physical prototyping through integrated design, test, and analysis, pp. 299-308
  Björn Hartmann; Scott R. Klemmer; Michael Bernstein; Leith Abdulla; Brandon Burr; Avi Robinson-Mosher; Jennifer Gee
Prototyping is the pivotal activity that structures innovation, collaboration, and creativity in design. Prototypes embody design hypotheses and enable designers to test them. Framing design as a thinking-by-doing activity foregrounds iteration as a central concern. This paper presents d.tools, a toolkit that embodies an iterative-design-centered approach to prototyping information appliances. This work offers contributions in three areas. First, d.tools introduces a statechart-based visual design tool that provides a low threshold for early-stage prototyping, extensible through code for higher-fidelity prototypes. Second, our research introduces three important types of hardware extensibility - at the hardware-to-PC interface, the intra-hardware communication level, and the circuit level. Third, d.tools integrates design, test, and analysis of information appliances. We have evaluated d.tools through three studies: a laboratory study with thirteen participants; rebuilding prototypes of existing and emerging devices; and by observing seven student teams who built prototypes with d.tools.
User interface facades: towards fully adaptable user interfaces, pp. 309-318
  Wolfgang Stuerzlinger; Olivier Chapuis; Dusty Phillips; Nicolas Roussel
User interfaces are becoming more and more complex. Adaptable and adaptive interfaces have been proposed to address this issue, and previous studies have shown that users prefer interfaces that they can adapt to self-adjusting ones. However, most existing systems provide users with little support for adapting their interfaces. Interface customization techniques are still very primitive and usually restricted to particular applications. In this paper, we present User Interface Facades, a system that provides users with simple ways to adapt, reconfigure, and re-combine existing graphical interfaces, through the use of direct manipulation techniques. The paper describes the user's view of the system, provides some technical details, and presents several examples to illustrate its potential.
SwingStates: adding state machines to the swing toolkit, pp. 319-322
  Caroline Appert; Michel Beaudouin-Lafon
This article describes SwingStates, a library that adds state machines to the Java Swing user interface toolkit. Unlike traditional approaches, which use callbacks or listeners to define interaction, state machines provide a powerful control structure and localize all of the interaction code in one place. SwingStates takes advantage of Java's inner classes, providing programmers with a natural syntax and making it easier to follow and debug the resulting code. SwingStates tightly integrates state machines, the Java language and the Swing toolkit. It reduces the potential for an explosion of states by allowing multiple state machines to work together. We show how to use SwingStates to add new interaction techniques to existing Swing widgets, to program a powerful new Canvas widget and to control high-level dialogues.
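   SwingStates is a Java library; the Python sketch below only mirrors its central idea, that an interaction technique's logic reads as one transition table instead of callbacks scattered across listeners:

    class StateMachine:
        # transitions: (state, event) -> (action or None, next state)
        def __init__(self, transitions, state="idle"):
            self.state = state
            self.transitions = transitions

        def fire(self, event, payload=None):
            key = (self.state, event)
            if key in self.transitions:
                action, nxt = self.transitions[key]
                if action:
                    action(payload)
                self.state = nxt

    # A complete drag interaction, readable in one place:
    drag = StateMachine({
        ("idle", "press"):       (lambda p: print("start drag at", p), "dragging"),
        ("dragging", "move"):    (lambda p: print("drag to", p), "dragging"),
        ("dragging", "release"): (lambda p: print("drop at", p), "idle"),
    })
    drag.fire("press", (10, 20))
    drag.fire("move", (15, 25))
    drag.fire("release", (15, 25))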

Handwriting & character input

CueTIP: a mixed-initiative interface for correcting handwriting errors, pp. 323-332
  Michael Shilman; Desney S. Tan; Patrice Simard
With advances in pen-based computing devices, handwriting has become an increasingly popular input modality. Researchers have put considerable effort into building intelligent recognition systems that can translate handwriting to text with increasing accuracy. However, handwritten input is inherently ambiguous, and these systems will always make errors. Unfortunately, work on error recovery mechanisms has mainly focused on interface innovations that allow users to manually transform the erroneous recognition result into the intended one. In our work, we propose a mixed-initiative approach to error correction. We describe CueTIP, a novel correction interface that takes advantage of the recognizer to continually evolve its results using the additional information from user corrections. This significantly reduces the number of actions required to reach the intended result. We present a user study showing that CueTIP is more efficient and better preferred for correcting handwriting recognition errors. Grounded in the discussion of CueTIP, we also present design principles that may be applied to mixed-initiative correction interfaces in other domains.
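   One way to see the mixed-initiative loop: each user correction becomes a constraint, and the recognizer's candidates are re-scored under those constraints, so a single fix can repair several characters at once. Filtering a static n-best list, as below, is a simplification of CueTIP's re-running of a real recognizer:

    def rerank(candidates, pinned):
        # pinned: position -> letter constraints from user corrections.
        def consistent(word):
            return all(i < len(word) and word[i] == ch
                       for i, ch in pinned.items())
        return [w for w in candidates if consistent(w)]

    nbest = ["clay", "dog", "clog", "day"]
    print(rerank(nbest, {0: "d"}))  # ['dog', 'day']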
In-stroke word completion, pp. 333-336
  Jacob O. Wobbrock; Brad A. Myers; Duen Horng Chau
We present the design and implementation of a word-level stroking system called Fisch, which is intended to improve the speed of character-level unistrokes. Importantly, Fisch does not alter the way in which character-level unistrokes are made, but allows users to gradually ramp up to word-level unistrokes by extending their letters in minimal ways. Fisch relies on in-stroke word completion, a flexible design for fluidly turning unistroke letters into whole words. Fisch can be memorized at the motor level since word completions always appear at the same positions relative to the strokes being made. Our design for Fisch is suitable for use with any unistroke alphabet. We have implemented Fisch for multiple versions of EdgeWrite, and results show that Fisch reduces the number of strokes during entry by 43.9% while increasing the rate of entry. An informal test of "record speed" with the stylus version resulted in 50-60 wpm with no uncorrected errors.
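   The property that lets Fisch completions be memorized at the motor level is that they appear in stable slots; a sketch of deterministic top-k completion, with an invented frequency table for illustration:

    def completions(prefix, lexicon, k=4):
        # Sort matches by frequency, then alphabetically, so each word
        # always lands in the same slot relative to the stroke.
        matches = [(w, f) for w, f in lexicon.items() if w.startswith(prefix)]
        matches.sort(key=lambda wf: (-wf[1], wf[0]))
        return [w for w, _ in matches[:k]]

    lexicon = {"the": 100, "they": 60, "there": 50, "them": 40, "then": 35}
    print(completions("the", lexicon))  # same order every time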