Proceedings of the 2002 ACM Symposium on User Interface Software and Technology

Fullname: Proceedings of the 2002 ACM Symposium on User Interface Software and Technology
Editors: Michel Beaudouin-Lafon
Location: Paris, France
Dates: 2002-Oct-27 to 2002-Oct-30
Publisher: ACM
Standard No: ISBN 1-58113-488-6; ACM Order Number 429022
Papers: 26
Pages: 247
  1. Collaborating through documents
  2. Interaction in the real world
  3. Novel 2D interaction
  4. Managing user interaction
  5. Speech and ambiguous input
  6. Infrastructure for ubicomp
  7. Novel input, output, and computation
  8. Breaking out of the monitor

Collaborating through documents

FLANNEL: adding computation to electronic mail during transmission (pp. 1-10)
  Victoria Bellotti; Nicolas Ducheneaut; Mark Howard; Christine Neuwirth; Ian Smith; Trevor Smith
In this paper, we describe FLANNEL, an architecture for adding computational capabilities to email. FLANNEL allows email to be modified by an application while in transit between sender and receiver, without any modification to the endpoints -- the mail clients -- at either end. This paper also describes interaction techniques that we have developed to allow senders of email to quickly and easily select computations to be performed by FLANNEL. Through our experience, we explain the properties that applications must have in order to be successful in the context of FLANNEL.
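
As a rough illustration of the idea above -- transforming a message between sender and receiver so that neither mail client changes -- the following Python sketch models a relay-side transform. The X-Computation header, the function names, and the footer computation are all invented for illustration; the abstract does not document FLANNEL's actual mechanism.

from email import message_from_string
from email.message import Message

def transform_in_transit(raw: str, computations: dict) -> str:
    """Apply sender-selected computations to a message while in transit."""
    msg: Message = message_from_string(raw)
    # The sender names computations in an ordinary header (hypothetical),
    # so unmodified clients can still send and receive the message.
    requested = msg.get("X-Computation", "").split(",")
    for name in requested:
        compute = computations.get(name.strip())
        if compute is not None:
            msg = compute(msg)  # e.g. expand a poll, attach tracking info
    return msg.as_string()

def add_footer(msg: Message) -> Message:
    """A toy in-transit computation: append a footer to plain-text mail."""
    if not msg.is_multipart():
        msg.set_payload(msg.get_payload() + "\n-- processed in transit --")
    return msg

print(transform_in_transit("X-Computation: footer\nSubject: hi\n\nHello!",
                           {"footer": add_footer}))
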
Augmenting shared personal calendars (pp. 11-20)
  Joe Tullio; Jeremy Goecks; Elizabeth D. Mynatt; David H. Nguyen
In this paper, we describe Augur, a groupware calendar system to support personal calendaring practices, informal workplace communication, and the socio-technical evolution of the calendar system within a workgroup. Successful design and deployment of groupware calendar systems have been shown to depend on several converging, interacting perspectives. We describe calendar-based work practices as viewed from these perspectives, and present the Augur system in support of them. Augur allows users to retain the flexibility of personal calendars by anticipating and compensating for inaccurate calendar entries and idiosyncratic event names. We employ predictive user models of event attendance, intelligent processing of calendar text, and discovery of shared events to drive novel calendar visualizations that facilitate interpersonal communication. In addition, we visualize calendar access to support privacy management and long-term evolution of the calendar system.
Moving markup: repositioning freeform annotations (pp. 21-30)
  Gene Golovchinsky; Laurent Denoue

Interaction in the real world

Customizable physical interfaces for interacting with conventional applications (pp. 31-40)
  Saul Greenberg; Michael Boyle
When using today's productivity applications, people rely heavily on graphical controls (GUI widgets) as the way to invoke application functions and to obtain feedback. Yet we all know that certain controls can be difficult or tedious to find and use. As an alternative, a customizable physical interface lets an end-user easily bind a modest number of physical controls to similar graphical counterparts. The user can then use the physical control to invoke the corresponding graphical control's function, or to display its graphical state in a physical form. To show how customizable physical interfaces work, we present examples that illustrate how our combined phidgets and widget tap packages are used to link existing application widgets to physical controls. While promising, our implementation prompts a number of issues relevant to others pursuing interface customization.
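
A minimal sketch of the binding idea, assuming stand-in Knob and Slider classes for a physical control and a GUI widget (the paper's phidgets and widget tap packages are not reproduced):

class Slider:  # stand-in for an existing application GUI widget
    def __init__(self):
        self.value = 0
    def set_value(self, v):
        self.value = v
        print("GUI slider now", v)

class Knob:  # stand-in for a physical control
    def __init__(self):
        self.on_change = None
    def turn(self, v):  # simulate the user turning the knob
        if self.on_change:
            self.on_change(v)

def bind(knob, slider):
    """Route physical input to the widget's function; a full system would
    also mirror widget state back out to a physical display."""
    knob.on_change = slider.set_value

k, s = Knob(), Slider()
bind(k, s)
k.turn(42)  # prints: GUI slider now 42
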
The missing link: augmenting biology laboratory notebooks (pp. 41-50)
  Wendy E. Mackay; Guillaume Pothier; Catherine Letondal; Kaare Boegh; Hans Erik Sorensen
Using a participatory design process, we created three prototype augmented laboratory notebooks that provide the missing link between paper, physical artifacts and on-line data. The final a-book combines a graphics tablet and a PDA. The tablet captures writing on the paper notebook and the PDA acts as an "interaction lens" or window between physical and electronic documents. Our approach is document-centered, with a software architecture based on layers of physical and electronic information.
Ambient touch: designing tactile interfaces for handheld devices (pp. 51-60)
  Ivan Poupyrev; Shigeaki Maruyama; Jun Rekimoto
This paper investigates the sense of touch as a channel for communicating with miniature handheld devices. We equipped a PDA with a TouchEngine -- a thin, miniature, low-power tactile actuator that we designed specifically for use in mobile interfaces. Unlike previous tactile actuators, the TouchEngine is a universal tactile display that can produce a wide variety of tactile feelings, from simple clicks to complex vibrotactile patterns. Using the TouchEngine, we began exploring the design space of interactive tactile feedback for handheld computers. Here, we investigate only a subset of this space: using touch as an ambient, background channel of interaction. We propose a general approach to designing such tactile interfaces and describe several implemented prototypes. Finally, our user studies demonstrated 22% faster task completion when we enhanced handheld tilting interfaces with tactile feedback.

Novel 2D interaction

Specifying behavior and semantic meaning in an unmodified layered drawing package (pp. 61-70)
  James Fogarty; Jodi Forlizzi; Scott E. Hudson
In order to create and use rich custom appearances, designers are often forced to introduce an unnatural gap into the design process. For example, a designer creating a skin for a music player must separately specify the appearance of the elements in the music player skin and the mapping between these visual elements and the functionality provided by the music player. This gap between appearance and semantic meaning creates a number of problems. We present a set of techniques that allows designers to use their preferred drawing tool to specify both appearance and semantic meaning. We demonstrate our techniques in an unmodified version of Adobe Photoshop, but our techniques are general and adaptable to nearly any layered drawing package.
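
One plausible way to carry semantic meaning through an unmodified drawing tool is a naming convention on layers that a loader parses afterwards; the colon-separated convention sketched below is invented for illustration and is not the paper's actual annotation scheme.

def parse_layer(name: str) -> dict:
    """'play:button:pressed' -> {'element': 'play', 'role': 'button',
    'state': 'pressed'} -- purely a demonstration convention."""
    return dict(zip(["element", "role", "state"], name.split(":")))

# Layer names as they might be exported from a layered drawing file.
layers = ["play:button:normal", "play:button:pressed", "volume:slider:thumb"]
skin = [parse_layer(n) for n in layers]
print([e for e in skin if e["element"] == "play"])
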
Side views: persistent, on-demand previews for open-ended tasks (pp. 71-80)
  Michael Terry; Elizabeth D. Mynatt
We introduce Side Views, a user interface mechanism that provides on-demand, persistent, and dynamic previews of commands. Side Views are designed to explicitly support the practices and needs of expert users engaged in open-ended tasks. In this paper, we summarize results from field studies of expert users that motivated this work, then discuss the design of Side Views in detail. We show how Side Views' design affords their use as tools for clarifying, comparing, and contrasting commands; generating alternative visualizations; experimenting without modifying the original data (i.e., "what-if" tools); and as tools that support the serendipitous discovery of viable alternatives. We then convey lessons learned from implementing Side Views in two sample applications, a rich text editor and an image manipulation application. These contributions include a discussion of how to implement Side Views for commands with parameters, for commands that require direct user input (such as mouse strokes for a paint program), and for computationally-intensive commands.
The kinetic typography engine: an extensible system for animating expressive text (pp. 81-90)
  Johnny C. Lee; Jodi Forlizzi; Scott E. Hudson
Kinetic typography -- text that uses movement or other temporal change -- has recently emerged as a new form of communication. As we hope to illustrate in this paper, kinetic typography can be seen as bringing some of the expressive power of film -- such as its ability to convey emotion, portray compelling characters, and visually direct attention -- to the strong communicative properties of text. Although kinetic typography offers substantial promise for expressive communications, it has not been widely exploited outside a few limited application areas (most notably in TV advertising). One of the reasons for this has been the lack of tools directly supporting it, and the accompanying difficulty in creating dynamic text. This paper presents a first step in remedying this situation -- an extensible and robust system for animating text in a wide variety of forms. By supporting an appropriate set of carefully factored abstractions, this engine provides a relatively small set of components that can be plugged together to create a wide range of different expressions. It provides new techniques for automating effects used in traditional cartoon animation, and provides specific support for typographic manipulations.
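
A hedged sketch of the "small components, plugged together" idea: if animation components are modeled as plain functions from time to a property value, they compose and stagger per letter. The engine's real component set and API are not shown here.

import math

def wave(amplitude, period):
    """An oscillating component, e.g. for a wiggle effect."""
    return lambda t: amplitude * math.sin(2 * math.pi * t / period)

def ramp(start, end, duration):
    """A linear interpolation component, clamped at the end."""
    return lambda t: start + (end - start) * min(t / duration, 1.0)

def add(f, g):
    """Compose two components by summing their outputs."""
    return lambda t: f(t) + g(t)

def animate_letters(text, y_of, t):
    """Per-letter vertical offsets, staggered so letters trail the leader."""
    return [(ch, round(y_of(max(t - 0.1 * i, 0.0)), 2))
            for i, ch in enumerate(text)]

rise_and_wiggle = add(ramp(0, 20, 1.0), wave(5, 0.5))
print(animate_letters("UIST", rise_and_wiggle, t=0.25))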

Managing user interaction

Clothing manipulation (pp. 91-100)
  Takeo Igarashi; John F. Hughes
This paper presents interaction techniques (and the underlying implementations) for putting clothes on a 3D character and manipulating them. The user paints freeform marks on the clothes and corresponding marks on the 3D character; the system then puts the clothes around the body so that corresponding marks match. Internally, the system grows the clothes on the body surface around the marks while maintaining basic cloth constraints via simple relaxation steps. The entire computation takes a few seconds. After that, the user can adjust the placement of the clothes by an enhanced dragging operation. Unlike standard dragging, where the user moves a set of vertices in a single direction in 3D space, our dragging operation moves the cloth along the body surface to enable more flexible operations. The user can apply pushpins to fix certain cloth points during dragging. The techniques are ideal for specifying an initial cloth configuration before applying a more sophisticated cloth simulation.
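
The "simple relaxation steps" mentioned above can be illustrated in their generic, textbook form: repeatedly nudging connected cloth vertices back toward their rest edge lengths. This sketch is not the paper's solver.

import math

def relax(points, edges, rest_length, iterations=50):
    """points: list of [x, y, z]; edges: list of (i, j) index pairs."""
    for _ in range(iterations):
        for i, j in edges:
            d = [points[j][k] - points[i][k] for k in range(3)]
            dist = math.sqrt(sum(c * c for c in d)) or 1e-9
            # Move both endpoints half the error each, along the edge.
            s = 0.5 * (dist - rest_length) / dist
            for k in range(3):
                points[i][k] += d[k] * s
                points[j][k] -= d[k] * s
    return points

# Two stretched edges settle back toward rest_length = 1.0.
print(relax([[0, 0, 0], [2, 0, 0], [4, 0, 0]], [(0, 1), (1, 2)], 1.0))
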
StyleCam: interactive stylized 3D navigation using integrated spatial & temporal controls (pp. 101-110)
  Nicholas Burtnyk; Azam Khan; George Fitzmaurice; Ravin Balakrishnan; Gordon Kurtenbach
This paper describes StyleCam, an approach for authoring 3D viewing experiences that incorporate stylistic elements that are not available in typical 3D viewers. A key aspect of StyleCam is that it allows the author to significantly tailor what the user sees and when they see it. The resulting viewing experience can approach the visual richness and pacing of highly authored visual content such as television commercials or feature films. At the same time, StyleCam allows for a satisfying level of interactivity while avoiding the problems inherent in using unconstrained camera models. The main components of StyleCam are camera surfaces which spatially constrain the viewing camera; animation clips that allow for visually appealing transitions between different camera surfaces; and a simple, unified, interaction technique that permits the user to seamlessly and continuously move between spatial-control of the camera and temporal-control of the animated transitions. Further, the user's focus of attention is always kept on the content, and not on extraneous interface widgets. In addition to describing the conceptual model of StyleCam, its current implementation, and an example authored experience, we also present the results of an evaluation involving real users.
Boom chameleon: simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display (pp. 111-120)
  Michael Tsang; George W. Fitzmaurice; Gordon Kurtenbach; Azam Khan; Bill Buxton
We introduce the Boom Chameleon, a novel input/output device consisting of a flat-panel display mounted on a tracked mechanical boom. The display acts as a physical window into 3D virtual environments, through which a one-to-one mapping between real and virtual space is preserved. The Boom Chameleon is further augmented with a touch-screen and a microphone/speaker combination. We present a 3D annotation application that exploits this unique configuration in order to simultaneously capture viewpoint, voice and gesture information. Design issues are discussed and results of an informal user study on the device and annotation software are presented. The results show that the Boom Chameleon's annotation facilities have the potential to serve as an effective 3D design review system that is easy to learn and operate.

Speech and ambiguous input

Distributed mediation of ambiguous context in aware environments (pp. 121-130)
  Anind Dey; Jennifer Mankoff; Gregory Abowd; Scott Carter
Many context-aware services make the assumption that the context they use is completely accurate. However, in reality, both sensed and interpreted context is often ambiguous. A challenge facing the development of realistic and deployable context-aware services, therefore, is the ability to handle ambiguous context. In this paper, we describe an architecture that supports the building of context-aware services that assume context is ambiguous and allows for mediation of ambiguity by mobile users in aware environments. We illustrate the use of our architecture and evaluate it through three example context-aware services: a word predictor system, an In/Out Board, and a reminder tool.
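
A minimal sketch of the dispatch idea, assuming context events arrive as alternative interpretations with confidences and that ambiguous ones are routed to a mediator such as the user; the paper's distributed architecture is not reproduced.

def deliver(event, service, mediate, threshold=0.8):
    """event: list of (interpretation, confidence) pairs."""
    alternatives = sorted(event, key=lambda a: -a[1])
    best, confidence = alternatives[0]
    if confidence >= threshold:
        service(best)                   # unambiguous enough: act directly
    else:
        service(mediate(alternatives))  # defer to mediation first

deliver([("Alice entered", 0.6), ("Bob entered", 0.4)],
        service=lambda who: print("In/Out Board:", who),
        mediate=lambda alts: alts[0][0])  # stand-in for asking the user
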
Query-by-critique: spoken language access to large lists (pp. 131-140)
  Dan R. Olsen; Jon R. Peachey
Spoken language interfaces provide highly mobile, small form-factor, hands-free, eyes-free interaction with information. Uniform access to large lists of information using spoken interfaces is highly desirable, but problematic due to inherent limitations of speech. A speech widget for lists of attributed objects is described that provides for approximate queries to retrieve desired items. User tests demonstrate that this is an effective technique for accessing information using speech.
Mediated voice communication via mobile IP (pp. 141-150)
  Chris Schmandt; Jang Kim; Kwan Lee; Gerardo Vallejo; Mark Ackerman
Impromptu is a mobile audio device which uses wireless Internet Protocol (IP) to access novel computer-mediated voice communication channels. These channels show the richness of IP-based communication as compared to conventional mobile telephony, adding audio processing and storage in the network, and flexible, user-centered call control protocols. These channels may be synchronous, asynchronous, or event-triggered, or even change modes as a function of other user activity. The demands of these modes plus the need to navigate with an entirely non-visual user interface are met with a number of audio-oriented user interaction techniques.

Infrastructure for ubicomp

That one there! Pointing to establish device identity (pp. 151-160)
  Colin Swindells; Kori M. Inkpen; John C. Dill; Melanie Tory
Computing devices within current work and play environments are relatively static. As the number of 'networked' devices grows, and as people and their devices become more dynamic, situations will commonly arise where users will wish to use 'that device there' instead of navigating through traditional user interface widgets such as lists. This paper describes a process for identifying devices through a pointing gesture using custom tags and a custom stylus called the gesturePen. Implementation details for this system are provided along with qualitative and quantitative results from a formal user study. As ubiquitous computing environments become more pervasive, people will rapidly switch their focus between many computing devices. The results of our work demonstrate that our gesturePen method can improve the user experience in ubiquitous environments by facilitating significantly faster interactions between computing devices.
Generating remote control interfaces for complex appliances (pp. 161-170)
  Jeffrey Nichols; Brad A. Myers; Michael Higgins; Joseph Hughes; Thomas K. Harris; Roni Rosenfeld; Mathilde Pignol
The personal universal controller (PUC) is an approach for improving the interfaces to complex appliances by introducing an intermediary graphical or speech interface. A PUC engages in two-way communication with everyday appliances, first downloading a specification of the appliance's functions, and then automatically creating an interface for controlling that appliance. The specification of each appliance includes a high-level description of every function, a hierarchical grouping of those functions, and dependency information, which relates the availability of each function to the appliance's state. Dependency information makes it easier for designers to create specifications and helps the automatic interface generators produce a higher quality result. We describe the architecture that supports the PUC, and the interface generators that use our specification language to build high-quality graphical and speech interfaces.
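
The kind of information such a specification carries can be sketched as plain data: every function, a hierarchical grouping, and dependency information tying availability to appliance state. The structures below are hypothetical; the actual PUC specification language is not reproduced here.

from dataclasses import dataclass, field

@dataclass
class Function:
    name: str
    description: str
    depends_on: dict = field(default_factory=dict)  # e.g. {"power": "on"}

@dataclass
class Group:
    label: str
    functions: list = field(default_factory=list)
    subgroups: list = field(default_factory=list)

def available(fn: Function, state: dict) -> bool:
    """An interface generator can disable functions whose dependencies fail."""
    return all(state.get(k) == v for k, v in fn.depends_on.items())

stereo = Group("Stereo", subgroups=[
    Group("CD", functions=[
        Function("play", "Play the loaded disc", {"power": "on"}),
        Function("eject", "Eject the disc", {"power": "on"})]),
    Group("Power", functions=[Function("power", "Toggle power")])])
print(available(stereo.subgroups[0].functions[0], {"power": "off"}))  # False
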
User interfaces when and where they are needed: an infrastructure for recombinant computing (pp. 171-180)
  Mark W. Newman; Shahram Izadi; W. Keith Edwards; Jana Z. Sedivy; Trevor F. Smith
Users in ubiquitous computing environments need to be able to make serendipitous use of resources that they did not anticipate and of which they have no prior knowledge. The Speakeasy recombinant computing framework is designed to support such ad hoc use of resources on a network. In addition to other facilities, the framework provides an infrastructure through which device and service user interfaces can be made available to users on multiple platforms. The framework enables UIs to be provided for connections involving multiple entities, allows these UIs to be delivered asynchronously, and allows them to be injected by any party participating in a connection.

Novel input, output, and computation

The actuated workbench: computer-controlled actuation in tabletop tangible interfaces (pp. 181-190)
  Gian Pangaro; Dan Maynes-Aminzade; Hiroshi Ishii
The Actuated Workbench is a device that uses magnetic forces to move objects on a table in two dimensions. It is intended for use with existing tabletop tangible interfaces, providing an additional feedback loop for computer output, and helping to resolve inconsistencies that otherwise arise from the computer's inability to move objects on the table. We describe the Actuated Workbench in detail as an enabling technology, and then propose several applications in which this technology could be useful.
Dynamic approximation of complex graphical constraints by linear constraints (pp. 191-200)
  Nathan Hurst; Kim Marriott; Peter Moulder
Current constraint solving techniques for interactive graphical applications cannot satisfactorily handle constraints such as non-overlap, or containment within non-convex shapes or shapes with smooth edges. We present a generic new technique, based on trust regions and linear arithmetic constraint solving, for efficiently handling such constraints. Our approach is to model these more complex constraints by a dynamically changing conjunction of linear constraints. At each stage, these give a local approximation to the complex constraints. During direct manipulation, linear constraints in the current local approximation can become active, indicating that the current solution is on the boundary of the trust region for the approximation. The associated complex constraint is notified and may choose to modify the current linear approximation. Empirical evaluation demonstrates that it is possible to (re-)solve systems of linear constraints that dynamically approximate complex constraints such as non-overlap sufficiently quickly to support direct manipulation in interactive graphical applications.
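
The approximation idea can be illustrated on one complex constraint. Non-overlap of two axis-aligned boxes is a disjunction of four linear constraints; the sketch below (not the paper's solver, with trust-region management omitted) keeps whichever disjunct is best satisfied at the current solution as the local linear approximation.

def nonoverlap_as_linear(a, b):
    """a, b: (x1, y1, x2, y2) boxes. Return the locally active linear
    constraint as a readable description together with its slack."""
    candidates = [
        ("a.x2 <= b.x1", b[0] - a[2]),  # a entirely left of b
        ("b.x2 <= a.x1", a[0] - b[2]),  # a entirely right of b
        ("a.y2 <= b.y1", b[1] - a[3]),  # a entirely below b
        ("b.y2 <= a.y1", a[1] - b[3]),  # a entirely above b
    ]
    # The disjunct with the largest slack is the local approximation; it is
    # swapped for another when the solution reaches its boundary.
    return max(candidates, key=lambda c: c[1])

print(nonoverlap_as_linear((0, 0, 2, 2), (3, 0, 5, 2)))  # ('a.x2 <= b.x1', 1)
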
TiltType: accelerometer-supported text entry for very small devices (pp. 201-204)
  Kurt Partridge; Saurav Chatterjee; Vibha Sazawal; Gaetano Borriello; Roy Want
TiltType is a novel text entry technique for mobile devices. To enter a character, the user tilts the device and presses one or more buttons. The character chosen depends on the button pressed, the direction of tilt, and the angle of tilt. TiltType consumes minimal power and requires little board space, making it appropriate for wristwatch-sized devices. But because controlled tilting of one's forearm is fatiguing, a wristwatch using this technique must be easily removable from its wriststrap. Applications include two-way paging, text entry for watch computers, web browsing, numeric entry for calculator watches, and existing applications for PDAs.
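
A hypothetical sketch of the mapping the abstract describes, in which the character entered is a function of the button pressed, the tilt direction, and the tilt angle. The layout table and the 0.5 threshold are invented; the real TiltType layout is not reproduced.

# LAYOUT[button][direction] -> (small-tilt char, large-tilt char)
LAYOUT = {
    0: {"N": ("a", "e"), "S": ("b", "f"), "E": ("c", "g"), "W": ("d", "h")},
    1: {"N": ("i", "m"), "S": ("j", "n"), "E": ("k", "o"), "W": ("l", "p")},
}

def decode(button: int, ax: float, ay: float) -> str:
    """ax, ay: accelerometer tilt components. The dominant axis and its sign
    give the direction; its magnitude selects between the two characters."""
    if abs(ax) >= abs(ay):
        direction, angle = ("E" if ax > 0 else "W"), abs(ax)
    else:
        direction, angle = ("N" if ay > 0 else "S"), abs(ay)
    small, large = LAYOUT[button][direction]
    return large if angle > 0.5 else small  # threshold is arbitrary here

print(decode(0, 0.2, 0.7))  # strong northward tilt on button 0 -> 'e'
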
WebThumb: interaction techniques for small-screen browsers (pp. 205-208)
  Jacob O. Wobbrock; Jodi Forlizzi; Scott E. Hudson; Brad A. Myers
The proliferation of wireless handheld devices is placing the World Wide Web in the palms of users, but this convenience comes at a high interactive cost. The Web that came of age on the desktop is ill-suited for use on the small displays of handhelds. Today, handheld browsing often feels like browsing on a PC with a shrunken desktop. Overreliance on scrolling is a big problem in current handheld browsing. Users confined to viewing a small portion of each page often lack a sense of the overall context -- they may feel lost in a large page and be forced to remember the locations of items as those items scroll out of view. In this paper, we present a synthesis of interaction techniques to address these problems. We implemented these techniques in a prototype, WebThumb, that can browse the live Web.

Breaking out of the monitor

The "mighty mouse" multi-screen collaboration tool BIBAFull-Text 209-212
  Kellogg S. Booth; Brian D. Fisher; Chi Jui Raymond Lin; Ritchie Argue
Many computer operating systems provide seamless support for multiple display screens, but there are few cross-platform tools for collaborative use of multiple computers in a shared display environment. Mighty Mouse is a novel groupware tool built on the public domain VNC protocol. It is tailored specifically for face-to-face collaboration where multiple heterogeneous computers (usually laptops) are viewed simultaneously (usually via projectors) by people working together on a variety of applications under various operating systems. Mighty Mouse uses only the remote input capability of VNC, but enhances it with features that support flexible movement between platforms, "floor control" to facilitate smooth collaboration, and customization to accommodate different user, platform, and application preferences in a relatively seamless manner. The design rationale arises from specific observations about how people collaborate in meetings, which allows certain simplifying assumptions to be made in the implementation.
An annotated situation-awareness aid for augmented reality (pp. 213-216)
  Blaine Bell; Tobias Hollerer; Steven Feiner
We present a situation-awareness aid for augmented reality systems based on an annotated "world in miniature." Our aid is designed to provide users with an overview of their environment that allows them to select and inquire about the objects it contains. We discuss two key capabilities intended to address the needs of mobile users. The aid's position, scale, and orientation are controlled by a novel approach that allows the user to inspect the aid without manual interaction. As the user alternates their attention between the physical world and the virtual aid, popup annotations associated with selected objects can move freely between the objects' representations in the two models.
Manipulating structured information in a visual workspace (pp. 217-226)
  Haowei Hsieh; Frank M. Shipman
This paper describes the VITE system, a visual workspace that supports two-way mapping for projecting structured information to a two-dimensional workspace and updating the structured information based on user interactions in the workspace. This is related to information visualization, but reflecting visual edits in the structured data requires a two-way mapping from data to visualization and from visualization to data. VITE provides users with an interface for designing two-way mappings. Mappings are reusable on different datasets and may be switched within a task. An evaluation of VITE was conducted to study how people use two-way mapping and how two-way mapping can help in problem solving tasks. The results show that users could quickly design visual mappings to help their problem-solving tasks. Users developed more sophisticated strategies for visual problem-solving over time.
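
A two-way mapping in this sense can be sketched as a projection function paired with an inverse update. The task record and the priority-to-x rule below are invented for illustration; VITE's mapping interface is not shown.

def project(task):  # data -> visualization
    return {"x": task["priority"] * 100, "y": task["day"] * 40}

def update(task, pos):  # visualization -> data
    task["priority"] = round(pos["x"] / 100)
    task["day"] = round(pos["y"] / 40)
    return task

task = {"name": "write draft", "priority": 2, "day": 3}
pos = project(task)   # place the item in the workspace
pos["x"] = 310        # the user drags the item to the right
print(update(task, pos))  # the edit writes back: priority becomes 3
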
PointRight: experience with flexible input redirection in interactive workspaces (pp. 227-234)
  Brad Johanson; Greg Hutchins; Terry Winograd; Maureen Stone
We describe the design of and experience with PointRight, a peer-to-peer pointer and keyboard redirection system that operates in multi-machine, multi-user environments. PointRight employs a geometric model for redirecting input across screens driven by multiple independent machines and operating systems. It was created for interactive workspaces that include large, shared displays and individual laptops, but is a general tool that supports many different configurations and modes of use. Although previous systems have provided for re-routing pointer and keyboard control, in this paper we present a more general and flexible system, along with an analysis of the types of re-binding that must be handled by any pointer redirection system This paper describes the system, the ways in which it has been used, and the lessons that have been learned from its use over the last two years.