
Proceedings of the 2000 ACM Symposium on User Interface Software and Technology

Fullname: Proceedings of the 2000 ACM Symposium on User Interface Software and Technology
Location: San Diego, CA, USA
Dates: 2000-Nov-06 to 2000-Nov-08
Publisher: ACM
Standard No: ISBN 1-58113-212-3; ACM Order Number 429002
Papers: 28
Pages: 248
Links: Conference Home Page
  1. Multimedia UI
  2. UI Architecture
  3. Toolkits and Techniques for Pen and Video
  4. Sensing User Activity
  5. Speedy Input
  6. Augmented Reality
  7. Toolkit Support for UI
  8. Selection
  9. Illusions

Multimedia UI

Suede: A Wizard of Oz Prototyping Tool for Speech User Interfaces BIBAK Full-Text 1-10
  Scott R. Klemmer; Anoop K. Sinha; Jack Chen; James A. Landay; Nadeem Aboobaker; Annie Wang
Speech-based user interfaces are growing in popularity. Unfortunately, the technology expertise required to build speech UIs precludes many individuals from participating in the speech interface design process. Furthermore, the time and knowledge costs of building even simple speech systems make it difficult for designers to iteratively design speech UIs. SUEDE, the speech interface prototyping tool we describe in this paper, allows designers to rapidly create prompt/response speech interfaces. It offers an electronically supported Wizard of Oz (WOz) technique that captures test data, allowing designers to analyze the interface after testing. This informal tool enables speech user interface designers, even non-experts, to quickly create, test, and analyze speech user interface prototypes.
Keywords: Wizard of Oz, speech user interfaces, prototyping, design, low-fidelity, informal user interfaces, design tools
Interaction Techniques for Ambiguity Resolution in Recognition-Based Interfaces BIBA Full-Text 11-20
  Jennifer Mankoff; Scott E. Hudson; Gregory D. Abowd
Because of its promise of natural interaction, recognition is coming into its own as a mainstream technology for use with computers. Both commercial and research applications are beginning to use it extensively. However, the errors made by recognizers can be quite costly, and this is increasingly becoming a focus for researchers. We present a survey of existing error correction techniques in the user interface. These mediation techniques most commonly fall into one of two strategies, repetition and choice. Based on the needs uncovered by this survey, we have developed OOPS, a toolkit that supports resolution of input ambiguity through mediation. This paper describes four new interaction techniques built using OOPS, and the toolkit mechanisms required to build them. These interaction techniques each address problems not directly handled by standard approaches to mediation, and can all be re-used in a variety of settings.
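
The two mediation strategies lend themselves to a compact illustration. Below is a minimal Python sketch of choice-style mediation over an ambiguous event; the names and structure are invented for illustration and are not the OOPS API.

  class AmbiguousEvent:
      def __init__(self, interpretations):      # [(text, score), ...]
          self.interpretations = sorted(interpretations, key=lambda t: -t[1])

  def choice_mediator(event, accept_index):
      """'Choice' strategy: the user picks one alternative from an n-best menu."""
      return event.interpretations[accept_index][0]

  def repetition_mediator(event, new_event):
      """'Repetition' strategy: the user rejects all alternatives and re-enters."""
      return new_event.interpretations[0][0]

  ev = AmbiguousEvent([("form", 0.40), ("farm", 0.45), ("foam", 0.15)])
  print(choice_mediator(ev, 1))   # user picks the second-ranked 'form'
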
Multimodal System Processing in Mobile Environments BIBAK Full-Text 21-30
  Sharon Oviatt
One major goal of multimodal system design is to support more robust performance than can be achieved with a unimodal recognition technology, such as a spoken language system. In recent years, the multimodal literatures on speech and pen input and speech and lip movements have begun developing relevant performance criteria and demonstrating a reliability advantage for multimodal architectures. In the present studies, over 2,600 utterances processed by a multimodal pen/voice system were collected during both mobile and stationary use. A new data collection infrastructure was developed, including instrumentation worn by the user while roaming, a researcher field station, and a multimodal data logger and analysis tool tailored for mobile research. Although speech recognition as a stand-alone failed more often during mobile system use, the results confirmed that a more stable multimodal architecture decreased this error rate by 19-35%. Furthermore, these findings were replicated across different types of microphone technology. In large part this performance gain was due to significant levels of mutual disambiguation in the multimodal architecture, with higher levels occurring in the noisy mobile environment. Implications of these findings are discussed for expanding computing to support more challenging usage contexts in a robust manner.
Keywords: mobile interface design, multimodal architecture, speech and pen input, recognition errors, mutual disambiguation, robust performance
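
Mutual disambiguation can be illustrated with a toy example in which each recognizer's n-best list re-ranks the other's. The sketch below uses invented scores and a made-up compatibility test, not Oviatt's architecture.

  speech_nbest = [("delete lake", 0.50), ("delete late", 0.30), ("delete label", 0.20)]
  gesture_nbest = [("circle:label", 0.80), ("circle:lake", 0.20)]

  def compatible(phrase, gesture):
      """Toy semantic test: the spoken object must match the circled object."""
      return phrase.split()[-1] == gesture.split(":")[1]

  best = max(
      ((s, g, ps * pg)
       for s, ps in speech_nbest
       for g, pg in gesture_nbest
       if compatible(s, g)),
      key=lambda t: t[2])
  print(best)   # ('delete label', 'circle:label', 0.16): gesture rescues speech's 3rd choice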

UI Architecture

A Temporal Model for Multi-Level Undo and Redo BIBAK Full-Text 31-40
  W. Keith Edwards; Takeo Igarashi; Anthony LaMarca; Elizabeth D. Mynatt
A number of recent systems have provided rich facilities for manipulating the timelines of applications. Such timelines represent the history of an application's use in some session, and capture the effects of the user's interactions with that application. Applications can use timeline manipulation techniques prosaically as a way to provide undo and redo within an application context; more interestingly, they can use these same techniques to make an application's history directly manipulable in richer ways by users. This paper presents a number of extensions to current techniques for representing and managing application timelines. The first extension captures causal relationships in timelines via a nested transaction mechanism. This extension addresses a common problem in history-based applications, namely, how to represent application state as a set of atomic, incremental operations. The second extension presents a model for "multi-level" time, in which the histories of a set of inter-related artifacts can be represented by both "local" and "global" timelines. This extension allows the histories of related objects in an application to be manipulated independently from one another.
Keywords: history management, timelines, undo, redo, Timewarp, Flatland
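
As a rough illustration of local and global timelines, the Python sketch below keeps one global history and derives per-object local views from it; nested transactions group operations so they undo atomically. All names are invented, not Timewarp's API.

  class Op:
      """An atomic, incremental operation on one object."""
      def __init__(self, obj, do, undo):
          self.obj, self.do, self.undo = obj, do, undo

  class Transaction:
      """Nested transaction: a causally related group, undone atomically."""
      def __init__(self, ops):
          self.ops = ops
      def undo(self):
          for op in reversed(self.ops):
              op.undo()

  class Timeline:
      def __init__(self):
          self.past = []
      def record(self, entry):
          self.past.append(entry)
      def undo_last(self):
          self.past.pop().undo()
      def local(self, obj):
          """Local timeline: just this object's entries, in global order."""
          return [e for e in self.past
                  if any(op.obj is obj for op in getattr(e, "ops", [e]))]
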
A Programming Model for Active Documents BIBAK Full-Text 41-50
  Paul Dourish; W. Keith Edwards; Jon Howell; Anthony LaMarca; John Lamping; Karin Petersen; Michael Salisbury; Doug Terry; Jim Thornton
Traditionally, designers organize software systems as active end-points (e.g. applications) linked by passive infrastructures (e.g. networks). Increasingly, however, networks and infrastructures are becoming active components that contribute directly to application behavior. Amongst the various problems that this presents is the question of how such active infrastructures should be programmed. We have been developing an active document management system called Placeless Documents. Its programming model is organized in terms of properties that actively contribute to the functionality and behavior of the documents to which they are attached. This paper discusses active properties and their use as a programming model for active infrastructures. We have found that active properties enable the creation of persistent, autonomous active entities in document systems, independent of specific repositories and applications, but present challenges for managing problems of composition.
Keywords: Active properties, document management, component software, customization.
PicturePiper: Using a Re-Configurable Pipeline to Find Images on the Web BIBAK Full-Text 51-62
  Adam M. Fass; Eric A. Bier; Eytan Adar
In this paper, we discuss a re-configurable pipeline architecture that is ideally suited for applications in which a user is interactively managing a stream of data. Currently, document service buses allow stand-alone document services (translation, printing, etc.) to be combined for batch processing. Our architecture allows services to be composed and re-configured on the fly in order to support interactive applications. To motivate the need for such an architecture, we address the problem of finding and organizing images on the World Wide Web. The resulting tool, PicturePiper, gives users access to images on the web related to a topic of interest.
Keywords: dataflow, image retrieval, pipeline, WWW searching
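
A re-configurable stream pipeline of this sort can be approximated with generator stages that are recomposed on the fly. The sketch below is a toy illustration with invented stage names, not PicturePiper's document-service bus.

  def crawl(urls):
      """Source stage: stand-in for fetching pages from the web."""
      for u in urls:
          yield {"url": u}

  def extract_images(pages):
      for p in pages:
          yield {"img": p["url"] + "/img1.jpg"}    # stand-in for HTML parsing

  def dedupe(items):
      seen = set()
      for it in items:
          if it["img"] not in seen:
              seen.add(it["img"])
              yield it

  def compose(source, stages):
      """Re-configuration = rebuilding this chain while data keeps streaming."""
      stream = source
      for stage in stages:
          stream = stage(stream)
      return stream

  for item in compose(crawl(["http://a.example", "http://b.example"]),
                      [extract_images, dedupe]):
      print(item)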

Toolkits and Techniques for Pen and Video

SATIN: A Toolkit for Informal Ink-Based Applications BIBAK Full-Text 63-72
  Jason I. Hong; James A. Landay
Software support for making effective pen-based applications is currently rudimentary. To facilitate the creation of such applications, we have developed SATIN, a Java-based toolkit designed to support the creation of applications that leverage the informal nature of pens. This support includes a scenegraph for manipulating and rendering objects; support for zooming and rotating objects and for switching between multiple views of an object; integration of pen input with interpreters; libraries for manipulating ink strokes; widgets optimized for pens; and compatibility with Java's Swing toolkit. SATIN includes a generalized architecture for handling pen input, consisting of recognizers, interpreters, and multi-interpreters. In this paper, we describe the functionality and architecture of SATIN, using two applications built with SATIN as examples.
Keywords: toolkits, pen, ink, informal, sketching, gesture, recognition, interpreter, recognizer, SATIN
Fluid Sketches: Continuous Recognition and Morphing of Simple Hand-Drawn Shapes BIBAK Full-Text 73-80
  James Arvo; Kevin Novins
We describe a new sketching interface in which shape recognition and morphing are tightly coupled. Raw input strokes are continuously morphed into ideal geometric shapes, even before the pen is lifted. By means of smooth and continual shape transformations the user is apprised of recognition progress and the appearance of the final shape, yet always retains a sense of control over the process. At each time t the system uses the trajectory traced out thus far by the pen coupled with the current appearance of the time-varying shape to classify the sketch as one of several pre-defined basic shapes. The recognition operation is performed using shape-specific fits based on least-squares or relaxation, which are continuously updated as the user draws. We describe the time-dependent transformation of the sketch, beginning with the raw pen trajectory, using a family of first-order ordinary differential equations that depend on time and the current shape of the sketch. Using this formalism, we describe several possible behaviors that result from varying the relative significance of new and old portions of a stroke, changing the "viscosity" of the morph, and enforcing different end conditions. A preliminary user study suggests that the new interface is particularly effective for rapidly constructing diagrams consisting of simple shapes.
Keywords: Sketching, recognition, morphing.
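
The first-order ODE formulation can be illustrated concretely: each stroke point is pulled toward its counterpart on the currently fitted ideal shape. The Python sketch below assumes a crude centroid-based circle fit and a constant "viscosity" tau; both are stand-ins for the authors' formulation.

  import math

  def fit_circle(pts):
      """Crude circle fit: centroid plus mean radius (stand-in for least squares)."""
      cx = sum(x for x, _ in pts) / len(pts)
      cy = sum(y for _, y in pts) / len(pts)
      r = sum(math.hypot(x - cx, y - cy) for x, y in pts) / len(pts)
      return cx, cy, r

  def morph_step(pts, tau=0.2, dt=0.05):
      """One Euler step of dp/dt = (target - p) / tau; tau acts as 'viscosity'."""
      cx, cy, r = fit_circle(pts)
      out = []
      for x, y in pts:
          ang = math.atan2(y - cy, x - cx)
          tx, ty = cx + r * math.cos(ang), cy + r * math.sin(ang)   # nearest point on circle
          out.append((x + (tx - x) * dt / tau, y + (ty - y) * dt / tau))
      return out
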
A Semi-Automatic Approach to Home Video Editing BIBAK Full-Text 81-89
  Andreas Girgensohn; John Boreczky; Patrick Chiu; John Doherty; Jonathan Foote; Gene Golovchinsky; Shingo Uchihashi; Lynn Wilcox
Hitchcock is a system that helps users create usable videos from raw video shot with a standard video camera. In contrast to other video editing systems, Hitchcock uses automatic analysis to determine the suitability of portions of the raw video. Unsuitable video typically has fast or erratic camera motion. Hitchcock first analyzes video to identify the type and amount of camera motion: fast pan, slow zoom, etc. Based on this analysis, a numerical "unsuitability" score is computed for each frame of the video. Combined with standard editing rules, this score is used to identify clips for inclusion in the final video and to select their start and end points. To create a custom video, the user drags keyframes corresponding to the desired clips into a storyboard. Users can lengthen or shorten the clip without specifying the start and end frames explicitly. Clip lengths are balanced automatically using a spring-based algorithm.
Keywords: video editing, video analysis, video exploration, automatic video clip extraction.
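
The spring-based balancing step admits a simple reading: treat each clip as a spring with a preferred length and hard limits, and find the one shared tension that makes the lengths sum to the target duration. The sketch below is a guess at that idea, not FXPAL's actual algorithm.

  def balance(clips, target):
      """clips: list of (preferred, lo, hi) lengths; returns one length per clip."""
      def lengths(t):                          # shared 'tension' t stretches every spring
          return [min(hi, max(lo, pref + t)) for pref, lo, hi in clips]
      t_lo, t_hi = -100.0, 100.0
      for _ in range(60):                      # bisection: total length is monotone in t
          t = (t_lo + t_hi) / 2
          if sum(lengths(t)) < target:
              t_lo = t
          else:
              t_hi = t
      return lengths((t_lo + t_hi) / 2)

  print(balance([(5, 3, 8), (4, 2, 6), (6, 4, 9)], 18))   # ~ [6.0, 5.0, 7.0]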

Sensing User Activity

Sensing Techniques for Mobile Interaction BIBAK Full-Text 91-100
  Ken Hinckley; Jeff Pierce; Mike Sinclair; Eric Horvitz
We describe sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as auxiliary to ongoing, real-world activities like talking to a colleague, and the general intimacy of use for such devices. We introduce and integrate a set of sensors into a handheld device, and demonstrate several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up to start using it, and scrolling the display using tilt. We present an informal experiment, initial usability testing results, and user reactions to these techniques.
Keywords: Input devices, interaction techniques, sensing, context awareness, mobile devices, mobile interaction, sensors
Note: Best Paper Award
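
One of the described behaviors, switching display modes by holding the device in the desired orientation, reduces to thresholding a tilt angle with hysteresis. The sketch below uses invented thresholds and a two-axis accelerometer model, far simpler than the paper's sensor fusion.

  import math

  def orientation(ax, ay, prev, margin_deg=30.0):
      """ax, ay: accelerometer components in the screen plane, in g."""
      angle = abs(math.degrees(math.atan2(ax, ay)))   # 0 = upright portrait
      if prev == "portrait" and angle > 90 - margin_deg:
          return "landscape"
      if prev == "landscape" and angle < margin_deg:
          return "portrait"
      return prev                                     # hysteresis: keep current mode
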
The Reading Assistant: Eye Gaze Triggered Auditory Prompting for Reading Remediation BIBAK Full-Text 101-107
  John L. Sibert; Mehmet Gokturk; Robert A. Lavine
We have developed a system for remedial reading instruction that uses visually controlled auditory prompting to help the user with recognition and pronunciation of words. Our underlying hypothesis is that the relatively unobtrusive assistance rendered by such a system will be more effective than previous computer aided approaches. We present a description of the design and implementation of our system and discuss a controlled study that we undertook to evaluate the usability of the Reading Assistant.
Keywords: eye tracking, eye gaze, reading disability, interaction techniques
ToolStone: Effective Use of the Physical Manipulation Vocabularies of Input Devices BIBAK Full-Text 109-117
  Jun Rekimoto; Eduardo Sciammarella
The ToolStone is a cordless, multiple degree-of-freedom (MDOF) input device that senses physical manipulation of itself, such as rotating, flipping, or tilting. As an input device for the non-dominant hand when a bimanual interface is used, the ToolStone provides several interaction techniques including a toolpalette selector, and MDOF interactors such as zooming, 3D rotation, and virtual camera control. In this paper, we discuss the design principles of input devices that effectively use a human's physical manipulation skills, and describe the system architecture and applications of the ToolStone input device.
Keywords: Interaction techniques, input devices, physical user interfaces, multiple function inputs, multiple-degree-of-freedom input, two-handed input

Speedy Input

The Metropolis Keyboard -- An Exploration of Quantitative Techniques for Virtual Keyboard Design BIBAK Full-Text 119-128
  Shumin Zhai; Michael Hunter; Barton A. Smith
Text entry user interfaces have been a bottleneck of nontraditional computing devices. One of the promising methods is the virtual keyboard on touch screens. Various layouts have been manually designed to replace the dominant QWERTY layout. This paper presents two computerized quantitative design techniques to search for the optimal virtual keyboard. The first technique simulated the dynamics of a keyboard with "digraph springs" between keys, which produced a "Hooke's" keyboard with 41.6 wpm performance. The second technique used a Metropolis random walk algorithm guided by a "Fitts energy" objective function, which produced a "Metropolis" keyboard with 43.1 wpm performance.
   The paper also models and evaluates the performance of four existing keyboard layouts. We corrected erroneous estimates in the literature and predicted the performance of QWERTY, CHUBON, FITALY, OPTI to be in the neighborhood of 30, 33, 36 and 38 wpm respectively. Our best design was 40% faster than QWERTY and 10% faster than OPTI, illustrating the advantage of quantitative user interface design techniques based on models of human performance over traditional trial and error designs guided by heuristics.
Keywords: Graphical keyboard, soft keyboard, virtual keyboard, on screen keyboard, text entry, text input, mobile computing, mobile devices, pen based computing, ubiquitous computing, pervasive computing, Metropolis method.
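
The Metropolis search itself is compact enough to sketch: repeatedly swap two keys, keep the swap if the Fitts-digraph energy drops, and otherwise keep it with probability exp(-dE/T). The digraph table, Fitts model, and cooling schedule below are all stand-ins for the paper's.

  import math, random

  KEYS = list("abcdefghijklmnopqrstuvwxyz")
  FREQ = {(a, b): random.random() for a in KEYS for b in KEYS}   # stand-in digraph frequencies

  def fitts(p, q, width=1.0):
      d = math.hypot(p[0] - q[0], p[1] - q[1])
      return math.log2(d / width + 1)            # movement time, arbitrary units

  def energy(pos):
      return sum(f * fitts(pos[a], pos[b]) for (a, b), f in FREQ.items())

  slots = [(x, y) for y in range(5) for x in range(6)][:26]
  pos = dict(zip(KEYS, slots))
  e, T = energy(pos), 2.0
  for _ in range(5000):
      a, b = random.sample(KEYS, 2)
      pos[a], pos[b] = pos[b], pos[a]            # propose: swap two keys
      e_new = energy(pos)
      if e_new < e or random.random() < math.exp((e - e_new) / T):
          e = e_new                              # accept the walk step
      else:
          pos[a], pos[b] = pos[b], pos[a]        # reject: swap back
      T *= 0.999                                 # gentle cooling (annealed variant)
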
Dasher -- A Data Entry Interface Using Continuous Gestures and Language Models BIBAK Full-Text 129-137
  David J. Ward; Alan F. Blackwell; David J. C. MacKay
Existing devices for communicating information to computers are bulky, slow to use, or unreliable. Dasher is a new interface incorporating language modelling and driven by continuous two-dimensional gestures, delivered by, for example, a mouse, touchscreen, or eye-tracker. Tests have shown that this device can be used to enter text at a rate of up to 34 words per minute, compared with typical ten-finger keyboard typing of 40-60 words per minute.
   Although the interface is slower than a conventional keyboard, it is small and simple, and could be used on personal data assistants and by motion-impaired computer users.
Keywords: Adaptive, Text, Entry, Language, Modelling
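
Dasher's coupling of language model and gesture resembles arithmetic coding run in reverse: the display tiles an interval with boxes sized by letter probability, and steering into a box writes that letter. The sketch below uses a fixed four-letter unigram table in place of a real language model.

  PROB = {"a": 0.3, "b": 0.1, "e": 0.4, "t": 0.2}     # toy unigram model

  def children(lo, hi):
      """Tile (lo, hi) with one labelled box per letter, sized by probability."""
      out, x = [], lo
      for ch, p in PROB.items():
          out.append((ch, x, x + (hi - lo) * p))
          x += (hi - lo) * p
      return out

  def steer_to(point, depth=3):
      """Zooming toward a fixed point selects the letters whose boxes contain it."""
      text, lo, hi = "", 0.0, 1.0
      for _ in range(depth):
          for ch, l, h in children(lo, hi):
              if l <= point < h:
                  text += ch
                  lo, hi = l, h
                  break
      return text

  print(steer_to(0.62))   # -> 'eeb' with this toy table
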
Speed-Dependent Automatic Zooming for Browsing Large Documents BIBAK Full-Text 139-148
  Takeo Igarashi; Ken Hinckley
We propose a navigation technique for browsing large documents that integrates rate-based scrolling with automatic zooming. The view automatically zooms out when the user scrolls rapidly so that the perceptual scrolling speed in screen space remains constant. As a result, the user can efficiently and smoothly navigate through a large document without becoming disoriented by extremely fast visual flow. By incorporating semantic zooming techniques, the user can smoothly access a global overview of the document during rate-based scrolling. We implemented several prototype systems, including a web browser, map viewer, image browser, and dictionary viewer. An informal usability study suggests that for a document browsing task most subjects preferred automatic zooming, with performance time approximately equal to that of scroll bars, suggesting that automatic zooming is a helpful alternative to traditional scrolling when the zoomed-out view provides appropriate visual cues.
Keywords: Navigation, zooming, scrolling, rate control
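
The core rule can be stated in a few lines: zoom out exactly enough that scroll speed in screen space never exceeds a constant. The constant below is invented.

  MAX_SCREEN_SPEED = 400.0                 # comfortable pixels/second (invented)

  def zoom_for(doc_speed):
      """doc_speed in document pixels/second; returns the view magnification."""
      if doc_speed <= MAX_SCREEN_SPEED:
          return 1.0                       # slow scrolling: stay at full size
      return MAX_SCREEN_SPEED / doc_speed  # fast: perceived speed stays constant

  for v in (100, 400, 1600, 6400):
      print(v, zoom_for(v))                # -> 1.0, 1.0, 0.25, 0.0625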

Augmented Reality

The AHI: An Audio and Haptic Interface for Contact Interactions BIBAK Full-Text 149-158
  Derek DiFilippo; Dinesh K. Pai
We have implemented a computer interface that renders synchronized auditory and haptic stimuli with very low (0.5ms) latency. The audio and haptic interface (AHI) includes a Pantograph haptic device that reads position input from a user and renders force output based on this input. We synthesize audio by convolving the force profile generated by user interaction with the impulse response of the virtual surface. Auditory and haptic modes are tightly coupled because we produce both stimuli from the same force profile. We have conducted a user study with the AHI to verify that the 0.5ms system latency lies below the perceptual threshold for detecting separation between auditory and haptic contact events. We discuss future applications of the AHI for further perceptual studies and for synthesizing continuous contact interactions in virtual environments.
Keywords: User Interface, Haptics, Audio, Multimodal, Latency, Synchronization
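
The audio path described, convolving the interaction force profile with the surface's impulse response, looks roughly like the NumPy sketch below; the impulse response and tap signal are invented.

  import numpy as np

  rate = 44100
  t = np.arange(0, 0.05, 1 / rate)
  impulse_response = np.exp(-200 * t) * np.sin(2 * np.pi * 800 * t)   # one decaying mode
  force = np.zeros(441)                                               # 10 ms force profile
  force[0] = 1.0                                                      # a brief tap

  audio = np.convolve(force, impulse_response)   # the rendered contact sound
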
Page Detection using Embedded Tags BIBAK Full-Text 159-160
  Maribeth J. Back; Jonathan Cohen
We describe a robust working prototype of a system for accurate page-ID detection from bound paper books. Our method uses a new RFID technology to recognize book page location. A thin flexible transponder tag with a unique ID is embedded in the paper of each page, and a tag reader is affixed to the binding of the back of the book. As the pages turn, the tag reader notices which tags are within its read range and which have moved out of its range (which is about four inches). The human interacts with the book naturally, and is not required to perform any actions for page detection that are not usual in book interaction. The page-detection data can be used to enhance the experience of the book, or to enable the book as a controller for another system. One such system, an interactive museum exhibit, is briefly described.
Keywords: Page ID, page detection, RFID, embedded tags, simultaneous ID, smart documents, electronic books.
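
One plausible reading of the page-detection logic: the reader on the back cover sees the tags of pages still stacked against it, so the open page is the topmost of that stack. The sketch below encodes that guess; the paper's actual inference may differ.

  def current_page(tags_in_range):
      """tags_in_range: page numbers whose embedded tags the reader sees,
      i.e. the pages still lying against the back cover."""
      return min(tags_in_range)        # topmost page of that stack = open page

  print(current_page({3, 4, 5, 6}))   # -> 3: the book is open at page 3
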
System Lag Tests for Augmented and Virtual Environments BIBAK Full-Text 161-170
  Colin Swindells; John C. Dill; Kellogg S. Booth
We describe a simple technique for accurately calibrating the temporal lag in augmented and virtual environments within the Enhanced Virtual Hand Lab (EVHL), a collection of hardware and software to support research on goal-directed human hand motion. Lag is the sum of various delays in the data pipeline associated with sensing, processing, and displaying information from the physical world to produce an augmented or virtual world. Our main calibration technique uses a modified phonograph turntable to provide easily tracked periodic motion, reminiscent of the pendulum-based calibration technique of Liang, Shaw and Green. Measurements show a three-frame (50 ms) lag for the EVHL. A second technique, which uses a specialized analog sensor that is part of the EVHL, provides a "closed loop" calibration capable of sub-frame accuracy. Knowing the lag to sub-frame accuracy enables a predictive tracking scheme to compensate for the end-to-end lag in the data pipeline. We describe both techniques and the EVHL environment in which they are used.
Keywords: Augmented Reality, Calibration, Lag, Sensor, Turntable, Virtual Reality.
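
The turntable technique turns lag measurement into arithmetic: at a known angular velocity, the angle by which the rendered marker trails the physical one gives the lag directly. The sketch below assumes a 33 1/3 rpm platter.

  import math

  RPM = 100 / 3                        # assumed 33 1/3 rpm platter speed
  OMEGA = RPM * 2 * math.pi / 60       # angular velocity in rad/s

  def lag_seconds(trail_angle_deg):
      """Angle by which the rendered marker trails the physical one."""
      return math.radians(trail_angle_deg) / OMEGA

  print(lag_seconds(10.5) * 1000)      # ~52 ms, the order of the reported 50 ms lag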

Toolkit Support for UI

Jazz: An Extensible Zoomable User Interface Graphics Toolkit in Java BIBAK Full-Text 171-180
  Benjamin B. Bederson; Jon Meyer; Lance Good
In this paper we investigate the use of scene graphs as a general approach for implementing two-dimensional (2D) graphical applications, and in particular Zoomable User Interfaces (ZUIs). Scene graphs are typically found in three-dimensional (3D) graphics packages such as Sun's Java3D and SGI's OpenInventor. They have not been widely adopted by 2D graphical user interface toolkits. To explore the effectiveness of scene graph techniques, we have developed Jazz, a general-purpose 2D scene graph toolkit. Jazz is implemented in Java using Java2D, and runs on all platforms that support Java 2. This paper describes Jazz and the lessons we learned using Jazz for ZUIs. It also discusses how 2D scene graphs can be applied to other application areas.
Keywords: Zoomable User Interfaces (ZUIs), Animation, Graphics, User Interface Management Systems (UIMS), Pad++, Jazz.
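
The essence of a zoomable scene graph is that internal nodes carry transforms which compose down the tree, so zooming the whole interface means changing one scale at the root. The sketch below is a toy structure, not Jazz's actual node classes.

  class Node:
      """Internal node: carries a transform and children."""
      def __init__(self, scale=1.0, tx=0.0, ty=0.0, children=()):
          self.scale, self.tx, self.ty = scale, tx, ty
          self.children = list(children)

  class Leaf:
      """Leaf node: carries geometry."""
      def __init__(self, shape):
          self.shape = shape

  def render(node, scale=1.0, tx=0.0, ty=0.0):
      if isinstance(node, Leaf):
          print(f"draw {node.shape} at ({tx:.1f}, {ty:.1f}) scaled x{scale:g}")
          return
      for child in node.children:
          render(child, scale * node.scale,
                 tx + scale * node.tx, ty + scale * node.ty)

  # Zooming the whole interface is one scale change at the root.
  render(Node(scale=2.0, children=[Node(tx=10, ty=5, children=[Leaf("rect")])]))
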
The Architecture and Implementation of CPN2000, A Post-WIMP Graphical Application BIBAK Full-Text 181-190
  Michel Beaudouin-Lafon; Henry Michael Lassen
We have developed an interface for editing and simulating Coloured Petri Nets based on toolglasses, marking menus and bi-manual interaction, in order to understand how novel interaction techniques could be supported by a new generation of user interface toolkits. The architecture of CPN2000 is based on three components: the Document Structure stores all the persistent data in the system; the Display Structure represents the contents of the screen and implements rendering and hit detection algorithms; and the Input Structure uses "instruments" to manage interaction. The rendering engine is based on OpenGL and a number of techniques have been developed to take advantage of 3D accelerated graphics for a 2D application. Performance data show that high frame rates have been achieved with off-the-shelf hardware even with a non-optimized redisplay. This work paves the way towards a post-WIMP UI toolkit.
Keywords: User interface toolkit, Advanced interaction techniques, Post-WIMP interfaces, Two-handed input, Instrumental interaction, OpenGL, Coloured Petri nets.
Cross-Modal Interaction using XWeb BIBAK Full-Text 191-200
  Dan R. Olsen; Sean Jefferies; Travis Nielsen; William Moyes; Paul Fredrickson
The XWeb project addresses the problem of interacting with services by means of a variety of interactive platforms. Interactive clients are provided on a variety of hardware/software platforms that can access an XWeb service. Creators of services need not be concerned with interactive techniques or devices. The paper addresses the cross-platform problems of a network model of interaction, adaptation to screen size, and support for both speech and visual interfaces within the same model.
Keywords: Cross-modal interaction, network interaction, screen layout, speech interfaces.

Selection

TopicShop: Enhanced Support for Evaluating and Organizing Collections of Web Sites BIBA Full-Text 201-209
  Brian Amento; Loren Terveen; Will Hill; Deborah Hix
TopicShop is an interface that helps users evaluate and organize collections of web sites. The main interface components are site profiles, which contain information that helps users select high-quality items, and a work area, which offers thumbnail images, annotation, and lightweight grouping techniques to help users organize selected sites. The two components are linked to allow task integration.
   Previous work [2] demonstrated that subjects who used TopicShop were able to select significantly more high-quality sites, in less time and with less effort. We report here on a second, larger user study that confirms and extends these results. We also show that TopicShop subjects spent just half the time organizing sites, yet still created more groups and more annotations, and agreed more in how they grouped sites. Finally, TopicShop subjects tightly integrated the tasks of evaluating and organizing sites.
Dual Touch: A Two-Handed Interface for Pen-Based PDAs BIBAK Full-Text 211-212
  Nobuyuki Matsushita; Yuji Ayatsuka; Jun Rekimoto
A new interaction technique called Dual Touch has been developed for pen-based PDAs. It enables a user to operate a PDA by tapping and stroking on the screen with a pen and a thumb. The PDA can detect the combined movements of two points on its pressure-based touchscreen without additional hardware. The user can use the thumb to support the task of the pen.
Keywords: pen interfaces, two-handed interfaces, mobile computers, interaction technology.
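
A single-point resistive touchscreen reports roughly the midpoint of two simultaneous contacts, so a second point can be inferred from the jump in the reported position when the thumb lands. The sketch below encodes that assumption; it is a guess at the mechanism, not the paper's stated method.

  def second_point(pen, reported):
      """If the screen reports the midpoint of two contacts, recover the thumb."""
      px, py = pen
      mx, my = reported
      return (2 * mx - px, 2 * my - py)

  print(second_point((100, 100), (80, 120)))   # -> (60, 140)
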
FlowMenu: Combining Command, Text, and Data Entry BIBAK Full-Text 213-216
  François Guimbretière; Terry Winograd
We present a new kind of marking menu that was developed for use with a pen device on display surfaces such as large, high resolution, wall-mounted displays. It integrates capabilities of previously separate mechanisms such as marking menus and Quikwriting, and facilitates the entry of multiple commands. While using this menu, the pen never has to leave the active surface so that consecutive menu selections, data entry (text and parameters) and direct manipulation tasks can be integrated fluidly.
Keywords: Quikwriting, Marking menu, Control Menu, Interactive surface
Fisheye Menus BIBAK Full-Text 217-225
  Benjamin B. Bederson
We introduce "fisheye menus" which apply traditional fisheye graphical visualization techniques to linear menus. This provides for an efficient mechanism to select items from long menus, which are becoming more common as menus are used to select data items in, for example, ecommerce applications. Fisheye menus dynamically change the size of menu items to provide a focus area around the mouse pointer. This makes it possible to present the entire menu on a single screen without requiring buttons, scrollbars, or hierarchies.
   A pilot study with 10 users compared user preference of fisheye menus with traditional pull-down menus that use scrolling arrows, scrollbars, and hierarchies. Users preferred the fisheye menus for browsing tasks, and hierarchical menus for goal-directed tasks.
Keywords: Fisheye view, menu selection, widgets, information visualization.
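
The item-sizing idea reduces to a falloff function of distance from the pointer; the sketch below uses an invented linear falloff and point sizes.

  def item_sizes(n_items, focus_index, max_pt=18, min_pt=4, radius=4):
      sizes = []
      for i in range(n_items):
          d = abs(i - focus_index)
          w = max(0.0, 1 - d / radius)            # 1 at the focus, 0 beyond radius
          sizes.append(min_pt + (max_pt - min_pt) * w)
      return sizes

  print(item_sizes(10, 3))   # items near index 3 are large, the rest stay tiny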

Illusions

Providing Visually Rich Resizable Images for User Interface Components BIBAK Full-Text 227-235
  Scott E. Hudson; Kenichiro Tanaka
User interface components such as buttons, scrollbars, and menus, as well as various types of containers and separators, normally need to be resizable so that they can conform to the needs of the contents within them, or the environment in which they are placed. Unfortunately, in the past, providing dynamically resizable component appearances has required writing code to draw the component. As a result, visual designers have often been cut off from the ability to create these appearances, and even when they can be involved, drawing programmatically is comparatively difficult. Because of this need to write drawing code, component appearances have traditionally been quite plain, and have been controlled primarily by a few toolkit writers. This paper presents a suite of very simple techniques, along with a few composition mechanisms, designed to overcome this problem. These techniques allow visually rich, dynamically resizable images to be provided using primarily conventional drawing tools (and with no programming or programming-like activities at all).
Keywords: User interface appearances, look and feel, interface components, toolkits, style systems.
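
One simple instance of such techniques is marking rows and columns of the artwork as stretchable and resizing by repeating only those. The sketch below shows this 9-slice-style simplification, which may not match the paper's exact mechanisms.

  def stretch_rows(rows, stretchable, new_height):
      """Repeat only the rows marked stretchable until new_height is reached."""
      fixed = sum(1 for i in range(len(rows)) if i not in stretchable)
      per = max(1, (new_height - fixed) // max(1, len(stretchable)))
      out = []
      for i, row in enumerate(rows):
          out.extend([row] * (per if i in stretchable else 1))
      return out

  art = ["+--+", "|  |", "+--+"]          # a tiny button border
  for row in stretch_rows(art, {1}, 6):   # the middle row repeats to fill the height
      print(row)
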
Illusions of Infinity: Feedback for Infinite Worlds BIBAK Full-Text 237-238
  George W. Furnas; Xiaolong Zhang
Sensory feedback for user actions in arbitrarily large information worlds can exhaust the limited dynamic range of human sensation. Two well-known illusions, one optical and one auditory, can be used to give arbitrarily large ranges of feedback.
Keywords: Zoom views, multiscale interfaces, interface feedback, ZUI, sensory illusions
Dynamic Space Management for User Interfaces BIBAK Full-Text 239-248
  Blaine A. Bell; Steven K. Feiner
We present a general approach to the dynamic representation of 2D space that is well suited for user interface layout. We partition space into two distinct categories: full and empty. The user can explicitly specify a set of possibly overlapping upright rectangles that represent the objects of interest. These full-space rectangles are processed by the system to create a representation of the remaining empty space. This representation makes it easy for users to develop customized spatial allocation strategies that avoid overlapping the full-space rectangles. We describe the representation; provide efficient incremental algorithms for adding and deleting full-space rectangles, and for querying the empty-space representation; and show several allocation strategies that the representation makes possible. We present two testbed applications that incorporate an implementation of the algorithms: one shows the utility of our representation for window management tasks; the other applies it to the layout of components in a 3D user interface, based on the upright 2D bounding boxes of their projections.
Keywords: Spatial data structures, user interface design, geometric modeling, display layout, space allocation, window management, overlap avoidance.
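
The incremental update for adding a full-space rectangle can be sketched directly: each overlapping empty rectangle is replaced by up to four largest remaining rectangles, which deliberately overlap one another. Details such as coalescing are omitted.

  def subtract(empty, full):
      """Cut one empty rectangle (x1, y1, x2, y2) around a new full one."""
      ex1, ey1, ex2, ey2 = empty
      fx1, fy1, fx2, fy2 = full
      if fx1 >= ex2 or fx2 <= ex1 or fy1 >= ey2 or fy2 <= ey1:
          return [empty]                                  # no overlap: keep as-is
      pieces = []
      if fy1 > ey1: pieces.append((ex1, ey1, ex2, fy1))   # band below
      if fy2 < ey2: pieces.append((ex1, fy2, ex2, ey2))   # band above
      if fx1 > ex1: pieces.append((ex1, ey1, fx1, ey2))   # band to the left
      if fx2 < ex2: pieces.append((fx2, ey1, ex2, ey2))   # band to the right
      return pieces                                       # pieces overlap by design

  print(subtract((0, 0, 10, 10), (4, 4, 6, 6)))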