
Proceedings of the 1997 ACM Symposium on User Interface Software and Technology

Fullname: Proceedings of the 1997 ACM Symposium on User Interface Software and Technology
Location: Banff, Canada
Dates: 1997-Oct-14 to 1997-Oct-17
Publisher: ACM
Standard No: ISBN 0-89791-881-9; ACM Order Number 429972
Papers: 36
Pages: 232
  1. Invited Talk
  2. 3D Interaction Techniques
  3. Picking and Pointing
  4. Synchronous Collaboration
  5. Programming by Demonstration
  6. Constraints
  7. Panel
  8. Asynchronous Collaboration
  9. Facilitating Visual Output
  10. Invited Talk
  11. Making Things Visible
  12. Blurring Physical and Virtual

Invited Talk

Banff to Banff: A UISTful Retrospective
  John Sibert

3D Interaction Techniques

Usability Analysis of 3D Rotation Techniques, pp. 1-10
  Ken Hinckley; Joe Tullio; Randy Pausch; Dennis Proffitt; Neal Kassell
We report results from a formal user study of interactive 3D rotation using the mouse-driven Virtual Sphere and Arcball techniques, as well as multidimensional input techniques based on magnetic orientation sensors. Multidimensional input is often assumed to allow users to work quickly, but at the cost of precision, due to the instability of the hand moving in the open air. We show that, at least for the orientation matching task used in this experiment, users can take advantage of the integrated degrees of freedom provided by multidimensional input without necessarily sacrificing precision: using multidimensional input, users completed the experimental task up to 36% faster without any statistically detectable loss of accuracy.
   We also report detailed observations of common usability problems when first encountering the techniques. Our observations suggest some design issues for 3D input devices. For example, the physical form-factors of the 3D input device significantly influenced user acceptance of otherwise identical input sensors. The device should afford some tactile cues, so the user can feel its orientation without looking at it. In the absence of such cues, some test users were unsure of how to use the device.
Keywords: Arcball, Virtual sphere, 3D input devices, Interactive 3D rotation, Virtual manipulation, Usability study, Evaluation
Immersion in Desktop Virtual Reality, pp. 11-19
  George Robertson; Mary Czerwinski; Maarten van Dantzich
This paper explores techniques for evaluating and improving immersion in Desktop Virtual Reality (VR). Three experiments are reported which extend findings on immersion in VR reported by Pausch et al. [9]. In the current experiments, a visual search paradigm was used to examine navigation in Desktop VR both with and without navigational aids. Pausch et al. found that non-head tracked users took significantly longer than predicted when the search target was absent, which was interpreted as indicative of a loss of sense of immersion. Our first experiment extended the Pausch et al. experiment to a desktop display. Our findings differ in that search times matched prediction when the target was absent, indicating that the Pausch et al. study does not transfer to Desktop VR. In the second and third experiments, our visual search task was performed while navigating a set of 3D hallways. We introduce a new navigation aid called Peripheral Lenses, intended to provide simulated peripheral vision. Informal studies suggested that Peripheral Lenses decrease search time, indicating an enhanced sense of immersion in Desktop VR. However, formal studies contradict that, demonstrating the importance of formal usability studies in the development of user interface software. We also gained evidence that visual attention findings transfer to Desktop VR.
Keywords: Virtual reality, Immersion, Evaluation, Visual search paradigm
Worldlets -- 3D Thumbnails for Wayfinding in Virtual Environments, pp. 21-30
  T. Todd Elvins; David R. Nadeau; David Kirsh
Virtual environment landmarks are essential in wayfinding: they anchor routes through a region and provide memorable destinations to return to later. Current virtual environment browsers provide user interface menus that characterize available travel destinations via landmark textual descriptions or thumbnail images. Such characterizations lack the depth cues and context needed to reliably recognize 3D landmarks. This paper introduces a new user interface affordance that captures a 3D representation of a virtual environment landmark into a 3D thumbnail, called a worldlet. Each worldlet is a miniature virtual world fragment that may be interactively viewed in 3D, enabling a traveler to gain first-person experience with a travel destination. In a pilot study conducted to compare textual, image, and worldlet landmark representations within a wayfinding task, worldlet use significantly reduced the overall travel time and distance traversed, virtually eliminating unnecessary backtracking.
Keywords: 3D thumbnails, Wayfinding, VRML, Virtual reality

Picking and Pointing

Pick-and-Drop: A Direct Manipulation Technique for Multiple Computer Environments, pp. 31-39
  Jun Rekimoto
This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a pen-based direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-and-Drop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desktop, and wall-sized pen computers. We also consider the importance of physical artifacts in designing user interfaces for a future computing environment.
Keywords: Direct manipulation, Graphical user interfaces, Input devices, Stylus interfaces, Pen interfaces, Drag-and-drop, Multi-computer user interfaces, Ubiquitous computing, Computer augmented environments
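The Pen-ID mechanism the abstract describes can be sketched as a table kept by a network pen manager: the pen stores nothing, but every display reports its taps to the manager, so the object appears to travel with the pen. All names below are hypothetical illustrations, not the paper's code.

```python
# Hypothetical sketch of the Pick-and-Drop pen-manager idea: the pen carries
# no data itself; a network-side manager maps each Pen-ID to the object most
# recently picked up, so a drop on any display can retrieve it.

class PenManager:
    """Server-side table from Pen-ID to the currently 'held' object."""
    def __init__(self):
        self._held = {}

    def pick(self, pen_id, obj):
        # Called by the display where the pen tapped an object.
        self._held[pen_id] = obj

    def drop(self, pen_id):
        # Called by the display where the pen next taps down; the object
        # appears to have travelled with the pen.
        return self._held.pop(pen_id, None)

manager = PenManager()
manager.pick(pen_id=7, obj={"type": "image", "name": "photo.gif"})
received = manager.drop(pen_id=7)   # on a different computer's display
```

Because the manager is the only shared state, any display on the network can complete the drop, which is what lets the technique span palm-sized, desktop, and wall-sized machines.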
A Finger-Mounted, Direct Pointing Device for Mobile Computing, pp. 41-42
  John L. Sibert; Mehmet Gokturk
The index (first) finger of the dominant hand seems to be an intuitively natural and efficient means for pointing tasks. This paper presents the design of a device to enable pointing with the index finger as an interaction technique in mobile computers. The device, which uses infrared emission and detection to determine where on a screen the finger is pointing, is inexpensive and can easily be incorporated into a laptop computer.
Keywords: Pointing, Interaction devices, Input devices, Infrared detection
TimeSlider: An Interface to Specify Time Point, pp. 43-44
  Yuichi Koike; Atsushi Sugiura; Yoshiyuki Koseki
This paper introduces TimeSlider, a user interface technique that allows the user to specify time points. TimeSlider is a slider whose time scale is nonlinear and which moves in response to user operations. The nonlinearity enables it to display a long time range in a small space, and the movement helps the user specify time points quickly. An example application, in which TimeSlider enabled the user to restore past WWW pages, demonstrated the effectiveness of our technique.
Keywords: Desktop software, Time machine, Selection technology, Slider
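The nonlinear time scale at the heart of TimeSlider can be illustrated with a small mapping function; the exponential curve and its constants below are assumptions for illustration, not the paper's actual scale:

```python
import math

def slider_to_offset(pos, max_seconds=365 * 24 * 3600):
    """Map a slider position in [0, 1] to a time offset in seconds.

    An exponential scale (an assumption, not the paper's exact mapping)
    devotes most of the slider's travel to the recent past while still
    reaching a full year back at pos == 1.0.
    """
    if not 0.0 <= pos <= 1.0:
        raise ValueError("pos must be in [0, 1]")
    # expm1 keeps the mapping exact at both endpoints: 0 -> 0, 1 -> max.
    return max_seconds * math.expm1(4.0 * pos) / math.expm1(4.0)
```

With this curve, the midpoint of the slider covers far less than half the range, which is how a short slider can span a long time range while keeping recent times easy to hit precisely.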
Pen-Based Interaction Techniques for Organizing Material on an Electronic Whiteboard, pp. 45-54
  Thomas P. Moran; Patrick Chiu; William van Melle
This paper presents a scheme for extending an informal, pen-based whiteboard system (the Tivoli application on the Xerox LiveBoard) to provide interaction techniques that enable groups of users in informal meetings to easily organize and rearrange material and to manage the space on the board. The techniques are based on the direct manipulation of boundaries and the implicit recognition of regions. The techniques include operations for shrinking and rearranging, structured borders that tessellate the board, freeform enclosures that can be split, fused, and linked, and collapsible annotations. Experience with using these techniques, the results of a user test, some design trade-offs and lessons, and future directions are discussed.
Keywords: Whiteboard metaphor, Pen-based systems, Freeform interaction, Implicit structure, Emergent structure, Structural grouping, Informal systems, Recognition-based systems, List structures, Meeting support tools, Gestural interfaces, User interface design

Synchronous Collaboration

Transparent Sharing of Java Applets: A Replicated Approach, pp. 55-64
  James Begole; Craig A. Struble; Clifford A. Shaffer; Randall B. Smith
People interact together in all aspects of life and, as computers have become prevalent, users seek computer support for their interactions. The WWW provides an unprecedented opportunity for users to interact with each other, and the advent of Java has created a consistent computing environment to support synchronous collaboration. We describe JAMM, a prototype Java runtime environment that supports the shared use of existing Java applets, thus leveraging the existing base of software for synchronous collaboration. Our approach is based on a replicated architecture, where each user maintains their own copy of the Java applet, and the users' input events are broadcast to each applet copy. We discuss solutions to certain key problems, such as unanticipated sharing, supporting late-joiners and replicating input sources other than user inputs (e.g., files, sockets, and random number generators).
Keywords: Computer-supported cooperative work, Groupware, Collaboration transparency, Java
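The replicated architecture the abstract outlines -- one applet copy per user, with input events broadcast to every copy -- can be sketched as follows. This is a minimal illustration with hypothetical names, not JAMM's Java implementation:

```python
# Each participant runs their own copy of the applet; every local input
# event is broadcast so that all replicas process the identical event stream.

class AppletReplica:
    """One user's copy of the shared applet; state is a simple counter here."""
    def __init__(self):
        self.clicks = 0

    def handle_event(self, event):
        if event["type"] == "click":
            self.clicks += 1

class Session:
    """Stands in for the network-side event broadcaster."""
    def __init__(self, replicas):
        self.replicas = replicas

    def broadcast(self, event):
        # Delivering the same events in the same order to every replica is
        # what keeps the copies consistent.
        for replica in self.replicas:
            replica.handle_event(event)

replicas = [AppletReplica() for _ in range(3)]
session = Session(replicas)
session.broadcast({"type": "click"})
session.broadcast({"type": "click"})
```

The hard problems the paper discusses (late-joiners, files, sockets, random number generators) all stem from inputs that are *not* user events and therefore are not naturally part of this broadcast stream.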
Simplifying Component Development in an Integrated Groupware Environment, pp. 65-72
  Mark Roseman; Saul Greenberg
This paper describes our experiences implementing a component architecture for TeamWave Workplace, an integrated groupware environment using a rooms metaphor. The problem we faced was how to design the architecture to support rapid development of new embedded components. Our solution, based on Tcl/Tk and GroupKit, uses multiple interpreters and a shared window hierarchy. This proved effective in easing development complexity in TeamWave. We discuss some of the strategies we used, and identify the types of interactions between system components. The lessons learned in developing this component model should be generally applicable to future integrated groupware systems in different environments.
Keywords: Groupware, CSCW, Tcl/Tk, GroupKit, Component architecture
A Shared Command Line in a Virtual Space: The Working Man's MOO, pp. 73-74
  Mark Guzdial
The Working Man's MOO extends a text-based virtual environment through a small command server that sits on the user's workstation. The extended virtual environment can offer the power of a command line, but embedded within a virtual community, enabling the creation of new interface metaphors that connect the virtual/MOO space with the desktop space.
Keywords: Virtual environments, Command line interfaces, Computer-supported collaborative work

Programming by Demonstration

CyberDesk: A Framework for Providing Self-Integrating Ubiquitous Software Services, pp. 75-76
  Anind K. Dey; Gregory Abowd; Mike Pinkerton; Andrew Wood
Current software suites suffer from problems due to poor integration of their individual tools. They require the designer to anticipate all possible integrating behaviours and leave little flexibility to the user. CyberDesk is a component software framework that automatically integrates desktop and network services, reducing the integration decisions that tool designers must make in advance and giving more control to the user. Simple extensions to CyberDesk have yielded powerful integrating behaviours.
Keywords: Adaptive interfaces, Automated integration, Dynamic integration, Software components, Context-aware computing, Future computing environments, Ubiquitous services
Alice: Easy to Use Interactive 3D Graphics, pp. 77-78
  Jeffrey S. Pierce; Steve Audia; Tommy Burnette; Kevin Christiansen; Dennis Cosgrove; Matt Conway; Ken Hinckley; Kristen Monkaitis; James Patten; Joe Shochet; David Staack; Brian Stearns; Chris Sturgill; George Williams; Randy Pausch
Alice is a rapid prototyping system used to create three-dimensional graphics simulations like those seen in virtual reality applications. Alice uses an interpreted language called Python as its scripting language to implement user actions. This interactive development environment allows users to explore many more design options than is possible in a compiled-language environment. The alpha version of Alice for Windows 95 is available for free over the Internet, with the beta release scheduled for August.
Keywords: Virtual reality, 3D graphics, Rapid prototyping, Usability engineering
A Spreadsheet Approach to Information Visualization, pp. 79-80
  Ed Huai-hsin Chi; Joseph Konstan; Phillip Barry; John Riedl
In information visualization, as the volume and complexity of the data increase, researchers require more powerful visualization tools that allow them to explore multi-dimensional datasets more effectively. In this paper, we present a novel visualization framework built upon the spreadsheet metaphor, where each cell can contain an entire dataset. Just as a numerical spreadsheet enables exploration of numbers, a visualization spreadsheet enables exploration of visualizations of data. Our prototype spreadsheets enable users to compare visualizations in cells using the tabular layout. Users can use the spreadsheet to display, manipulate, and explore multiple visual representation techniques for their data. By applying different operations to the cells, we show how visualization spreadsheets afford the construction of 'what-if' scenarios. The set of operations users can apply includes animation, filtering, and algebraic operators.
Keywords: Visualization, Information visualization, Interactive graphics, Spreadsheet
Gamut: Demonstrating Whole Applications, pp. 81-82
  Richard G. McDaniel; Brad A. Myers
Gamut is a new tool for building interactive, graphical software like games, simulations, and educational software. A developer can build entire applications in Gamut's domain using only programming-by-demonstration (PBD) and never has to look at or modify code to build any behavior. To accomplish this, we have developed a simple, streamlined interaction for demonstrating so that developers can create new examples quickly and can specify negative examples without confusion. Also, Gamut allows the developer to give hints to point out objects in a relationship that would be too time consuming to find by searching. Gamut automatically revises generated code using an efficient algorithm that recursively scans for the differences between a new example and the previous behavior. To correct the discovered differences, Gamut couples heuristic search with a decision tree learning algorithm allowing it to build more complicated behaviors than it could using heuristic search alone.
Keywords: End-user programming, User interface software, Application builders, Programming-by-demonstration, Programming-by-example, Inductive learning, Gamut
Orthogonal Extensions to the WWW User Interface using Client-Side Technologies, pp. 83-84
  Armando Fox; Steven D. Gribble; Yatin Chawathe; Anthony S. Polito; Andrew Huang; Benjamin Ling; Eric A. Brewer
Our work is motivated by three trends. First, the ubiquitous migration of services to the World Wide Web is due in part to its simple, consistent, and now universal user interface: navigation by following links and filling out HTML forms are interactions familiar to even novice Internet users. Second, client-side extension technologies such as Java and JavaScript allow sites to extend and "personalize" the behaviors and interfaces of their services, with portable user-interface elements that integrate transparently into the browser's existing interface.
The CSLU Toolkit: Rapid Prototyping of Spoken Language Systems, pp. 85-86
  Stephen Sutton; Ronald Cole
Research and development of spoken language systems is currently limited to relatively few academic and industrial laboratories. This is because building such systems requires multidisciplinary expertise, sophisticated development tools, specialized language resources, substantial computer resources and advanced technologies such as speech recognition and text-to-speech synthesis.
   At the Center for Spoken Language Understanding (CSLU), our mission is to make spoken language systems commonplace. To do so requires that the technology become less exclusive, more affordable and more accessible. An important step towards satisfying this goal is to place the development of spoken language systems in the hands of real domain experts rather than limit it to technical specialists.
   To address this problem, we have developed the CSLU Toolkit, an integrated software environment for research and development of telephone-based spoken language systems (Sutton et al., 1996; Schalkwyk et al., 1997). It is designed to support a wide range of research and development activities, including data capture and analysis, corpus development, multilingual recognition and understanding, dialogue design, speech synthesis, speaker recognition, language recognition, and systems evaluation, among others. In addition, the Toolkit provides an excellent environment for learning about spoken language technology, with opportunities for hands-on learning, exploration and experimentation. It has been used as the basis for several short courses in which students have produced a wide range of interesting spoken language applications, such as voice mail, airline reservation, and browsing the World Wide Web by voice (Colton et al., 1996; Sutton et al., 1997).

Constraints

Solving Linear Arithmetic Constraints for User Interface Applications, pp. 87-96
  Alan Borning; Kim Marriott; Peter Stuckey; Yi Xiao
Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost 1/3 of a window, or preferring that an object be contained within a rectangle if possible. Current constraint solvers designed for UI applications cannot efficiently handle simultaneous linear equations and inequalities. This is a major limitation. We describe incremental algorithms based on the dual simplex and active set methods that can solve such systems of constraints efficiently.
Keywords: Linear constraints, Inequality constraints, Simplex algorithm
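The flavor of constraint the abstract mentions -- a preferred equality such as "the pane occupies the leftmost 1/3 of the window" combined with a required inequality -- can be illustrated with a toy example. This only shows the constraint style; the paper's contribution is incremental dual simplex and active set algorithms for solving full systems of such constraints:

```python
def layout(window_width, min_pane=100):
    """Toy illustration of UI layout constraints (names and numbers are
    made up for this sketch, not taken from the paper).

    - Preference (equality): the pane occupies the leftmost third.
    - Requirement (inequality): the pane is at least min_pane pixels wide.

    When the two conflict, the required inequality wins and the preferred
    equality gives way -- the behavior a constraint hierarchy provides.
    """
    pane = window_width / 3.0          # preferred: pane = width / 3
    pane = max(pane, min_pane)         # required: pane >= min_pane
    content_left = pane                # required: content starts at pane edge
    return pane, content_left
```

In a real layout there are many interacting variables and constraints, which is why an ad-hoc cascade like this breaks down and a general incremental solver is needed.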
An Interactive Constraint-Based System for Drawing Graphs, pp. 97-104
  Kathy Ryall; Joe Marks; Stuart Shieber
The GLIDE system is an interactive constraint-based editor for drawing small- and medium-sized graphs (50 nodes or fewer) that organizes the interaction in a more collaborative manner than in previous systems. Its distinguishing features are a vocabulary of specialized constraints for graph drawing, and a simple constraint-satisfaction mechanism that allows the user to manipulate the drawing while the constraints are active. These features result in a graph-drawing editor that is superior in many ways to those based on more general and powerful constraint-satisfaction methods.
Keywords: Graph drawing, Constraint-based layout, Drawing tools, Collaborative interfaces
Interactive Beautification: A Technique for Rapid Geometric Design, pp. 105-114
  Takeo Igarashi; Satoshi Matsuoka; Sachiko Kawachiya; Hidehiko Tanaka
We propose interactive beautification, a technique for rapid geometric design, and introduce the technique and its algorithm with a prototype system, Pegasus. The motivation is to solve a problem with current drawing systems: too many complex commands and unintuitive procedures for satisfying geometric constraints. The interactive beautification system receives the user's free strokes and beautifies them by considering geometric constraints among segments. Strokes are beautified one at a time, preventing accumulation of recognition errors and catastrophic deformation. Supported geometric constraints include perpendicularity, congruence, and symmetry, which are not handled by existing free stroke recognition systems. In addition, the system generates multiple candidates as the result of beautification to cope with ambiguity. Using this technique, the user can rapidly draw precise diagrams that satisfy geometric relations without using any editing commands.
   Interactive beautification is achieved by three sequential processes: 1) inferring underlying geometric constraints based on the spatial relationships among the input stroke and the existing segments, 2) generating multiple candidates by combining the inferred constraints appropriately, and 3) evaluating the candidates to find the most plausible one and to remove the inappropriate ones. A user study compared the prototype system with a commercial CAD tool and an OO-based drawing system. The results showed that users can draw the required diagrams more rapidly and more precisely using the prototype system.
Keywords: Drawing programs, Sketching, Pen-based computing, Constraints, Beautification
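Process 1) above, inferring constraints from spatial relationships, can be sketched for two simple constraint types. The thresholds and names here are hypothetical, not Pegasus's actual rules:

```python
import math

def beautify(stroke, existing_points, snap_dist=10.0, angle_tol=0.15):
    """Beautify a free stroke given as ((x1, y1), (x2, y2)) by inferring
    and applying two kinds of constraints (illustration only):
    connection to existing endpoints, and axis alignment."""
    (x1, y1), (x2, y2) = stroke

    def snap(p):
        # Connection constraint: join to an existing endpoint if close enough.
        for q in existing_points:
            if math.dist(p, q) <= snap_dist:
                return q
        return p

    (x1, y1), (x2, y2) = snap((x1, y1)), snap((x2, y2))

    # Axis-alignment constraint: make nearly horizontal/vertical lines exact.
    angle = math.atan2(y2 - y1, x2 - x1)
    if abs(angle) < angle_tol or abs(abs(angle) - math.pi) < angle_tol:
        y2 = y1                        # horizontal
    elif abs(abs(angle) - math.pi / 2) < angle_tol:
        x2 = x1                        # vertical
    return (x1, y1), (x2, y2)
```

The full technique goes further: rather than greedily applying one constraint set as here, it combines inferred constraints into multiple candidate drawings and lets the user pick among them, which is how it resolves ambiguity.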

Panel

UIST'007: Where Will We Be Ten Years From Now?, pp. 115-118
  Robert J. K. Jacob; Steven K. Feiner; James D. Foley; Jock D. Mackinlay; Dan R. Olsen, Jr.
The conference this year is the tenth anniversary of UIST. The keynote talk discusses the history of UIST over the last ten years; this panel looks into the future of the field over the next ten. Each of the panelists will describe a scenario for what life will be like when we meet for UIST'07, ten years from now. They will also have a chance to challenge or question each others' scenarios and to participate in open discussion with the audience.
Keywords: User interface software and technology, Human-computer interaction, Future, Prediction, UIST'2007

Asynchronous Collaboration

Designing and Implementing Asynchronous Collaborative Applications with Bayou, pp. 119-128
  W. Keith Edwards; Elizabeth D. Mynatt; Karin Petersen; Mike J. Spreitzer; Douglas B. Terry; Marvin M. Theimer
Asynchronous collaboration is characterized by the degree of independence collaborators have from one another. In particular, collaborators working asynchronously typically have little need for frequent and fine-grained coordination with one another, and typically do not need to be notified immediately of changes made by others to any shared artifacts they are working with. We present an infrastructure, called Bayou, designed to support the construction of asynchronous collaborative applications. Bayou provides a replicated, weakly-consistent, data storage engine to application writers. The system supports a number of mechanisms for leveraging application semantics; using these mechanisms, applications can implement complex conflict detection and resolution policies, and choose the level of consistency and stability they will see in their databases. We present a number of applications we have built or are building using the Bayou system, and examine how these take advantage of the Bayou architecture.
Keywords: Computer-supported cooperative work, Asynchronous interaction, Distributed systems, Bayou
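Bayou's application-supplied conflict handling can be sketched in miniature: each write carries a dependency check and a merge procedure, which the storage engine runs on the application's behalf. This is a single-replica toy with hypothetical names; Bayou itself is a distributed, weakly-consistent store:

```python
# A write is (dependency_check, update, merge_procedure). The engine applies
# the update only if the dependency check still holds; otherwise it runs the
# application's merge procedure to resolve the semantic conflict.

def apply_write(db, write):
    """db is a dict standing in for the replicated database."""
    dependency_check, update, mergeproc = write
    if dependency_check(db):
        update(db)                     # no conflict: apply as intended
    else:
        mergeproc(db)                  # conflict: application-level fallback

calendar = {"10:00": None}

# Book the 10:00 slot, falling back to 11:00 if 10:00 is already taken.
booking = (
    lambda db: db.get("10:00") is None,
    lambda db: db.__setitem__("10:00", "alice"),
    lambda db: db.__setitem__("11:00", "alice"),
)
apply_write(calendar, booking)
```

A second, conflicting booking submitted later would fail its dependency check and be redirected by its own merge procedure, which is how the real system lets applications choose their conflict resolution policies.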
Supporting Cooperative and Personal Surfing with a Desktop Assistant, pp. 129-138
  Hannes Marais; Krishna Bharat
We motivate the use of desktop assistants in the context of web surfing and show how such a tool may be used to support activities in both cooperative and personal surfing. By cooperative surfing we mean surfing by a community of users who choose to cooperatively and asynchronously build up knowledge structures relevant to their group. Specifically, we describe the design of an assistant called Vistabar, which lives on the Windows desktop and operates on the currently active web browser. Vistabar instances working for individual users support the authoring of annotations and shared bookmark hierarchies, and work with profiles of community interests to make findings highly available. Thus, they support a form of community memory. Vistabar also serves as a form of personal memory by indexing pages the user sees to assist in recall. We present rationale for the assistant's design, describe roles it could play to support surfing (including those mentioned above), and suggest efficient implementation strategies where appropriate.
Keywords: Desktop assistant, Browserware, WWW, Browser, Annotation, Asynchronous collaboration, Community knowledge, Bookmarks, Indexing, Barcodes
Flexible Conflict Detection and Management in Collaborative Applications, pp. 139-148
  W. Keith Edwards
This paper presents a comprehensive model for dealing with semantic conflicts in applications, and the implementation of this model in a toolkit for collaborative systems. Conflicts are defined purely through application semantics -- the set of behaviors supported by the applications -- and yet can be detected and managed by the infrastructure with minimal application code. This work describes a number of novel techniques for managing conflicts, both in the area of resolution policies and user interfaces for presenting standing conflicts in application data.
Keywords: CSCW, Collaborative infrastructure, Conflict Management, Timewarp

Facilitating Visual Output

Fix and Float: Object Movement by Egocentric Navigation, pp. 149-150
  George G. Robertson; Stuart K. Card
The two traditional techniques for moving objects in graphical workspaces are dragging and cut and paste. Each method has some disadvantages. We introduce a new method, called fix and float, for moving objects in graphical workspaces. The new method fixes the object(s) to the gaze or viewpoint, thereby letting the user move objects implicitly while doing egocentric navigation. We describe the advantages this new method has over previous techniques, and give an example of its use in a 3D graphical workspace.
Systematic Output Modification in a 2D User Interface Toolkit, pp. 151-158
  W. Keith Edwards; Scott E. Hudson; Joshua Marinacci; Roy Rodenstein; Thomas Rodriguez; Ian Smith
In this paper we present a simple but general set of techniques for modifying output in a 2D user interface toolkit. We use a combination of simple subclassing, wrapping, and collusion between parent and output objects to produce arbitrary sets of composable output transformations. The techniques described here allow rich output effects to be added to most, if not all, existing interactors in an application, without the knowledge of the interactors themselves. This paper explains how the approach works, discusses a number of example effects that have been built, and describes how the techniques presented here could be extended to work with other toolkits. We address issues of input by examining a number of extensions to the toolkit input subsystem to accommodate transformed graphical output. Our approach uses a set of "hooks" to undo output transformations when input is to be dispatched.
Keywords: User interface toolkits, Output, Rendering, Interactors, Drawing effects
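The wrapping technique the abstract describes can be sketched as composable objects that interpose on a draw call, transforming output without the wrapped interactor's knowledge. All names are hypothetical, and drawing is reduced to strings for illustration:

```python
class Button:
    """An ordinary interactor, unaware that it may be wrapped."""
    def draw(self):
        return ["rect(0, 0, 80, 20)", "text('OK')"]

class Shadowed:
    """Wraps any drawable and emits a drop shadow before its output."""
    def __init__(self, inner, dx=2, dy=2):
        self.inner, self.dx, self.dy = inner, dx, dy

    def draw(self):
        return [f"rect({self.dx}, {self.dy}, 80, 20)  # shadow"] + self.inner.draw()

class Scaled:
    """Another composable transformation: prefixes a scale operation."""
    def __init__(self, inner, factor):
        self.inner, self.factor = inner, factor

    def draw(self):
        return [f"scale({self.factor})"] + self.inner.draw()

# Wrappers compose freely: scale the shadowed button.
ops = Scaled(Shadowed(Button()), 2.0).draw()
```

The input problem the abstract raises falls out of this picture: once output is transformed this way, incoming mouse coordinates must be mapped back through the inverse of each wrapper before dispatch, which is what the toolkit's "hooks" do.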
Supporting Dynamic Downloadable Appearances in an Extensible User Interface Toolkit, pp. 159-168
  Scott E. Hudson; Ian Smith
Most consumer products, from automobiles to breakfast cereals, pay significant attention to the visual appearance they present to the consumer. Designers of these products normally create custom appearances that reflect things such as the functionality or purpose of the product, the market they are trying to reach, and the image the company is trying to project. As graphical user interfaces begin to fully penetrate the consumer market, we expect that similar customization of appearance will and should become part of everyday practice in user interface design as well. This paper describes new user interface toolkit techniques designed to support dynamic, even downloadable, appearance changes for graphical user interfaces. The long-term goal of this work is to create a system of styles analogous to current systems of fonts: a system for applying a style of visual appearance to an interface independent of the content of the interface, and for allowing such styles to be developed at least partially independent of specific user interface components, in many cases even supporting custom interactive components that did not exist when a style was created.
Keywords: User interface toolkits, User interface appearance, Look and feel, Constraint systems, Web-based interfaces, Java

Invited Talk

Flows in the Convergence of Television and Computing
  Jim Kajiya

Making Things Visible

Elastic Windows: A Hierarchical Multi-Window World-Wide Web Browser, pp. 169-177
  Eser Kandogan; Ben Shneiderman
The World-Wide Web is becoming an invaluable source for the information needs of many users. However, current browsers are still primitive in that they do not support many of the navigation needs of users, as indicated by user studies. They do not provide an overview and a sense of location in the information structure being browsed, nor do they facilitate organizing and filtering information or help users return to already-visited pages without high cognitive demands. In this paper, a new browsing interface is proposed with multiple hierarchical windows and efficient multiple-window operations. It provides a flexible environment where users can quickly organize, filter, and restructure the information on the screen as they reformulate their goals. Overviews can give the user a sense of location in the browsing history as well as provide fast access to a hierarchy of pages.
Keywords: World-Wide Web, Window management, Information visualization, User interfaces
Debugging Lenses: A New Class of Transparent Tools for User Interface Debugging, pp. 179-187
  Scott E. Hudson; Roy Rodenstein; Ian Smith
The visual and event driven nature of modern user interfaces, while a boon to users, can also make them more difficult to debug than conventional programs. This is because only the very surface representation of interactive objects -- their final visual appearance -- is visible to the programmer on the screen. The remaining "programming details" of the object remain hidden. If the appearance or behavior of an object is incorrect, often few clues are visible to indicate the cause. One must usually turn to text oriented debugging techniques (debuggers or simply print statements) which are separate from the interface, and often cumbersome to use with event-driven control flow.
   This paper describes a new class of techniques designed to aid in the debugging of user interfaces by making more of the invisible visible. This class of techniques, called debugging lenses, makes use of transparent lens interaction techniques to show debugging information. It is designed to work in situ -- in the context of a running interface, without stopping or interfering with that interface. This paper describes and motivates the class of techniques, gives a number of specific examples of debugging lenses, and describes their implementation in the subArctic user interface toolkit.
Keywords: Interactive debugging, Lens interaction techniques, Dynamic queries, Context-based rendering, User interface toolkits, subArctic, Java
An Interactive Visual Query Environment for Exploring Data, pp. 189-198
  Mark Derthick; John Kolojejchick; Steven F. Roth
Direct manipulation of visualizations is a powerful technique for performing exploratory data operations such as navigation, aggregation, and filtering. Its immediacy facilitates rapid, incremental, and reversible forays into the data. However, it does not provide for reuse or modification of exploration sessions. This paper describes a visual query language, VQE, that adds these capabilities to a direct manipulation exploration environment called Visage. Queries and visualizations are dynamically linked: operations on either one immediately update the other, in contrast to the feedforward sequence of database query followed by visualization of results common in traditional systems.
   These features are supported by the architectural concept of threads, which represent a sequence of navigation steps on particular objects. Because they are tied to particular data objects, they can be directly manipulated. Because they represent operations, they can be generalized into queries. We expect this technique to apply to direct manipulation interfaces to any object-oriented system that represents both objects and the relationships among them.
Note: Color versions of the figures are at, e.g., http://www.cs.cmu.edu/~sage/UIST97/figure1.gif

Blurring Physical and Virtual

A Virtual Office Environment Based on a Shared Room Realizing Awareness Space and Transmitting Awareness Information BIBAKPDF 199-207
  Shinkuro Honda; Hironari Tomioka; Takaaki Kimura; Takaharu Ohsawa; Kenichi Okada; Yutaka Matsushita
In this paper, we describe a system that provides a "work-at-home" environment based on a virtual shared room built on a 3D graphics workstation. The system realizes an "Awareness Space" to avoid the tradeoff between facilitating informal communication and shielding one's workspace from others' awareness information. It also conveys a feeling of presence in the virtual office through "Around View" and "Sound Effect".
Keywords: Virtual office, Informal communication, Personal space, Presence, Awareness space, Concentration
HoloWall: Designing a Finger, Hand, Body, and Object Sensitive Wall BIBAKPDF 209-210
  Nobuyuki Matsushita; Jun Rekimoto
This TechNote reports our initial results in realizing a computer-augmented wall called the HoloWall. Using an infrared camera located behind the wall, the system allows users to interact with the computerized wall using their fingers, hands, body, or even a physical object such as a document folder.
Keywords: Wall interfaces, Infrared, Augmented reality, Ubiquitous computing
Audio Aura: Light-Weight Audio Augmented Reality BIBAKPDF 211-212
  Elizabeth D. Mynatt; Maribeth Back; Roy Want; Ron Frederick
The physical world can be augmented with auditory cues allowing passive interaction by the user. By combining active badges, distributed systems, and wireless headphones, the movements of users through their workplace can trigger the transmission of auditory cues. These cues can summarize the activity of colleagues, notify users of the status of email or the start of a meeting, and remind them of tasks, such as retrieving a book, at opportune times. We are currently experimenting with a prototype audio augmented reality system, Audio Aura, at Xerox PARC. The goal of this work is to create an aura of auditory information that mimics existing background auditory awareness cues. We are prototyping sound designs for Audio Aura in VRML 2.0.
Keywords: Audio, Augmented reality, Auditory icons, Active badge, VRML
The Omni-Directional Treadmill: A Locomotion Device for Virtual Worlds BIBAKPDF 213-221
  Rudolph P. Darken; William R. Cockayne; David Carmein
The Omni-Directional Treadmill (ODT) is a revolutionary device for locomotion in large-scale virtual environments. The device allows its user to walk or jog in any direction of travel. It is the third generation in a series of devices built for this purpose for the U.S. Army's Dismounted Infantry Training Program. We first describe the device in terms of its construction and operating characteristics. We then report on an analysis consisting of a series of locomotion and maneuvering tasks on the ODT. We observed user motions, and system responses to those motions, from the perspective of the user. Each task is described in terms of which motions trigger unpredictable responses that cause loss of balance, or at least make the user consciously aware of their movements. We conclude that the two primary shortcomings of the ODT are its tracking system and the machine-control mechanisms for centering the user on the treads.
Keywords: Virtual reality, Virtual environments, Exertion devices, Input devices, Locomotion, Maneuvering
The MetaDESK: Models and Prototypes for Tangible User Interfaces BIBAKPDF 223-232
  Brygg Ullmer; Hiroshi Ishii
The metaDESK is a user interface platform demonstrating new interaction techniques we call "tangible user interfaces." We explore the physical instantiation of interface elements from the graphical user interface paradigm, giving physical form to windows, icons, handles, menus, and controls. The design and implementation of the metaDESK display, sensor, and software architectures are discussed. A prototype application driving an interaction with geographical space, Tangible Geospace, is presented to demonstrate these concepts.
Keywords: Tangible user interfaces, Input devices, Haptic input, Augmented reality, Ubiquitous computing