
Proceedings of the 1991 ACM Symposium on User Interface Software and Technology

Fullname: Proceedings of the 1991 ACM Symposium on User Interface Software and Technology
Editors: Jock Mackinlay
Location: Hilton Head, South Carolina
Dates: 1991-Nov-11 to 1991-Nov-13
Publisher: ACM
Standard No: ACM ISBN 0-89791-451-1; ACM Order Number 429913
Papers: 24
Pages: 240
  1. Virtual Workspaces
  2. Interactive Components
  3. Banquet
  4. CSCW
  5. UI Frameworks
  6. Input Techniques
  7. Constraint Techniques
  8. Internationalization
  9. UI Builders

Virtual Workspaces

Hybrid User Interfaces: Breeding Virtually Bigger Interfaces for Physically Smaller Computers, pp. 9-17
  Steven Feiner; Ari Shamash
While virtual worlds offer a compelling alternative to conventional interfaces, the technologies these systems currently use do not provide sufficient resolution and accuracy to support detailed work such as text editing. We describe a pragmatic approach to interface design that provides users with a large virtual world in which such high-resolution work can be performed. Our approach is based on combining heterogeneous display and interaction device technologies to produce a hybrid user interface. Display and interaction technologies that have relatively low resolution, but which cover a wide (visual and interactive) field are used to form an information surround. Display and interaction technologies that have relatively high resolution over a limited visual and interaction range are used to present concentrated information in one or more selected portions of the surround. These high-resolution fields are embedded within the low-resolution surround by choosing and coordinating complementary devices that permit the user to see and interact with both simultaneously. This allows each embedded high-resolution interface to serve as a "sweet spot" within which information may be preferentially processed.
   We have developed a preliminary implementation, described in this paper, that uses a Reflection Technology Private Eye display and a Polhemus sensor to provide the secondary low-resolution surround, and a flat-panel display and mouse to provide the primary high-resolution interface.
Keywords: Software engineering, Tools and techniques, User interfaces, Operating systems, Systems programs and utilities, Window managers, Information interfaces and presentation, Multimedia information systems, Artificial realities, Information interfaces and presentation, User interfaces, Windowing systems, Computer graphics, Methodology and techniques, Interaction techniques, Computer graphics, Three-dimensional graphics and realism, Virtual reality, Head-mounted displays, Virtual worlds, Multiple displays, Portable computers
On Temporal-Spatial Realism in the Virtual Reality Environment, pp. 19-25
  Jiandong Liang; Chris Shaw; Mark Green
The Polhemus Isotrak is often used as an orientation and position tracking device in virtual reality environments. When it is used to dynamically determine the user's viewpoint and line of sight (e.g. in the case of a head mounted display) the noise and delay in its measurement data causes temporal-spatial distortion, perceived by the user as jittering of images and lag between head movement and visual feedback. To tackle this problem, we first examined the major cause of the distortion, and found that the lag felt by the user is mainly due to the delay in orientation data, and the jittering of images is caused mostly by the noise in position data. Based on these observations, a predictive Kalman filter was designed to compensate for the delay in orientation data, and an anisotropic low pass filter was devised to reduce the noise in position data. The effectiveness and limitations of both approaches were then studied, and the results shown to be satisfactory.
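The two-part filtering strategy this abstract describes -- predicting orientation ahead to hide measurement delay, and low-pass filtering position to suppress jitter -- can be illustrated with a minimal sketch. The class and constants below are hypothetical stand-ins (a constant-velocity predictor and exponential smoothing), not the authors' Kalman or anisotropic filter designs:

```python
# Illustrative sketch only: predict orientation ahead by a fixed lead time
# using finite-difference velocity, and low-pass filter position readings.
# All names and constants here are hypothetical.
class TrackerSmoother:
    def __init__(self, lead_time=0.05, alpha=0.2):
        self.lead_time = lead_time      # seconds of delay to predict ahead
        self.alpha = alpha              # smoothing weight for position noise
        self.prev_orientation = None
        self.prev_time = None
        self.filtered_position = None

    def update(self, t, orientation, position):
        # Orientation: extrapolate ahead to compensate for sensor delay.
        if self.prev_orientation is None:
            predicted = list(orientation)
        else:
            dt = max(t - self.prev_time, 1e-6)
            predicted = [o + (o - p) / dt * self.lead_time
                         for o, p in zip(orientation, self.prev_orientation)]
        self.prev_orientation, self.prev_time = list(orientation), t

        # Position: exponential low-pass filter to reduce jitter.
        if self.filtered_position is None:
            self.filtered_position = list(position)
        else:
            self.filtered_position = [self.alpha * n + (1 - self.alpha) * f
                                      for n, f in zip(position, self.filtered_position)]
        return predicted, self.filtered_position

smoother = TrackerSmoother()
print(smoother.update(0.00, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)))
print(smoother.update(0.02, (1.0, 0.0, 0.0), (0.1, 0.0, 0.0)))
```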
The DigitalDesk Calculator: Tangible Manipulation on a Desk Top Display, pp. 27-33
  Pierre Wellner
Today's electronic desktop is quite separate from the physical desk of the user. Electronic documents lack many useful properties of paper, and paper lacks useful properties of electronic documents. Instead of making the electronic desktop more like the physical desk, this work attempts the opposite: to give the physical desk electronic properties and merge the two desktops into one. This paper describes a desk with a computer-controlled camera and projector above it. The camera sees where the user is pointing, and it reads portions of documents that are placed on the desk. The projector displays feedback and electronic objects onto the desk surface. This DigitalDesk adds electronic features to physical paper, and it adds physical features to electronic documents. The system allows the user to interact with paper and electronic objects by touching them with a bare finger (digit). Instead of "direct" manipulation with a mouse, this is tangible manipulation with a finger. The DigitalDesk Calculator is a prototype example of a simple application that can benefit from the interaction techniques enabled by this desktop. The paper begins by discussing the motivation behind this work, then describes the DigitalDesk, tangible manipulation, and the calculator prototype. It then discusses implementation details and ends with ideas for the future of tangible manipulation.
Keywords: User interface, Interaction technique, Display, Input device, Workstation, Desk, Desktop

Interactive Components

Buttons as First Class Objects on an X Desktop, pp. 35-44
  George G. Robertson; D. Austin Henderson, Jr.; Stuart K. Card
A high-level user interface toolkit, called XButtons, has been developed to support on-screen buttons as first class objects on an X window system desktop. With the toolkit, buttons can be built that connect user interactions with procedures specified as arbitrary Unix Shell scripts. As first class desktop objects, these buttons encapsulate appearance and behaviour that is user tailorable. They are persistent objects and may store state relevant to the task they perform. They can also be mailed to other users electronically. In addition to being first class desktop objects, XButtons are gesture-based with multiple actions. They support other interaction styles, like the drag and drop metaphor, in addition to simple button click actions. They also may be concurrently shared among users, with changes reflected to all users of the shared buttons. This paper describes the goals of XButtons and the history of button development that led to XButtons. It also describes XButtons from the user's point of view. Finally, it discusses some implementation issues encountered in building XButtons on top of the X window system.
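The core idea -- a persistent desktop button whose behaviour is an arbitrary shell script and whose state travels with it -- can be illustrated with a minimal sketch. The class, file format, and example command below are hypothetical and are not the XButtons implementation:

```python
# Minimal sketch, not XButtons: a button object that runs a shell script when
# pressed, keeps task-relevant state, and can be saved (or mailed) as a file.
import json
import subprocess

class ShellButton:
    def __init__(self, label, script, state=None):
        self.label = label         # user-tailorable appearance
        self.script = script       # arbitrary shell command
        self.state = state or {}   # state stored with the button

    def press(self):
        result = subprocess.run(self.script, shell=True,
                                capture_output=True, text=True)
        self.state["last_output"] = result.stdout   # remember the last run
        return result.returncode

    def save(self, path):
        # Persistence: the button outlives the session and could be sent
        # to another user as an ordinary file.
        with open(path, "w") as f:
            json.dump({"label": self.label, "script": self.script,
                       "state": self.state}, f)

button = ShellButton("Greet", "echo hello from a desktop button")
button.press()
button.save("greet.button")
```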
EmbeddedButtons: Documents as User Interfaces, pp. 45-53
  Eric A. Bier
Recent electronic document editors and hypertext systems allow users to create customized user interfaces by adding user-pressable buttons to on-screen documents. Positioning these buttons is easy because users are already familiar with the use of document editors. Unfortunately, the resulting user interfaces often exist only in stand-alone document systems, making it hard to integrate them with other applications. Furthermore, because buttons are usually treated as special document objects, they cannot take advantage of document editor formatting and layout capabilities to create their appearance. This paper describes the EmbeddedButtons architecture, which makes it easy to integrate buttons into documents and to use the resulting documents for a variety of user interface types. EmbeddedButtons allows arbitrary document elements to behave as buttons. Documents can be linked to application windows to serve as application control panels. Buttons can store and display application state to serve as mode indicators. New button classes, editors, and applications can be added dynamically.
Keywords: Active documents, User interface layout, Buttons
Interactive Graph Layout, pp. 55-64
  Tyson R. Henry; Scott E. Hudson
This paper presents a novel methodology for viewing large graphs. The basic concept is to allow the user to interactively navigate through large graphs, learning about them in appropriately small and concise pieces. An architecture is presented to support graph exploration. It contains methods for building custom layout algorithms hierarchically, interactively decomposing large graphs, and creating interactive parameterized layout algorithms. As a proof of concept, examples are drawn from a working prototype that incorporates this methodology.

Banquet

A Nose Gesture Interface Device: Extending Virtual Realities, pp. 65-68
  Tyson R. Henry; Scott E. Hudson; Andrey K. Yeatts; Brad A. Myers; Steven Feiner
This paper reports on the development of a nose-machine interface device that provides real-time gesture, position, smell and facial expression information. The DATANOSE™ (Data AtomaTa CORNUCOPIA pNeumatic Olfactory I/O-deviSE Tactile Manipulation) [Olsen86, Myers91] allows novice users without any formal nose training to perform complex interactive tasks.

CSCW

Primitives for Programming Multi-User Interfaces, pp. 69-78
  Prasun Dewan; Rajiv Choudhary
We have designed a set of primitives for programming multi-user interfaces by extending a set of existing high-level primitives for programming single-user interfaces. These primitives support both collaboration-transparent and collaboration-aware multi-user programs and allow existing single-user programs to be incrementally changed to corresponding multi-user programs. The collaboration-aware primitives include primitives for tailoring the input and output to a user, authenticating users, executing code in a user's environment and querying and setting properties of it, and tailoring the user interface coupling. We have identified several application-independent user groups that arise in a collaborative setting and allow the original single-user calls to be targeted at these groups. In addition, we provide primitives for defining application-specific groups. Our preliminary experience with these primitives shows that they can be used to easily implement collaborative tasks of a wide range of applications including message systems, multi-user editors, computer conferencing systems, and coordination systems. In this paper, we motivate, describe, and illustrate these primitives, discuss how primitives similar to them can be offered by a variety of user interface tools, and point out future directions for work.
MMM: A User Interface Architecture for Shared Editors on a Single Screen, pp. 79-86
  Eric A. Bier; Steve Freeman
There is a growing interest in software applications that allow several users to simultaneously interact with computer applications either in the same room or at a distance. Much early work focused on sharing existing single-user applications across a network. The Multi-Device Multi-User Multi-Editor (MMM) project is developing a user interface and software architecture to support a new generation of editors specifically designed to be used by groups, including groups who share a single screen. Each user has his or her own modes, style settings, insertion points, and feedback. Screen space is conserved by reducing the size and number of on-screen tools. The editors use per-user data structures to respond to multi-user input.
Keywords: Conference-aware editors, Single-screen collaboration, Per-user customization, Home areas
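The per-user data structures mentioned in the abstract can be sketched, purely for illustration, as a table keyed by user (or device) so that each participant's mode and insertion point stay independent. The classes and method names below are hypothetical, not MMM's architecture:

```python
# Illustrative sketch only: keep editor state per user so simultaneous input
# from several devices on one screen does not collide.
from dataclasses import dataclass, field

@dataclass
class UserState:
    mode: str = "text"
    style: dict = field(default_factory=lambda: {"font": "fixed"})
    insertion_point: int = 0

class SharedEditor:
    def __init__(self):
        self.document = []
        self.users = {}                         # user id -> UserState

    def state_for(self, user_id):
        return self.users.setdefault(user_id, UserState())

    def handle_key(self, user_id, char):
        s = self.state_for(user_id)             # this user's own state
        self.document.insert(s.insertion_point, (char, s.style["font"]))
        s.insertion_point += 1                  # only this user's caret moves

editor = SharedEditor()
editor.handle_key("alice", "a")
editor.handle_key("bob", "b")                   # independent insertion points
```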
Comparing the Programming Demands of Single-User and Multi-User Applications, pp. 87-94
  John F. Patterson
Synchronous multi-user applications are designed to support two or more simultaneous users. The RENDEZVOUS system is an infrastructure for building such multi-user applications. Several multi-user applications, such as a tic-tac-toe game, a multi-user CardTable application, and a multi-user whiteboard, have been or are being constructed with the RENDEZVOUS system.
   We argue that there are at least three dimensions of programming complexity that are differentially affected by the programming of multi-user applications as compared to the programming of single-user applications. The first, concurrency, addresses the need to cope with parallel activities. The second dimension, abstraction, addresses the need to separate the user-interface from an underlying application abstraction. The third dimension, roles, addresses the need to differentially characterize users and customize the user-interface appropriately. Certainly, single-user applications often deal with these complexities; we argue that multi-user applications cannot avoid them.
Keywords: User interface management systems, Computer-supported cooperative work, Groupware, Programming, Dialogue separation, Concurrency, Roles

UI Frameworks

The PICASSO Application Framework, pp. 95-105
  Lawrence A. Rowe; Joseph A. Konstan; Brian C. Smith; Steve Seitz; Chung Liu
PICASSO is a graphical user interface development system that includes an interface toolkit and an application framework. The application framework provides high-level abstractions including modal dialog boxes and non-modal frames and panels similar to conventional programming language procedures and co-routines. These abstractions can be used to define objects that have local variables and that can be called with parameters.
   PICASSO also has a constraint system that is used to bind program variables to widgets, to implement triggered behaviors, and to implement multiple views of data. The system is implemented in Common Lisp using the Common Lisp Object System and the CLX interface to the X Window System.
Keywords: Graphical user interface development environment, Application framework, User interface toolkit, User interfaces
An Event-Object Recovery Model for Object-Oriented User Interfaces, pp. 107-115
  Haiying Wang; Mark Green
An important aspect of interactive systems is the provision of a recovery facility that allows the user to reverse the effects of his interactions with the system. Due to differences between object-oriented and non-object-oriented methodologies, user recovery approaches used for non-object-oriented software are not suitable for object-oriented software. This paper presents an event-object user recovery model for the construction of recovery facilities in object-oriented user interfaces. Our approach divides traditional history/command lists into per-object lists which fit well with object-oriented structure. Unique features of this framework are the hierarchical structure of the local recovery objects that reflect the application structure, its simple semantics, and its ease of implementation, which greatly reduces the effort required by the interface builder to incorporate it into existing object-oriented user interface structures. We introduce this framework by describing the event-object model, defining the protocol used by the local facilities to perform user recovery, and presenting examples of how the framework is used.
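The per-object history idea the abstract describes can be illustrated with a small sketch in which every interface object records enough of each event to reverse it locally. The class below is a hypothetical illustration, not the paper's event-object model:

```python
# Illustrative sketch only: each object keeps its own recovery history,
# so undo is performed per object rather than against one global list.
class Recoverable:
    def __init__(self, name):
        self.name = name
        self.state = {}
        self.history = []                       # local, per-object event list

    def apply(self, attribute, new_value):
        self.history.append((attribute, self.state.get(attribute)))
        self.state[attribute] = new_value

    def undo(self):
        if self.history:
            attribute, old = self.history.pop()
            if old is None:
                self.state.pop(attribute, None)
            else:
                self.state[attribute] = old

circle = Recoverable("circle")
circle.apply("radius", 10)
circle.apply("radius", 20)
circle.undo()
print(circle.state["radius"])   # 10 -- only this object's history is touched
```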
SUIT: The Pascal of User Interface Toolkits, pp. 117-125
  Randy Pausch; Nathaniel R. Young, II; Robert DeLine
User interface support software, such as UI toolkits, UIMSs, and interface builders, is currently too complex for undergraduates. Tools typically require a learning period of several weeks, which is impractical in a semester course. Most tools are also limited to a specific platform, usually either Macintosh, DOS, or UNIX/X. This is problematic for students who switch from DOS or Macintosh machines to UNIX machines as they move through the curriculum. The situation is similar to programming languages before the introduction of Pascal, which provided an easily ported, easily learned language for undergraduate instruction.
   SUIT (the Simple User Interface Toolkit) is a C subroutine library which provides an external control UIMS, an interactive layout editor, and a set of standard screen objects. SUIT applications run transparently across Macintosh, DOS, UNIX/X, and Silicon Graphics platforms. Through careful design and extensive user testing of the system and its documentation, we have been able to reduce learning time. We have formally measured that new users are productive with SUIT in less than three hours. SUIT currently has over one hundred students using it for undergraduate and graduate course work and for research projects.
Keywords: Interface builder, UI toolkit, UIMS, Pedagogy, Portability, Software engineering

Input Techniques

Stylus User Interfaces for Manipulating Text, pp. 127-135
  David Goldberg; Aaron Goodisman
This paper is concerned with pen-based (also called stylus-based) computers. Two of the key questions for such computers are how to interface to handwriting recognition algorithms, and whether there are interfaces that can effectively exploit the differences between a stylus and a keyboard/mouse.
   We describe prototypes that explore each of these questions. Our text entry tool is designed around the idea that handwriting recognition algorithms will always be error prone, and has a different flavor from existing systems. Our prototype editor goes beyond the usual gesture editors used with styli and is based on the idea of leaving the markups visible.
Issues in Combining Marking and Direct Manipulation Techniques, pp. 137-144
  Gordon Kurtenbach; William Buxton
The direct manipulation paradigm has been effective in helping designers create easy-to-use mouse and keyboard based interfaces. The development of flat display surfaces and transparent tablets is now making possible interfaces where a user can write directly on the screen using a special stylus. The intention of these types of interfaces is to exploit users' existing handwriting, mark-up and drawing skills while also providing the benefits of direct manipulation. This paper reports on a test bed program which we are using for exploring hand-marking types of interactions and their integration with direct manipulation interactions.
Keywords: Markings, Gestures, Stylus, Pen-based interfaces, Direct manipulation
Smoothly Integrating Rule-Based Techniques into a Direct Manipulation Interface Builder, pp. 145-153
  Scott E. Hudson; Andrey K. Yeatts
Work in automating the production of user interface software has recently concentrated on two distinct approaches: systems that provide a direct manipulation editor for specifying user interfaces and systems that attempt to automatically generate much or all of the interface. This paper considers how a middle ground between these approaches can be constructed. It presents a technique whereby the rule-based inference methods used in many automatic generation systems can be smoothly integrated into a direct manipulation interface builder. This integration is achieved by explicitly representing the results of inference rules in the direct manipulation framework and by using semantic snapping techniques to give the user direct feedback and interactive control over the application of rules.
Keywords: User interface management systems, Interface builders, Automatic user interface generation, Rule-based inference, Direct manipulation, Semantic feedback, Semantic snapping

Constraint Techniques

The Importance of Pointer Variables in Constraint Models, pp. 155-164
  Brad Vander Zanden; Brad A. Myers; Dario Giuse; Pedro Szekely
Graphical tools are increasingly using constraints to specify the graphical layout and behavior of many parts of an application. However, conventional constraints directly encode the objects they reference, and thus cannot provide support for the dynamic runtime creation and manipulation of application objects. This paper discusses an extension to current constraint models that allows constraints to indirectly reference objects through pointer variables. Pointer variables permit programmers to create the constraint equivalent of procedures in traditional programming languages. This procedural abstraction allows constraints to model a wide array of dynamic application behavior, simplifies the implementation of structured object and demonstrational systems, and improves the storage and efficiency of highly interactive, graphical applications. It also promotes a simpler, more effective style of programming than conventional constraints. Constraints that use pointer variables are powerful enough to allow a comprehensive user interface toolkit to be built for the first time on top of a constraint system.
Keywords: Constraints, Development tools, Incremental algorithms
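The key mechanism the abstract describes -- a constraint formula that reaches its operands indirectly through a pointer variable, so the same formula can be retargeted at run time -- can be illustrated with a toy sketch. The classes below are hypothetical and far simpler than the authors' constraint system:

```python
# Illustrative sketch only: a layout constraint that refers to another object
# through a pointer slot; reassigning the pointer retargets the constraint
# without rewriting its formula.
class Box:
    def __init__(self, left, width):
        self.left, self.width = left, width

class Label:
    def __init__(self, target):
        self.target = target                                  # pointer variable
        # The formula mentions only self.target, never a particular box.
        self.left_formula = lambda: self.target.left + self.target.width + 10

    @property
    def left(self):
        return self.left_formula()                            # evaluated on demand

a, b = Box(0, 50), Box(100, 30)
label = Label(a)
print(label.left)     # 60: positioned to the right of box a
label.target = b      # retarget the pointer at run time
print(label.left)     # 140: the same formula now follows box b
```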
A General Framework for Bi-Directional Translation between Abstract and Pictorial Data, pp. 165-174
  Shin Takahashi; Satoshi Matsuoka; Akinori Yonezawa
The merits of direct manipulation (DM) are now widely recognized. However, DM interfaces incur a high cost in their creation. To cope with this problem, we present a model of bi-directional translation between the internal abstract data of applications and pictures, and have created a prototype system, TRIP2, based on this model. Using this model, general mapping from abstract data to pictures, and from pictures to abstract data, is realized merely by giving declarative mapping rules, allowing fast and effortless creation of DM interfaces. We also apply the prototype system to the generation of interfaces for kinship diagrams, graph diagrams, and an Othello game.
Keywords: Bi-directional translation, Direct manipulation, User interface, User interface management system, Visualization

Internationalization

A Model for Input and Output of Multilingual Text in a Windowing Environment, pp. 175-183
  Yutaka Kataoka; Masato Morisaki; Hiroshi Kuribayashi; Hiroyoshi Ohara
A multilingual Input/Output (I/O) system has been designed based on topological studies of writing conventions of major world languages. Designed as a layered structure, it unifies common features of writing conventions and is intended to ease international and local functionalities. The input module of the internationalization layer converts phonograms to ideograms. The corresponding output module displays position-independent characters and position-dependent characters. The localization layer positions highly language-specific functions outside the structure. These functions are integrated as tables and servers to add new languages without the necessity of compilation.
   The I/O system interactively generates both stateful and stateless code sets. Beyond the boundaries of the POSIX locale model, the system generates ISO 2022, ISO/DIS 10646, and Compound Text, defined for the interchange encoding format in X11 protocols, for basic polyglot text processing. Possessing the capability of generating multilingual code sets, this I/O system clearly shows that code sets should be selected by applications with purposes beyond the selection of one element from a set of localization. Likewise, functionality and functions relating to text manipulation in an operating system should be determined by such applications.
   A subset of this I/O system was implemented in the X window system as a basic X11R5 I/O capability by supplying basic code set generation and string manipulation to avoid interference from operating systems. To ensure the possibility of polyglot string manipulation, the I/O system clearly should be implemented separately from the operating system with its limitations.
Keywords: Internationalization, Multilingual, Multiwindow, Input method, Output method, X window systems, Linguistics
XJp System: An Internationalized Language Interface for the X Window System, pp. 185-193
  Masato Morisaki; Etsuo Kawada; Hiroshi Kuribayashi; Seiji Kuwari; Masahiko Narita
This paper discusses the internationalization of the X Window System developed by the MIT X consortium. The main purpose is to enable X Window System Release 4 (X11R4) and earlier versions to support Asian languages, primarily Japanese, Chinese, and Korean. Unlike English and other European-based languages, Asian languages involve ideographic character manipulation. X Window System X11R4 and earlier versions can output such ideograms when the corresponding fonts are provided, but they have no corresponding input feature. Asian language input thus involves more than one keystroke to input a single ideogram, e.g., Japanese-language input uses romaji-kana-kanji conversion. This paper proposes an ideogram input architecture on the X Window System and discusses the interfaces between conversion systems and X application programs. Like the X Window System, our input-conversion feature is oriented to a distributed network environment.
Keywords: Internationalization, X window systems, Multibyte input, Ideographic language input
A Flexible Chinese Character Input Scheme, pp. 195-200
  S. C. Hsu
A very flexible and easy-to-use scheme which possesses unique advantages over existing systems is presented in this article. The scheme is based on the partitioning of a character into parts. A character is input by specifying a sequence of character-part descriptions, which is then matched against the standard sequences of the characters in the character set. A character part is described either with a unique key or with its stroke count. The matching algorithm allows the characters to be partitioned flexibly and input in many different ways. An automatic binding mechanism offers very high adaptability to the input style of the user. The user need not remember all the key bindings before he can input Chinese, and the scheme is also capable of tolerating many variations in character style and/or errors.
Keywords: Chinese input, Chinese text processing
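The matching step the abstract describes -- a typed sequence of character-part descriptions, each given either as a part's key or as its stroke count, matched against stored standard sequences -- can be sketched as follows. The part keys, stroke counts, and two-character "character set" below are toy data invented for illustration, not the paper's tables:

```python
# Illustrative sketch only, with invented toy data: match part descriptions
# (a part key or a stroke count) against each character's standard sequence.
PARTS = {"tree": 4, "mouth": 3, "sun": 4}        # part key -> stroke count

CHARACTERS = {                                   # character -> standard parts
    "杏": ["tree", "mouth"],
    "東": ["tree", "sun"],
}

def matches(description, part_key):
    # A description matches a part by its key or by its stroke count.
    return description == part_key or description == PARTS[part_key]

def lookup(descriptions):
    return [ch for ch, parts in CHARACTERS.items()
            if len(parts) == len(descriptions)
            and all(matches(d, p) for d, p in zip(descriptions, parts))]

print(lookup(["tree", "mouth"]))   # ['杏']  -- parts given by key
print(lookup([4, 3]))              # ['杏']  -- the same parts by stroke count
print(lookup([4, 4]))              # ['東']  -- a different description style
```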

UI Builders

A Unidraw-Based User Interface Builder, pp. 201-210
  John M. Vlissides; Steven Tang
Ibuild is a user interface builder that lets a user manipulate simulations of toolkit objects rather than actual toolkit objects. Ibuild is built with Unidraw, a framework for building graphical editors that is part of the InterViews toolkit. Unidraw makes the simulation-based approach attractive. Simulating toolkit objects in Unidraw makes it easier to support editing facilities that are common in other kinds of graphical editors, and it keeps the builder insulated from a particular toolkit implementation. Ibuild supports direct manipulation analogs of InterViews' composition mechanisms, which simplify the specification of an interface's layout and resize semantics. Ibuild also leverages the C++ inheritance mechanism to decouple builder-generated code from the rest of the application. And while current user interface builders stop at the widget level, ibuild incorporates Unidraw abstractions to simplify the implementation of graphical editors.
Keywords: User interface builders, User interface toolkits, Direct manipulation, Graphical constraints
Separating Application Code from Toolkits: Eliminating the Spaghetti of Call-Backs, pp. 211-220
  Brad A. Myers
Conventional toolkits today require the programmer to attach call-back procedures to most buttons, scroll bars, menu items, and other widgets in the interface. These procedures are called by the system when the user operates the widget in order to notify the application of the user's actions. Unfortunately, real interfaces contain hundreds or thousands of widgets, and therefore many call-back procedures, most of which perform trivial tasks, resulting in a maintenance nightmare. This paper describes a system that allows the majority of these procedures to be eliminated. The user interface designer can specify by demonstration many of the desired actions and connections among the widgets, so call-backs are only needed for the most significant application actions. In addition, the call-backs that remain are completely insulated from the widgets, so that the application code is better separated from the user interface.
Keywords: Call-back procedures, Dialog boxes, UIMSs, Interface builders
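The contrast the abstract draws can be illustrated with a toy widget sketch: a trivial call-back that only copies one widget's value into another, and a declarative connection that makes such glue procedures unnecessary. The widget API below is hypothetical and is not the paper's system:

```python
# Illustrative sketch only, with a hypothetical widget API.
class Widget:
    def __init__(self, value=0):
        self.value = value
        self._listeners = []

    def on_change(self, fn):
        self._listeners.append(fn)

    def set_value(self, v):
        self.value = v
        for fn in self._listeners:
            fn(v)

scrollbar, number_field = Widget(), Widget()

# Conventional style: a trivial call-back written by hand for every such link.
scrollbar.on_change(lambda v: number_field.set_value(v))

# Declarative style: one reusable connection replaces the hand-written glue,
# so call-backs remain only for genuinely application-specific actions.
def connect(source, target):
    source.on_change(target.set_value)

another_field = Widget()
connect(scrollbar, another_field)

scrollbar.set_value(42)
print(number_field.value, another_field.value)   # 42 42
```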
A Demonstrational Technique for Developing Interfaces with Dynamically Created Objects, pp. 221-230
  David Wolber; Gene Fisher
The development of user interfaces is often facilitated by the use of a drawing editor. The user interface specialist draws pictures of the different "states" of the interface and passes these specifications on to the programmer. The user interface specialist might also use the drawing editor to demonstrate to the programmer the interactive behavior that the interface should exhibit; that is, he might demonstrate to the programmer the actions that an end-user can perform and the graphical manner in which the application should respond to the end-user's stimuli. From the specifications and the in-person demonstrations, the programmer implements a prototype of the interface.
   DEMO is a User Interface Development System (UIDS) that eliminates the programmer from the above process. Using an enhanced drawing editor, the user interface specialist demonstrates the actions of the end-user and the system, just as he would if the programmer were watching. However, no programmer is necessary: DEMO records these demonstrations, makes generalizations from them, and automatically generates a prototype of the interface.
Keywords: User interface development system