| User Interfaces in GigaPC Environments | | BIBA | i | |
| Raj Reddy | |||
| Current projections indicate that a PC capable of a billion operations per second and costing about $3,000 will be available by around 1998. User interfaces are one of the few areas that can beneficially use such computational power. In this talk, I will present a number of user interface research areas, such as multimedia interfaces, self-improving interfaces, intelligent help facilities, interfaces that can provide advice on efficient uses of the system, and systems that can tolerate error and ambiguity. | |||
| Interactive Shadows | | BIBAK | PDF | 1-6 | |
| Kenneth P. Herndon; Robert C. Zeleznik; Daniel C. Robbins; D. Brookshire Conner; Scott S. Snibbe; Andries van Dam | |||
| It is often difficult in computer graphics applications to understand
spatial relationships between objects in a 3D scene or effect changes to those
objects without specialized visualization and manipulation techniques. We
present a set of three-dimensional tools (widgets) called "shadows" that not
only provide valuable perceptual cues about the spatial relationships between
objects, but also provide a direct manipulation interface to constrained
transformation techniques. These shadow widgets provide two advances over
previous techniques. First, they provide high correlation between their own
geometric feedback and their effects on the objects they control. Second,
unlike some other 3D widgets, they do not obscure the objects they control. Keywords: Direct manipulation, 3D widgets, Interactive systems | |||
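As an illustration of the constrained-manipulation idea in this abstract, the sketch below projects an object onto a ground plane and lets dragging the shadow translate the object only within that plane. The class and method names are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Object3D:
    x: float
    y: float
    z: float

class GroundShadow:
    """A shadow widget: the object's projection onto the y=0 plane.

    Dragging the shadow translates the object parallel to the ground,
    so the widget's geometric feedback matches its effect on the object
    without obscuring it.
    """

    def __init__(self, target: Object3D):
        self.target = target

    @property
    def position(self):
        # The shadow sits directly beneath the object on the ground plane.
        return (self.target.x, 0.0, self.target.z)

    def drag(self, dx: float, dz: float):
        # Constrained transformation: only x and z change, y is preserved.
        self.target.x += dx
        self.target.z += dz

if __name__ == "__main__":
    chair = Object3D(x=1.0, y=0.75, z=2.0)
    shadow = GroundShadow(chair)
    shadow.drag(dx=0.5, dz=-1.0)
    print(shadow.position, chair.y)   # (1.5, 0.0, 1.0) 0.75
```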
| Two-Handed Gesture in Multi-Modal Natural Dialogue | | BIBAK | PDF | 7-14 | |
| Richard A. Bolt; Edward Herranz | |||
| Tracking both hands in free-space with accompanying speech input can augment
the user's ability to communicate with computers. This paper discusses the
kinds of situations which call for two-handed input and not just the single
hand, and reports a prototype in which two-handed gestures serve to input
concepts, both static and dynamic, manipulate displayed items, and specify
actions to be taken. Future directions include enlargement of the vocabulary
of two-handed "coverbal" gestures and the modulation by gaze of gestural
intent. Keywords: Gestural input, Multi-modal interaction, Natural dialog | |||
| A Testbed for Characterizing Dynamic Response of Virtual Environment Spatial Sensors | | BIBAK | PDF | 15-22 | |
| Bernard D. Adelstein; Eric R. Johnston; Stephen R. Ellis | |||
| This paper describes a testbed and method for characterizing the dynamic
response of the type of spatial displacement transducers commonly used in
virtual environment (VE) applications. The testbed consists of a motorized
rotary swing arm that imparts known displacement inputs to the VE sensor. The
experimental method involves a series of tests in which the sensor is displaced
back and forth at a number of controlled frequencies that span the bandwidth of
volitional human movement. During the tests, actual swing arm angle and
reported VE sensor displacements are collected and time stamped. Because of
the time stamping technique, the response time of the sensor can be measured
directly, independently of latencies in data transmission from the sensor unit
and any processing by the interface application running on the host computer.
Analysis of these experimental results allows sensor time delay and gain
characteristics to be determined as functions of input frequency. Results
from tests of several different VE spatial sensors (Ascension, Logitech, and
Polhemus) are presented here to demonstrate use of the testbed and method. Keywords: Virtual environments, Input devices, Spatial sensors, System calibration,
Sensor lag | |||
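The analysis step described above (gain and delay as functions of drive frequency) can be illustrated with a small sinusoid-fitting sketch. The fitting procedure and names below are assumptions made for illustration; the paper's exact analysis may differ.

```python
import numpy as np

def gain_and_delay(t, arm_angle, sensor_angle, freq_hz):
    """Fit a sinusoid at freq_hz to both signals and compare amplitude/phase."""
    w = 2.0 * np.pi * freq_hz

    def fit(signal):
        # Least-squares fit: signal ~ a*sin(wt) + b*cos(wt) + c
        A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
        a, b, _ = np.linalg.lstsq(A, signal, rcond=None)[0]
        amplitude = np.hypot(a, b)
        phase = np.arctan2(b, a)
        return amplitude, phase

    amp_in, ph_in = fit(arm_angle)
    amp_out, ph_out = fit(sensor_angle)

    gain = amp_out / amp_in
    phase_lag = (ph_in - ph_out) % (2.0 * np.pi)   # output lags input
    delay_s = phase_lag / w                        # phase lag -> time delay
    return gain, delay_s

if __name__ == "__main__":
    # Synthetic check: 2 Hz motion, sensor reports it 30 ms late at 0.95 gain.
    t = np.linspace(0.0, 5.0, 2000)
    arm = 10.0 * np.sin(2 * np.pi * 2.0 * t)
    sensor = 9.5 * np.sin(2 * np.pi * 2.0 * (t - 0.030))
    print(gain_and_delay(t, arm, sensor, 2.0))     # ~ (0.95, 0.030)
```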
| The Information Grid: A Framework for Information Retrieval and Retrieval-Centered Applications | | BIBA | PDF | 23-32 | |
| Ramana Rao; Stuart K. Card; Herbert D. Jellinek; Jock D. Mackinlay; George G. Robertson | |||
| The Information Grid (InfoGrid) is a framework for building information access applications that provides a user interface design and an interaction model. It focuses on retrieval of application objects as its top-level mechanism for accessing user information, documents, or services. We have embodied the InfoGrid design in an object-oriented application framework that supports rapid construction of applications. This application framework has been used to build a number of applications, some that are classically characterized as information retrieval applications, others that are more typically viewed as personal work tools. | |||
| Frameworks for Interactive, Extensible, Information-Intensive Applications | | BIBAK | PDF | 33-41 | |
| Craig L. Zarmer; Chee Chew | |||
| We describe a set of application frameworks designed especially to support
information-intensive applications in complex domains, where the visual
organization of an application's information is critical. Our frameworks,
called visual formalisms, provide the semantic structures and editing
operations, as well as the visual layout algorithms, needed to create a
complete application. Examples of visual formalisms include tables, panels,
graphs, and outlines. They are designed to be extended both by programmers,
through subclassing, and by end users, through an integrated extension
language. Keywords: Application frameworks, User interface toolkits, User interface management
systems, Builders, End user programming | |||
| An Explanatory and "Argumentative" Interface for a Model-Based Diagnostic System | | BIBA | PDF | 43-52 | |
| Christopher A. Miller; Raymond Larson | |||
| That intelligent systems need an explanatory capability if they are to aid or support human users has long been understood. A system that can justify its decisions generally obtains improved user trust and greater accuracy in use, and offers embedded training potential. Extensive work has been done to provide rule-based systems with explanatory interfaces, but little has been done to provide the same benefits for model-based systems. We develop an approach to organizing the presentation of large amounts of model-based data in an interactive format patterned after a model of human-human explanatory and argumentative discourse. Portions of this interface were implemented for Honeywell's model-based Flight Control Maintenance and Diagnostic System (FCMDS). We conclude that sufficient information exists in a model-based system to provide a wide range of explanation types, and that the discourse approach is a convenient, powerful, and broadly applicable method of organizing and controlling information exchange involving this data. | |||
| Techniques for Low Cost Spatial Audio | | BIBA | PDF | 53-59 | |
| David A. Burgess | |||
| There are a variety of potential uses for interactive spatial sound in human-computer interfaces, but hardware costs have made most of these applications impractical. Recently, however, single-chip digital signal processors have made real-time spatial audio an affordable possibility for many workstations. This paper describes an efficient spatialization technique and the associated computational requirements. Issues specific to the use of spatial audio in user interfaces are addressed. The paper also describes the design of a network server for spatial audio that can support a number of users at modest cost. | |||
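For readers unfamiliar with the term, a spatializer takes a mono signal and produces per-ear signals that differ in delay and level. The sketch below uses a crude interaural time and level difference model purely as an illustration of what such a computation involves; it is not the technique described in the paper, whose method and DSP implementation are not detailed in this abstract.

```python
import numpy as np

def spatialize(mono, azimuth_deg, sample_rate=44100):
    """Return (left, right) channels for a mono signal at a given azimuth."""
    azimuth = np.radians(azimuth_deg)

    # Crude interaural time difference: up to ~0.6 ms between the ears.
    itd_samples = int(round(0.0006 * sample_rate * np.sin(azimuth)))

    # Crude interaural level difference: attenuate the far ear.
    gain_near, gain_far = 1.0, 0.6

    delayed = np.concatenate([np.zeros(abs(itd_samples)), mono])
    padded = np.concatenate([mono, np.zeros(abs(itd_samples))])

    if itd_samples >= 0:          # source to the right: left ear is far/late
        left, right = gain_far * delayed, gain_near * padded
    else:                         # source to the left: right ear is far/late
        left, right = gain_near * padded, gain_far * delayed
    return left, right

if __name__ == "__main__":
    t = np.linspace(0, 1, 44100, endpoint=False)
    tone = np.sin(2 * np.pi * 440 * t)
    left, right = spatialize(tone, azimuth_deg=45)
    print(left.shape, right.shape)
```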
| Mapping GUIs to Auditory Interfaces | | BIBAK | PDF | 61-70 | |
| Elizabeth D. Mynatt; W. Keith Edwards | |||
| This paper describes work to provide mappings between X-based graphical
interfaces and auditory interfaces. In our system, dubbed Mercator, this
mapping is transparent to applications. The primary motivation for this work
is to provide accessibility to graphical applications for users who are blind
or visually impaired. We describe the design of an auditory interface which
simulates many of the features of graphical interfaces. We then describe the
architecture we have built to model and transform graphical interfaces.
Finally, we conclude with some indications of future research for improving our
translation mechanisms and for creating an auditory "desktop" environment. Keywords: Auditory interfaces, GUIs, X, Visual impairment, Multimodal interfaces | |||
| Tools for Building Asynchronous Servers to Support Speech and Audio Applications | | BIBAK | PDF | 71-78 | |
| Barry Arons | |||
| Distributed client/server models are becoming increasingly prevalent in
multimedia systems and advanced user interface design. A multimedia
application, for example, may play and record audio, use speech recognition
input, and use a window system for graphical I/O. The software architecture of
such a system can be simplified if the application communicates with multiple
servers (e.g., audio servers, recognition servers) that each manage different
types of input and output. This paper describes tools for rapidly prototyping
distributed asynchronous servers and applications, with an emphasis on
supporting highly interactive user interfaces, temporal media, and multi-modal
I/O.
The Socket Manager handles low-level connection management and device I/O by supporting a callback mechanism for connection initiation and shutdown, and for reading incoming data. The Byte Stream Manager consists of an RPC compiler and run-time library that supports synchronous and asynchronous calls, with both a programmatic interface and a telnet interface that allows the server to act as a command interpreter. This paper details the tools developed for building asynchronous servers, several audio and speech servers built using these tools, and applications that exploit the features provided by the servers. Keywords: Audio servers, Remote procedure call, Asynchronous message passing,
Distributed client-server architecture, Speech recognition and synthesis,
Speech and audio applications | |||
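A minimal Python analogue of the callback style sketched in this abstract: callbacks are registered for connection initiation and for incoming data, and an event loop dispatches them. The function names and the echo behavior are illustrative assumptions, not the Socket Manager's actual interface.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def on_accept(server_sock):
    # Connection-initiation callback: accept and register a read callback.
    conn, addr = server_sock.accept()
    conn.setblocking(False)
    print("connection from", addr)
    sel.register(conn, selectors.EVENT_READ, on_read)

def on_read(conn):
    # Read callback: an application-level handler would parse a request here.
    data = conn.recv(4096)
    if data:
        conn.sendall(b"ack: " + data)
    else:
        sel.unregister(conn)       # connection shutdown
        conn.close()

def serve(port=5000):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", port))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, on_accept)
    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)  # dispatch the registered callback

if __name__ == "__main__":
    serve()
```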
| Some Virtues and Limitations of Action Inferring Interfaces | | BIBAK | PDF | 79-88 | |
| Edwin Bos | |||
| An action inferring facility for a multimodal interface called Edward is
described. Based on the actions the user performs, Edward anticipates future
actions and offers to perform them automatically. The system uses inductive
inference to anticipate actions. It generalizes over arguments and results,
and detects patterns on the basis of a small sequence of user actions, e.g.
"copy a lisp file; change extension of original file into .org; put the copy in
the backup folder". Multimodality (particularly the combination of natural
language and simulated pointing gestures) and the reuse of patterns are
important new features. Some possibilities and problems of action inferring
interfaces in general are addressed. Action inferring interfaces are
particularly useful for professional users of general-purpose applications.
Such users are unable to program repetitive patterns because either the
applications do not provide the facilities or the users lack the capabilities. Keywords: Programming by example, Demonstrational interfaces, Multimodal interfaces | |||
| Adding Rule-Based Reasoning to a Demonstrational Interface Builder | | BIBAK | PDF | 89-97 | |
| Gene L. Fisher; Dale E. Busse; David A. Wolber | |||
| This paper presents a demonstrational interface builder with improved
reasoning capabilities. The system comprises two major components: an
interactive display manager and a rule-based reasoner. The display manager
provides facilities to draw the physical appearance of an interface and define
interface behavior by graphical demonstration. The behavior is defined using a
technique of stimulus-response demonstration. With this technique, an
interface developer first demonstrates a stimulus that represents an action
that an end user will perform on the interface. After the stimulus, the
developer demonstrates the response(s) that should result from the given
stimulus. As the behavior is demonstrated, the reasoner observes the
demonstrations and draws inferences to expedite behavior definition. The
inferences entail generalizing from specific behavior demonstrations and
identifying constraints that define the generalized behavior. Once behavior
constraints are identified, the reasoner sends them to the display manager to
complete the definition process. When the interface is executed by an
end-user, the display manager uses the constraints to implement the run-time
behavior of the interface. Keywords: UIMSs, Interface builders, Programming by demonstration, Direct manipulation | |||
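A toy sketch of stimulus-response generalization, assuming a hypothetical reasoner that turns two demonstrated (stimulus, response) value pairs into a linear constraint the display manager can replay at run time; all names below are invented for illustration and do not come from the paper.

```python
class Widget:
    def __init__(self, name, value=0.0):
        self.name = name
        self.value = value

def infer_linear_constraint(stimulus_values, response_values):
    """Generalize two demonstrations into: response = a * stimulus + b."""
    (s1, r1), (s2, r2) = zip(stimulus_values, response_values)
    a = (r2 - r1) / (s2 - s1)
    b = r1 - a * s1
    return lambda s: a * s + b

if __name__ == "__main__":
    slider, gauge = Widget("slider"), Widget("gauge")

    # Two demonstrations: slider=0 -> gauge=10, slider=50 -> gauge=60.
    constraint = infer_linear_constraint([0.0, 50.0], [10.0, 60.0])

    # Run time: whenever the end user moves the stimulus widget, the
    # display manager applies the inferred constraint to the response.
    slider.value = 20.0
    gauge.value = constraint(slider.value)
    print(gauge.value)   # 30.0
```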
| A History-Based Macro by Example System | | BIBAK | PDF | 99-106 | |
| David Kurlander; Steven Feiner | |||
| Many tasks performed using computer interfaces are very repetitive. While
programmers can write macros or procedures to automate these repetitive tasks,
this requires special skills. Demonstrational systems make macro building
accessible to all users, but most provide either no visual representation of
the macro or only a textual representation. We have developed a history-based
visual representation of commands in a graphical user interface. This
representation supports the definition of macros by example in several novel
ways. At any time, a user can open a history window, review the commands
executed in a session, select operations to encapsulate into a macro, and
choose objects and their attributes as arguments. The system has facilities to
generalize the macro automatically, save it for future use, and edit it. Keywords: Macros, Demonstrational techniques, Histories, Graphical representations,
Programming by example | |||
| Declarative Programming of Graphical Interfaces by Visual Examples | | BIBAK | PDF | 107-116 | |
| Ken Miyashita; Satoshi Matsuoka; Shin Takahashi; Akinori Yonezawa; Tomihisa Kamada | |||
| Graphical user interfaces (GUI) provide intuitive and easy means for users
to communicate with computers. However, construction of GUI software requires
complex programming that is far from being intuitive. Because of the "semantic
gap" between the textual application program and its graphical interface, the
programmer himself must conceptually maintain the correspondence between the
textual programming and the graphical image of the resulting interface.
Instead, we propose a programming environment based on the programming by
visual example (PBVE) scheme, which allows the GUI designers to "program"
visual interfaces for their applications by "drawing" the example visualization
of application data with a direct manipulation interface. Our system, TRIP3,
realizes this with (1) the bi-directional translation model between the
(abstract) application data and the pictorial data of the GUI, and (2) the
ability to generate mapping rules for the translation from example application
data and its corresponding example visualization. The latter is made possible
by the use of generalization of visual examples, where the system is able to
automatically generate generalized mapping rules from a given set of examples. Keywords: Graphical user interface, Direct manipulation, Constraints, Programming by
example, Layouts, Visualization | |||
| Graphical Styles for Building User Interfaces by Demonstration | | BIBAK | PDF | 117-124 | |
| Osamu Hashimoto; Brad A. Myers | |||
| Conventional interface builders allow the user interface designer to select
widgets such as menus, buttons and scroll bars, and lay them out using a mouse.
Although these are conceptually simple to use, in practice there are a number
of problems. First, a typical widget will have dozens of properties which the
designer might change. Ensuring that these properties are consistent across
multiple widgets in a dialog box and multiple dialog boxes in an application
can be very difficult. Second, if the designer wants to change the properties,
each widget must be edited individually. Third, getting the widgets laid out
appropriately in a dialog box can be tedious. Grids and alignment commands are
not sufficient. This paper describes Graphical Tabs and Graphical Styles in
the Gilt interface builder, which solve all of these problems. A "graphical
tab" is an absolute position in a window. A "graphical style" incorporates
both property and layout information, and can be defined by example, named,
applied to other widgets, edited, saved to a file, and read from a file. If a
graphical style is edited, then all widgets defined using that style are
modified. In addition, because appropriate styles are inferred, they do not
have to be explicitly applied. Keywords: User interface builder, User interface management system, Demonstrational
interfaces, Styles, Tabs, Garnet, Direct manipulation, Inferencing | |||
| Programming Time in Multimedia User Interfaces | | BIBAK | PDF | 125-134 | |
| Nuno M. Guimaraes; Nuno M. Correia; Telmo A. Carmo | |||
| The new media types used in advanced user interfaces and interactive systems
introduce time as a significant variable. This paper addresses the
architectural support and programming tools that should be provided to the
programmer to manage the time dependencies. The approach considers that the
basic models and programming paradigms adopted in the manipulation and
management of time should be isomorphic with the spatial models used in
existing graphical user interfaces.
The paper describes the architectural principles of a toolkit designed to support the construction of user interfaces with temporal characteristics. The Ttoolkit is an extension of an existing graphical user interface toolkit, the Xt toolkit. Its design is presented and a sample application is described. Keywords: Time programming, Dynamic media, UI toolkits, Multimedia user interfaces | |||
| MediaMosaic -- A Multimedia Editing Environment | | BIBAK | PDF | 135-141 | |
| Jin-Kun Lin | |||
| MediaMosaic is an editing environment developed to provide several features
that are either unavailable or not adequately addressed in current editing
systems. First, it is a multimedia editor with an open architecture. General
media are inserted in documents by embedded virtual screens. Second, it allows
users to do markup editing in context. The marked comments are overlapped and
attached to the commented areas. Third, it provides a mechanism to allow users
to bring data from more than one source to a single document. The views of the
included data can be tailored. Fourth, users can work on an included medium
through its embedded view or through another complete and duplicated view. It
isolates and simplifies the interface design of individual media editors. Keywords: Multimedia, Editor, X window systems, User interface | |||
| The Role of Natural Language in a Multimodal Interface | | BIBA | PDF | 143-149 | |
| Philip R. Cohen | |||
| Although graphics and direct manipulation are effective interface technologies for some classes of problems, they are limited in many ways. In particular, they provide little support for identifying objects not on the screen, for specifying temporal relations, for identifying and operating on large sets and subsets of entries, and for using the context of interaction. On the other hand, these are precisely the strengths of natural language. This paper presents an interface that blends natural language processing and direct manipulation technologies, using each for its characteristic advantages. Specifically, the paper shows how to use natural language to describe objects and temporal relations, and how to use direct manipulation for overcoming hard natural language problems involving the establishment and use of context and pronominal reference. This work has been implemented in SRI's Shoptalk system, a prototype information and decision-support system for manufacturing. | |||
| TelePICTIVE: Computer-Supported Collaborative GUI Design for Designers with Diverse Expertise | | BIBAK | PDF | 151-160 | |
| David S. Miller; John G. Smith; Michael J. Muller | |||
| It is generally accepted that it is important to involve the end users of a
Graphical User Interface (GUI) in all stages of its design and development.
However, traditional GUI development tools typically do not support
collaborative design. TelePICTIVE is an experimental software prototype
designed to allow computer-naive users to collaborate with experts at possibly
remote locations in designing GUIs.
TelePICTIVE is based on the PICTIVE participatory design methodology and has been prototyped using the RENDEZVOUS system. In this paper we describe TelePICTIVE, and show how it is designed to support collaboration among a group of GUI designers with diverse levels of expertise. We also explore some of the issues that have come up during development and initial usability testing, such as how to coordinate simultaneous access to a shared design surface, and how to engage in the participatory design of GUIs using a Computer-Supported Cooperative Work (CSCW) system. Keywords: Graphical user interface, Participatory design, CSCW, MUMMS application,
Collaborative, Multi-user, PICTIVE | |||
| Tools for Supporting the Collaborative Process | | BIBA | PDF | 161-170 | |
| James R. Rhyne; Catherine G. Wolf | |||
| Collaborative software has been divided into two temporal categories: synchronous and asynchronous. We argue that this binary distinction is unnecessary and harmful, and present a model for collaboration processes (i.e. the temporal record of the actions of the group members) which includes both synchronous and asynchronous software as submodels. We outline an object-oriented toolkit which implements the model, and present an application of its use in a pen-based conferencing tool. | |||
| Transparency and Awareness in a Real-Time Groupware System | | BIBA | PDF | 171-180 | |
| Michel Beaudouin-Lafon; Alain Karsenty | |||
| This article explores real-time groupware systems from the perspective of both the users and the designer. This exploration is carried out through the description of GroupDesign, a real-time multi-user drawing tool that we have developed. From the perspective of the users, we present a number of functionalities that we feel necessary in any real-time groupware system: Graphic & Audio Echo, Localization, Identification, Age, and History. From the perspective of the designer, we demonstrate the possibility of creating a multi-user application from a single-user one, and we introduce the notion of purely replicated architecture. | |||
| Bringing the Computer into the World | | BIBA | iii | |
| Tony Hoeber | |||
| The computer is in the midst of a long journey. Yesterday it was in the backroom, today it's on the desktop, and in the near future it will be everywhere, in the form of small, lightweight, pen- and voice-driven devices combining significant computing power and communication capabilities. These devices will connect people to a worldwide information network of unimaginable complexity. The purchasers of these devices are expected to be ordinary people who use them in the midst of daily life: moving about, standing in line, in public spaces, interacting with others. Tiny screens, ambient distraction, complex tasks -- these are serious challenges to the user interface designer. When computers make it from the desktop to the street, will they turn out to be the transparent, socially benign, pleasant tools we hope they will be? Or will we have created a new generation of intrusive technojunk? Will they be gadgets for the technophile, or can there truly be a computer for just plain folks? | |||
| Animation of User Interfaces | | BIB | iii | |
| Chuck Clanton; Jock Mackinlay; Dave Ungar; Emilie Young | |||
| Progress in Building User Interface Toolkits: The World According to XIT | | BIBAK | PDF | 181-190 | |
| Jurgen Herczeg; Hubertus Hohl; Matthias Ressel | |||
| User interface toolkits and higher-level tools built on top of them play an
ever increasing part in developing graphical user interfaces. This paper
describes the XIT system, a user interface development tool for the X Window
System, based on Common Lisp, comprising user interface toolkits as well as
high-level interactive tools organized into a layered architecture. We
especially focus on the object-oriented design of the lower-level toolkits and
show how advanced features for describing automatic screen layout, visual
feedback, application links, complex interaction, and dialog control, usually
not included in traditional user interface toolkits, are integrated. Keywords: User interface toolkits, User interface development tools, Graphical user
interfaces, Interaction techniques, Object-oriented programming | |||
| Using Taps to Separate the User Interface from the Application Code | | BIBAK | PDF | 191-198 | |
| Thomas Berlage | |||
| A new mechanism based on taps is introduced to separate the output from the
application code in graphical interactive interfaces. The mechanism is
implemented in GINA, an object-oriented application framework. Taps maintain a
functional mapping from application data to interface objects that is described
in a general-purpose programming language. Taps are triggered automatically by
user actions. Compared to constraints or the MVC model, taps do not need
execution or memory support from the application objects, at the expense of a
performance penalty. Screen updates, which pose the largest performance
problem, are minimized by checking for attribute changes and window visibility.
A comparison operation is used to maintain structural consistency between
hierarchies of application and interface objects. Taps can be defined
interactively using formulas in a spreadsheet-like tool. Keywords: User interface management systems, Change propagation, Command objects | |||
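A minimal sketch of the tap idea, using dictionary-based stand-ins for application and interface objects: the mapping is an ordinary function, the framework triggers taps after each user command, and updates are skipped when the value is unchanged or the window is invisible. The API shown is illustrative, not GINA's.

```python
class Tap:
    def __init__(self, source, mapping, widget, attribute):
        self.source = source          # application object
        self.mapping = mapping        # ordinary function of the source
        self.widget = widget          # interface object
        self.attribute = attribute    # which widget attribute to drive
        self._last = object()         # sentinel: forces the first update

    def trigger(self):
        """Called by the framework after a user command has executed."""
        if not self.widget.get("visible", True):
            return                    # invisible windows are not updated
        value = self.mapping(self.source)
        if value != self._last:       # skip redundant screen updates
            self.widget[self.attribute] = value
            self._last = value

if __name__ == "__main__":
    account = {"balance": 120.0}
    label = {"visible": True, "text": ""}

    tap = Tap(account, lambda a: f"Balance: {a['balance']:.2f}", label, "text")

    account["balance"] -= 20.0        # application code knows nothing of taps
    tap.trigger()                     # framework triggers taps after the command
    print(label["text"])              # Balance: 100.00
```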
| Probabilistic State Machines: Dialog Management for Inputs with Uncertainty | | BIBA | PDF | 199-208 | |
| Scott E. Hudson; Gary L. Newell | |||
| Traditional models of input work on the assumption that inputs delivered to a system are fairly certain to have occurred as they are reported. However, a number of new input modalities, such as pen-based inputs, hand and body gesture inputs, and voice input, do not share this property. Inputs under these techniques are normally acquired by a process of recognition. As a result, each of these techniques makes mistakes and provides inputs which are approximate or uncertain. This paper considers some preliminary techniques for dialog management in the presence of this uncertainty. These techniques -- including a new input model and a set of extended state machine abstractions -- will explicitly model uncertainty and handle it as a normal and expected part of the input process. | |||
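One way to make the idea concrete: if the recognizer reports alternatives with probabilities, the dialog can maintain a distribution over machine states instead of a single current state. The transition table and update rule below are an illustrative assumption, not the abstractions defined in the paper.

```python
from collections import defaultdict

# transitions[state][token] -> next state
transitions = {
    "idle":     {"select": "selected"},
    "selected": {"delete": "idle", "move": "moving"},
    "moving":   {"drop": "selected"},
}

def step(state_probs, recognized_alternatives):
    """Advance the distribution over states given uncertain input.

    recognized_alternatives: list of (token, probability) from the recognizer.
    """
    next_probs = defaultdict(float)
    for state, p_state in state_probs.items():
        for token, p_token in recognized_alternatives:
            # Stay in the same state if no transition is defined for the token.
            nxt = transitions.get(state, {}).get(token, state)
            next_probs[nxt] += p_state * p_token
    return dict(next_probs)

if __name__ == "__main__":
    probs = {"idle": 1.0}
    # The recognizer is unsure whether the user said "select" or "delete".
    probs = step(probs, [("select", 0.7), ("delete", 0.3)])
    print(probs)   # {'selected': 0.7, 'idle': 0.3}
```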