
Proceedings of ACM CHI'85 Conference on Human Factors in Computing Systems

Fullname: Proceedings of CHI'85 Conference on Human Factors in Computing Systems
Editors: Lorraine Borman; Bill Curtis
Location: San Francisco, California
Dates: 1985-Apr-14 to 1985-Apr-18
Publisher: ACM
Standard No: ACM ISBN 0-89791-149-0; ACM ISSN 0713-5424; ACM Order Number 608850
Papers: 41
Pages: 231
  1. System Response Factors
  2. Panel
  3. Touching and Seeing
  4. Panel
  5. Psychology of Programming
  6. Panel
  7. Menu Systems
  8. Panel
  9. Design and Evaluation
  10. Panel
  11. Naming
  12. Panel
  13. User Assistance
  14. Interface Tools and Structures
  15. Speech I/O
  16. Cognitive Issues
  17. Panel

System Response Factors

The Effect of VDU Text-Presentation Rate on Reading Comprehension and Reading Speed BIBA 1-6
  Jo W. Tombaugh; Michael D. Arkin; Richard F. Dillon
The effect of video display unit (VDU) presentation rate on reading performance was investigated. Reading material was presented at one of the following rates: 15, 30, 120, or 960 characters per second (cps), or "instant". In the instant condition, the full text appeared on the screen simultaneously. In the other conditions, text appeared one character at a time, starting in the upper left corner of the screen and proceeding left to right and top to bottom. Reading comprehension was highest under the 30 cps and instant presentation conditions. Total time to perform the reading task was equivalent for all conditions except the 15 cps rate, which required longer to complete. In terms of both comprehension and time to perform the task, a slow rate of 15 cps, contrary to previous recommendations, is not desirable for novice computer users.
Effects of Cursor Speed on Text-Editing BIBA 7-10
  John D. Gould; Clayton Lewis; Vincent Barnes
Nine participants used a full screen computer text-editor (XEDIT) with an IBM 3277 terminal to edit marked-up documents at each of three cursor speeds (3.3, 4.7, and 11.0 cm/sec.). Results show that 9% of editing time was spent controlling and moving the cursor, regardless of cursor speed. The variations in cursor speed studied did not seem to act as a pacing device for the entire editing task.
The Importance of Percent-Done Progress Indicators for Computer-Human Interfaces BIBA 11-17
  Brad A. Myers
A "percent-done progress indicator" is a graphical technique which allows the user to monitor the progress through the processing of a task. Progress indicators can be displayed on almost all types of output devices, and can be used with many different kinds of programs. Practical experience and formal experiments show that progress indicators are an important and useful user-interface tool, and that they enhance the attractiveness and effectiveness of programs that incorporate them. This paper discusses why progress indicators are important. It includes the results of a formal experiment with progress indicators. One part of the experiment demonstrates that people prefer to have progress indicators. Another part attempted to replicate earlier findings to show that people prefer constant to variable response time in general, and then to show that this effect is reversed with progress indicators, but the results were not statistically significant. In fact, no significant preference for constant response time was shown, contrary to previously published results.

Panel

The Utility of Natural Language Interfaces BIB 19
  Phil Hayes

Touching and Seeing

A Multi-Touch Three Dimensional Touch-Sensitive Tablet BIBA 21-25
  S. K. Lee; W. Buxton; K. C. Smith
A prototype touch-sensitive tablet is presented. The tablet's main innovation is that it is capable of sensing more than one point of contact at a time. In addition to being able to provide position coordinates, the tablet also gives a measure of degree of contact, independently for each point of contact. In order to enable multi-touch sensing, the tablet surface is divided into a grid of discrete points. The points are scanned using a recursive area subdivision algorithm. In order to minimize the resolution lost due to the discrete nature of the grid, a novel interpolation scheme has been developed. Finally, the paper briefly discusses how multi-touch sensing, interpolation, and degree of contact sensing can be combined to expand our vocabulary in human-computer interaction.
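   A minimal sketch of the recursive area-subdivision idea (assumptions: the sensor is modeled as a Boolean grid, and region_active() stands in for the hardware's ability to test a whole rectangular region at once; this is not the authors' circuit-level scanning algorithm, and the interpolation and degree-of-contact sensing are omitted):

```python
def region_active(grid, x0, y0, x1, y1):
    """Stand-in for a hardware query: is any point in the rectangle touched?"""
    return any(grid[y][x] for y in range(y0, y1) for x in range(x0, x1))

def find_contacts(grid, x0, y0, x1, y1):
    """Recursively subdivide the tablet area, descending only into active
    regions, until single grid points are isolated."""
    if not region_active(grid, x0, y0, x1, y1):
        return []
    if x1 - x0 == 1 and y1 - y0 == 1:
        return [(x0, y0)]
    xm, ym = (x0 + x1 + 1) // 2, (y0 + y1 + 1) // 2
    contacts = []
    for qx0, qy0, qx1, qy1 in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                               (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        if qx1 > qx0 and qy1 > qy0:
            contacts += find_contacts(grid, qx0, qy0, qx1, qy1)
    return contacts

# Two simultaneous touches on an 8x8 grid.
grid = [[0] * 8 for _ in range(8)]
grid[1][2] = grid[6][5] = 1
print(find_contacts(grid, 0, 0, 8, 8))   # [(2, 1), (5, 6)]
```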
A Subjective Judgment Study of Polygon Based Curved Surface Imagery BIBA 27-34
  Peter R. Atherton; Linnda R. Caporael
In past computer graphics efforts, several researchers have demonstrated that polygon models can be used to produce images of curved surfaces that appear smooth and accurate. However, the authors know of no attempt to appraise such imagery using ratings from multiple human observers.
   The effectiveness of curved surface imagery generated from polygon models was investigated in a judgment study. Research subjects evaluated sphere model imagery derived from several polygon densities and shading procedures including flat shading, shade interpolation (Gouraud) and normal interpolation (Phong). Results of the evaluations indicated that little was gained by reducing the average polygon areas below approximately 110 pixels per polygon for spheres of 95 pixel radii displayed on a 512 x 512 resolution monitor. Evaluations for both shade and normal interpolation placed polygon image quality reasonably close to an "ideal" image. Although the evaluations indicated that normal interpolation was slightly superior to the shade interpolation, shade interpolation required significantly less computation. Most significantly, results from this study provide strong support for the notion that polygons can be used effectively to produce smooth shaded imagery of curved surface models.
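   To make the two interpolation schemes concrete, a toy comparison over a single scanline span (illustrative only; the study evaluates rendered images and gives no code, and the Lambertian shading and light direction here are assumptions):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light):
    """Diffuse intensity for a surface normal and a unit light direction."""
    return max(0.0, sum(a * b for a, b in zip(normalize(normal), light)))

LIGHT = normalize((0.0, 0.0, 1.0))

def shade_span(n_left, n_right, steps):
    """Shade one span between two polygon vertices two different ways."""
    i_left, i_right = lambert(n_left, LIGHT), lambert(n_right, LIGHT)
    gouraud, phong = [], []
    for s in range(steps + 1):
        t = s / steps
        # Gouraud: interpolate the *intensities* computed at the vertices.
        gouraud.append((1 - t) * i_left + t * i_right)
        # Phong: interpolate the *normals*, then shade per pixel.
        n = tuple((1 - t) * a + t * b for a, b in zip(n_left, n_right))
        phong.append(lambert(n, LIGHT))
    return gouraud, phong

g, p = shade_span((0.8, 0.0, 0.6), (-0.8, 0.0, 0.6), steps=8)
print([round(v, 2) for v in g])   # flat 0.6 across the span
print([round(v, 2) for v in p])   # rises toward 1.0 mid-span (highlight kept)
```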
Videoplace -- An Artificial Reality BIBA 35-40
  Myron W. Krueger; Thomas Gionfriddo; Katrin Hinrichsen
The human-machine interface is generalized beyond traditional control devices to permit physical participation with graphic images. The VIDEOPLACE System combines a participant's live video image with a computer graphic world. It also coordinates the behavior of graphic objects and creatures so that they appear to react to the movements of the participant's image in real-time. A prototype system has been implemented and a number of experiments with aesthetic and practical implications have been conducted.

Panel

Psychological Research Methods in the Human Use of Computers BIBA 41-45
  Thomas K. Landauer; John D. Gould; John R. Anderson; Phil Barnard
Psychological research methods have been used with increasing frequency in work on computer-human interaction. Judging from the state of the literature and from remarks heard in the halls at conferences such as this, the utility and appropriate roles of such methods are not yet clear. Panel members, who are all research psychologists working on issues related to human use of computers, will present a variety of contrasting views on how to go about such research, and on its proper goals. John Gould will describe two different but complementary approaches, applied research on general design issues, and formative human factors participation in development. John Anderson will discuss the use of formal models of human cognition. Phil Barnard will consider the role of applied research in the discovery of underlying principles to guide design. Tom Landauer will propose that psychological research can be the basis for invention of new "cognitive tools". Short synopses of the positions they will take are given below. Panel members hope that the audience will join them in bringing out important differences between the various approaches and methods and arguing their absolute and relative merits.

Psychology of Programming

Where The Bugs Are BIBA 47-53
  James C. Spohrer; Elliot Soloway; Edgar Pope
In this paper we propose one explanation of why some novice programs are buggier than others. Central to our explanation is the notion of merged goals/plans in which multiple goals are achieved in a single integrated plan. Our arguments are based on our theory of the knowledge -- plans and goals -- used by a novice in creating a program, and an analysis of actual buggy novice programs.
Extending the Spreadsheet Interface to Handle Approximate Quantities and Relationships BIBA 55-59
  Clayton Lewis
Conventional spreadsheet programs offer a very convenient user interface for many quantitative tasks, but they are restricted to handling precisely-specified quantities and calculations. ASP is a generalized spreadsheet that extends the basic spreadsheet paradigm to encompass quantities which are not known exactly, and functions which are not known well enough to permit calculation. ASP works by propagating assertions about quantities and functions through the network of relationships that the spreadsheet defines.
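   A loose sketch of how approximate quantities could be propagated (an assumption about the general idea, not ASP's actual assertion language), using intervals and the relation a + b = c:

```python
def intersect(x, y):
    """Combine two assertions about the same quantity; fail if contradictory."""
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    if lo > hi:
        raise ValueError("inconsistent assertions")
    return (lo, hi)

def propagate_sum(a, b, c):
    """Tighten interval assertions on a, b, c subject to a + b = c."""
    c = intersect(c, (a[0] + b[0], a[1] + b[1]))
    a = intersect(a, (c[0] - b[1], c[1] - b[0]))
    b = intersect(b, (c[0] - a[1], c[1] - a[0]))
    return a, b, c

# Hypothetical cells: revenue known only roughly, costs fairly well,
# and profit asserted to be non-negative (profit = revenue + (-costs)).
revenue, costs, profit = (70.0, 120.0), (80.0, 85.0), (0.0, float("inf"))
neg_costs = (-costs[1], -costs[0])
revenue, neg_costs, profit = propagate_sum(revenue, neg_costs, profit)
print(revenue, profit)   # (80.0, 120.0) (0.0, 40.0): both assertions tightened
```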
Estimating the Distribution of Software Complexity within a Program BIBA 61-64
  Thomas G. Moher
This paper proposes an approach to the characterization of complexity within computer software source texts. We estimate the information content of individual program tokens as the basis for a relative ordering of tokens by their 'uncertainty' or 'peculiarity' within the context of the program in which they reside. The analysis method used is in part an extension of software science methods. The information gained from the analysis highlights language usage anomalies and potential errors. This information may be useful in guiding software review activities.
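   A toy version of the underlying idea (not Moher's exact metric): score each token by -log2 of its relative frequency within the program itself, so rare, "peculiar" tokens rank highest:

```python
import math
import re
from collections import Counter

def token_surprisal(source):
    """Rank a program's tokens by estimated information content, using
    -log2 of each token's relative frequency within the program."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source)
    counts = Counter(tokens)
    total = len(tokens)
    info = {t: -math.log2(c / total) for t, c in counts.items()}
    return sorted(info.items(), key=lambda kv: kv[1], reverse=True)

program = "total = 0\nfor i in range(10):\n    total = total + i\nprint(total)"
for token, bits in token_surprisal(program)[:5]:
    print(f"{token!r}: {bits:.2f} bits")
```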

Panel

Interfaces in Organizations: Supporting Group Work BIBA 65
  Irene Greif; John Seely Brown; Paul M. Cashman; Thomas W. Malone
Research on human factors in computer systems has emphasized supporting individuals. This panel will discuss new issues that emerge when computer systems support groups of people and whole organizations. Malone (see following paper) will suggest a broadening of the definition of user interfaces to include "organizational interfaces" and will indicate how a theoretical base for such an endeavor might be developed. Then Cashman will describe a "coordinator tool" in use at DEC for tracking the assignment of tasks to people in activities such as software maintenance. Finally, Brown will suggest how computer systems can be designed to radically increase the bandwidth of cooperation in groups by, for example, exploiting linguistic notions of context.
Designing Organizational Interfaces BIBA 66-71
  Thomas W. Malone
This paper argues that it will become increasingly important to extend our concept of user interfaces for individual users of computers to include organizational interfaces for groups of users. A number of suggestions are given for how to develop a theoretical base for designing such interfaces. For instance, examples are used to illustrate how traditional cognitive points of view can be extended to include information processing by multiple agents in organizations. Examples of design implications from other perspectives such as motivational, economic, and political are also included.

Menu Systems

Selection from Alphabetic and Numeric Menu Trees Using a Touch Screen: Breadth, Depth, and Width BIBA 73-78
  T. K. Landauer; D. W. Nachbar
Goal items were selected by a series of touch-menu choices among sequentially subdivided ranges of integers or alphabetically ordered words. The number of alternatives at each step, b, was varied, and, inversely, the size of the target area for the touch. Mean response time for each screen was well described by T = k + c log b, in agreement with the Hick-Hyman and Fitts' laws for decision and movement components in series. It is shown that this function favors breadth over depth in menus, whereas others might not. Speculations are offered as to when various functions could be expected.
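   To see why a per-screen time of T = k + c log b favors breadth: reaching one of N items through menus of breadth b takes about log N / log b screens, so the total is (log N / log b)(k + c log b) = k log N / log b + c log N, and only the first term depends on b, shrinking as breadth grows. A quick check with illustrative constants (not the paper's fitted values):

```python
import math

def total_selection_time(n_items, breadth, k=1.0, c=0.5):
    """Total time to reach one of n_items via menus of the given breadth,
    using the per-screen model T = k + c*log2(b). The constants k and c
    are illustrative, not the values fitted in the paper."""
    depth = math.log(n_items) / math.log(breadth)   # screens needed
    per_screen = k + c * math.log2(breadth)
    return depth * per_screen

for b in (2, 4, 8, 16, 64):
    print(f"breadth {b:3d}: {total_selection_time(4096, b):5.2f} s")
    # time falls monotonically as breadth grows: 18.0, 12.0, 10.0, 9.0, 8.0
```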
Designing a Menu-Based Interface to an Operating System BIBA 79-84
  Thomas S. Tullis
The development of a large menu-based interface to an operating system posed a number of interesting user interface questions. Among these were how to determine users' view of the relationships among the myriad functions in the system, and how to reflect those relationships in a menu hierarchy. An experiment utilizing a sorting technique and hierarchical cluster analysis was quite effective in revealing users' perception of the relationships among the system functions. A second experiment comparing a "broad" menu hierarchy to a "deep" menu hierarchy showed that users made significantly fewer inappropriate menu selections with the broad hierarchy.
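   Tullis does not publish analysis code; a compact sketch of the general method (card-sort co-occurrence similarity followed by average-linkage agglomerative clustering, with hypothetical sort data) might look like:

```python
from itertools import combinations

# Hypothetical card-sort data: each dict maps a system function to the
# group number one participant placed it in.
sorts = [
    {"copy file": 0, "delete file": 0, "rename file": 0, "print file": 1, "show queue": 1},
    {"copy file": 0, "delete file": 0, "rename file": 1, "print file": 2, "show queue": 2},
]
items = sorted(sorts[0])

def distance(a, b):
    """1 - fraction of participants who sorted a and b into the same group."""
    same = sum(s[a] == s[b] for s in sorts)
    return 1.0 - same / len(sorts)

# Average-linkage agglomerative clustering over the sort-derived distances.
clusters = [frozenset([i]) for i in items]
while len(clusters) > 1:
    x, y = min(combinations(clusters, 2),
               key=lambda p: sum(distance(a, b) for a in p[0] for b in p[1])
                             / (len(p[0]) * len(p[1])))
    print("merge:", sorted(x), "|", sorted(y))
    clusters = [c for c in clusters if c not in (x, y)] + [x | y]
```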
Connecting Theory and Practice: A Case Study of Achieving Usability Goals BIBA 85-88
  Keith A. Butler
This paper describes a case study of the Human Factors design, development, and testing of a computer-based financial analysis package. The project applied the "usability goals" method proposed by Bennett (1984) to structure the definition, design, and testing of the new system. Learnability was defined as a key attribute in the product concept because of its salience in users' perception of system quality. The learnability attribute was assigned an operational definition in terms of time to mastery and error avoidance/recovery. The "back-to-front" strategy of Didner & Butler (1982) was applied for designing the menus. Empirical testing of user performance on sample problems in the alpha stage indicated that the new system surpassed the learnability objective. Lessons learned from this case study concern leverage in getting better managerial attention for Human Factors considerations in development projects, and clearer structure to direct needed research.

Panel

Technology in Use BIBA 89-91
  Lucy A. Suchman; Sharon Traweek; Michael Lynch; Richard Frankel; Brigitte Jordan
(four papers are summarized in this panel paper)

Design and Evaluation

The Use of Logging Data in the Design of a New Text Editor BIBA 93-97
  Michael Good
Many different human factors techniques are available to the designer of a new computer system. This case study examines how one technique, the use of logging data, was used throughout the design of a new text editor which is measurably easy to learn and easy to use. Logging data was used in four areas: keyboard design, the initial design of the editor's command set, refinements made later in the design cycle, and the construction of a system performance benchmark.
The Evaluation of Text Editors: A Critical Review of the Roberts and Moran Methodology Based on New Experiments BIBA 99-105
  Nathaniel S. Borenstein
Three text editors were studied using the editor evaluation methodology developed by Roberts and Moran [3,4]. The results are presented as an extension of the studies by Roberts and Moran, with comparisons to the editors they studied earlier. In addition, supplementary measurements were taken that suggest minor flaws in the Roberts and Moran methodology. Further problems with the methodology are discussed, with an eye toward improving the methodology for future use. Although three significant problems with the methodology are reported, the problems are interesting primarily as lessons for the design of future evaluation methodologies. The Roberts and Moran methodology remains largely useful for the purposes for which it was designed.
Evaluating the User Interface: The Candid Camera Approach BIBA 107-113
  Michelle A. Lund
In the development of a new interactive graphics application, considerable effort was spent on designing a user interface which would be easy to use. When a portion of the application was completed, typical potential users were brought in to help evaluate the interface. They were given a sample task and a short introduction to the application; then their efforts to complete the task were observed and videotaped.
   This method of evaluating the user interface provided the development staff with quite a bit of valuable information. Changes were made, and more testing was done, including using some subjects for a second time.
   This paper describes how this evaluation method was used for two purposes: to point out problem areas in the interface, and to verify that changes made have improved the user interface.

Panel

Communicating with Sound BIBA 115-119
  William Buxton; Sara A. Bly; Steven P. Frysinger; David Lunney; Douglass L. Mansur; Joseph J. Mezrich; Robert C. Morrison
The Communicating with Sound panel for CHI'85 will focus on ways of expanding the user interface by using sound as a significant means of output. As a user's communication from the computer has progressed from large (and often smeary) printout to a teletypewriter and, finally, to the multi-window workstation displays of today, the emphasis has remained primarily on visual output. Although many user terminals and workstations have the capability of generating sound, that capability is rarely used for more than audio cues (indicating status such as an error condition or task completion) and simple musical tunes. Research shows that sounds convey meaningful information to users. With examples of such research, the panel members will demonstrate a variety of uses of sound output, discuss issues raised by the work, and suggest further directions. The intent of the panel is to stimulate thinking about expanding the user interface and to discuss areas for future research.
   In the statements that follow, each panelist will describe his or her own work, including the data and audio dimensions used, the value of the research, remaining issues to be addressed, and suggestions for future research and application. A list of references is included for those who wish further reading.

Naming

When Does an Abbreviation Become a Word? And Related Questions BIBA 121-125
  Jonathan Grudin; Phil Barnard
An experiment is reported in which subjects previously naive to text editing learned to use a set of editing commands. Some subjects used abbreviations from the beginning. Others began by using full command names, then switched to the (optional) use of abbreviations, either of their own devising or of our selection. We found significant differences in the number and nature of the errors produced by subjects in the different conditions. People who created their own abbreviations did most poorly, and did not appear to learn from this experience. Those who used abbreviations from the start were more likely to fall into error through misrecalling the referent names. The results suggest aspects of the underlying cognitive representations, with implications for the design of software interfaces.
A Comparison of Symbolic and Spatial Filing BIBA 127-130
  Susan T. Dumais; William P. Jones
The traditional and still dominant form of object reference in computing systems is symbolic - data files, programs, etc. are initially labeled and subsequently referred to by name. This approach is being supplemented on some systems by a spatial alternative which is often driven by an office or desktop metaphor (e.g. Apple's Lisa and Macintosh systems, or Bolt's 1979 Spatial Data Management System). In such systems, an object is placed in a simulated two- or three-dimensional space, and can later be retrieved by pointing to its location. In order to begin to understand the relative merits of spatial and symbolic filing schemes for representing and organizing information, we compared four ways of filing computer objects. We found location information to be of limited utility, either by itself or in combination with symbolic information. This calls into question the generality and efficacy of the desktop metaphor for information retrieval.
Experience with an Adaptive Indexing Scheme BIBA 131-135
  George W. Furnas
Previous work has shown that there is a major vocabulary barrier for new or intermittent users of computer systems. The barrier can be substantially lowered with a rich, empirically defined, frequency weighted index. This paper discusses experience with an adaptive technique for constructing such an index. In addition to being an easy way for system designers to collect the necessary data, an adaptive system has the additional advantage that data is collected from real users in real situations, not in some laboratory approximation. Implementation considerations, preliminary results and future theoretical directions are discussed.
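   Implementation details aside, the core of an adaptive, frequency-weighted index can be sketched as: record every term users actually try together with the object they finally reach, and answer later lookups with the targets most often associated with that term (illustrative only, not Furnas's system):

```python
from collections import defaultdict, Counter

class AdaptiveIndex:
    """Sketch of a frequency-weighted index that grows from real usage."""

    def __init__(self):
        self._targets = defaultdict(Counter)   # user term -> target counts

    def record(self, term, target):
        """Called whenever a user's word 'term' ends up resolved to 'target'."""
        self._targets[term.lower()][target] += 1

    def lookup(self, term, n=3):
        """Return the n targets most often meant by this term so far."""
        return [t for t, _ in self._targets[term.lower()].most_common(n)]

index = AdaptiveIndex()
index.record("erase", "delete-file")
index.record("erase", "delete-file")
index.record("erase", "clear-screen")
print(index.lookup("erase"))   # ['delete-file', 'clear-screen']
```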

Panel

Computer Human Factors in Computer Interface Design BIBA 137-138
  Robert Mack; Thomas Moran; Judith Reitman Olson; Dennis Wixon
Human factors psychologists contribute in many ways to improving human-computer interaction. One contribution involves evaluating existing or prototype systems in order to assess usability and identify problems. Another involves contributing more directly to design: not only evaluating systems, but bringing to bear empirical methods and theoretical considerations that help specify what are plausible designs in the first place. The goal of this panel is to discuss four case studies emphasizing this role of cognitive human factors, and to identify relevant methods and theoretical considerations.
Identifying and Designing Toward New User Expectations in a Prototype Text-Editor BIB 139-141
  Robert Mack
Expanded Design Procedures for Learnable, Usable Interfaces BIB 142-143
  Judith Reitman Olson
Engineering for Usability: Lessons Learned from the User Derived Interface BIB 144-147
  Dennis Wixon; John Whiteside

User Assistance

Prompting, Feedback and Error Correction in the Design of a Scenario Machine BIBA 149-153
  John M. Carroll; Dana S. Kay
The recent technical literature abounds with a variety of studies documenting and analyzing the problems people encounter in learning to use contemporary computer equipment. This has been a major focus of the recent work in our laboratory ([6], [9]). The project that such work must entrain is the development of design approaches to these problems. We have been, and are, developing alternative designs for training manuals and for in-system training ([2], [4], [5], [7]).
Information Sought and Information Provided: An Empirical Study of User/Expert Dialogues BIBA 155-159
  Martha E. Pollack
Transcripts of computer-mail users seeking advice from an expert were studied to investigate the complementary claims that people often do not know what information they need to obtain in order to achieve their goals, and consequently, that experts must identify inappropriate queries and infer and respond to the goals behind them. This paper reports on one facet of the transcript analysis, namely, the identification of the types of relation that hold between the action that an advice-seeker asks about and the action that an expert tells him how to perform. Three such relations between actions are identified: generates, enables, and is-alternative-to. The claim is made that a cooperative advice-providing system, such as a help system or an expert system, must be able to compute these relations between actions.
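   The three relations lend themselves to a simple representation; a hypothetical sketch (the example dialogue and names are invented, not drawn from the transcripts):

```python
from dataclasses import dataclass
from enum import Enum

class Relation(Enum):
    GENERATES = "generates"                  # doing A constitutes doing B
    ENABLES = "enables"                      # doing A makes doing B possible
    IS_ALTERNATIVE_TO = "is-alternative-to"  # A is another way to achieve B

@dataclass(frozen=True)
class ActEdge:
    queried_act: str   # the action the advice-seeker asked about
    goal_act: str      # the action the expert inferred they really want
    relation: Relation

# Hypothetical example in the spirit of the mail-advice transcripts.
edge = ActEdge("set the editor to vi", "compose a reply in vi", Relation.ENABLES)
print(f"'{edge.queried_act}' {edge.relation.value} '{edge.goal_act}'")
```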
Knowledge-Based Help Systems BIBA 161-167
  Gerhard Fischer; Andreas Lemke; Thomas Schwab
Our research goals are to understand the nature of, construct and evaluate intelligent interfaces as knowledge-based systems. In this paper we demonstrate the need for help systems as an essential part of human-computer communication. Help strategies are based on a model of the task (to understand what the user is doing or which goals he/she wants to achieve) and a model of the user (to guarantee that these systems are non-intrusive and that they pay attention to the needs of individual users).
   We illustrate that passive and active help systems have to be constructed as knowledge-based systems. Two operational systems (PASSIVIST and ACTIVIST) are described to show the usefulness of this approach.

Interface Tools and Structures

Design Alternatives for User Interface Management Systems Based on Experience with COUSIN BIBA 169-175
  Philip J. Hayes; Pedro A. Szekely; Richard A. Lerner
User interface management systems (UIMSs) provide user interfaces to application systems based on an abstract definition of the interface required. This approach can provide higher-quality interfaces at a lower construction cost. In this paper we consider three design choices for UIMSs which critically affect the quality of the user interfaces built with a UIMS, and the cost of constructing the interfaces. The choices are examined in terms of a general model of a UIMS. They concern the sharing of control between the UIMS and the applications it provides interfaces to, the level of abstraction in the definition of the information exchanged between user and application, and the level of abstraction in the definition of the sequencing of the dialogue. For each choice, we argue for a specific alternative. We go on to present COUSIN, a UIMS that provides graphical interfaces for a variety of applications based on highly abstracted interface definitions. COUSIN's design corresponds to the alternatives we argued for in two out of three cases, and partially satisfies the third. An interface developed through, and run by COUSIN is described in some detail.
ADM - A Dialog Manager BIBA 177-183
  Andrew J. Schulert; George T. Rogers; James A. Hamilton
ADM is a system for developing user interfaces. We call it a dialog manager; it is similar to what others call a "User Interface Management System" [8]. Although ADM is still being developed, it has been used to construct several applications.
   A dialog manager divides an application into an "interaction handler," which interacts with the user, and an "underlying application," which processes user commands and data. With ADM the application designer writes the underlying application in a conventional programming language and defines the interface between interaction handler and underlying application in terms of "tasks," things the user can do, and "states," sets of tasks that are active at one time. The user interface designer defines the interaction handler in terms of "presentation techniques," which present tasks to the user, and "structuring techniques," which describe screen layout. Design decisions made for ADM include using a precompiled, declarative dialog description, a flexible division between interaction handler and underlying application, allowing either interaction handler or underlying application to maintain control, and the inclusion of help and error support.
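   A loose sketch of the task/state division of labor (invented names and plain Python data, not ADM's precompiled dialog notation):

```python
# The "underlying application" registers tasks; "states" say which tasks are
# currently offered; the "interaction handler" presents the active tasks and
# forwards the user's choice back to the application.

tasks = {
    "open-account":  lambda: print("application: account opened"),
    "post-payment":  lambda: print("application: payment posted"),
    "close-account": lambda: print("application: account closed"),
}

states = {
    "start":       ["open-account"],
    "established": ["post-payment", "close-account"],
}

def interaction_handler(state, choice):
    """Present the tasks active in this state; run the one the user chose."""
    active = states[state]
    print("menu:", ", ".join(active))              # a trivial presentation technique
    if choice in active:
        tasks[choice]()
    else:
        print(f"'{choice}' is not available here")  # simple error support

interaction_handler("start", "open-account")
interaction_handler("established", "post-payment")
```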
User Performance with Command, Menu, and Iconic Interfaces BIBA 185-191
  John Whiteside; Sandra Jones; Paula S. Levy; Dennis Wixon
Performance and subjective reactions of 76 users of varying levels of computer experience were measured with 7 different interfaces representing command, menu, and iconic interface styles. The results suggest three general conclusions:
  • there are large usability differences between contemporary systems,
  • there is no necessary tradeoff between ease of use and ease of learning,
  • interface style is not related to performance or preference (but careful design is).
   Difficulties involving system feedback, input forms, help systems, and navigation aids occurred in all styles of interface: command, menu, and iconic. New interface technology did not solve old human factors problems.

Speech I/O

Listener Training for Speech-Output Applications BIBA 193-196
  Mary Beth Rosson
The specificity of the adaptation to synthetic speech known to occur with practice was examined by giving listeners selective exposure to a subset of English phonemes (a control group was "trained" on analogous materials produced by a human speaker), and then testing their ability to identify words created from both the previously heard and novel phonemes. The results indicated that while synthetic voice training was generally facilitative, it was most helpful in the identification of the sounds heard before. However, this specific learning effect occurred for only certain phonemes. The findings imply that one way to maximize early adaptation to synthetic speech is to identify the "learnable" sounds, and to increase users' exposure to them during introductory or training dialogs.
Speech Recognition and Manner of Speaking in Noise and in Quiet BIBA 197-199
  Ann M. Rollins
Currently speech recognition is accomplished by matching spoken utterances with reference patterns of words that were spoken by an individual at an earlier time. Recognition is highly dependent upon background noise. The purpose of this study was to assess the extent to which subjects' "manner" of speaking in noise, as separate from the noise itself, affected recognition. Subjects generated reference patterns in quiet and in noise and then spoke lists of digits in quiet and in noise for the speech system to recognize. Noise was delivered over earphones so it would not go into the speech recognition system through the microphone. Training and recognition were done from tape recordings, with the playback level of the tape always set to the same, intermediate level. The data suggest that manner of speaking, for about half of the subjects, is very different in noise compared with quiet. The data also imply that if recognition will be done in both quiet and noise, the safest alternative is to start out with patterns generated in noise.
Why is Synthetic Speech Harder to Remember than Natural Speech? BIBA 201-206
  John A. Waterworth; Cathy M. Thomas
Previous research has demonstrated that synthetic speech is less well recalled than natural speech. Luce et al. (1983) concluded that this was because synthetic speech increases the effort involved in encoding and/or rehearsal of presented information. Results of the experiments described here, which involved ordered recall of lists of ten words spoken in either a synthetic or a natural voice, with repetition of the words as a measure of successful encoding, indicate that most of the memory deficit with synthetic speech is due to encoding difficulties, rather than problems with item retention. There is evidence that encoding synthetic speech involves more processing capacity than does encoding natural speech, but that once it is encoded it is stored just as efficiently.

Cognitive Issues

A Quantitative Model of the Learning and Performance of Text Editing Knowledge BIBA 207-212
  Peter G. Polson; David E. Kieras
A model of manuscript editing, implemented as a simulation program, is described in this paper. The model provides an excellent, quantitative description of learning, transfer, and performance data from two experiments on text editing methods. Implications of the underlying theory for the design process are briefly discussed.
A Theory of Stimulus-Response Compatibility Applied to Human-Computer Interaction BIBA 213-219
  Bonnie E. John; Paul S. Rosenbloom; Allen Newell
A GOMS theory of stimulus-response compatibility is presented and applied to remembering computer command abbreviations. Two abbreviation techniques, vowel-deletion and special-character-plus-first-letter, are compared in an encoding task. Significant differences are found in the time to type the first letter of the abbreviation, and in the time to complete the typing of the abbreviation. These differences are analyzed using the theory which produces an excellent quantitative fit to the data (r² = 0.97).
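   The two abbreviation techniques are easy to state as string rules; a sketch (the exact rules and the special character used in the study are assumptions; a common vowel-deletion variant keeps the first letter even when it is a vowel):

```python
def vowel_deletion(command):
    """Vowel-deletion abbreviation: drop vowels after the first character
    (so 'insert' -> 'insrt', 'delete' -> 'dlt'). Assumed variant of the rule."""
    head, rest = command[0], command[1:]
    return head + "".join(ch for ch in rest if ch.lower() not in "aeiou")

def special_char_first_letter(command, special="\\"):
    """Special-character-plus-first-letter abbreviation ('delete' -> '\\d').
    The choice of '\\' is illustrative; the paper's character isn't assumed."""
    return special + command[0]

for cmd in ("delete", "insert", "replace"):
    print(cmd, "->", vowel_deletion(cmd), "/", special_char_first_letter(cmd))
```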
BASIC versus Natural Language: Is There One Underlying Comprehension Process? BIBA 221-223
  Jennifer L. Dyck; Richard E. Mayer
This study determined the response time (RT) for subjects to comprehend eight different BASIC statements and eight corresponding English procedural statements. First, there was no significant interaction between language and statement, and there was a high correlation (r = .85) between English and BASIC RT performance. Second, the microstructure of each statement (the number of actions required) and the macrostructure (the number of other statements in the program) were strongly related to RT performance for both BASIC and English. Apparently, comprehension of procedural statements is related to underlying structural characteristics common to both languages.

Panel

Microcomputer User Interface Toolkits: The Commercial State-of-the-Art BIBA 225
  Irene Greif; William A. S. Buxton; Scott MacGregor; David R. Reed; Larry Tesler
A well-designed user interface is a very valuable asset: the best available today are based on hundreds of man-years of work combining results of research in human factors, tasteful design reviewed and modified through extensive end-user testing, and many rounds of implementation effort. As a result, the user interface "toolkit" is emerging as the hottest new software item. A toolkit can provide software developers with a programming environment in which the user interface coding is already done so that new applications programs can automatically be integrated with other workstation functions.
   The panel will evaluate this new trend. Tesler and MacGregor will present the designs of the leading toolkit products from Apple and Microsoft, respectively. Reed will analyze the choices from the point of view of the third party software vendors' requirements. Noting that the effort going into these products may well result in de facto standard setting, Buxton will question the appropriateness of making this commitment based on microcomputer hardware.