
Human-Computer Interaction 1

Editor: Thomas P. Moran
Publisher: Lawrence Erlbaum Associates
Standard No: ISSN 0737-0024
Links: Table of Contents
  1. HCI 1985 Volume 1 Issue 1
  2. HCI 1985 Volume 1 Issue 2
  3. HCI 1985 Volume 1 Issue 3
  4. HCI 1985 Volume 1 Issue 4

HCI 1985 Volume 1 Issue 1


A Principled Design for an Integrated Computational Environment BIBA 1-47
  Andrea A. diSessa
This paper aims at the principled design of a computational environment; it aims at being as explicit as possible about the space of possibilities and about the assumptions made in choosing from among them in the design process. The point is to develop a more systematic, if not yet scientific, basis for the design of complex but understandable artifacts. The particular object of design here is a simple but multifunctional system for naive and inexperienced users.
   We begin theoretically by elaborating the notion of understandability, the key characteristic for which we must design. We present various models people can make of computational systems, each with its own learning curve, advantages, and disadvantages. Then we propose a pragmatic framework for a particular system. The framework includes the principle of naive realism: that users should be able to pretend that they see the system itself in the display. It also includes the pervasive use of a spatial metaphor whereby users' commonsense spatial knowledge is used to make the system easy to understand. The theoretical and pragmatic levels are linked, in that a number of important decisions about issues (such as reference, scoping and the meaning of evaluation) are based on the theoretical modeling considerations.
Systems with Human Monitors: A Signal Detection Analysis BIBA 49-75
  Robert D. Sorkin; David D. Woods
Automated factories, the flightdecks of commercial aircraft, and the control rooms of power plants are examples of decision-making environments in which a human operator performs an alerted-monitor role. These human-machine systems include automated monitor or alerting subsystems operating in support of a human monitor. The automated monitor subsystem makes preprogrammed decisions about the state of the underlying process based on current inputs and expectations about normal/abnormal operating conditions. When alerted by the automated monitor subsystem, the human monitor may analyze input data, confirm or disconfirm the decision made by the automated monitor, and take appropriate further action. In this paper, the combined automated monitor-human monitor system is modeled as a signal detection system in which the human operator and the automated component monitor partially correlated noisy channels. The signal detection analysis shows that overall system performance is highly sensitive to the interaction between the human's monitoring strategy and the decision parameter, Ca, of the automated monitor subsystem. Usual design practice is to set Ca to a value that optimizes the automated monitor's detection and false alarm rates. Our analysis shows that this setting will not yield optimal performance for the overall human-machine system. Furthermore, overall system performance may be limited to a narrow range of realizable detection and error rates. As a result, large gains in system performance can be achieved by manipulating the parameters of the automated monitor subsystem in light of the workload characteristics of the human operator.
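The two-channel detection model summarized above can be sketched numerically. The following is a minimal Monte Carlo illustration, not the authors' analysis: all parameter values (signal strength, channel correlation, the human's criterion) are hypothetical. It simulates an automated monitor with criterion Ca and a human who inspects only alerted trials, showing how the choice of Ca shapes the combined system's hit and false-alarm rates.

```python
import random

random.seed(1)

def simulate(ca, human_criterion, n=20000, rho=0.5):
    """Monte Carlo sketch of an alerted-monitor system.

    The automated monitor and the human observe partially correlated
    noisy channels (the shared component gives correlation rho).
    The human inspects a trial only when the automated monitor alerts.
    Parameter values are illustrative, not those of Sorkin & Woods.
    """
    hits = false_alarms = signals = noises = 0
    for _ in range(n):
        signal = random.random() < 0.5      # signal present on half the trials
        mean = 1.0 if signal else 0.0       # per-channel sensitivity d' = 1
        shared = random.gauss(0.0, 1.0)     # common noise source
        auto = mean + rho**0.5 * shared + (1 - rho)**0.5 * random.gauss(0.0, 1.0)
        human = mean + rho**0.5 * shared + (1 - rho)**0.5 * random.gauss(0.0, 1.0)
        alerted = auto > ca
        respond = alerted and human > human_criterion  # human confirms the alert
        if signal:
            signals += 1
            hits += respond
        else:
            noises += 1
            false_alarms += respond
    return hits / signals, false_alarms / noises

# A criterion that looks optimal for the automated stage alone need not be
# optimal for the combined system: lowering ca passes more trials through
# to the human, raising system hits at the cost of more alerts to screen.
for ca in (0.0, 0.5, 1.0):
    hit_rate, fa_rate = simulate(ca, human_criterion=0.5)
    print(f"ca={ca:.1f}  hit={hit_rate:.2f}  fa={fa_rate:.2f}")
```

Because the human sees only alerted trials, every setting of `ca` filters what the human can act on, which is why the two stages' criteria must be tuned jointly rather than stage by stage.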
Affect in Computer-Mediated Communication: An Experiment in Synchronous Terminal-to-Terminal Discussion BIBA 77-104
  Sara Kiesler; David Zubrow; Anne Marie Moses; Valerie Geller
With the spread of computer networks, communication via computer conferences, electronic mail, and computer bulletin boards will become more common in society, but little is known about the social psychological implications of these technologies. One possibility is a change in physiological arousal, feelings, and expressive behavior -- that is, affect. These computer-mediated communication technologies focus attention on the message, transmit social information poorly, and do not have a well-developed social etiquette. Therefore, these technologies might be associated with less attention to others, less social feedback, and depersonalization of the communication setting. In the present study we examined what would happen to feelings and interpersonal behavior in an experiment in which two people met for the first time and discussed a series of questions in order to get to know one another. We measured physiological arousal (pulse and palmar sweat), subjective affect (emotional state and evaluations), and expressive behavior (self-disclosure and uninhibited behavior) in both synchronous computer-mediated and face-to-face discussions. (For comparison purposes, we also examined these effects under high- and low-evaluation anxiety.) Communicating by computer did not influence physiological arousal, and it did not change emotions or self-evaluations. However, people who communicated by computer evaluated each other less favorably than did people who communicated face-to-face, they felt and acted as though the setting was more impersonal, and their behavior was more uninhibited. These findings suggest that computer-mediated communication, rather than provoking emotionality per se, elicits asocial or unregulated behavior.
   Of course, our data are based on a laboratory experiment using just one type of computer-mediated communication, but the results are generally consistent with anecdotal evidence and new field research on how people use computers to communicate in organizations.

HCI 1985 Volume 1 Issue 2


Introduction to this Special Issue on Novice Programming BIB 105-106
  Elliot Soloway


Novice LISP Errors: Undetected Losses of Information from Working Memory BIBA 107-131
  John R. Anderson; Robin Jeffries
Four experiments study the errors students make using LISP functions. The first two experiments show that frequency of errors is increased by increasing the complexity of irrelevant aspects of the problem. The experiments also show that the distribution of errors is largely random and that subjects' errors seem to result from slips rather than from misconceptions. Experiment 3 shows that subjects' errors tend to involve loss of parentheses in answers when the resulting errors are well-formed LISP expressions. Experiment 4 asks subjects, who knew no LISP, to judge the reasonableness of the answers to various LISP function calls. Subjects could detect many errors on the basis of general criteria of what a reasonable answer should look like. On the basis of these four experiments, we conclude that errors occur when there is a loss of information in the working memory representation of the problem and when the resulting answer still looks reasonable.
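The kind of undetected parenthesis slip described above can be made concrete. A sketch follows in Python, using nested lists to stand in for LISP s-expressions; the specific expression is invented for illustration and is not taken from the experiments.

```python
def car(lst):
    """LISP car: the first element of a list."""
    return lst[0]

def cdr(lst):
    """LISP cdr: everything after the first element."""
    return lst[1:]

# Correct evaluation: car of ((a b) c) is the sublist (a b).
expr = [["a", "b"], "c"]
correct = car(expr)             # ['a', 'b']

# A "lost parenthesis" slip: the student's working-memory representation
# flattens the nesting and answers as if the expression were (a b c).
slipped = car(["a", "b", "c"])  # 'a'

# Both answers are well-formed LISP values, so the slipped answer passes
# a general reasonableness check and the error goes undetected.
print(correct, slipped)
```

The point the abstract makes is visible here: the erroneous answer is not malformed, only wrong, so it survives exactly the kind of surface plausibility check the Experiment 4 subjects applied.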
Preprogramming Knowledge: A Major Source of Misconceptions in Novice Programmers BIBA 133-161
  Jeffrey Bonar; Elliot Soloway
We present a process model to explain bugs produced by novices early in a programming course. The model was motivated by interviews with novice programmers solving simple programming problems. Our key idea is that many programming bugs can be explained by novices inappropriately using their knowledge of step-by-step procedural specifications in natural language. We view programming bugs as patches generated in response to an impasse reached by the novice while developing a program. We call such patching strategies bug generators. Several of our bug generators describe how natural language preprogramming knowledge is used by novices to create patches. Other kinds of bug generators are also discussed. We describe a representation both for novice natural language preprogramming knowledge and novice fragmentary programming knowledge. Using these representations and the bug generators, we evaluate the model by analyzing four interviews with novice programmers.
A Goal/Plan Analysis of Buggy Pascal Programs BIBA 163-207
  James C. Spohrer; Elliot Soloway; Edgar Pope
In this paper, we present a descriptive theory of buggy novice programs and a bug categorization scheme that is based on this theory. Central to this theory is the cognitively plausible knowledge -- goals and plans -- that underlies programming. The bug categorization scheme makes explicit problem-dependent goal and plan knowledge at many different levels of detail. We provide several examples of how the scheme permits us to focus on bugs in a way that facilitates generating plausible accounts of why the bugs may have arisen. In particular, our approach has led us to one explanation of why some novice programs are buggier than others. A basic part of this explanation is the notion of merged goals and merged plans in which a single integrated plan is used to achieve multiple goals.
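The merged-goals/merged-plans idea can be illustrated with a small example. This is a Python sketch with an invented problem (the original analysis concerns novice Pascal programs): two goals, the sum and the maximum of a list, achieved first by two separate plans and then by one merged plan whose interacting updates are a common site for novice bugs.

```python
values = [3, 8, 5, 1]

# Separate plans: one loop per goal.
total = 0
for v in values:
    total += v
largest = values[0]
for v in values:
    if v > largest:
        largest = v

# Merged plan: a single integrated loop achieves both goals at once.
# Novices often merge plans this way and then mis-handle the interaction
# between them -- e.g., initializing largest to 0 (wrong for all-negative
# data) or accidentally resetting total inside the loop.
total_m, largest_m = 0, values[0]
for v in values:
    total_m += v
    if v > largest_m:
        largest_m = v

print(total, largest, total_m, largest_m)
```

The merged version is shorter, but each goal's plan fragment must be interleaved correctly with the other's, which is the source of the extra bugginess the goal/plan analysis predicts.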

HCI 1985 Volume 1 Issue 3


The Prospects for Psychological Science in Human-Computer Interaction BIBA 209-242
  Allen Newell; Stuart K. Card
This paper discusses the prospects of psychology playing a significant role in the progress of human-computer interaction. In any field, hard science (science that is mathematical or otherwise technical) has a tendency to drive out softer sciences, even if the softer sciences have important contributions to make. It is possible that, as computer science and artificial intelligence contributions to human-computer interaction mature, this could happen to psychology. It is suggested that this trend might be prevented by hardening the applicable psychological science. This approach, however, has been criticized on the grounds that the resulting body of knowledge would be too low level, too limited in scope, too late to affect computer technology, and too difficult to apply. The prospects for overcoming each of these obstacles are analyzed here.
An Application of the Birmingham Discourse Analysis System to the Study of Computer Guidance Interactions BIBA 243-282
  Michael J. Coombs; James L. Alty
University researchers need to use computers in their work, yet they are not computer professionals and cannot progress efficiently without adequate guidance. All British universities make such computer-use guidance available, the most important source of information being the advisory service of the local computer center. However, little is known of the structure and effectiveness of advisory interactions.
   The analysis of advisory conversations requires some method for representing their structure. This paper reports such a method based on a scheme devised by a research team at the University of Birmingham, UK (Sinclair & Coulthard, 1975) for the study of teaching discourse. Using this system, we are able to define the problem-solving nature of advisory discourse, along with a number of limitations inherent in the way advisors currently conduct interactions. Various coding extensions are suggested along with proposals for the application of the system to the development of improved advisory strategies.
Exploring Exploring a Word Processor BIBA 283-307
  John M. Carroll; Robert L. Mack; Clayton H. Lewis; Nancy L. Grischkowsky; Scott R. Anderson
Studies of people learning to use contemporary word-processing equipment suggest that effective learning is often "active," proceeding by self-initiated problem solving. The instructional manuals that accompany current word-processing systems often penalize and impede active learning. A set of instructional materials was constructed for a commercial word processor, specifically designed to support and encourage an active learning orientation. These "guided exploration" (GE) materials are modular, task oriented, procedurally incomplete, and address error recognition and recovery. Learners using the GE materials spent substantially less time yet still performed better on a transfer of learning posttest than learners using commercially developed self-study materials. Qualitative analysis of aspects of the learning protocols of participants suggested that active learning mechanisms may underlie this advantage.

HCI 1985 Volume 1 Issue 4


Introduction to this Special Issue on New Perspectives on Human-Computer Interaction BIB 309-310
  Thomas P. Moran


Direct Manipulation Interfaces BIBA 311-338
  Edwin L. Hutchins; James D. Hollan; Donald A. Norman
Direct manipulation has been lauded as a good form of interface design, and some interfaces that have this property have been well received by users. In this article we seek a cognitive account of both the advantages and disadvantages of direct manipulation interfaces. We identify two underlying phenomena that give rise to the feeling of directness. One deals with the information processing distance between the user's intentions and the facilities provided by the machine. Reduction of this distance makes the interface feel direct by reducing the effort required of the user to accomplish goals. The second phenomenon concerns the relation between the input and output vocabularies of the interface language. In particular, direct manipulation requires that the system provide representations of objects that behave as if they are the objects themselves. This provides the feeling of directness of manipulation.
Knowledge-Based User Interface Design BIBA 339-357
  William Mark
A key problem in user interface design is delivering the design model on which a program is based in terms of the running software that users actually have to deal with. This article presents a methodology for helping programmers to explicitly state a design model and link it to the actual functions and data of the programs. Terms in the model are defined according to their relationship to a set of prebuilt abstract categories. The model so defined forms an explicit conceptual framework that enforces the consistency of the programmers' design, and provides the basis of user understanding of the program. Because the model is linked to actual program software, the connection of user understanding to the running code -- the real user interface -- is thus defined in terms of the explicit model. The methodology is presented in terms of techniques implemented in the Consul system, a knowledge-based environment for the design of integrated office automation software.
Issues in Cognitive and Social Ergonomics: From Our House to Bauhaus BIBA 359-391
  John Seely Brown; Susan E. Newman
Intelligibility is one of the key factors affecting the acceptance and effective use of information systems. In this article, we discuss the ways in which recognition of this factor challenges current system design strategies, as well as current theoretical perspectives and research methodologies. In particular, we claim that in order to understand the problem of system intelligibility, we must focus not only on the cognitive but also on the social aspects of system use.
   After considering some of the sources of users' difficulty in understanding information systems, we propose a new global philosophy for interface design: design for the management of trouble. We discuss the design implications of four mechanisms for improving system intelligibility: (1) useful mental models of the system and its associated subsystems, (2) communicative repair in user-system interaction, (3) new training strategies, and (4) use of the larger social environment as an aid to understanding information systems.
   We then discuss the possibility of developing intelligent systems capable of providing assistance and feedback related specifically to users' actions. We claim that development of such systems requires understanding the mechanisms for achieving mutual intelligibility in interaction and propose new research approaches for investigating these mechanisms.
   In the final section, we elaborate on the relationship between information systems and the larger social environment, suggesting that the functionality and design of information systems can deeply influence the surrounding culture. We propose adopting a goal of socially proactive design and discuss the possibilities for embedding new paradigms for communication and problem solving in specialized information systems.