
Human-Computer Interaction 5

Editors: Thomas P. Moran
Dates: 1990
Volume: 5
Publisher: Lawrence Erlbaum Associates
Standard No: ISSN 0737-0024
Papers: 13
Links: Table of Contents
  1. HCI 1990 Volume 5 Issue 1
  2. HCI 1990 Volume 5 Issue 2-3
  3. HCI 1990 Volume 5 Issue 4

HCI 1990 Volume 5 Issue 1

Articles

The Acquisition and Performance of Text-Editing Skill: A Cognitive Complexity Analysis BIBA 1-48
  Susan Bovair; David E. Kieras; Peter G. Polson
Kieras and Polson (1985) proposed an approach for making quantitative predictions on ease of learning and ease of use of a system, based on a production system version of the goals, operators, methods, and selection rules (GOMS) model of Card, Moran, and Newell (1983). This article describes the principles for constructing such models and obtaining predictions of learning and execution time. A production rule model for a simulated text editor is described in detail and is compared to experimental data on learning and performance. The model accounted well for both learning and execution time and for the details of the increase in speed with practice. The relationship between the performance model and the Keystroke-Level Model of Card et al. (1983) is discussed. The results provide strong support for the original proposal that production rule models can make quantitative predictions for both ease of learning and ease of use.
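The production-rule formulation summarized above can be illustrated with a toy sketch. This is not the Bovair, Kieras, and Polson model itself; the rule names, the goal, and the working-memory contents are invented for illustration, and the cycle count merely stands in for the model's predicted execution time:

```python
# Toy production-rule interpreter in the spirit of a GOMS-style model.
# Each rule is a (condition, action) pair over working memory (wm); on each
# recognize-act cycle, the first matching rule fires. All names are invented.

def make_rule(name, condition, action):
    return {"name": name, "condition": condition, "action": action}

rules = [
    make_rule(
        "delete-word:start",
        lambda wm: wm["goal"] == "delete-word" and "cursor-on-word" not in wm["notes"],
        lambda wm: wm["notes"].add("cursor-on-word"),   # simulate moving the cursor
    ),
    make_rule(
        "delete-word:finish",
        lambda wm: wm["goal"] == "delete-word" and "cursor-on-word" in wm["notes"],
        lambda wm: wm.update(goal="done"),              # issue the command, pop the goal
    ),
]

def run(wm, max_cycles=10):
    """Fire matching rules until the goal is satisfied; return the cycle count,
    which serves as the model's stand-in for execution time."""
    cycles = 0
    while wm["goal"] != "done" and cycles < max_cycles:
        for rule in rules:
            if rule["condition"](wm):
                rule["action"](wm)
                break
        cycles += 1
    return cycles

print(run({"goal": "delete-word", "notes": set()}))  # two cycles for this toy task
```

In models of this family, ease of learning is estimated from the number of new rules a user must acquire, and ease of use from the number of cycles a task requires.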
Specific versus General Procedures in Instructions BIBA 49-93
  Richard Catrambone
A good deal of research in cognitive psychology has demonstrated that, although learners can solve problems just like the ones they have been trained on, they often have great difficulty solving new types of problems. People also have difficulty understanding instructions or training materials that teach a procedure at a level general enough to apply to many different kinds of cases. These two findings lead to a quandary for people designing instructions for procedural tasks such as operating computer software: Instructions should be written with a good deal of specificity so that new users can understand and use them right away, but at the same time users will have great difficulty generalizing what they have learned to novel cases. Experiment 1 echoes this quandary. Computer novices in this study were able to follow specific instructions for using a word processor more easily than general instructions. However, they had great difficulty generalizing the specific instructions to novel tasks. Experiment 2 demonstrates that when specific instructions are rewritten to help users form a more general procedure, novices can easily do new tasks and still maintain their initial quality of performance. A production rule formalism is used to represent the knowledge users obtain from instructions and to explore the conditions under which these productions can be generalized. Experiment 2 suggests that this knowledge can be used to improve the generalizability of instructions.
Inferring User Expertise for Adaptive Interfaces BIBA 95-117
  Kent P. Vaubel; Charles F. Gettys
A technique based on two heuristic rules for inferring expertise is demonstrated by inferring user expertise in word-processing tasks. The heuristic rules were put into practice by examining command frequencies and requests for on-line help from the 12 participants in the study, who were engaged in personal word-processing tasks. These variables were found to be related to word-processing expertise. A scoring rule derived from these variables was 71% to 87% correct in predicting the expertise of a user. The application of this technique to adaptive interfaces that incorporate estimates of user expertise is discussed.
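A scoring rule of the general kind described here could be sketched as follows. The weights and threshold below are invented for illustration; the article derives its rule empirically from the participants' data:

```python
# Hypothetical scoring rule in the spirit of the two heuristics above:
# broader command usage suggests expertise, while frequent requests for
# on-line help suggests the opposite. All numbers here are invented.

def expertise_score(command_counts, help_requests):
    """command_counts: dict mapping command name -> number of uses."""
    distinct_commands = len(command_counts)
    total_commands = sum(command_counts.values())
    if total_commands == 0:
        return 0.0
    help_rate = help_requests / total_commands
    # More distinct commands raises the score; frequent help lowers it.
    return distinct_commands - 10.0 * help_rate

def classify(command_counts, help_requests, threshold=5.0):
    score = expertise_score(command_counts, help_requests)
    return "expert" if score >= threshold else "novice"

print(classify({"save": 3, "search": 2, "replace": 1,
                "macro": 4, "format": 2, "spell": 1}, help_requests=1))
print(classify({"save": 5, "delete": 2}, help_requests=6))
```

An adaptive interface could recompute such a score as the session proceeds and adjust its prompting or defaults accordingly.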

HCI 1990 Volume 5 Issue 2-3

Editorial

Introduction to this Special Issue on Foundations of Human-Computer Interaction BIB 119-123
  Stuart K. Card; Peter G. Polson

Articles

A Research Agenda for the Nineties in Human-Computer Interaction BIBA 125-143
  Clayton H. Lewis
Although the practical importance of user interface technology is now well established, the proper role of research in the development of the technology and the kind of research that is appropriate remain in question. This article takes stock of some of the competing positions and proposes an agenda, identifying areas of work that might command some consensus despite the widely varying viewpoints represented in the research community. The major initiatives proposed are understanding goals and preferences, broadening applied cognitive theory, supporting innovation, and credit assignment.
A Semantic Analysis of the Design Space of Input Devices BIBA 145-190
  Jock Mackinlay; Stuart K. Card; George G. Robertson
A bewildering variety of devices for communication from humans to computers now exists on the market. In this article, we propose a descriptive framework for analyzing the design space of these input devices. We begin with Buxton's (1983) idea that input devices are transducers of physical properties into one, two, or three dimensions. Following Mackinlay's semantic analysis of the design space for graphical presentations, we extend this idea to more comprehensive descriptions of physical properties, space, and transducer mappings. In our reformulation, input devices are transducers of any combination of linear and rotary, absolute and relative, position and force, in any of the six spatial degrees of freedom. Simple input devices are described in terms of semantic mappings from the transducers of physical properties into the parameters of the applications. One of these mappings, the resolution function, allows us to describe the range of possibilities from continuous devices to discrete devices, including possibilities in between. Complex input controls are described in terms of hierarchical families of generic devices and in terms of composition operators on simpler devices. The description that emerges is used to produce a new taxonomy of input devices. The taxonomy is compared with previous taxonomies of Foley, Wallace, and Chan (1984) and of Buxton (1983) by reclassifying the devices previously analyzed by these authors. The descriptive techniques are further applied to the design of complex mouse-based virtual input controls for simulated three-dimensional (3D) egocentric motion. One result is the design of a new virtual egocentric motion control.
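The resolution-function idea, a single mapping that covers continuous devices, discrete devices, and possibilities in between, can be sketched informally. The functions and ranges below are invented for illustration and are not the paper's formal notation:

```python
# Sketch of a resolution function: an input device maps a physical property
# (here, linear position along one axis) into an application parameter.
# A continuous device uses a smooth mapping; a discrete device quantizes
# the same input range. All names and ranges are invented for illustration.

def continuous_resolution(position, in_range=(0.0, 1.0), out_range=(0.0, 100.0)):
    """E.g. a slider driving a continuous parameter such as volume."""
    lo, hi = in_range
    out_lo, out_hi = out_range
    t = (position - lo) / (hi - lo)
    return out_lo + t * (out_hi - out_lo)

def discrete_resolution(position, n_steps=4, in_range=(0.0, 1.0)):
    """E.g. the same transducer driving a 4-position mode switch."""
    lo, hi = in_range
    t = (position - lo) / (hi - lo)
    return min(int(t * n_steps), n_steps - 1)

print(continuous_resolution(0.25))  # 25.0
print(discrete_resolution(0.25))    # 1
```

The same physical transducer thus yields different devices depending on the resolution function composed with it, which is what lets the framework place continuous and discrete devices in one design space.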
Theory-Based Design for Easily Learned Interfaces BIBA 191-220
  Peter G. Polson; Clayton H. Lewis
Many important computer applications require that users be able to use them effectively with little or no formal training. Current examples include bank teller machines and airport information kiosks. Today, successful systems of this kind can be developed only by iteration using costly empirical testing. This article aims to provide a theoretical foundation for the design of such systems in the form of a model of learning by exploration, called CE+. The theory incorporates assumptions from (a) the GOMS model and cognitive complexity theory (CCT) on the representation of procedural knowledge as productions, (b) the EXPL model on learning from examples, and (c) research on problem-solving processes for simple puzzlelike problems. Design guidelines for systems that can be learned by exploration, "design for successful guessing," are derived from the theory. These principles are compared to those developed by Norman (1988).
The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS BIBA 221-265
  Judith Reitman Olson; Gary M. Olson
The purpose of this article is to review where we stand with regard to modeling the kind of cognition involved in human-computer interaction. Card, Moran, and Newell's pioneering work on cognitive engineering models and explicit analyses of the knowledge people need to perform a procedure was a significant advance from the kind of modeling cognitive psychology offered at the time. Since then, coordinated bodies of research have both confirmed the basic set of parameters and advanced the number of parameters that account for the time of certain component activities. Formal modeling in grammars and production systems has provided an account for error production in some cases, as well as a basis for calculating how long a system will take to learn and how much savings there is from previous learning. Recently, we were given a new tool for modeling nonsequential component processes, adapting the "critical path analysis" from engineering to the specification of interacting processes and their consequent durations.
   Though these advances have helped, there are still significant gaps in our understanding of the whole process of interacting with computers. The cumulative nature of this empirical body and its associated modeling framework has further highlighted important issues central to research in cognitive psychology: how people move smoothly between skilled performance and problem solving, how people learn, how to design consistent user interfaces, how people produce and manage errors, how we interpret visual displays for meaning, and which processes run concurrently and which depend on the completion of prior processes.
   In the bigger picture, cognitive modeling is a method that is useful in initial design (it can narrow the design space and provide early analyses of design alternatives), in evaluation, and in training. But it does not extend to broader aspects of the context in which people use computers, partly because there are significant gaps in contemporary cognitive theory to inform the modeling and partly because it is the wrong form of model for certain kinds of more global questions in human-computer interaction. Notably, it fails to capture the user's fatigue, individual differences, or mental workload. Nor is it the type of model that will aid the designer in deciding which set of functions the software ought to contain, in assessing the user's judgment of the acceptability of the software, or in anticipating the changes that could be expected in the work life and the organization into which this work and person fit. Clearly, these kinds of considerations require modeling and tools of a different granularity and form.
Expertise in a Computer Operating System: Conceptualization and Performance BIBA 267-304
  Stephanie M. Doane; James W. Pellegrino; Roberta L. Klatzky
This article describes a three-part empirical approach to understanding the development of expertise within the UNIX operating system. We studied UNIX users with varying levels of expertise. The first part of our research attempted to ascertain the nature of their conceptualizations of the UNIX system. The second part measured users' performance in tasks requiring them to comprehend and produce UNIX commands. The third part was a longitudinal rather than cross-sectional analysis of the emergence of expertise.
   The conceptualization data suggest important differences in the models of UNIX structure formed by each group. Experts best represent the higher levels of the UNIX system; novices more fully represent the lower, more concrete levels of the system, including specific commands. UNIX users also differ markedly in performance, according to their history of use with the operating system. Only experts could successfully produce composite commands that required use of the distinctive features of UNIX (e.g., pipes and other redirection symbols), even though the intermediates and novices evidenced the component knowledge required for composite commands. This finding is somewhat surprising, inasmuch as these are fundamental design features of UNIX, and they are taught in elementary classes. These data suggest, however, that such features can be used reliably only after extensive experience.
   The longitudinal data suggest that most subjects increased in expertise. However, expertise can also decrease over time, depending on system use. Those subjects who increased in expertise acquired the ability to produce simple commands and to represent the basic modules before they acquired knowledge of complex commands and advanced utilities. The nature of expertise is considered with respect to both system design and user characteristics, including users' conceptual models of system structure.
Designing the Design Process: Exploiting Opportunistic Thoughts BIBA 305-344
  Raymonde Guindon
This study shows that top-down decomposition is problematic in the early stages of design. Instead, an opportunistic decomposition is better suited to handle the ill-structuredness of design problems. Designers are observed interleaving decisions at various levels of abstraction in the solution decomposition. The verbal protocols of three professionals designing a software system of realistic complexity are analyzed to determine the frequency and causes of opportunistic decompositions. The sudden discovery of new requirements and partial solutions triggered by data-driven rules and associations, the immediate development of solutions for newly discovered requirements, and drifting through partial solutions are shown to be important causes of opportunistic design. A top-down decomposition appears to be a special case for well-structured problems when the designer already knows the correct decomposition. Two cognitive models are briefly discussed in relation to opportunistic design. Finally, implications for training, methods, and computational environments to support the early stages of design are outlined.

HCI 1990 Volume 5 Issue 4

Articles

The Cognitive Consequences of Object-Oriented Design BIBA 345-379
  Mary Beth Rosson; Sherman R. Alpert
The most valuable tools or methodologies supporting the design of interactive systems are those that simultaneously ease the process of design and improve the usability of the resulting system. We consider the potential of the object-oriented paradigm in providing this dual function. After briefly reviewing what is known about the design process and some important characteristics of object-oriented programming and design, we speculate on the possible cognitive consequences of this paradigm for problem understanding, problem decomposition, and design result. We conclude with research issues raised by our analyses.
Effective Feedback Content for Tutoring Complex Skills BIBA 381-413
  Jean M. McKendree
Feedback during learning is critical for evaluating new skills. Computer-based tutoring systems have the potential to detect errors and to guide students by providing informative feedback, but few studies have evaluated the real impact of different types of feedback. This article presents results of such a study using the Geometry Tutor for building geometry proofs. It was found that feedback about the goal structure of geometry problems led to better performance than feedback about the reasons for an error or than simply being told that an error had occurred. This goal feedback allowed students to correct an incorrect action more often than other types of feedback did. Also, the goal feedback group continued to deal advantageously with problems when the feedback was subsequently removed. A simulation model, based on Anderson's (1983) ACT* theory and an analogical learning system, presents a preliminary account of the effects of these different feedback types. The model indicates that the advantage of goal-directed feedback reflects its immediate applicability to the problem, whereas feedback about the reasons for an error does not provide any direction toward the correct action. According to the model, goal feedback allows the student to construct a correct representation of the goal tree involved in various types of proofs more readily than feedback that is not immediately relevant to the current problem.
The Nature of Device Models: The Yoked State Space Hypothesis and Some Experiments with Text Editors BIBA 415-444
  Stephen J. Payne; Helen R. Squibb; Andrew Howes
To construct a conceptual model of a device, the user must conceptualize the device's representation of the task domain. This knowledge can be represented by three components: a device-based problem space, which specifies the ontology of the device in terms of the objects that can be manipulated and their interrelations, plus the operators that perform the manipulations; a goal space, which represents the objects in terms of which the user's goals are expressed; and a semantic mapping, which determines how goal space objects are represented in the device space.
   The yoked state space (YSS) model allows an important distinction concerning the mental representation of procedures. If a step in a procedure specifies a transformation of the user's device space, then it has an autonomous meaning for the user, independent of its role in the sequence or method. The device space provides a figurative account of the operator. However, some operators do not affect the minimal device space, and their only meaning for the user derives from their role in a method: The method affords an operational account of the operator. Figurative accounts can be constructed from operational accounts only by elaborating the device space with new concepts.
   The YSS is illustrated through a simple description of a device model for a cut-and-paste text editor. Three experiments addressed the claims of this model. The first experiment used a sorting paradigm to show that users do acquire the novel device space concept of a string of adjacent characters (including space and return). The second and third experiments asked novices to make inferences about text editor behavior on the basis of simple demonstrations. They showed that (a) the availability of the string concept is critically dependent on the details of the interface design, (b) figurative accounts of the copy operation afford more efficient methods and may be promoted by appropriate names for procedure steps, and (c) a conceptual model may transfer from one device to another. Together, the three experiments supported the YSS hypothesis.