| Learning and Remembering Interactive Commands | | BIB | 2-7 | |
| P. Barnard; N. Hammond; A. MacLean; J. Morton | |||
| Learning and Remembering Command Names | | BIB | 8-11 | |
| John B. Black; Thomas P. Moran | |||
| Evaluating the Suggestiveness of Command Names | | BIB | 12-16 | |
| Jarrett Rosenberg | |||
| Computer Commands Labelled by Users versus Imposed Commands and the Effect of Structuring Rules on Recall | | BIB | 17-19 | |
| Dominique L. Scapin | |||
| Psychological Issues in the Use of Icons in Command Menus | | BIB | 20-23 | |
| Kathleen Hemenway | |||
| Typographic Design for Interfaces of Information Systems | | BIBA | 26-30 | |
| Aaron Marcus | |||
| Principles of information-oriented graphic design have been utilized in redesigning the interface for a large information management system. These principles are explained and examples of typical screen formats are shown to indicate the nature of improvements. | |||
| A Systems Analysis of Stress-Strain in VDT Operation | | BIB | 31-35 | |
| Steven L. Sauter; Mark S. Gottlieb; Karen C. Jones | |||
| The Design, Simulation, and Evaluation of a Menu Driven User Interface | | BIB | 36-40 | |
| Ricky E. Savage; James K. Habinek; Thomas W. Barnhart | |||
| Windowing vs Scrolling on a Visual Display Terminal | | BIB | 41-44 | |
| Kevin F. Bury; James M. Boyle; R. James Evey; Alan S. Neal | |||
| Notetaking and Comprehension for Computer-Displayed Messages: Personalized versus Fixed Formats | | BIBA | 45-50 | |
| Ralph E. Geiselman; Michael G. Samet | |||
| An experiment was performed to evaluate the usefulness of an option for users of an automated information system to construct their own preferred formats for receiving intelligence messages. It was hypothesized that such an option would enhance the acquisition and comprehension of intelligence data from each message. The results indicated that users who personalized the format arranged the message elements in an interpretable manner, and they took fewer notes during the subsequent paced presentation of messages in their individualized formats than users who received the messages in a reasonable, pre-experimentally fixed format. In addition, the users with personalized formats learned more than users who received the fixed format. These data suggest that the personalization of the message format was useful and led to improved subjective organization of the intelligence data. | |||
| Tapping Into Tacit Programming Knowledge | | BIB | 52-57 | |
| Elliot Soloway; Kate Ehrlich; Jeffrey Bonar | |||
| Human-Computer Interface Considerations in the Design of Personal Computer Software | | BIB | 58-62 | |
| Sundaresan Jayaraman; Mary Jane Lee; Milos Konopasek | |||
| Heuristics for Designing Enjoyable User Interfaces: Lessons from Computer Games | | BIBA | 63-68 | |
| Thomas W. Malone | |||
| In this paper, I will discuss two questions: (1) Why are computer games so captivating? and (2) How can the features that make computer games captivating be used to make other user interfaces interesting and enjoyable to use? After briefly summarizing several studies of what makes computer games fun, I will discuss some guidelines for designing enjoyable user interfaces. Even though I will focus primarily on what makes systems enjoyable, I will suggest how some of the same features that make systems enjoyable can also make them easier to learn and to use. | |||
| Political Determinants of System Design and Content | | BIB | 70-73 | |
| Ronald Webb | |||
| How Acceptable are Computers to Professional Persons? | | BIB | 74-77 | |
| Elizabeth Zoltan | |||
| Human Relations, Scientific Management, and Human Factors Research | | BIB | 78-79 | |
| Philip Kraft; David Strauss | |||
| Software Guideline Development: A Proposed Methodology | | BIB | 82-84 | |
| Richard E. Cordes | |||
| A Test-Bed for User Interface Designs | | BIB | 85-88 | |
| Eugene Ball; Phil Hayes | |||
| Controversies in the Design of Computer-Mediated Communication Systems: A Delphi Study | | BIB | 89-100 | |
| Murray Turoff; Starr Roxanne Hiltz; Elaine B. Kerr | |||
| DMS: A Comprehensive System for Managing Human-Computer Dialogue | | BIB | 102-105 | |
| John Roach; H. Rex Hartson; Roger W. Ehrich; Tamer Yunten; Deborah H. Johnson | |||
| Comparison of Two Information Retrieval Methods on Videotex: Tree-Structure versus Alphabetical Directory | | BIB | 106-110 | |
| Jo W. Tombaugh; Scott A. McEwen | |||
| Toward the Design and Development of Style-Independent Interactive Systems | | BIB | 111-116 | |
| Michael B. Feldman; George T. Rogers | |||
| Indentation, Documentation and Programmer Comprehension | | BIB | 118-120 | |
| A. F. Norcio | |||
| An Empirical Evaluation of Software Documentation Formats | | BIB | 121-124 | |
| Sylvia B. Sheppard; Elizabeth Kruesi; John W. Bailey | |||
| A Theoretical Analysis of the Role of Documentation in the Comprehension of Computer Programs | | BIB | 125-129 | |
| Ruven Brooks | |||
| The Impact of Development Aids on the Systems Development Process | | BIB | 130-134 | |
| David K. Goldstein | |||
| Evaluation of Text Editors | | BIBA | 136-141 | |
| Teresa L. Roberts; Thomas P. Moran | |||
| This paper presents a methodology for evaluating computer text editors from the viewpoint of their users -- from novices learning the editor to dedicated experts who have mastered the editor. The dimensions which this methodology addresses are: time to perform edit tasks by experts; errors made by experts; learning of basic edit tasks by novices; and functionality over all possible edit tasks. The methodology is objective and thorough, yet easy to use. The criterion of objectivity implies that the evaluation scheme not be biased in favor of any particular editor's conceptual model -- its way of representing text and operations on the text. In addition, data is gathered by observing people who are equally familiar with each system. Thoroughness implies that several different aspects of editor usage be considered. Ease-of-use means that the methodology is usable by editor designers, managers of word processing centers, or other non-psychologists who need this kind of information, but have limited time and equipment resources. In this paper, we explain the methodology first, then give some interesting empirical results from applying it to several editors. | |||
| An Ease of Use Evaluation of an Integrated Document Processing System | | BIB | 142-147 | |
| Michael Good | |||
| An Analysis of Line Numbering Strategies in Text Editors | | BIB | 148-151 | |
| M. L. Schneider; S. Nudelman; K. Hirsh-Pasek | |||
| Can We Expect to Improve Text Editing Performance? | | BIB | 152-156 | |
| David W. Embley; George Nagy | |||
| An Automated Office Communications Study in an Operational Setting | | BIB | 158-162 | |
| Randall R. Harris | |||
| Communication and Management Support in System Development Environments | | BIBK | 163-168 | |
| Beverly I. Kedzierski | |||
| Keywords: Artificial intelligence, Programming environments, System development support, Knowledge-based systems, Project management, Software engineering, Knowledge representation, Software psychology, Human-computer interfaces | |||
| LAMP: Language for Active Message Protocols | | BIB | 169-173 | |
| Paul S. Licker | |||
| Communication-Nets for the Specification of Operator Dialogs | | BIB | 174-179 | |
| W. K. Epple | |||
| Performance-Based Evaluation of Graphic Displays for Nuclear Power Plant Control Rooms | | BIB | 182-189 | |
| Rohn J. Petersen; William W. Banks; David I. Gertman | |||
| User Perceptual Mechanisms in the Search of Computer Command Menus | | BIB | 190-196 | |
| Stuart K. Card | |||
| The Role of Integral Displays in Decision Making | | BIB | 197-201 | |
| Timothy E. Goldsmith; Roger W. Schvaneveldt | |||
| An Experimental Evaluation of Multivariate Graphical Point Representations | | BIB | 202-209 | |
| Leland Wilkinson | |||
| A Review of Human Factors Research on Programming Languages and Specifications | | BIBA | 212-218 | |
| Bill Curtis | |||
| This paper presents a partial review of the human factors work on computer programming. It begins by giving an overview of the behavioral science approach to studying programming. Because of space limitations this review will concentrate on cognitive models of programmer problem solving and the experimental research on language characteristics and specification formats. Areas not reviewed include debugging, programming teams, individual differences, and research methods. The conclusions discuss promising directions for future theory and research. | |||
| Cognitive Correlates of Programming Tasks in Novice Programmers | | BIB | 219-222 | |
| Dennis M. Irons | |||
| Analyzer-Generated and Human-Judged Predictors of Computer Program Readability | | BIBA | 223-228 | |
| Gerrit E. DeYoung; Garry R. Kampen; James M. Topolski | |||
| The readability of a computer program has recently attained a high level of interest deriving in part from its expected close relationship with program maintainability; debugging and modification expenses are understood to account for a large proportion of software costs over the life of the software. A computable measure of readability would therefore be useful to program developers during coding and to those assuming responsibility for maintenance of software developed elsewhere. In a series of Algol 68 programs, analyzer-generated (machine-computable) and human-judged program factors were examined. The first two present authors found that program length and reasonable practice concerning identifier length were excellent predictors of judgments of readability. These predictors were chosen from a large set of analyzer-generated predictors including software science measures as defined by Halstead and several others; the analyzer-generated predictors were found to replicably estimate a high proportion (41 percent) of variance in readability in new readability judgments. While an estimate of readability based only on analyzer-generated predictors would be clearly useful, human ratings (such as quality of comments, logicality of control flow, and meaningfulness of identifier names) were examined to determine whether they could add significantly to the quality of estimates of readability. The addition of the rating of well structured control flow to the set of analyzer-generated predictors increased the proportion of replicably estimated variance in new readability judgments from 41 to 72 percent. | |||
| The Subjective Nature of Programming Complexity | | BIB | 229-234 | |
| Daniel G. McNicholl; Ken Magel | |||
| Error-Correcting Strategies and Human Interaction with Computer Systems | | BIBA | 236-238 | |
| Adam V. Reed | |||
| Human problem-solving strategies may be classified as error-preventing (no response is chosen until one can be selected with relatively high confidence) or error-correcting (a tentative solution is formulated immediately, subject to revision in the light of subsequent evidence). Recent work in the author's laboratory indicates a strong preference for error-correcting over error-preventing strategies on the part of human problem-solvers. Unfortunately, most contemporary computer languages and programming environments enforce an error-preventing rather than error-correcting strategy. Using Marvin Minsky's concept of a frame, an error-correcting programming strategy may be thought of as obtaining a program frame with all parameters pre-set to their default values, and then revising those values until a script corresponding to a successful solution is arrived at. The present paper defines a frame-based programming environment which can accommodate error-correcting programming strategies, and discusses the application of such environments to different types of programming languages. | |||
| Learning Performance and Attitudes as a Function of the Reading Grade Level of a Computer-Presented Tutorial | | BIB | 239-244 | |
| Joan M. Roemer; Alphonse Chapanis | |||
| Warming Up to Computers: A Study of Cognitive and Affective Interaction Over Time | | BIBA | 245-250 | |
| David M. Gilfoil | |||
| This experiment studies how people learn to use computers. Four computer-naive persons performed six computer tasks at each of 20 task sessions over a one-month period. Participants were allowed to choose a menu-driven or command-driven dialogue at any point during the study. Cognitive, affective, and performance variables were closely monitored. Results generally support the appropriateness of a menu-driven dialogue for novice users and the transition to a command-driven dialogue after approximately 16-20 hours of task experience. With experience, users were shown to a) choose, b) perform better with, and c) be more satisfied with a command-driven dialogue. Results are explained within the context of a "cognitive schema" theory. | |||
| Statistical Semantics: How Can a Computer Use What People Name Things to Guess What Things People Mean When They Name Things? | | BIB | 251-253 | |
| George W. Furnas; Louis M. Gomez; Thomas K. Landauer; Susan T. Dumais | |||
| Assessing the Climate for Change: A Methodology for Managing Human Factors in a Computerized Information System Implementation | | BIB | 256-261 | |
| David G. Hopelain | |||
| IBM System/38 -- An IBM Usability Experience | | BIB | 262-267 | |
| David E. Peterson; J. Howard Botterill | |||
| Some Human Factors Aspects of Computers in Air Traffic Control | | BIB | 268-274 | |
| David Whitfield | |||
| Experience with Advanced Office Automation Techniques for Project Management | | BIB | 276-277 | |
| Duncan C. Miller | |||
| Electronic Mail Usage Analysis | | BIB | 278-280 | |
| Harry M. Hersh | |||
| The Impact of Electronics on Humans and Their Work Environment | | BIB | 281-286 | |
| Panayotis Eric DeVaris | |||
| Designing the Human-Computer Interface | | BIB | 288-291 | |
| Albert N. Badre | |||
| Teaching the Design and Evaluation of User-Computer Interfaces | | BIB | 292-294 | |
| James D. Foley | |||
| Applying Cognitive Psychology to Computer Systems: A Graduate Seminar in Psychology | | BIB | 295-298 | |
| Thomas P. Moran; Stuart K. Card | |||
| Teaching Software Psychology Experimentation Through Team Projects | | BIB | 299-301 | |
| Ben Shneiderman | |||
| Further Developments Toward Using Formal Grammar as a Design Tool | | BIB | 304-308 | |
| Phyllis Reisner | |||
| Towards Specifying and Evaluating the Human Factors of User-Computer Interfaces | | BIB | 309-314 | |
| Teresa Bleser; James D. Foley | |||
| Using Formal Specifications in the Design of a Human-Computer Interface | | BIB | 315-321 | |
| Robert J. K. Jacob | |||
| The Acquisition of Text Editing Skills | | BIB | 324-325 | |
| Sherman W. Tyler; Steven Roth; Timothy Post | |||
| User Models of Text Editing Command Languages | | BIB | 326-331 | |
| Lisa J. Folley; Robert C. Williges | |||
| Reducing Manual Labor: An Experimental Analysis of Learning Aids for a Text Editor | | BIB | 332-336 | |
| Donald J. Foss; Mary Beth Rosson; Penny L. Smith | |||
| Learner Characteristics that Predict Success in Using a Text-Editor Tutorial | | BIB | 337-340 | |
| Dennis E. Egan; Cheryll Bowers; Louis M. Gomez | |||
| Patterned Prose for Automatic Specification Generation | | BIB | 342-346 | |
| Sidney L. Smith | |||
| An Exploratory, Human Engineering Study of DARCOM Human-Computer Interfaces in Management Information Systems | | BIB | 347-349 | |
| Daniel E. Hendricks | |||
| The Development of Dialogue Design Guidelines for a Computer Based Local Information System to be Used by the General Public | | BIB | 350-354 | |
| Martin Maguire | |||
| Decision Situations, Decision Processes, and Decision Functions: Towards a Theory-Based Framework for Decision-Aid Design | | BIB | 355-358 | |
| W. Zachary; R. Wherry; F. Glenn; J. Hopson | |||
| Eyes at the Interface | | BIB | 360-362 | |
| Richard A. Bolt | |||
| The Intelligent Voice-Interactive Interface | | BIBA | 363-366 | |
| Christopher Schmandt; Eric A. Hulteen | |||
| "Put That There" is a voice and gesture interactive system implemented at
the Architecture Machine Group at MIT. It allows a user to build and modify a
graphical database on a large format video display. The goal of the research
is a simple, conversational interface to sophisticated computer interaction.
Natural language and gestures are used, while speech output allows the system
to query the user on ambiguous input.
This project starts from the assumption that speech recognition hardware will never be 100% accurate, and explores other techniques to increase the usefulness (i.e., the "effective accuracy") of such a system. These include: redundant input channels, syntactic and semantic analysis, and context-sensitive interpretation. In addition, we argue that recognition errors will be more tolerable if they are evident sooner through feedback and easily corrected by voice. | |||
| Composing Letters with a Simulated Listening Typewriter | | BIBA | 367-370 | |
| John D. Gould; John Conti; Todd Hovanyecz | |||
| Speech recognition is not yet advanced enough to provide people with a reliable listening typewriter with which they could compose documents. The aim of this experiment was to determine if an imperfect listening typewriter would be useful for highly experienced dictators. Participants dictated either in isolated words or in continuous speech, and used a simulated listening typewriter which recognized a limited vocabulary as well as one which recognized an unlimited one. Results suggest that reducing the rate at which people dictate, either through limitations in vocabulary size or through speaking in isolated words, led to reductions in people's performance. For these first-time users, no version of the listening typewriter was better than traditional dictating methods. | |||
| Presenting Information in Sound | | BIB | 371-375 | |
| Sara Bly | |||
| Steps Toward a Cognitive Engineering: Design Rules Based on Analyses of Human Error | | BIBA | 378-382 | |
| Donald A. Norman | |||
| This paper uses the analysis of human error to provide a tool for the development of principles of system design, both to minimize the occurrence of error and to minimize the effects. Eventually, it should be possible to establish a systematic set of guidelines, with explicit, quantitative cost-benefit tradeoffs that can lead toward a design discipline -- a "Cognitive Engineering." This short note starts the process. | |||
| Analogy Considered Harmful | | BIB | 383-386 | |
| Frank Halasz; Thomas P. Moran | |||
| Learning to Use a Text Processing System: Evidence from "Thinking Aloud" Protocols | | BIB | 387-392 | |
| Clayton Lewis; Robert Mack | |||
| A Production-System Model of Human-Computer Interaction | | BIB | 393-399 | |
| John Durrett; Theron Stimmel | |||