
International Journal of Man-Machine Studies 21

Editors: B. R. Gaines; D. R. Hill
Dates: 1984
Volume: 21
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 38
Links: Table of Contents
  1. IJMMS 1984 Volume 21 Issue 1
  2. IJMMS 1984 Volume 21 Issue 2
  3. IJMMS 1984 Volume 21 Issue 3
  4. IJMMS 1984 Volume 21 Issue 4
  5. IJMMS 1984 Volume 21 Issue 5
  6. IJMMS 1984 Volume 21 Issue 6

IJMMS 1984 Volume 21 Issue 1

Guest Editorial: Cognitive Ergonomics Research at SAPU, Sheffield BIB 1-6
  T. R. G. Green
Organization and Learnability in Computer Languages BIBA 7-18
  T. R. G. Green; S. J. Payne
A number of "guiding principles" have been put forward for the design of easily-learnt languages, but little attention has been given to the overall structure or organization of the language. We propose a new principle, that of consistency between language rules, and show that this "organizing" principle is strongly related to the concepts of hyper-rules and meta-rules in van Wijngaarden's two-level grammar. We report an experiment comparing four separate namesets for a subset of a word-processing language, which demonstrates that the organization of the lexical rules is more important than the match between any one command and its name; performance with the nameset in which two conflicting organization principles were at work was much poorer than with consistent namesets whose organizing principles could readily be perceived by subjects.
Perceptual Structure Cueing in a Simple Command Language BIBA 19-29
  S. J. Payne; M. E. Sime; T. R. G. Green
Computer languages are special cases of information displays. Successful information display, presented here as a mapping between the internal structure of the information and its external representation, enables the reader to use perceptual features as cues to the internal structure. Such "perceptual parsing" is notably difficult or impossible in many command languages: instead, the structure of their inscrutable commands defies quick analysis. Taking a miniature context-oriented editor as a demonstration system, we show that an extremely simple and purely "surface" change to the syntax, namely putting operation codes in upper case to distinguish them from literals, produces a large decrease in error frequencies in each of the three experimental tasks. We conclude that designers should pay close attention to making the command structure easily perceived.
Comprehension and Recall of Miniature Programs BIBA 31-48
  D. J. Gilmore; T. R. G. Green
Differences in the comprehensibility of programming notations can arise because their syntax can make them cognitively unwieldy in a generalized way (Mayer, 1976), because all notations are translated into the same "mental language" but some are easier to translate than others (Shneiderman & Mayer, 1979), or because the mental operations demanded by certain tasks are harder in some notations than in others (Green, 1977). The first two hypotheses predict that the relative comprehensibility of two notations will be consistent across all tasks, whereas the mental operations hypothesis suggests that particular notations may be best suited to particular tasks. The present experiment used four notations and 40 non-programmers to test these hypotheses. Two of the notations were procedural and two were declarative, and one of each pair contained cues to declarative or procedural information, respectively. Different types of comprehension question were used ("sequential" and "circumstantial"); a mental operations analysis predicted that procedural languages would be "matched" with sequential questions, and declarative languages with circumstantial questions. Questions were answered first from the printed text, and then from recall. Subjects performed best on "matched pairs" of tasks and languages. Perceptually-based cues improved the performance on "unmatched pairs" better than non-perceptual cues when answering from the text, and both types of cues improved performance on "unmatched pairs" in the recall stage. These results support the mental operations explanation. They also show that the mental representation of a program preserves some features of the original notation; a comprehended program is not stored in a uniform "mental language".
Speech-Controlled Text-Editing: Effects of Input Modality and of Command Structure BIBA 49-63
  D. L. Morrison; T. R. G. Green; A. C. Shaw; S. J. Payne
Performance measures and satisfaction ratings were obtained from skilled typists and from non-typists using two different designs of editor, one requiring more commands but simpler ones ("short transactions"), the other needing fewer but more complex commands ("long transactions"). Each subject used the same editor in two versions, one with all input from the keyboard, the other with spoken commands but typed parameter strings. The results indicate that short transactions were preferred, although they were not always the most error-free. Speech input was consistently rated lower than keyboard input by typists; non-typists initially preferred speech but swung to preferring the keyboard. Although the dislike of speech may have been due to the limited hardware, subjects' comments suggested that switching modality during a command was inherently disruptive.
The Doctor's Use of a Computer in the Consulting Room: An Analysis BIBA 65-90
  Garry Brownbridge; Mike Fitter; Max Sime
With the current general advance of information technology there has been considerable interest in the potential for interactive computer systems in medical consultations. If such computer applications are to be successful then the human, as well as the technological, factors involved in the ensuing change must first receive close attention. Here we report a human factors assessment of an interactive computer aid to history-taking and diagnosis, used during consultations in a hospital out-patient clinic, closely observing three different doctors during more than 50 consultations each. Systematic analyses of video-recorded consultations before and after the computer's installation enabled an assessment of the computer's effects on the routine of the clinic and the processes of the consultation. The computer's influence on the doctors' information gathering and processing is also investigated.
   Computer consultations followed a pattern very similar to that of pre-computer consultations, and the disruption to the clinic's normal routine was minimal. Alternative strategies for incorporating computer use into the consultation were identified, and the pros and cons of each are discussed. The system caused a slightly increased workload for doctors and nurses, reflected in a minor increase in the amount of time devoted to each patient, partly because doctors gathered more explicit information about patients' symptoms in computer consultations. The order in which the computer presented topics for discussion seemed to reflect the "natural" order in which the topics would be discussed, but there was some mismatch between the information doctors entered at the terminal and the information the system was designed to accept. Such evidence of the system's user incompatibilities could help to identify criteria for the design of future consulting-room systems, lead to a better understanding of the interactions between patient, doctor and computer, and result in more appropriate doctor-computer interfaces.

IJMMS 1984 Volume 21 Issue 2

The Effect of Semantic Complexity on the Comprehension of Program Modules BIBA 91-103
  Barbee T. Mynatt
An important variable affecting the comprehension of programs is their psychological complexity. While some work has been done on surface or low-level semantic features which affect complexity (e.g. variable-naming and indentation), little has been done on the effects of higher-level semantic features. This article presents an experiment in which pairs of program modules were equated on surface complexity and on function, while the complexity of the semantic constructs involved varied. The constructs chosen for study were iteration vs recursion, the type of data structures employed (arrays vs linked lists) and the straightforwardness of the algorithm used. The modules were presented to student programmers to memorize; subjects were asked to recall each module immediately, to hand-execute it, and to recall it again 48 h later. The more semantically complex modules produced significantly worse hand-execution performance and worse delayed recall. These results are described in relation to the Shneiderman & Mayer syntactic/semantic model of programming behavior.
A Structured Approach to Designing Human-Computer Dialogues BIBA 105-126
  Izak Benbasat; Yair Wand
This article presents a conceptual model and software tool for designing and implementing flexible human-computer dialogues. The tool is referred to as a dialogue generator. The rationale for building such a tool is discussed in the context of other work in the literature, and is based on a fully operational version written in APL. A dialogue generator is important because of the cost of designing dialogues, and the rapidly increasing importance of dialogue generation due to the proliferation of interactive applications of computers. A command language model is used for the target dialogue because of its inherent simplicity, which promotes a structured and streamlined approach based on interaction events, which are described in tabular form in the dialogue data-base. "Help" and "abort" facilities are provided, as are facilities for controlling the flow of the dialogue. The implementation is discussed, particularly in terms of how to represent the various types of information pertaining to an interaction event, and how to store and organize the information. The concepts of user guided and system guided dialogues are re-examined in the context of the flow-control mechanism. Both styles of dialogue are needed, depending on the user, and both can be covered by the model described. The model is compared with others in the literature, and its capabilities are evaluated against published rules for dialogue programming.
On Conflicts BIBA 127-134
  Z. Pawlak
In this article a mathematical model of conflict situations, based on three binary relations: alliance, conflict and neutrality, is introduced. Axioms for alliance and conflict relations are given and some properties of these relations are investigated.
   Further, the strength of an object is introduced. The set of the three relations mentioned above, together with the strength of all objects, is called the situation. Some rules of transformation of situations are introduced and investigated.
   Finally, the notion of a capture is defined and the rules of sharing of the capture among objects in a given situation are formulated. Some theorems concerning capture sharing are given.
   The approach presented can be used as a starting point for an easy computer simulation of conflict situations.
On Frequency-Based Menu-Splitting Algorithms BIBA 135-148
  Ian H. Witten; John G. Cleary; Saul Greenberg
If a menu-driven display is to be device-independent, the storage of information must be separated from its presentation by creating menus dynamically. As a first step, this article evaluates menu-construction algorithms for ordered directories whose access profile is specified. The algorithms are evaluated by the average number of selections required to retrieve items. While it is by no means suggested that the system designer should ignore other relevant information (natural groupings of menu items, context in terms of prior selections, and so on), the average selection count provides an unambiguous quantitative criterion by which to evaluate the performance of menu-construction algorithms.
   Even in this tightly-circumscribed situation, optimal menu construction is surprisingly difficult. If the directory entries are accessed uniformly, theoretical analysis leads to a selection algorithm different from the obvious one of splitting ranges into approximately equal parts at each stage. Analysis is intractable for other distributions, although the performance of menu-splitting algorithms can be bounded. The optimal menu tree can be found by searching, but this is computationally infeasible for any but the smallest problems.
   Several practical algorithms, which differ in their treatment of rounding in the menu-splitting process and lead in general to quite different menu trees, have been investigated by computer simulation with a Zipf distribution access profile. Surprisingly, their performance is remarkably similar. However, our limited experience with optimal menu trees suggests that these algorithms leave some room for improvement.
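The kind of frequency-based splitting algorithm evaluated above can be sketched as follows. This is a hypothetical Python illustration of one plausible member of the family (a greedy equal-probability-mass split over an ordered access profile), not the authors' exact procedure; the function names, the `width` parameter, and the greedy closing rule are all assumptions. The average-selection-count criterion from the abstract is shown alongside it.

```python
def split_menu(freqs, width):
    """Partition an ordered access profile into a menu tree.

    freqs: access frequency of each directory entry, in directory order.
    width: maximum number of entries shown on one menu page.
    Returns a nested list: leaves are entry indices, inner lists are submenus.
    Greedy heuristic: close a submenu once it holds roughly an equal share
    of the remaining probability mass (one of several plausible variants).
    """
    items = list(range(len(freqs)))

    def build(lo, hi):  # menu tree for entries lo..hi-1
        n = hi - lo
        if n <= width:
            return items[lo:hi]
        total = sum(freqs[lo:hi])
        target = total / width  # probability mass aimed at per submenu
        menu, start, acc = [], lo, 0.0
        for i in range(lo, hi):
            acc += freqs[i]
            # close a submenu once it has its share of the mass, while
            # leaving at least one entry for each remaining menu slot
            if acc >= target and (hi - i - 1) >= (width - len(menu) - 1):
                menu.append(build(start, i + 1))
                start, acc = i + 1, 0.0
                if len(menu) == width - 1:
                    break
        menu.append(build(start, hi))
        return menu

    return build(0, len(items))


def avg_selections(tree, freqs):
    """Frequency-weighted average number of selections to retrieve an item
    -- the evaluation criterion used in the paper."""
    def weighted_depth(node, depth):
        if isinstance(node, list):
            return sum(weighted_depth(child, depth + 1) for child in node)
        return freqs[node] * depth
    return weighted_depth(tree, 0) / sum(freqs)
```

With a uniform profile of four entries and two-entry pages, the sketch produces two balanced submenus and an average of exactly two selections per retrieval; skewed (e.g. Zipf-like) profiles pull frequent entries toward shallower menus.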
The Relationship of Problem-Solving Ability and Course Performance among Novice Programmers BIBA 149-160
  Ronald H. Nowaczyk
This study attempted to identify those problem-solving skills that predict success for a college student enrolled in a computer science course. During the first week of the course, students enrolled in Introductory Programming, Cobol, or Advanced Computer Science courses completed a form that requested information on previous academic performance, computer programming experience, attitude toward computer science, and personal locus of control. Students also worked seven problems involving either logical operations, algebraic solutions, transformations, or identification of mathematical relationships. Performance on the test form was significantly related to performance in the courses. The results support the view that individual differences in semantic knowledge of novice programmers are related to performance in the Introductory Programming and Cobol courses. The findings are discussed in terms of refinement of the test form and the need for further investigation of the way problems are represented and solved by successful and unsuccessful novice programmers.
Conditional Statements and Program Coding: An Experimental Evaluation BIBA 161-190
  Iris Vessey; Ron Weber
Prior research supports the superiority of the nested conditional over the branch-to-label conditional. However, when examining programmer performance using these two forms of the conditional, the prior research has confounded several programming tasks. If these tasks are disentangled and programmers are trained to perform the tasks using language-independent paradigms, the relative performance of the nested conditional versus the branch-to-label conditional is no longer clear-cut.

IJMMS 1984 Volume 21 Issue 3

An Economical Approach to Modeling Speech Recognition Accuracy BIBA 191-202
  Thomas M. Spine; Beverly H. Williges; Joseph F. Maynard
Accuracy of speech recognizer decisions is an important criterion for maintaining both system effectiveness and user satisfaction. A central-composite design methodology is recommended as an economical means to develop empirical prediction equations for speech recognizer performance incorporating a number of influential factors. Factors manipulated in the central-composite design included number of training passes, reject threshold, difference score, and size of the active vocabulary. The factorial combination of two noncontinuous variables, sex of the speaker and inter-word confusability, was also investigated by replicating the central-composite design to create four sets of data. Standard least-squares multiple regression analysis was used to develop the four sets of prediction equations, each of which accounted for at least 50% of the variance in recognizer performance. A cross-validation study revealed that shrinkage was not excessive. Subsequently, these empirical models were incorporated into an interactive design tool for a dialogue author where the percentage of correct recognition is automatically optimized when the dialogue author enters the size of the vocabulary to be used or both the vocabulary size and desired number of training passes. The design tool can also be used to make predictions anywhere within the response surface. Use of these efficient data collection procedures along with the interactive design tool should greatly assist the dialogue author in predicting the impact of various language, task, environmental, algorithmic, human, and performance evaluation factors on speech recognition accuracy.
A Display Editor with Random Access and Continuous Control BIBA 203-212
  John M. Hammer
An analysis of human information-processing during editor positioning led to a text editor with two significant features: continuous control and random access to text. Continuous control is a feature that allows the user to control the editor while it executes a positioning command. It will be shown that such a style of interaction eliminates difficult design decisions and leads to new methods of positioning an editor which are also less sensitive to human error. Random access to the text file means that the editor can be positioned to any point in the file in a constant time. The advantage of random access is that it is noticeably faster than the sequential access used by most editors. The implementation of continuous control and random access is discussed.
An Analysis of Formal Logics as Inference Mechanisms in Expert Systems BIBA 213-227
  E. H. Mamdani; Janet Efstathiou
Logic plays an important role within expert systems of enabling inference and representing meaning. We analyse and compare several logics in terms of their topic-neutral items, emphasizing the importance of semantic, as well as syntactic, validity. Fuzzy logic and PRUF are assessed for mechanizability. It is concluded that the existing logics are limited in their applicability and that a much closer analysis of the semantics of logic is required, with respect to computational feasibility and power of meaning representation.
Visual Momentum: A Concept to Improve the Cognitive Coupling of Person and Computer BIBA 229-244
  David D. Woods
Computer display system users must integrate data across successive displays. This problem of across-display processing is analogous to the question of how the visual system combines data across successive glances (fixations). Research from cognitive psychology on the latter question is used in order to formulate guidelines for the display designer. The result is a new principle of person-computer interaction, visual momentum, which captures knowledge about the mechanisms that support the identification of "relevant" data in human perception so that display system design can support an effective distribution of user attention. The negative consequences of low visual momentum on user performance are described, and display design techniques are presented to improve user across-display information extraction.
Users and Experts in the Document Retrieval System Model BIBA 245-252
  Czeslaw Danilowicz
Users, represented by their profiles, are introduced into the model of a document retrieval system; experts form a separate group among them. The possibilities of drawing on experts for retrieving documents are analysed, and procedures for selecting a competent expert, and ways of using him to order the documents in the system's response to a user's query, are described.
Measuring the Quality of Linguistic Forecasts BIBA 253-257
  Ronald R. Yager
We suggest a method for representing linguistic forecasts via fuzzy sets. We then use this representation to obtain a measure of quality of forecasts which takes into account both the validity and the specificity of the forecast.
An Experimental Expert System for Genetics BIBA 259-268
  Toshinori Munakata; Barry Kornreich
An experimental knowledge-based expert system for genetics, called GENETICS-I, has been developed. The system works on a simple genetic model in which only one phenotype character trait is considered. The phenotype is determined by a gene-pair, each gene having a value of 0 or 1 (two alleles). A global data structure for the entire family is represented by a complete binary tree with the siblings for each tree node kept as a linked list using pointers. A local data structure is defined for each person in the family tree representing the phenotypes and genotypes for the individual and the parents. A knowledge base is established as a collection of production rules, such as "one of the two genes of each individual comes from the mother, the other from the father". These rules are repeatedly applied on the database defined above to deduce new information. Although GENETICS-I is a simple model, it includes basic concepts for possible structures of the database, knowledge base, and inference mechanisms for more general genetic systems. Several possible extensions of the model are discussed.
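The production rule quoted in the abstract ("one of the two genes of each individual comes from the mother, the other from the father") can be made concrete with a small sketch. This is a hypothetical Python illustration of that single rule applied by forward chaining over candidate genotype sets, assuming the paper's one-trait, two-allele model; GENETICS-I's actual database layout, rule set, and bidirectional inference are not reproduced.

```python
from itertools import product

# Candidate genotypes: unordered pairs over the two alleles 0 and 1.
ALL_GENOTYPES = {(0, 0), (0, 1), (1, 1)}

def mendel_rule(mother, father):
    """Rule: one gene comes from the mother, the other from the father.
    mother/father: sets of genotypes still possible for each parent.
    Returns the set of genotypes thereby possible for a child."""
    child = set()
    for m, f in product(mother, father):      # one candidate genotype each
        for gm, gf in product(m, f):          # one gene drawn from each
            child.add(tuple(sorted((gm, gf))))
    return child

def forward_chain(facts):
    """Repeatedly apply the rule over the family database until no new
    information is deduced (a minimal stand-in for the inference loop).
    facts: person -> (mother, father, candidate_genotypes); unknown
    parents are None."""
    changed = True
    while changed:
        changed = False
        for person, (mo, fa, cand) in facts.items():
            if mo is None or fa is None:
                continue
            narrowed = cand & mendel_rule(facts[mo][2], facts[fa][2])
            if narrowed != cand:
                facts[person] = (mo, fa, narrowed)
                changed = True
    return facts
```

For instance, a child of a (0, 0) mother and a (1, 1) father is narrowed from all three candidate genotypes down to the single heterozygous pair (0, 1).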
Reading Continuous Text from a One-Line Visual Display BIBA 269-277
  Andrew F. Monk
Continuous text may be presented via a one-line visual display by dividing it into "frames", each of which is displayed for some specified time. Two different approaches to determine the contents of these frames can be distinguished: character-stepped display and word-stepped display. In the former the start of each frame is "stepped" some number of characters through the text for each frame presented. Viewed in this way the text appears to be moving jerkily behind a slot. In a word-stepped display the contents of each frame depend on word boundaries (e.g. having a separate word in each frame).
   Experiments are described which compare different ways of displaying text. Readers can cope with character- and word-stepped displays at high rates of presentation. The parameter identified as having the most influence on performance was the expected proportion of words occurring whole on some frame.
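The word-stepped scheme described above can be sketched in a few lines. This is a hypothetical Python illustration assuming the simplest variant (pack as many whole words as fit on the one-line display, falling back to hyphen-splitting only for over-long words); frame timing and the character-stepped variant are omitted, and the function name and parameters are made up for the example.

```python
def word_stepped_frames(text, width):
    """Split text into frames for a one-line display of `width` characters,
    breaking only at word boundaries so words appear whole on some frame
    where possible; a word longer than the display is hyphen-split."""
    frames, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if len(candidate) <= width:
            current = candidate           # word still fits on this frame
        else:
            if current:
                frames.append(current)    # emit the filled frame
            while len(word) > width:      # fallback for over-long words
                frames.append(word[:width - 1] + "-")
                word = word[width - 1:]
            current = word
    if current:
        frames.append(current)
    return frames
```

On a nine-character display, "the quick brown fox" yields the two frames "the quick" and "brown fox", each word occurring whole on some frame.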

IJMMS 1984 Volume 21 Issue 4

Guest Editorial: Intelligent User Interfaces BIB 279-280
  Edwina L. Rissland
The Role of Context and Adaptation in User Interfaces BIBA 283-292
  W. Bruce Croft
A user interface can be viewed as a means of mapping user tasks to system tools. Context and adaptation are important features of a user/system interaction that can be used to simplify the task to tool mapping and thereby improve the interface. A system based on these features would be able to adapt its actions to be appropriate for a given context. Two systems are used as examples of the use of context and adaptation. The POISE system provides assistance to the users of an office system based on models of office tasks. The adaptive document-retrieval system chooses the most effective search strategy for retrieving relevant documents in a given context. The techniques used to implement context and adaptation in these systems are considerably different, but in both systems the user interface is made more effective.
Experience with the ZOG Human-Computer Interface System BIBA 293-310
  Donald L. McCracken; Robert M. Akscyn
This article is primarily a reflection on more than 8 years of research with the ZOG human-computer interface system. During that time we have experienced extensive use of ZOG. We begin the article with a short description of the current ZOG implementation; then we proceed to a higher plane to describe a general ZOG philosophy that has evolved from our experience. Following the philosophy, we briefly describe the applications we have explored with ZOG, including a major application project for the Navy. Then we provide a critique of the current ZOG implementation by elucidating its strong and weak points. We end the paper with a brief glimpse at our plans for ZOG in the future.
Seeing What Your Programs are Doing BIBA 311-331
  Henry Lieberman
The advent of personal computers with high resolution displays and pointing devices will permit a drastic improvement in the quality of user interfaces for programming environments. Programming environments already are beginning to make use of interactive graphics as a tool for helping us visualize the operation of programs we write. Watching a program work step-by-step, where each step is reflected in visible changes to the display screen, greatly facilitates understanding of the internal workings of a program. But the power of interactive graphics for program visualization has yet to be exploited in a programming environment as a tool for creating programs, as opposed to merely observing already-written programs.
   Tinker is a programming environment for Lisp in which a program is constructed by demonstrating its steps on representative examples, and the system displays graphically the result of each step as it is performed. The programmer can "see what the program is doing" while the program is being constructed. The processes of writing a program and debugging it on test cases are combined into a single interactive activity, rather than separated as they are in conventional programming environments. To help the reader visualize the operation of Tinker itself, an example is presented of how Tinker may be used to construct an alpha-beta tree search program.
What Makes RABBIT Run? BIBA 333-352
  Michael David Williams
Once one completes the design and construction of a novel interface that claims to embody a new interface paradigm, one is confronted with two problems: (1) Is the interface better than what exists? and (2) If it is better, what makes it better? The answer to the second question is important if one has any hope of extending the paradigm. Also, paradoxically, it is important to have the answer to the second question in order to address the first, because new paradigms often derive their value from introducing new functionality into the system and thus highlight criteria not previously recognized. This is exactly the case with a novel database retrieval interface we have constructed called RABBIT. This article describes some recent work on understanding where the apparent power of the interface comes from.
The Electronic Classroom: Workstations for Teaching BIBA 353-363
  Andries Van Dam
This article describes an "electronic classroom" consisting of a network of 55 powerful Apollo DN300 workstations running the BALSA algorithm simulation and animation environment. Several examples (parameter passing and linked-list manipulation) from an introductory programming course are briefly discussed, and some conclusions about the user interface are drawn, based on participant observation and student responses to detailed questionnaires.
Stages and Levels in Human-Machine Interaction BIBA 365-375
  Donald A. Norman
The interaction between a person and a computer system involves four different stages of activities -- intention, selection, execution, and evaluation -- each of which may occur at different levels of specification. Analysis of these stages and levels provides a useful way of looking at the issues of human-computer interaction.
Ingredients of Intelligent User Interfaces BIBA 377-388
  Edwina L. Rissland
In this paper, we discuss certain general features of intelligent user interfaces, such as the sources of knowledge needed by an interface to be considered intelligent, and characteristics desirable in an interface. We illustrate these ideas by examining two examples of interfacing between a user and a system: on-line HELP and tutoring. We conclude by briefly surveying some of the challenges to designers of interfaces.

IJMMS 1984 Volume 21 Issue 5

General Multiple-Objective Decision Functions and Linguistically Quantified Statements BIBA 389-400
  Ronald R. Yager
The concept of linguistically quantified propositions is used to develop a whole family of forms for the representation of multiple-objective decision functions.
Customers' Requirements for Natural Language Systems: Results of an Inquiry BIBA 401-414
  Katharina Morik
Application-oriented work on natural language systems (NLSs) is to be justified not only by linguistic or software-ergonomic considerations, but also by the needs and requirements of users and customers. While several user studies, systematic evaluation studies of particular NLSs, and reports of experience with applied NLSs exist, no study has yet been published that determines the demand for NLSs and the requirements of potential customers.
   This article presents in detail the results from a market inquiry in the field of German NLSs. The statistical data from the quantitative inquiry and examples stemming from the qualitative inquiry are analysed.
The Effect of Indentation on Program Comprehension BIBA 415-428
  Thomas E. Kesler; Randy B. Uram; Ferial Magareh-Abed; Ann Fritzsche; Carl Amport; H. E. Dunsmore
An experiment was conducted to study how different methods of indentation affect the ability of programmers to understand programs. The subjects were 72 students from an intermediate programming course. Each subject received one of three implementations of a short Pascal program. Each implementation used a different method of source code indentation: no indentation, "excessive indentation", and Purdue University Department of Computer Science standard (moderate indentation). The subjects answered a 10-question test about the program. The scores of those subjects who received the program written in the departmental standard were better than the scores of the other two groups.
An Approach to CAD System Performance Evaluation BIBA 429-444
  Y. N. Strelnikov; G. Pulkkis; G. D. Dmitrevich
This article discusses an appropriate approach to the evaluation of the performance and operational workload characteristics of computer-assisted design (CAD) systems, and of CAD components that are planned for, or could be modified to embrace, some particular application.
   A formalization of the CAD process is presented in order to represent it in a form suitable for discrete simulation modelling, for the choice of performance measures and for the formulation of main conceptions of a CAD system model.
   Two main aspects of CAD process modelling are examined: the information of product specifications, and the resource allocation between particular design tasks in a CAD system. Taken together, these aspects define the dynamic states of a CAD system during the CAD process, and lead to the formation of a discrete-event simulation model.
   A simulation modelling approach based on structural-algorithmic models using Pro-Nets is presented. The model is described in abstract terms.
QWERTY and Keyboard Reform: The Soft Keyboard Option BIBA 445-450
  Geoff Cumming
The familiar QWERTY keyboard has become an international standard, and is the universal layout for computer and typewriter keyboards despite evidence that other layouts are easier to learn and can be used more rapidly. The hard-won skill of the many QWERTY users has been an effective bar to keyboard reform. However, the "soft keyboard", in which the assignment of characters to keys can easily be changed, now offers the possibility that an improved keyboard could be chosen and implemented along with QWERTY on dual-layout keyboards. Keyboards now serve principally for computer input, rather than only as part of the typewriter: this expansion of role modifies and complicates the choice of a new standard keyboard layout. This article discusses keyboard reform and concludes that the soft keyboard does, indeed, offer a promising opportunity for progress.
An Icon-Driven End-User Interface to UNIX BIBA 451-461
  D. T. Gittins; R. L. Winder; H. E. Bez
Some aspects of the end user interface are considered and an icon-driven system recently developed by us, called UNICON (after "UNIX-icon"), is described. UNICON uses colour graphic icons, instead of commands, as the means by which a user and operating system interact. It is designed, as far as is possible, to be independent of graphics devices and to be portable between different operating systems. The current version, which runs under Berkeley UNIX 4.1 bsd, is an implementation of a minimum kernel of filestore management commands relevant to all applications. Various problems and potential applications highlighted during the project are also discussed.

IJMMS 1984 Volume 21 Issue 6

The METANET: A Means for the Specification of Semantic Networks as Abstract Data Types BIBA 463-492
  Werner Dilger; Wolfgang Womann
The axiomatic definitions of the data structures METANET and PARTITION by means of production schemata are given. From these definitions the specification of arbitrary semantic networks, with or without partitions, as abstract data types can be derived by instantiation of node and edge types for the corresponding variables in the schemata. The equation schemata can be interpreted as rewrite rules, and the rewrite rule systems METANET and PARTITION are proved to be Noetherian and confluent. The normal forms of these systems consist only of constructor operations. METANET and PARTITION can be used to compare different sorts of semantic networks and to develop new ones.
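The properties claimed in the abstract (a Noetherian, i.e. terminating, and confluent rewrite system whose normal forms contain only constructors) can be illustrated with a toy term-rewriting sketch. The rules below are a hypothetical list-append system, not the actual METANET schemata:

```python
# Toy term-rewriting system illustrating (not reproducing) the properties
# claimed for METANET: the rules terminate (Noetherian), are confluent, and
# normal forms contain only constructor symbols. Terms are nested tuples:
# ("nil",), ("cons", head, tail), and the defined operation ("append", xs, ys).

CONSTRUCTORS = {"nil", "cons"}

def rewrite_step(t):
    """Apply one rewrite rule anywhere in the term; return None if t is in normal form."""
    if not isinstance(t, tuple):
        return None
    op = t[0]
    # Rule 1: append(nil, ys) -> ys
    if op == "append" and t[1] == ("nil",):
        return t[2]
    # Rule 2: append(cons(x, xs), ys) -> cons(x, append(xs, ys))
    if op == "append" and isinstance(t[1], tuple) and t[1][0] == "cons":
        _, x, xs = t[1]
        return ("cons", x, ("append", xs, t[2]))
    # Otherwise try to rewrite a subterm
    for i, sub in enumerate(t[1:], start=1):
        new = rewrite_step(sub)
        if new is not None:
            return t[:i] + (new,) + t[i + 1:]
    return None

def normal_form(t):
    """Rewrite until no rule applies; this loop ends because the system is Noetherian."""
    while (step := rewrite_step(t)) is not None:
        t = step
    return t

def only_constructors(t):
    """True if every operation symbol in the term is a constructor."""
    return t[0] in CONSTRUCTORS and all(
        only_constructors(s) for s in t[1:] if isinstance(s, tuple))

term = ("append", ("cons", 1, ("cons", 2, ("nil",))), ("cons", 3, ("nil",)))
nf = normal_form(term)
# nf is ("cons", 1, ("cons", 2, ("cons", 3, ("nil",)))) -- constructors only
```

Because the two rules never overlap and every application strictly shrinks the first argument of `append`, the system is both confluent and terminating, so `normal_form` is well defined regardless of which redex is reduced first.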
An Empirical Investigation of Voice as an Input Modality for Computer Programming BIBA 493-520
  John Leggett; Glen Williams
Recently, automatic speech recognition systems have shown the potential of becoming a useful means of data entry and control. The most successful of these speech recognition systems accept an isolated utterance as input and use a task-oriented syntactically-constrained vocabulary for increased recognition accuracy. At the same time, language-directed editors are beginning to be introduced into the programmer's workplace. A language-directed editor is an editor that has knowledge of the underlying syntax (and basic semantics) of a language. Program entry, then, is syntax-driven and program editing may proceed on a syntactic (semantic) basis.
   This article discusses the design, implementation, and results of a controlled experiment to evaluate voice versus keyboard (the standard input mode) in a language-directed editing environment. Twenty-four subjects entered and edited program segments under control of a language-directed editor via the two input modes. Measures of speed, accuracy, and efficiency were used to compare these two modes of input.
   In general, the results showed that the subjects were able to complete more of the input and edit tasks by keyboard (70%) than by voice (50-55%), but that keyboard input had a higher error rate than did voice input. Also, the use of voice was just as efficient as keyboard for the inputting of editing commands. These results must be viewed with the understanding that the subjects were novices with respect to voice input, but were very experienced with keyboard input. In this light, it can be seen that voice holds much promise as a mode of input for computer programming.
Some Cognitive Aspects of Interface Design in a Two-Variable Optimization Task BIBA 521-539
  R. S. Bridger; J. Long
Three experiments on human performance strategy in a two-variable optimization task are presented. Subjects were required to locate a minimum value on a third dimension by repeatedly specifying values on two other dimensions. Two preliminary experiments investigated subjects' informational requirements in performing the task and attempted an initial characterization of strategy. Experiment 1 assessed the effect of a total record of system responses in the form of a list. This was found to aid performance. Prior knowledge of the minimum value, but not its location, was also investigated. This was not found to aid performance. Experiment 2 compared the list with a partial record of system responses known as the current minimum -- the best state attained up to any particular point in the task. No significant differences between these two performance aids were found. Experiment 3 compared the total record in list form with a total record in the form of a matrix. Superior performance using the matrix was attributed to the two-variable strategy which accompanied its use, in contrast to the one-variable strategy that occurred with the list.
   Although outstanding hypotheses exist and alternative interpretations are possible, some agreement with previous research was found. Suggestions for the design of optimal user interfaces are given, emphasizing the need to identify critical information for task performance and the relationship between this and the subjects' or operators' strategy.
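The contrast between the one-variable and two-variable strategies can be sketched as two search procedures over a response surface. The surface, step sizes, and starting point below are illustrative assumptions, not the task used in the experiments:

```python
# Hypothetical sketch contrasting a one-variable strategy (probe one
# dimension while holding the other fixed, as subjects tended to do with
# the list display) with a two-variable strategy (adjust both dimensions
# jointly, as observed with the matrix display). The response surface f
# and the unit step size are illustrative only.

def f(x, y):
    # Assumed system response the subject is trying to minimize
    return (x - 3) ** 2 + (y - 7) ** 2

def one_variable_search(x, y, steps=20):
    """Coordinate-wise search: try unit moves in one dimension at a time."""
    trials = 0
    for _ in range(steps):
        for dim in ("x", "y"):
            for delta in (-1, 1):
                nx, ny = (x + delta, y) if dim == "x" else (x, y + delta)
                trials += 1
                if f(nx, ny) < f(x, y):
                    x, y = nx, ny
    return (x, y), trials

def two_variable_search(x, y, steps=20):
    """Joint search: probe all eight neighbours, allowing diagonal moves."""
    trials = 0
    for _ in range(steps):
        best = (x, y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) == (0, 0):
                    continue
                trials += 1
                if f(x + dx, y + dy) < f(*best):
                    best = (x + dx, y + dy)
        x, y = best
    return (x, y), trials
```

On this separable surface both strategies find the minimum at (3, 7); the point of the sketch is that the joint strategy can move along both dimensions in a single trial, which is the kind of behaviour the matrix display appears to have encouraged.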
Voice-Input Aids for the Physically Disabled BIBA 541-553
  Robert I. Damper
Speech technology, an amalgam of speech sciences and microelectronics technology, offers new potential to aid the disabled. Low-cost very-large-scale integration (VLSI) speech synthesizers have been on the market for some time and, consequently, the use of synthetic speech output devices in aids has received a great deal of attention. Only recently, however, have VLSI chip-sets for speech recognition started to appear. This article reviews the possibilities for exploiting low-cost speech recognizers in aids for the severely physically disabled.
   Five areas of application are highlighted. Four of these are wheelchair control, control of the domestic environment, text composition and editing, and control of manipulators. The ubiquitous nature of microprocessors and microcomputers, however, makes it difficult nowadays to distinguish sensibly between a general-purpose computer and a computer-based aid. A further important application, therefore, is likely to be provision of a voice interface between a disabled user and a "personal" (or remote) computer, irrespective of the function or functions implemented on that machine. As a consequence, some or all of the above applications could be combined within the same device.
   It seems sensible, however, at this stage to concentrate on a single application. It is argued that control of the domestic environment by voice is likely to be the most fruitful area for the initial application of automatic speech recognition technology to aid the disabled. The development of a speech interface to an environmental control unit (ECU) is described, and the principles on which the system design (hardware, software and man-machine dialogue) is based are explained and justified. Evaluation of the system has demonstrated the feasibility of employing speech recognition in environmental control. This is a continuing development, and many possibilities for future improvement are apparent.