
International Journal of Man-Machine Studies 28

Editors: B. R. Gaines; D. R. Hill
Dates: 1988
Volume: 28
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 31
Links: Table of Contents
  1. IJMMS 1988 Volume 28 Issue 1
  2. IJMMS 1988 Volume 28 Issue 2/3
  3. IJMMS 1988 Volume 28 Issue 4
  4. IJMMS 1988 Volume 28 Issue 5
  5. IJMMS 1988 Volume 28 Issue 6

IJMMS 1988 Volume 28 Issue 1

Reading from Screen versus Paper: There is No Difference BIBA 1-9
  David J. Oborne; Doreen Holton
This paper considers the effect of presentation medium on reading speed and comprehension. By directly comparing performance using screen and paper presentations, it examines the argument that it takes longer to read from a screen-based display than from paper, and that comprehension will be lower. The hypothesis is also tested that it takes longer to read light characters on a dark background compared with dark characters on a light background, and that comprehension will be lower with light-character displays. Altogether four conditions were used, with two passages read in each condition: screen with dark characters, screen with light characters, paper with dark characters, and paper with light characters. Subjects also ranked the four conditions for preference. No significant difference was found in either reading speed or comprehension between screen and paper, or between dark and light character displays. Some preference differences were found, however. Reasons for the lack of reading and comprehension differences are discussed, and it is argued that this reflects the close attention to experimental detail paid in the present experiment, which has often been missing in past studies.
Prompting, Feedback and Error Correction in the Design of a Scenario Machine BIBA 11-27
  John M. Carroll; Dana S. Kay
A scenario machine limits the user to a single action path through system functions and procedures. Four scenario machines were designed to embody different approaches to prompting, feedback, and automatic error correction for a "learning-by-doing" training simulator for a commercial, menu-based word processor. Compared with users trained directly on the commercial system, scenario machine users demonstrated an overall advantage in the "getting started" stage of learning. Initial training on a "prompting + automatic correction" system was particularly efficient, encouraging a DWIM (or "do what I mean") approach to training system design. Curiously, training on a "prompting + feedback" system led to relatively impaired performance on a set of transfer of learning tasks. It was suggested that too much training information support may obscure the task coherence of the action scenario itself relative to a design that provides less explicit direction.
Robust Dictionary Lookup Using Associative Networks BIBA 29-43
  Orjan Ekeberg
Associative networks are parallel pattern-processing structures, capable of handling disturbed patterns in a robust manner. In this paper an implementation of a fast and robust dictionary-lookup algorithm for misspelt words using this technique is described. To code the words to be processed in a way that is suitable for processing in a network, the "rubber trigram" code is introduced. The program is capable of retrieving words from a dictionary of at least 25000 words, even if the key contains severe misspellings. The presented algorithm is suitable for interactive dictionary lookup, e.g. in a word-processing system, being more flexible than the use of conventional dictionaries and lookup methods.
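The paper's "rubber trigram" code is specific to its associative-network implementation; as a rough illustration of the general idea, a plain trigram-overlap lookup (a sketch under simplified assumptions, not the authors' algorithm) can already retrieve a word despite severe misspelling:

```python
def trigrams(word):
    """Pad the word with spaces and collect its letter trigrams."""
    padded = "  " + word.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def lookup(key, dictionary):
    """Return the dictionary word whose trigram set best matches the
    key's, scored by Jaccard similarity of the two trigram sets."""
    key_tri = trigrams(key)
    return max(dictionary,
               key=lambda w: len(key_tri & trigrams(w)) / len(key_tri | trigrams(w)))

words = ["associative", "network", "dictionary", "robust"]
print(lookup("dikshunary", words))  # prints "dictionary"
```

The associative-network formulation distributes the same kind of trigram matching across a parallel pattern-processing structure rather than scoring each word in turn.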
Conditional Statements, Looping Constructs, and Program Comprehension: An Experimental Study BIBA 45-66
  Errol R. Iselin
The major objective of this research was to study the effects of positive/negative and true/false conditions, and a loop taxonomy, on the program-readability performance of programmers and programming students. Task learning was also included as an independent variable. It was proposed from prior theory that: (1) positive conditions would be easier to process than negative; (2) the positive/negative and true/false variables would interact such that the order of performance from high to low would be positive/true, positive/false, negative/false, negative/true; (3) the read/process loop would be easier to process than the process/read loop; (4) learning would improve performance; and (5) programmers would outperform students. In a laboratory experiment conducted to test these propositions, support was found for propositions (1), (4) and (5). Proposition (2) was largely supported in the programmer data. In the student data the positive/negative and true/false variables did not interact, with true conditions being easier than false. Proposition (3) was supported in the student data. This finding did not generalize to programmers.
Verifying Identity via Keystroke Characteristics BIBA 67-76
  John Leggett; Glen Williams
This paper reports on an experiment conducted to assess the viability of using keystroke digraph latencies (the time between two successive keystrokes) as an identity verifier. Basic data are presented and discussed that characterize the class of keystroke digraph latencies found to have good potential as both static and dynamic identity verifiers. Keystroke digraph latencies would be used in conjunction with other security measures to provide a total security package.
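As a minimal sketch of the underlying measurement (not the authors' experimental procedure; the timings below are invented for illustration), digraph latencies can be extracted from timestamped keystrokes like this:

```python
def digraph_latencies(keystrokes):
    """keystrokes: list of (char, time_ms) pairs in typing order.
    Returns a dict mapping each digraph to the list of latencies
    (time between its two successive keystrokes) observed for it."""
    latencies = {}
    for (a, t1), (b, t2) in zip(keystrokes, keystrokes[1:]):
        latencies.setdefault(a + b, []).append(t2 - t1)
    return latencies

# Invented timings: the digraph "th" occurs twice
typed = [("t", 0), ("h", 95), ("e", 180), ("t", 400), ("h", 520)]
profile = digraph_latencies(typed)
print(profile["th"])  # prints [95, 120]
```

A verifier along these lines would compare the latency distributions in a claimant's sample against a stored reference profile for the claimed identity.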
Analysis of Competition-Based Spreading Activation in Connectionist Models BIBA 77-97
  P. Y. Wang; S. B. Seidman; J. A. Reggia
In this paper we analyse a connectionist model of information processing in which the spread of activity in the network is controlled by the nodes actively competing for available activation. This model meets the needs of various artificial-intelligence tasks and has demonstrated several useful properties, including circumscribed spread of activation, stability of network activation following termination of external influences, and context-sensitive "winner-take-all" phenomena without explicit inhibitory links between nodes representing mutually exclusive concepts. We examine three instances of the competition-based connectionist model. For each instance, we show that the differential equations modelling the changes in the activation level of each node have a solution, and we prove that given any initial activity values of the nodes, certain equilibrium activation levels are reached. In particular, we demonstrate that lateral inhibition, i.e. mutually exclusive activity for nodes in the same layer, is possible without explicitly including links between nodes in the same layer. We believe that our results for these instances of the model give important insights into the behaviour observed in the general model.
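The model's defining feature, winner-take-all behaviour without inhibitory links, can be illustrated with a toy discrete-time update (a sketch only; the paper analyses its own differential equations, and all constants here are invented). A fixed external supply is divided among nodes in proportion to the square of their current activation, so the most active node captures a growing share while the others starve and decay:

```python
def competitive_spread(activation, supply=1.0, steps=1000, gain=0.2, decay=0.1):
    """Toy competition-based activation update: each node's share of the
    external supply is proportional to the square of its own activation,
    so mutual exclusion emerges with no explicit inhibitory links."""
    a = list(activation)
    for _ in range(steps):
        total_sq = sum(x * x for x in a)
        a = [x + gain * supply * x * x / total_sq - decay * x for x in a]
    return a

final = competitive_spread([0.6, 0.5, 0.4])
# The initially most active node approaches the lone-survivor equilibrium
# gain * supply / decay; the other nodes decay toward zero.
```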

IJMMS 1988 Volume 28 Issue 2/3

Editorial

Special Issue on "Multimodal Computer-Human Interaction" BIB 99-100
  M. M. Taylor

Introduction

ISIS: The Interactive Spatial Information System BIBA 101-138
  C. A. McCann; M. M. Taylor; M. I. Tuori
Spatial display, especially dynamic spatial display, may help the symbiosis between human intuition and computer logic. An interactive spatial information system (ISIS) can be designed to support this symbiosis. It can act as a decision support system both by aiding the user to discover options and by assisting in their evaluation. An ISIS should be able to perform intelligent operations on its data, detecting inconsistencies between new and old information, assisting in the evaluation of plans, and so forth. DCIEM has developed a miniature ISIS called SDBMS-1, using an extension of a commercial relational database management system. It allows the user to interact by keyboard, voice, or graphic gesture, and provides output alphanumerically or graphically. The information consists of topographic data from digitized standard maps and synthetic tactical data representing military operations. SDBMS-1 has been implemented using MASCOT technology, to permit easy modification as a consequence of experience. Development of an improved ISIS requires analysis of the nature of the human-computer dialogue and the potential contribution of artificial intelligence. Ideas on "intelligent dialogue" and better methods for browsing are explored in the context of ISIS.

Methods

Evaluating the Intelligence in Dialogue Systems BIBA 139-173
  Jack L. Edwards; James A. Mason
The notions of dialogue, dialogue systems and intelligence are explored. Eight general aspects of dialogue and dialogue systems are identified to help describe current efforts at making communication with computers more intelligent. Four evaluative dimensions are discussed on which computer dialogue systems can be assessed, and then this descriptive/evaluative scheme is applied to three example dialogue systems: ELIZA, an early conversational system; GUS, a frame-based dialogue system; and GUIDON, a medical diagnostic tutor. The paper ends with a few comments on the current state of evaluative work and future directions for our methodology.
Layered Protocols for Computer-Human Dialogue. I: Principles BIBA 175-218
  M. M. Taylor
A consistent trend in the development of computer systems has been the attempt to separate considerations of how to use the computer from considerations of how to solve the problems for which the computer is used. The concept of layering was introduced early, first assemblers and then compilers providing higher levels of abstraction with which programmers could work. The recent development of User Interface Management Systems has extended to interactive systems the separation of problem and technique by layering. Psychologists have long recognized the likelihood that humans behave as if they used layers of abstraction in both perception and performance. In communication between two partners, both must use the same forms and signals, or communication fails. Together, the forms of messages and the signals that indicate alternations of message direction can be considered to be a protocol. Protocols at several layers of abstraction form the basis of current models for communication between computers. This paper proposes that communication between humans and computers should likewise be regarded as a series of layered protocols, and that interfaces should be designed to take advantage of the natural tendency of humans to process communication in a layered manner, using protocols learned in other interactions.

Practice

Layered Protocols for Computer-Human Dialogue. II: Some Practical Issues BIBA 219-257
  M. M. Taylor
The concepts of the Layered Protocol reference model of user interaction are developed through consideration of the design process, using a multimodal spatial interaction as an example. Specific issues are addressed: multiplexing (which is seen as a way of describing interface modes); feedback, with special consideration of voice recognition systems; type-ahead and asynchronous interaction; embedded help and the development of autonomous means for the computer to assist the user; the tension among robustness, modularity, and efficiency; learning and transfer of training; standardization issues; and evaluation of interfaces, with examples from the Apple Macintosh and the Adagio workstation. Finally, the layered model is considered in the light of published guidelines for user interfaces.
Surveying Projects on Intelligent Dialogue BIBA 259-307
  James A. Mason; Jack L. Edwards
Four projects developing intelligent dialogue systems are surveyed, including work at Bolt Beranek and Newman, at Carnegie Mellon University (the XCALIBUR project), at the University of Hamburg (the HAM-ANS project), and at SRI International (the KLAUS project). The projects are compared using an evaluation method involving eight aspects of intelligent dialogue: Control, Models, Connectivity, Modality, Form, Knowledge Representation and Inferencing, Knowledge Acquisition, and External Information Sources and Targets. The evaluation method used is proposed as a standard method for comparing and rating intelligent dialogue systems.
Towards Intelligent Dialogue with ISIS BIBA 309-342
  Jack L. Edwards; James A. Mason
A proposal for the design of intelligent dialogue systems is developed within the context of the work on SDBMS, a Spatial DataBase Management System, part of the ISIS project at the Canadian Defence and Civil Institute of Environmental Medicine. The approach uses the Models aspect of Edwards & Mason's (1988a) methodology as an organizing principle for design, including the two central notions of explicit-model design and a self-referencing model configuration. Further, a design space is presented that combines the aspects and dimensions of the methodology in order to provide clarity of focus for the developer who must consider a multitude of options in designing and implementing intelligent systems. Aspects of the methodology particularly important to the design and development of intelligent dialogue systems are discussed, namely, Connectivity, Control, Form and Modality. The design methodology is elaborated through discussions and recommendations for how it might be applied in building future versions of SDBMS.

IJMMS 1988 Volume 28 Issue 4

Four Different Perspectives on Human-Computer Interaction BIBA 343-362
  John Kammersgaard
This paper will stress the value of a multi-perspective view on the use of computers. It will argue that the ability to apply more than one perspective is valuable to designers of computer applications, to researchers dealing with human-computer interaction, and to users of a particular computer application. To that end, the paper will present the systems perspective, the dialogue partner perspective, the tool perspective, and the media perspective. All four perspectives will primarily be characterized in relation to human-computer interaction, and the characterizations will be based on a common set of concepts presented at the beginning of the paper. The last section of the paper will, with the help of a few examples, illustrate the value of applying multiple perspectives.
Representing the Structure of Jobs in Job Analysis BIBA 363-390
  Clive G. Downs
A major problem in job analysis is that jobs have intrinsic structural properties, i.e. they consist of many constituents which are interrelated in complex ways. It is desirable to represent these structural properties formally and precisely. Three principal manifestations of structure are identified:
  • (1) Intrinsic structure in the constituents of a job.
  • (2) The overall structure of a job.
  • (3) Certain properties of constituents (such as `importance').
A prominent cause of structure in jobs is identified as the intentionality of human action. It is argued that, in principle, certain elements from Q-analysis (a social science methodology which enables structural features of phenomena to be formally represented) can be used to represent both the structure of job constituents and the overall structure of jobs. The concept of set definition from Q-analysis methodology is examined in some detail to determine how it may be applied in job analysis. The metric mathematical basis of certain conventional techniques used for comparison of jobs is contrasted with the topological character of Q-analysis. It is concluded that using this methodology to evaluate intentional structure may provide considerable insight into the many complexities of jobs.
Principles of Intelligent Learning Systems Design BIBA 391-416
  Gotcha G. Tchogovadze; Georgij G. Gogichaishvili; Igor I. Abbasov
One of the possible system approaches to the construction of artificial intelligence (AI) systems is described in this paper. The approach integrates three well-known AI trends: heuristic programming, structural modelling and simulated evolution. The structure of an experimental learning system, ELSY, is described. The system is designed in accordance with the proposed principles. The main feature of the system is the dynamic generation of AI system architectures on the basis of the means-ends analysis method. An architecture can be subjected to mutation in order to obtain a more intelligent one. Every AI system is capable of self-organization on two levels: the first formed with an associative computing memory, and the second with the arrangement of knowledge structures.
Adapting Menu Layout to Tasks BIBA 417-435
  James E. McDonald; Tom Dayton; Deborah R. McDonald
Menus are an increasingly popular style of user-system interface. Although many aspects of menu design can affect user performance (e.g. item names and selection methods), the organization of items in menus is a particularly salient aspect of their design. Unfortunately, empirical studies of menu layout have yet to resolve the basic question of how menus should be organized to produce optimal performance. Furthermore, a disturbingly common finding has been that any initial effects of menu layout disappear with practice. Thus it is tempting to conclude that menu organization is not important or that it only affects performance during learning. In this paper we present some reasons to doubt this conclusion. In particular, we have found persistent effects of layout with multiple-item selection tasks, in contrast with studies employing a single-item selection paradigm. The results of a controlled study comparing various menu designs (fast-food keyboards) show that the types of tasks to be performed by users must be considered in organizing items in menus and that there may be sustained effects of menu organization with some tasks. In addition, the results of this study support the use of a formal methodology based on user knowledge for menu design. By comparing the performance of subjects using menus designed using our methodology with the performance of subjects using "personalized" menus, we were able to demonstrate the general superiority of our method for designing menus, and for tailoring menus to meet task requirements as well.
Extending Petri Nets for Specifying Man-Machine Dialogues BIBA 437-455
  Willem R. van Biljon
The requirements of man-machine dialogue-specification techniques are examined. Petri nets are identified as possible candidates for a modelling technique for dialogues on the basis of their applicability to concurrent, asynchronous systems. Labelled Petri nets are extended to nested Petri nets, allowing transitions to invoke subnets. It is shown that this extension allows nested Petri nets to generate at least the set of context-free languages. Further extensions are made to simplify the modelling of input and output in the user interface, resulting in input-output nets. Transitions labelled by error conditions and meta functions on nets are introduced to increase the usability of the model. Finally, the use of the model is demonstrated by modelling a small hypothetical command language.

IJMMS 1988 Volume 28 Issue 5

Undo Support Models BIBA 457-481
  Yiya Yang
One of the important features for error handling and recovery provided by a user interface management system is undo support. Undo support allows a user to reverse the effects of commands that have already been executed. In this paper, characteristics of undo support are reviewed. Two classic kinds of undo support, history undo/undo and linear undo/redo, are respectively specified by two models, the primitive undo model and the meta undo model. Their properties are carefully analysed in terms of formal specifications. Requirements for a more general undo support facility are discussed in terms of these models. A new undo model that addresses these requirements is formally specified and its more powerful functionality is demonstrated.
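One of the two classic facilities, linear undo/redo, can be sketched as a pair of stacks over whole document states (an illustrative reading only; the paper's models are formal specifications, and this class is not taken from the paper):

```python
class LinearUndoRedo:
    """Linear undo/redo sketch: undone states move to a redo stack,
    and executing a new command discards any pending redos."""
    def __init__(self, state):
        self.state = state
        self._undo, self._redo = [], []

    def execute(self, new_state):
        self._undo.append(self.state)
        self.state = new_state
        self._redo.clear()          # a new command invalidates the redo chain

    def undo(self):
        if self._undo:
            self._redo.append(self.state)
            self.state = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(self.state)
            self.state = self._redo.pop()

doc = LinearUndoRedo("")
doc.execute("a")
doc.execute("ab")
doc.undo()                          # back to "a"
doc.redo()                          # forward to "ab" again
```

A history-based model differs in treating undo itself as a command in the history, which is where the formal distinctions the paper analyses come in.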
Student Models: The Genetic Graph Approach BIBA 483-504
  Barbara Brecht; Marlene Jones
In this paper we examine the student model component of an intelligent computer-assisted instruction (ICAI) system. First, we briefly discuss the desirable capabilities of the student model and then describe, in detail, one approach to student modelling which is based on Goldstein's genetic graph. We expand Goldstein's definition and test its feasibility in new domains, since his original domain was a limited, straightforward adventure game. In addition to modelling two diverse domains, subtraction and ballet, we also discuss the role of certain ICAI components in generating and maintaining the genetic graph.
The TTS Language for Music Description BIBA 505-523
  Mira Balaban
A language for describing hierarchical music structures in the Twelve Tone system is presented. The language, entitled `the Twelve Tone Strings (TTS) Language', is based on temporal combination of Twelve Tone Strings. It is unique in its generality and flexibility; it can be expanded to support many diverse musical activities. Twelve Tone Strings can serve as the basic structures of cognitive models, as well as data structures for practical applications such as typesetting and editing. We claim that it can be used to standardize music description languages. An all-purpose music workstation based on the TTS language might provide a powerful input/output facility for computer research in cognitive musicology and formal theories of music. Such a workstation is currently under development at Ben Gurion University.
Experience of Programming Beauty: Some Patterns of Programming Aesthetics BIBA 525-550
  Laura Marie Leventhal
There is, in the folklore of computer science, a strong suggestion that programs and the processes of programming can be beautiful. Surprisingly, these same issues of programming aesthetics have not been a focus of empirical research, to date. The purpose of the present work is to begin to address programming aesthetics from a 'non-folklore' perspective. Related literature is reviewed and an exploratory empirical study of programming aesthetics is described. The results of this study suggest that patterns of programming aesthetics share common features with patterns of aesthetics in other domains. In particular, programs which are familiar, highly structured, and contain suggestions of hidden information tend to be aesthetically pleasing. Novelty also appears to enhance the aesthetic character of programs.
User-Friendly Syntax: Design and Presentation BIBA 551-572
  J. Henno
A user-friendly programming system should have a user-friendly syntax: natural and systematic, easy to understand and use. Syntax should allow semantically similar constructs to be substituted for each other. Syntax should also be flexible and allow several variants of syntactic notation, to reduce the burden of memorizing rigid notations and to make it possible for a user to think more about 'what' instead of 'how'. Present-day programming languages, e.g. Ada, do not obey these principles. Their complexity is mainly caused by their nonsystematic and rigid syntax. This, together with informal and ambiguous presentation of syntax, makes Ada difficult to use and Ada parsers inefficient. Uniform, natural and flexible syntax, in which several variants of syntactic notations and abbreviations are allowed and minor syntactic errors are automatically corrected, can be introduced by systematic top-down design using multi-level grammars. Systematic design and presentation allow greatly improved syntax without increasing the size of the syntax grammar or the complexity of the parser.

IJMMS 1988 Volume 28 Issue 6

Optimization of String Length for Spoken Digit Input with Error Correction BIBA 573-581
  W. A. Ainsworth
No matter how much the performance of speech recognition systems improves, it is unlikely that perfect recognition will always be possible in practical situations. Environmental sounds will interfere with the recognition. In such circumstances it is sensible to provide feedback so that any errors which occur may be detected and corrected. In some situations, such as when the eyes are busy or over the telephone, it is necessary to provide feedback auditorily. This takes time, so the most efficient procedure should be determined. In the case of entering digits into a computer the question arises as to whether feedback should be provided after each digit has been spoken or after a string of digits has been recognized. It has been found that this depends upon the accuracy of the recognizer and on the times required for recognizing the utterances and for changing from recognizing to synthesizing speech.
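The trade-off can be made concrete with a toy expected-time model (invented constants and a simplified retry rule, not the paper's analysis): with per-digit accuracy p, longer strings amortize the cost of switching from recognition to synthesis, but raise the chance that the whole string must be respoken.

```python
def time_per_digit(n, p, t_speak=0.6, t_switch=0.4, t_confirm=0.5):
    """Expected time per correctly entered digit when digits are spoken
    in strings of length n, each digit is recognized correctly with
    probability p, and an erroneous string is respoken in full."""
    one_attempt = n * t_speak + t_switch + n * t_confirm  # speak, switch, hear feedback
    expected_attempts = 1.0 / p ** n                      # geometric retries until all n correct
    return one_attempt * expected_attempts / n

best = min(range(1, 11), key=lambda n: time_per_digit(n, p=0.98))
```

Under these invented timings an accurate recognizer favours strings of a few digits, while a poor one pushes the optimum back toward digit-by-digit confirmation, which is the qualitative dependence the abstract describes.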
Talking to Computers: An Empirical Investigation BIBA 583-604
  Alexander G. Hauptmann; Alexander I. Rudnicky
This paper describes an empirical study of man-computer speech interaction. The goals of the experiment were to find out how people would communicate with a real-time, speaker-independent continuous speech understanding system. The experimental design compared three communication modes: natural language typing, speaking directly to a computer and speaking to a computer through a human interpreter. The results show that speech to a computer is not as ill-formed as one would expect. People speaking to a computer are more disciplined than when speaking to each other. There are significant differences in the usage of spoken language compared to typed language, and several phenomena which are unique to spoken or typed input respectively. Implications for future work on speech understanding systems are considered.
Enhancing PIXIE's Tutoring Capabilities BIBA 605-623
  J. L. Moore; D. Sleeman
This paper discusses the overall design of the PIXIE Intelligent Tutoring System, and more specifically a series of recent enhancements. The original system has been implemented to involve three separate phases: the offline, or model generation, phase; the online, or tutoring, phase; and the analysis phase. The offline phase, which is completed prior to any interaction with a student, involves the construction of a set of student models for a given domain. Considerable effort has been expended to ensure that these sets are complete and non-redundant. The online phase involves the tutorial interaction with a student, consisting of both diagnosis and remediation of errors. During the post-interaction analysis phase, undiagnosed errors are examined and, if consistent, added to the existing domain knowledge base. Four recent enhancements to the system are then discussed, each arising from a shortcoming that was noted in the system as a result of student trials. Two of these additions to the system involve the diagnosis of errors, and two involve the remediation of errors.
Structural Displays as Learning Aids BIBA 625-635
  J. Patrick; L. Fitzgibbon
This experiment investigates the effect of providing a structural display on the learning of a computer-based editing task. The structural display is a spatial representation of the functional and procedural relationships within the editing task. The display is either available in advance of (AD) or after (PD) the learning modules and also there is a no display (ND) control condition. Generally the structural display improves learning, as indicated by a reduction of errors and time to complete the criterion task. The display is particularly beneficial in the AD condition in which these effects are more pronounced and also there is an increase in the number of sub-tasks completed compared to the ND condition. Different types of errors are analysed and the display reduces redundant actions. Some of these effects are interpreted in terms of facilitating the assimilation of new material.
Changes in Contrast Sensitivity Function Produced by VDT Use BIBA 637-642
  H. H. Mikaelian
Thresholds for detecting sine-wave contrast gratings (the contrast sensitivity function, or CSF) were obtained before and after reading text on a CRT screen, and before and after reading print, for 30 min. Small but statistically significant decreases in the CSF (increased detection thresholds) were observed after reading text on the CRT screen; reading print produced no reliable changes. The decrease in the CSF produced by reading videotext was confined to the low (0.5 cyc/deg) and high (12 and 16 cyc/deg) spatial frequencies, and represented changes of between 15 and 25%. The high-frequency roll-off was said to reflect the combined outcome of fatigue in the high spatial frequency channels and in the accommodative mechanisms (instrument myopia) of the visual system. Decreases in sensitivity to low spatial frequencies were more difficult to account for, and were said to reflect possible changes in the responsivity of the visual 'transient' mechanisms.
DM²: An Algorithm for Diagnostic Reasoning that Combines Analytical Models and Experiential Knowledge BIBA 643-670
  Newton S. Lee
This paper presents DM² (Dynamic Mental Models) as a general algorithm that combines both analytic models and experiential knowledge in diagnostic problem solving. The algorithm mimics a human expert in formulating and using an internal, cognitive representation of a physical system during the process of diagnosis. This internal representation, known as a mental model, originates from an analytical model but it changes dynamically to various levels of abstraction that are most appropriate for efficient diagnosis. An analytical model is represented as structure and behaviour, whereas experiential knowledge is expressed in terms of pattern-recognition, topological clustering, topological pruning, and recommendation rules. The DM² algorithm was implemented and tested on a hypothetical diagnostic problem as well as on a real-world expert system prototype for telecommunication network maintenance at AT&T. These two applications demonstrate that the dynamic mental model approach promotes system robustness, program correctness, software re-use, and the ease of knowledge base modification and maintenance.
A Hybrid Approach to Deductive Uncertain Inference BIBA 671-681
  X. Liu; A. Gammerman
Deductive uncertain inference has been one of the most important ways of handling uncertainty. In this paper we report the development of a hybrid approach to such an inference. This approach has been implemented in a system which is based on INFERNO but integrates the strength of probabilistic logic and Dempster's rule.
Effects of Breadth, Depth and Number of Responses on Computer Menu Search Performance BIBA 683-692
  Stanley R. Parkinson; Martin D. Hill; Norwood Sisson; Cynthia Viera
Several menu configurations were designed to provide an independent assessment of the influence of breadth, depth and number of responses on computer menu search performance. The menu hierarchy consisted of a binary tree of category descriptor terms with 64 terminal options. Standard menus tested were 2 options on each of 6 sequential frames (2^6) and 4 options on each of 3 frames (4^3). Another menu (Upcoming Selections) was developed with 6 frames in which the binary choice on each frame was shown in the presence of options at the next menu level. Menus developed for separating the effects of number of frames and responses were configured with two menu levels per frame and responses were required to either one or both levels. Number of responses was the most important factor affecting execution time. The highest accuracy was found with the Upcoming Selections menu but that menu also resulted in the slowest execution time. A modified Upcoming Selections menu was developed which allowed participants to respond to each level or to bypass the higher level on each frame. Considering both speed and accuracy, that configuration yielded the best performance of all menus tested.
    Several menu configurations were designed to provide an independent assessment of the influence of breadth, depth and number of responses on computer menu search performance. The menu hierarchy consisted of a binary tree of category descriptor terms with 64 terminal options. Standard menus tested were 2 options on each of 6 sequential frames (26) and 4 options on 3 frames (43). Another menu (Upcoming Selections) was developed with 6 frames in which the binary choice on each frame was shown in the presence of options at the next menu level. Menus developed for separating the effects of number of frames and responses were configured with two menu levels per frame and responses were required to either one or both levels. Number of responses was the most important factor affecting execution time. The highest accuracy was found with the Upcoming Selections menu but that menu also resulted in the slowest execution time. A modified Upcoming Selections menu was developed which allowed participants to respond to each level or to bypass the higher level on each frame. Considering both speed and accuracy, that configuration yielded the best performance of all menus tested.