
International Journal of Man-Machine Studies 36

Editors: B. R. Gaines; D. R. Hill
Dates: 1992
Volume: 36
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 49
Links: Table of Contents
  1. IJMMS 1992 Volume 36 Issue 1
  2. IJMMS 1992 Volume 36 Issue 2
  3. IJMMS 1992 Volume 36 Issue 3
  4. IJMMS 1992 Volume 36 Issue 4
  5. IJMMS 1992 Volume 36 Issue 5
  6. IJMMS 1992 Volume 36 Issue 6

IJMMS 1992 Volume 36 Issue 1

Program Design Methodologies and the Software Development Process BIBA 1-19
  Deborah A. Boehm-Davis; Lyle S. Ross
This research examined program design methodologies that claim to improve the design process by providing programmers with strategies for structuring solutions to computing problems. In this experiment, professional programmers were provided with the specifications for three non-trivial problems and asked to produce pseudo-code for each specification according to the principles of a particular design methodology. The measures collected were time to design and code, percent complete, and complexity, as measured by several metrics. These data were used to develop profiles of the solutions produced by the different methodologies and to compare the methodologies with one another. These differences are discussed in light of their impact on the comprehensibility, reliability, and maintainability of the programs produced.
The Role of Program Structure in Software Maintenance BIBA 21-63
  Deborah A. Boehm-Davis; Robert W. Holt; Alan C. Schultz
A number of claims have been made by the developers of program design methodologies, including the claim that the code produced by following the methodologies will be more understandable and more easily maintained than code produced in other ways. However, there has been little empirical research to test these claims. In this study, student and professional programmers were asked to make either simple or complex modifications to programs that had been generated using each of three different program structures. Data on the programmers' modification performance, cognitive representations formed of the programs and subjective reactions to the programs suggested that problem structure (as created by the different methodologies), problem content, complexity of modification, and programmer experience all play a crucial role in determining performance and the representation formed.
The Effects on Decision Task Performance of Computer Synthetic Voice Output BIBA 65-80
  Michael J. DeHaemer; William A. Wallace
Computer-synthesized voice has reached technological maturity and is expected to help resolve some human-computer interface difficulties. Research was conducted on the utility of adding computer voice output to a microcomputer workstation for decision support. Specifically, the computer voice duplicated instructions that were printed on the screen for a visual display task, so that the user could keep "eyes on" the visual problem. Response time and the number of errors were compared with those under conditions without computer voice. Since cognitive style, or decision style, has been recognized as an important individual difference for interface design, subjects were classified as having an analytic, heuristic or neutral decision style.
   The results revealed a surprising interaction effect between decision style and computer-synthetic voice. Response time and errors improved for the analytic subjects, degraded for the heuristic subjects, and were unchanged for the neutral subjects. These findings are important for the design of the human-computer interface because 45% of the subjects were in the affected groups.
   This paper is thought to be the first evaluation of the effects of adding computer-synthesized voice instructions to a computer workstation for decision making.
Anatomy of the Design of an Undo Support Facility BIBA 81-95
  Yiya Yang
This paper presents the decision-making elements in an anatomy of the design of undo support in the GNU Emacs environment. Apart from providing design guidelines for undo support, it illustrates how to bring a design from an abstract conception to a concrete realization and how to balance trade-offs in the process. Undo support is a useful feature of interactive computer systems which allows a user to reverse the effects of executed commands. GNU Emacs was chosen as a suitable environment in which to demonstrate how to design undo support because of its sophistication and practical significance. Users' opinions about which aspects of the existing undo facility in Emacs need improvement were solicited in an informal survey of Emacs users. The results of the survey are discussed and were used to tailor a proposal for an improved undo facility for Emacs. To test the adequacy of the proposal, it was subjected to an informal expert walk-through, and a review of Emacs users' opinions was conducted over a computer network. These evaluations, and the revisions to the proposal they elicited, are discussed. After the revised prototype of the design was implemented, a post-mortem evaluation was carried out and its results were incorporated in the final implementation.
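The core mechanism under design, reversing executed commands against a recorded history, can be sketched in a few lines. A minimal illustrative sketch follows (the class and method names are hypothetical, and it omits Emacs's distinctive treatment of undo itself as an undoable command):

    class Insert:
        """An undoable editing command: insert `text` at `pos`."""
        def __init__(self, pos, text):
            self.pos, self.text = pos, text

        def apply(self, buf):
            return buf[:self.pos] + self.text + buf[self.pos:]

        def reverse(self, buf):
            return buf[:self.pos] + buf[self.pos + len(self.text):]

    class Editor:
        def __init__(self):
            self.buffer = ""
            self.history = []              # stack of executed commands

        def execute(self, cmd):
            self.buffer = cmd.apply(self.buffer)
            self.history.append(cmd)

        def undo(self):
            if self.history:               # reverse the most recent command
                self.buffer = self.history.pop().reverse(self.buffer)

    ed = Editor()
    ed.execute(Insert(0, "hello"))
    ed.execute(Insert(5, " world"))
    ed.undo()
    print(ed.buffer)                       # -> "hello"
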
Using Memory for Events in the Design of Personal Filing Systems BIBA 97-126
  Mark Lansdale; Ernest Edmonds
There is considerable interest in the question of how users can be supported in the management of large information systems. Existing systems are perceived as difficult to use, and future systems will handle very much more information than at present, exacerbating the problem. This paper describes a prototype interface, MEMOIRS (Managing Episodic Memory for Office Information Retrieval Systems), which is designed to support the management of personal information in a new way. The approach treats a personal filing system as a history of events (of which documents are a particular type), and focuses upon users' recall of those events. MEMOIRS therefore exemplifies a mnemonic support system which aims to optimize performance in two ways: it aims to improve users' recall of the information they have handled, and it is designed to exploit as much of what is recalled as possible. The rationale behind this approach is discussed and a broad specification of the system presented, with examples of MEMOIRS in use. The approach is compared and contrasted with other filing systems based upon models of human memory which are associative, rather than event-driven, in character.
A New Method for Discovering Rules from Examples in Expert Systems BIBA 127-143
  A. Mrozek
Experts often cannot explain, in terms of formalized "if-then" rules, why they choose one decision over another; in these cases we have a set of examples of their real decisions, and it is necessary to derive the rules from these examples. The existing methods of discovering rules from examples either demand that the set of examples be in some sense complete (and it is often not complete) or they are too complicated. We present a new algorithm which is always applicable. This algorithm is based on rough set theory, a formalism which describes the case of incomplete information.
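The rough-set idea behind such an algorithm can be illustrated briefly: examples that are indiscernible on the chosen condition attributes but disagree on the decision form the boundary region of incomplete information, and certain rules are read off the lower approximation. A minimal sketch, assuming tabular examples (the helper names are illustrative, not Mrozek's):

    from collections import defaultdict

    def certain_rules(examples, condition_attrs, decision_attr):
        """Group examples that are indiscernible on the condition attributes;
        emit a rule for each group whose members all share one decision value
        (the lower approximation: rules never contradicted by the examples)."""
        groups = defaultdict(list)
        for ex in examples:
            groups[tuple(ex[a] for a in condition_attrs)].append(ex)
        rules = []
        for key, group in groups.items():
            decisions = {ex[decision_attr] for ex in group}
            if len(decisions) == 1:        # consistent group -> certain rule
                rules.append((dict(zip(condition_attrs, key)), decisions.pop()))
        return rules

    examples = [
        {"fever": "high", "cough": "yes", "flu": "yes"},
        {"fever": "high", "cough": "no",  "flu": "yes"},
        {"fever": "low",  "cough": "yes", "flu": "no"},
        {"fever": "low",  "cough": "yes", "flu": "yes"},  # conflicting example
    ]
    # the conflicting "low/yes" group yields no certain rule:
    print(certain_rules(examples, ["fever", "cough"], "flu"))
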

IJMMS 1992 Volume 36 Issue 2

Preface. Symbolic Problem Solving in Noisy, Novel and Uncertain Task Environments BIB 145
  M. J. Coombs; C. A. Fields
Introduction 1. Theoretical Approaches to Noise and Novelty BIB 147-148
  Mike Coombs; Chris Fields
Qualitative Reasoning with Bluff and Beliefs in a Multi-Actor Environment BIBA 149-165
  Ruddy Lelouch; Stephane Doublait
Bluff, or false information deliberately introduced as true into somebody's knowledge base, is inherently present in many real-life situations, such as political or military ones. It is most likely to occur in a multi-actor environment where the actors are competing to reach conflicting goals. We think that modelling the actors' bluffs and beliefs in a knowledge-based system can enhance the decision-making process by considering explicitly the possible falsehood of some information sources. We propose a model for these concepts in the context of a scene (a special case of multi-actor environment) that uniformly takes into account an actor's bluff intentions and beliefs. We also present a logical formalization of our model in order to allow a more concise notation, to eliminate possible natural-language ambiguities, and for the purpose of computer implementation. We illustrate this work with examples from three different applications: one from everyday life, requiring no domain-specific knowledge, to introduce the various concepts; a military battlefield-assessment situation; and a strategy game. The last two applications are well suited because bluff is a major concept in these domains.
Some Approaches to Handle Noise in Concept Learning BIBA 167-181
  Ganesh Mani
Noise is a pervasive problem in almost all real-world systems, and one that a system should recognize effectively and handle gracefully. In this paper, we focus on the effects of attribute and quantization noise on concept learning. Different noise-handling methodologies can be used depending on whether inductive or theory-based approaches are employed. We find that associating weights and thresholds with concepts hypothesized using conventional inductive approaches can help combat noise. In an explanation-based framework, we suggest the use of a meta-domain theory for controlling the effects of quantization noise and for capturing meta-level notions such as the amount of precision called for in the domain. Based on the idea of near-misses, two new algorithms which can insulate standard explanation-based learning from the effects of noise are presented. These approaches have been validated within the framework of a system that models the process of learning and reasoning about everyday objects and their interrelationships.
Representing and Manipulating Uncertain Data BIBA 183-189
  J. M. Morrissey
Conventional information systems are limited in their ability to represent uncertain data. A consistent and useful methodology for representing and manipulating such data is required; one solution is proposed in this paper. Objects are modeled by selecting representative attributes to which values are assigned. Any attribute value can be one of the following: a regular precise value, a special value denoting "value unknown", a special value denoting "attribute not applicable", a range of values, or a set of values. If the data are uncertain, the semantics of query evaluation are no longer clear, and uncertainty is introduced. To handle the uncertainty, two sets of objects are retrieved in response to each query: the set known to satisfy the query with complete certainty, and the set of objects which possibly satisfy the query with some degree of uncertainty. Two methods of estimating this uncertainty are examined.
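The two-set answer semantics is easy to sketch. A minimal illustration, assuming the five kinds of attribute values listed above (the sentinel and function names are illustrative, not Morrissey's notation):

    UNKNOWN = object()           # special value: "value unknown"
    NOT_APPLICABLE = object()    # special value: "attribute not applicable"

    def evaluate(objects, attr, target):
        """Answer the query `attr = target` with two sets: objects certain
        to satisfy it, and objects that only possibly satisfy it."""
        certain, possible = [], []
        for obj in objects:
            v = obj.get(attr, UNKNOWN)
            if v is NOT_APPLICABLE:
                continue                       # can never match
            if v is UNKNOWN:
                possible.append(obj)           # might match
            elif isinstance(v, tuple):         # range (lo, hi)
                if v[0] == v[1] == target:
                    certain.append(obj)
                elif v[0] <= target <= v[1]:
                    possible.append(obj)
            elif isinstance(v, set):           # set of candidate values
                if v == {target}:
                    certain.append(obj)
                elif target in v:
                    possible.append(obj)
            elif v == target:
                certain.append(obj)
        return certain, possible

    people = [{"age": 30}, {"age": UNKNOWN}, {"age": (25, 35)}, {"age": {30, 31}}]
    certain, possible = evaluate(people, "age", 30)
    print(len(certain), len(possible))         # -> 1 3
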
Lexical and Pragmatic Disambiguation and Re-Interpretation in Connectionist Networks BIBA 191-220
  Trent E. Lange
Lexical and pragmatic ambiguity is a major source of uncertainty in natural language understanding. Symbolic models can make high-level inferences necessary for understanding text, but handle ambiguity poorly, especially when later context requires a re-interpretation of the input. Structured connectionist networks, on the other hand, can use their graded levels of activation to perform lexical disambiguation, but have trouble performing the variable bindings and inferencing necessary for language understanding. We have previously described a structured connectionist model, ROBIN, which overcomes many of these problems and allows the massively-parallel application of a large class of general knowledge rules. This paper describes how ROBIN uses these abilities and the contextual evidence from its semantic networks to disambiguate words and infer the most plausible plan/goal analysis of the input, while using the same mechanism to smoothly re-interpret the input if later context makes an alternative interpretation more likely. We present several experiments illustrating these abilities and comparing them to those of other connectionist models, and discuss several directions in which we are extending the model.
Overcoming Rule-Based Rigidity and Connectionist Limitations through Massively-Parallel Case-Based Reasoning BIBA 221-246
  John Barnden; Kankanahalli Srinivas
Symbol manipulation as used in traditional artificial intelligence has been criticized by neural net researchers for being excessively inflexible and sequential. On the other hand, the application of neural net techniques to the types of high-level cognitive processing studied in traditional artificial intelligence presents major problems as well. We claim that a promising way out of this impasse is to build neural net models that accomplish massively parallel case-based reasoning. Case-based reasoning, which has received much attention recently, is essentially the same as analogy-based reasoning, and avoids many of the criticisms leveled at traditional artificial intelligence. Further problems are avoided by running many strands of case-based reasoning in parallel, and by implementing the whole system as a neural net. In addition, such a system provides an approach to some aspects of the problems of noise, uncertainty and novelty in reasoning systems. We are accordingly modifying our current neural net system (Conposit), which performs standard rule-based reasoning, into a massively parallel case-based reasoning version.
e-MGR: An Architecture for Symbolic Plasticity BIBA 247-263
  M. J. Coombs; H. D. Pfeiffer; R. T. Hartley
The e-MGR architecture was designed for symbolic problem solving in task environments where data are noisy and problems are ill-defined. e-MGR is an operator-based system which integrates problem-solving ideas from symbolic artificial intelligence (AI) and adaptive systems research.
Introduction 2. Strategies and Tactics for Workable Systems BIB 265-266
  Mike Coombs; Chris Fields
Tolerating Noisy, Irrelevant and Novel Attributes in Instance-Based Learning Algorithms BIBA 267-287
  David W. Aha
Incremental variants of the nearest neighbor algorithm are a potentially suitable choice for incremental learning tasks. They have fast learning rates, low updating costs, and have recorded comparatively high classification accuracies in several applications. Although the nearest neighbor algorithm suffers from high storage requirements, modifications exist that significantly reduce this problem. Unfortunately, its applicability is limited by several other serious problems. First, storage reduction variants of this algorithm are highly sensitive to noise. Second, these algorithms are sensitive to irrelevant attributes. Finally, the nearest neighbor algorithm assumes that all instances are described by the same set of attributes. This inflexibility causes problems when subsequently processed instances introduce novel attributes that are relevant to the learning task. In this paper, we present a comprehensive sequence of three incremental, edited nearest neighbor algorithms that tolerate attribute noise, determine relative attribute relevances, and accept instances described by novel attributes. We outline evidence indicating that these instance-based algorithms are robust incremental learners.
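The flavour of such algorithms can be sketched. The following is a toy in the spirit of storage-reducing, noise-tolerant instance-based learning (IB2/IB3-like), not the paper's exact algorithms: only misclassified instances are stored, and stored instances whose classification record deteriorates are discarded as probable noise.

    import math

    class EditedNN:
        """Toy storage-reducing, noise-tolerant nearest-neighbour learner:
        store an instance only when it is misclassified, and discard stored
        instances whose classification record falls below a threshold."""

        def __init__(self, min_accuracy=0.5):
            self.memory = []       # entries: [point, label, correct, tries]
            self.min_accuracy = min_accuracy

        def nearest(self, x):
            return min(self.memory, key=lambda m: math.dist(m[0], x))

        def train(self, x, label):
            if not self.memory:
                self.memory.append([x, label, 1, 1])
                return
            m = self.nearest(x)
            m[3] += 1                                  # update its record
            if m[1] == label:
                m[2] += 1
            else:
                self.memory.append([x, label, 1, 1])   # store the failure
            # prune instances whose record suggests they are noise
            self.memory = [m for m in self.memory
                           if m[2] / m[3] >= self.min_accuracy]

    learner = EditedNN()
    for x, y in [((0, 0), "a"), ((1, 0), "a"), ((5, 5), "b"), ((5, 6), "b")]:
        learner.train(x, y)
    print(learner.nearest((4.5, 5.2))[1])              # -> "b"
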
Use of Causal Models and Abduction in Learning Diagnostic Knowledge BIBA 289-307
  M. Botta; L. Saitta
This paper presents a new approach to learning diagnostic knowledge, based on the use of a causal model of the domain and abductive reasoning. Justifications supplied by the causal model serve both to focus the search during one-step learning and to localize failures and propose changes during knowledge refinement.
MERCURY: A Heterogeneous System for Spatial Extrapolation for Mesoscale Meteorological Data BIBA 309-326
  C. A. Fields; J. E. Newberry; H. D. Pfeiffer; C. A. Soderlund; S. F. Kirby; G. B. McWilliams
MERCURY is an integrated software system for acquiring, analysing, and extrapolating meteorological data in real time for regions a few hundred square kilometers in size. MERCURY employs a variety of data structures, including real scalars and vectors, three-dimensional models, chords, grids, and qualitative descriptions to represent its input data, and both procedural and declarative methods to represent meteorological knowledge. MERCURY employs the extrapolation method that it is designed to improve upon, mesoscale objective analysis, when it is likely to be sufficient, and supplements this method with additional heuristic and analytic methods when necessary. Initial performance comparisons indicate that MERCURY's extrapolations are better than those obtained with objective analysis alone in regions of complex terrain and surface cover. MERCURY is implemented in C, with graphics in the X Window System, and runs on a Sun 3/60 workstation under 4.2 BSD Unix.
Chemical Environmental Models Using Data of Uncertain Quality BIBA 327-336
  D. A. Swayne; D. C.-L. Lam; A. S. Fraser; J. Storey
The application of knowledge-based systems to modelling lake acidification is outlined. The approaches taken by our group are described: expert systems are used to incorporate human understanding and qualitative reasoning into the selection of models of the progress of acidity, and to overcome the problems of data noise and of scale variations in the spatial and temporal domains.
Managing Uncertainty in Time-Critical Plan Evaluation BIBA 337-356
  Stephen Downes-Martin; Stephen Deutsch; Glenn Abrett
In complex real-world domains, uncertainty in predicting the results of plan execution drives the evaluation component of the planning cycle to explore a combinatorial explosion of alternative futures. This evaluation component is critical in assessing the feasibility, strengths and weaknesses of a proposed plan. In time-critical situations the planner is thus faced with a trade-off between timeliness and evaluation completeness. Furthermore, a human planner is faced with the additional problem of evaluation credibility when using fast automatic evaluation in a complex and uncertain domain. An approach to handling these problems of time-criticality, uncertainty, and credibility is explored using the wargaming component of the military operational planning cycle. The Semi-Automated Forces Wargamer has been developed using two techniques. The first integrates procedural representations of plans and intentions with heuristic representations of simulated probabilistic execution. This facilitates the simulated execution of plans with multiple worlds corresponding to the possible results of actions taken in the real and uncertain world. The second provides a what-if capability via a tree representation of the possible combat outcomes, giving the user a tool for intelligent and focussed exploration of the space of possible outcomes of the plan. These techniques combine to generate a manageable and useful subset of the space of simulated plan results, from which the user can apply human expertise to guide plan exploration.
Integrated Knowledge-Based System Design and Validation for Solving Problems in Uncertain Environments BIBA 357-373
  Ed P. Andert
Symbolic problem solving, specifically with knowledge-based systems (KBSs), in new and uncertain problem domains is a difficult task. An essential part of developing systems for these environments is determining whether the system is adequately and reliably solving the problem. KBSs that utilize heuristics have a development cycle not conducive to formal control, and have a high potential for error or for incorrect characterizations of the problem they are meant to solve. A method of validating and testing such systems, to increase and quantify their reliability, is needed. Software engineering strategies for assessing and projecting the reliability of traditional software have been developed after years of experience with the causes and effects of errors. Since KBSs are new, methods for assessing and projecting their reliability are not well understood. However, validation techniques from traditional software development can be applied to KBSs. Validation and testing techniques unique to KBSs can also be used to determine system reliability. In essence, tools and techniques exist to meet the need for a legitimate, integrated approach to validation and testing of KBSs as they are developed.

IJMMS 1992 Volume 36 Issue 3

Scheduling Home Control Devices: Design Issues and Usability Evaluation of Four Touchscreen Interfaces BIBA 375-393
  Catherine Plaisant; Ben Shneiderman
This article describes four different user interfaces supporting the scheduling of two-state (ON/OFF) devices over time periods ranging from minutes to days. The touchscreen-based user interfaces, including digital 12-h clock, 24-h linear, and 24-h dial prototypes, are described and compared on a feature-by-feature basis. A formative usability test with 14 subjects, feedback from more than 30 reviewers, and the flexibility to add functions favour the 24-h linear version.
A Review and Analysis of the Usability of Data Management Environments BIBA 395-417
  Dinesh Batra; Ananth Srinivasan
Our objective in this paper is to provide a thorough understanding of the usability of data management environments, with a view to conducting research in this area. We do this by synthesizing the existing literature pertaining to (i) data modelling as a representation medium and (ii) query interface evaluation in the context of data management. We were motivated by several trends in the current computing context. First, while many new modelling ideas have been proposed in the literature, commensurate experimental evaluation of these ideas is lacking. Second, there appears to be a significant user population that is quite adept at working in certain computing environments (e.g. spreadsheets) with limited computing skills. Finally, the choice of technological platforms now available for implementing new software designs allows us to deal with implementation issues more effectively. The outcomes of this paper include a delineation of what constitutes an appropriate conceptualization of this area and a specification of the research issues that tend to dominate the design of a research agenda.
Information Management in Research Collaboration BIBA 419-445
  H. Chen; K. J. Lynch; A. K. Himler; S. E. Goodman
Much of the work in business and academia is performed by groups of people. While significant advancement has been achieved in enhancing individual productivity by making use of information technology, little has been done to improve group productivity. Prior research suggests that we should know more about individual differences among group members as they respond to technology if we are to develop useful systems that can support group activities.
   We report results of a cognitive study in which researchers were observed performing three complex information entry and indexing tasks using an Integrated Collaborative Research System. The observations have revealed a taxonomy of knowledge and cognitive processes involved in the indexing and management of information in a research collaboration environment. A detailed comparison of knowledge elements and cognitive processes exhibited by senior researchers and junior researchers has been made in this article. Based on our empirical findings, we have developed a framework to explain the information management process during research collaboration. Directions for improving design of Integrated Collaborative Research Systems are also suggested.
A Petri-Net Based Approach for Verifying the Integrity of Production Systems BIBA 447-468
  Ritu Agarwal; Mohan Tanniru
The production rule formalism has become a popular method of knowledge representation in expert systems. Current development environments for rule-based systems provide few automated mechanisms for verifying the consistency and completeness of rule bases as they evolve. We describe an approach to verifying the integrity of a rule-based system that models the rule base as a Petri net and uses the structural properties of the net for verification. Procedures for integrity checks at both local and chained inference levels are described.
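The mapping is straightforward to sketch: facts become places, rules become transitions, and firing the net from askable facts exposes chained-inference gaps such as rules whose conditions can never be established. A minimal propositional sketch (illustrative only, not Agarwal and Tanniru's procedures):

    def never_firing(rules, askable):
        """Treat facts as places and rules (antecedents -> consequent) as
        transitions. Fire transitions to a fixed point from the askable
        facts; transitions that can never fire mark integrity gaps at the
        chained-inference level."""
        marked, fired = set(askable), set()
        changed = True
        while changed:
            changed = False
            for i, (ante, cons) in enumerate(rules):
                if i not in fired and ante <= marked:
                    marked.add(cons)
                    fired.add(i)
                    changed = True
        return [r for i, r in enumerate(rules) if i not in fired]

    rules = [({"a", "b"}, "c"),
             ({"c"}, "d"),
             ({"e"}, "f")]     # "e" is neither askable nor derivable
    print(never_firing(rules, askable={"a", "b"}))     # -> [({'e'}, 'f')]
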
User Characteristics -- DSS Effectiveness Linkage: An Empirical Assessment BIBA 469-505
  K. Ramamurthy; William R. King; G. Premkumar
Despite extensive research on the various factors affecting the acceptance and effectiveness of decision support systems (DSS), considerable ambiguity still exists regarding the role and influence of user characteristics. Although researchers have advocated treating DSS effectiveness as a multi-dimensional construct, specific guidelines regarding its dimensions, or the approach used to derive them, are lacking. The study reported here attempts to contribute to the existing body of knowledge by proposing a multi-dimensional construct for DSS effectiveness and identifying a comprehensive set of user characteristics that influence it. It critically examines the relationship between these two sets through canonical correlation analysis. Thirty-seven students taking a graduate-level course in financial management at a large university in the north-eastern United States participated in the study as surrogates for real-world managers. The results highlight that users' domain-related expertise, system experience, gender, intelligence, and cognitive style have an important influence on one or more dimensions of DSS effectiveness; however, their relative importance varies with the outcome measure of choice.
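Canonical correlation analysis itself is standard and easy to reproduce. A minimal sketch with scikit-learn on synthetic data (the variable roles are illustrative, not the study's instruments):

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n = 37                                  # sample size matching the study
    X = rng.normal(size=(n, 4))             # user characteristics
    Y = X @ rng.normal(size=(4, 3)) + 0.5 * rng.normal(size=(n, 3))  # DSS outcomes

    cca = CCA(n_components=2).fit(X, Y)
    Xc, Yc = cca.transform(X, Y)
    for k in range(2):                      # correlation of paired variates
        r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
        print(f"canonical correlation {k + 1}: {r:.2f}")
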

European Association for Cognitive Ergonomics: Book Reviews

"The Nurnberg Funnel: Designing Minimalist Instruction for Practical Computer Skill," by John M. Carroll BIB 507-510
  David Benyon
"Formal Methods in Human-Computer Interaction," edited by M. Harrison and H. Thimbleby BIB 507-510
  Russel Winder
"An Introduction to Human-Computer Interaction," by P. A. Booth BIB 507-510
  Simon P. Davies

IJMMS 1992 Volume 36 Issue 4

Can Experts' Explanations Help Students Develop Program Design Skills? BIBA 511-551
  Marcia C. Linn; Michael J. Clancy
This paper reports an experimental investigation of the effectiveness of case studies for teaching programming. A case study provides an "expert commentary" on the complex problem-solving skills used in constructing a solution to a computer programming problem, as well as one or more worked-out solutions to the problem. To conduct the investigation, we created case studies of programming problems and evaluated what high school students in ten Pascal programming courses learned from them. We found that the expert's commentary on the decisions and activities required to solve a programming problem helped students gain an integrated understanding of programming. Furthermore, the expert's commentary imparted a more integrated understanding of programming than did the worked-out solution to the problem without the expert's explanation. These results support the contention that explicit explanations can help students learn complex problem-solving skills.
   We developed case studies for teaching students to solve programming problems for the same reasons that they have been developed in other disciplines. The case method for teaching complex problem solving was first used at Harvard College in 1870 and has permeated curricula for business, law and medicine across the country. These disciplines turned to the case method to communicate the complexity of real problems, to illustrate the process of dealing with this complexity and to teach analysis and decision making skills appropriate for these problems.
Second Order Structures in Multi-Criteria Decision Making BIBA 553-570
  Ronald R. Yager
We introduce the concept of higher-order criteria in the decision-making problem. These types of criteria are manifested in situations in which we desire to satisfy a criterion if this is possible without sacrificing the satisfaction of other, primary criteria.
Hierarchical Search Support for Hypertext On-Line Documentation BIBA 571-585
  T. R. Girill; Clement H. Luk
Effectively finding relevant passages in a full-text database of software documentation calls for a user interface that does more than mimic a printed book. A hypertext approach, with a network of links among passages, offers great flexibility but often at the cost of high cognitive overhead and a disorienting lack of contextual cues. A tree-based approach guides users along branching paths through a hierarchy of text nodes. The "natural", sequential implementation of such hierarchical access, however, is psychologically inept in large databases because it is order-dependent, discriminates awkwardly among key terms, clarifies each node's context incompletely, and often involves much semantic redundancy. An alternative, mixed approach, recently implemented in the on-line documentation system at the National Energy Research Supercomputer Center (NERSC), overcomes three of these four problems. It displays only local tree structure in response to "zoomin" or "zoomout" commands issued to focus a search begun with typical hypertext moves. This combination approach enjoys the benefits of cued, spatially interpreted hierarchical search while avoiding most of its known pitfalls. Usage monitoring at NERSC shows the ready acceptance of both zoom commands by documentation readers.
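The display policy is simple to sketch: only the focus node's immediate context is shown, and zoom commands move the focus. An illustrative toy (the node and command names are assumptions, not NERSC's system):

    tree = {"docs": ["compilers", "storage"],
            "compilers": ["fortran", "c"],
            "storage": ["archiving", "quotas"],
            "fortran": [], "c": [], "archiving": [], "quotas": []}
    parent = {c: p for p, children in tree.items() for c in children}

    def show_local(focus):
        """Display only local tree structure: the parent above the focus
        node and its children below, never the whole hierarchy."""
        if focus in parent:
            print("up:  ", parent[focus])
        print("node:", focus)
        for child in tree[focus]:
            print("  down:", child)

    def zoomin(focus, child):
        return child if child in tree[focus] else focus

    def zoomout(focus):
        return parent.get(focus, focus)

    focus = zoomin("docs", "storage")       # descend one level
    show_local(focus)                       # context above, choices below
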
Network and Multidimensional Representations of the Declarative Knowledge of Human-Computer Interface Design Experts BIBA 587-615
  Douglas J. Gillan; Sarah D. Breedin; Nancy J. Cooke
A two-part experiment investigated human-computer interface (HCI) experts' organization of declarative knowledge about the HCI. In Part 1, two groups of experts in HCI design -- human factors experts and software development experts -- and a control group of non-experts sorted 50 HCI concepts concerned with display, control, interaction, data manipulation and user knowledge into categories. In the second part of the experiment, the three groups judged the similarity of two sets of HCI concepts related to display and interaction, respectively. The data were transformed into measures of psychological distance and were analysed using Pathfinder, which generates network representations of the data, and multidimensional scaling (MDS), which fits the concepts into a multidimensional space. The Pathfinder networks from the first part of the experiment differed in organization between the two expert groups, with the human factors experts' networks consisting of highly interrelated sub-networks and the software experts' networks consisting of central nodes and fewer, less interconnected sub-networks. The networks also differed across groups in the concepts linked with graphics, natural language, function keys and speech recognition. The networks of both expert groups showed much greater organization than did the non-experts' network. The network and MDS representations of the concepts for the two expert groups showed somewhat greater agreement in Part 2 than in Part 1. However, the MDS representations from Part 2 suggested that software experts organized their concepts on dimensions related to technology, implementation and user characteristics, whereas the human factors experts organized their concepts more uniformly according to user characteristics. The discussion focuses on (1) the differences in cognitive models as a function of the amount and type of HCI design experience and (2) the role of cognitive models in HCI design and in communication within a multidisciplinary design team.
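Pathfinder is a published network-scaling algorithm; under its common parameter setting (q = n - 1, r = infinity) a link survives only if no indirect path has a smaller maximum link weight. A minimal sketch under that setting (which may differ from the parameters used in this study):

    import numpy as np

    def pfnet(d):
        """Pathfinder network for q = n - 1, r = infinity: a link survives
        only if no indirect path has a smaller maximum link weight
        (minimax distances via a Floyd-Warshall-style sweep)."""
        m = d.astype(float).copy()
        n = len(m)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    m[i, j] = min(m[i, j], max(m[i, k], m[k, j]))
        return d <= m          # True marks surviving links (diagonal trivial)

    # psychological distances among four concepts (smaller = more related)
    d = np.array([[0, 1, 4, 5],
                  [1, 0, 2, 6],
                  [4, 2, 0, 3],
                  [5, 6, 3, 0]], float)
    print(pfnet(d))            # only the chain 0-1, 1-2, 2-3 survives
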
Learning Expert Systems by Being Corrected BIBA 617-637
  Ross Peter Clement
This paper describes a new method of knowledge acquisition for expert systems. A program, KABCO, interacts with a domain expert and learns how to make examples of a concept. This is done by displaying examples based upon KABCO's partial knowledge of the domain and accepting corrections from the expert. When the expert judges that KABCO has learnt the domain completely, a large number of examples are generated and given to a standard machine learning program that learns the actual expert system rules. KABCO greatly eases the task of constructing an expert system using machine learning programs because it allows expert system rule bases to be learnt from a mixture of general (rules) and specific (examples) information. At present KABCO can only be used for classification domains, but work is proceeding to extend it to other domains. KABCO learns disjunctive concepts (represented by frames) by modifying an internal knowledge base to remain consistent with all the corrections that have been entered by the expert. KABCO's incremental learning uses the deductive processes of modification, exclusion, subsumption and generalization. The present implementation is primitive, especially the user interface, but work is proceeding to make KABCO a much more advanced knowledge engineering tool.

IJMMS 1992 Volume 36 Issue 5

Cognitive Modelling of Fighter Aircraft Process Control: A Step Towards an Intelligent On-Board Assistance System BIBA 639-671
  Rene Amalberti; Francois Deblon
A baseline description of a cognitive model, successfully implemented for high-speed, low-altitude fighter navigation missions, illustrates the design of an intelligent assistance system for future French combat aircraft. The outcomes are based on several empirical studies. Task complexity (risk, uncertainty, time pressure) is extreme and provides a prototypical example of a rapid process-control situation that poses a specific assistance problem. The paper is divided into three sections:
   1. A general review discusses the implications of the specific requirements for coupling an intelligent assistance system to pilots. Special attention is paid to the understandability and coherence of the aid, both of which directly influence the nature of the system.
   2. An empirical analysis of missions carried out by novice and experienced pilots forms the basis for a cognitive model of in-flight navigation problem solving. Because of time pressure and risk, pilots have as much difficulty applying solutions as diagnosing problems. Pilots tend to develop a sophisticated model of the situation in order to anticipate problems and actively avoid or minimize problem difficulty. In contrast, poor solutions tend to be found for unexpected problems and generally result in abandonment of the mission and/or a crash.
   3. The cognitive model described above serves as the basis for a computer cognitive model for flying high-speed, low-altitude navigation missions. The model splits functional knowledge into two levels: the local level deals with sub-goals and short-term activities; the global level deals with mission objectives and handles medium- and long-term activities. A resource manager coordinates the two levels. The program uses an AI actor programming style. This computer cognitive model serves to develop an intelligent navigation assistance system which can function as an automaton or as a tactical support system.
Probing the Mental Models of System State Categories with Multidimensional Scaling BIBA 673-696
  Bruce G. Coury; Monica Zubritzky Weiland; V. Grayson Cuqlock-Knopp
Identifying the underlying decision criteria used by people to classify system state is one of the major challenges facing designers of decision aids for complex systems. This research describes the use of multidimensional scaling (MDS) to probe the structure and composition of the mental models employed by users to identify system state, and to evaluate the impact of different display formats on those models. Twenty people were trained to classify instances of system data. Pairwise similarity ratings of instances of system data were analysed by MDS to reveal the dominant dimensions used in the task. Results showed that significant individual differences emerged, and that the dimensions used by people were also a function of the type of display format.
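The analysis pipeline is standard: similarity ratings are converted to dissimilarities and embedded in a low-dimensional space whose axes are then interpreted as decision dimensions. A minimal sketch with scikit-learn (the ratings and scale are invented for illustration):

    import numpy as np
    from sklearn.manifold import MDS

    # invented mean similarity ratings (1 = very dissimilar ... 9 = identical)
    # for four instances of system data
    similarity = np.array([[9, 8, 3, 2],
                           [8, 9, 3, 2],
                           [3, 3, 9, 7],
                           [2, 2, 7, 9]], float)
    dissimilarity = 9 - similarity          # ratings -> distances

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissimilarity)
    print(coords)                           # instances 1-2 and 3-4 cluster apart
    print(mds.stress_)                      # badness of fit of the 2-D solution
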
Skill Metrics on a Genetic Graph as a Mechanism for Driving Adaptation in an Operating System Interface BIBA 697-718
  Chris J. Copeland; Simon R. Eccles
Although simplified interfaces offer a convenient means of access to complex facilities, they do not help users to master the true complexity of a system. An interface which can gradually familiarize users with the commands necessary to perform complex tasks, adapting with them to reflect their individual skill acquisition, provides an attractive means of learning difficult command sets. This paper describes an adaptive shell to the cryptic command set of the UNIX operating system. Implementation of adaptivity requires mechanisms which can adequately, but unobtrusively, model the individual user's progress in attaining skill. The use of a genetic graph to represent states of skill acquisition is discussed, together with the application of techniques from a simple Keystroke Level Model to generate suitable performance prediction metrics against which to determine skill levels. Experience with an exemplar for the shell in a normal work environment is presented to show that the approach taken can succeed in producing a system which is both sensitive to individual user variability and practical to use.
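The Keystroke-Level Model side of this is easy to illustrate with the standard published operator times (Card, Moran and Newell); the calibration actually used in the paper may differ:

    # standard Keystroke-Level Model operator times, in seconds
    KLM_TIMES = {"K": 0.28,   # keystroke (average 40 wpm typist)
                 "P": 1.10,   # point with a pointing device
                 "H": 0.40,   # home hands between keyboard and device
                 "M": 1.35}   # mental preparation

    def predicted_time(operators):
        """Predict expert execution time for a KLM operator sequence."""
        return sum(KLM_TIMES[op] for op in operators)

    # one mental step, then typing the six characters of "ls -lt"
    print(predicted_time("M" + "K" * 6))    # -> 3.03 s
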
Analysing the Novice Analyst: Cognitive Models in Software Engineering BIBA 719-740
  A. G. Sutcliffe; N. A. M. Maiden
Cognitive problem solving by novice systems analysts during a requirements analysis task was investigated by protocol analysis. Protocols were collected from 13 subjects who analysed a scheduling problem. Reasoning, planning, conceptual modelling and information-gathering behaviours were recorded, and subjects' solutions were evaluated for completeness and accuracy. The protocols showed an initial problem-scoping phase followed by more detailed reasoning. Performance in analysis was not linked to any one factor, although reasoning was correlated with success. Poor performance could be ascribed to failure to scope the problem, poor formation of a conceptual model of the problem domain, or insufficient testing of hypotheses. Good performance accorded with well-formed conceptual models and good reasoning and testing abilities. The implications of these results for structured systems development methods and Computer-Aided Software Engineering (CASE) tools are discussed.
Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces BIBA 741-773
  Peter G. Polson; Clayton Lewis; John Rieman; Cathleen Wharton
This paper presents a new methodology for performing theory-based evaluations of user interface designs early in the design cycle. The methodology is an adaptation of the design walkthrough techniques that have been used for many years in the software engineering community. Traditional walkthroughs involve hand simulation of sections of code to ensure that they implement specified functionality. The method we present involves hand simulation of the cognitive activities of a user, to ensure that the user can easily learn to perform the tasks that the system is intended to support. The cognitive walkthrough methodology, described in detail, is based on a theory of learning by exploration presented in this paper. A summary of preliminary results on effectiveness and comparisons with other design methods are also given.

IJMMS 1992 Volume 36 Issue 6

The Cognitive Apprenticeship Analogue: A Strategy for Using ITS Technology for the Delivery of Instruction and as a Research Tool for the Study of Teaching and Learning BIBA 775-795
  Michael L. Burger; John F. DeSoi
In this paper, a general overview of the components which have characterized the development of intelligent tutoring systems (ITS) over the past fifteen years is provided. Accompanying the overview of each component is a discussion of limitations which, we feel, restrict the extent to which ITS technology can be useful as an instructional delivery vehicle and as a tool which can be used to learn about the processes which underlie teaching and learning.
   These limitations, however, can be compensated for by altering how, and for what purposes, ITSs are developed and implemented. Our goal in writing this paper, in addition to discussing the problems associated with present ITS approaches, is to present a set of suggestions which we feel can guide the development and implementation of ITSs such that their potential as useful instructional tools can be enhanced and extended. We argue for the development of ITSs which: (i) progressively, and as much as possible, reduce the a priori restrictions that are placed on learners as they learn new content. Technology should empower the learner as a learner, enabling him or her to uncover the "mysteries" of new knowledge. Technology should not rob learners of the joy of discovery, the "aha!" experience. It should, however, facilitate the integration of new findings into existing cognitive frameworks, and provide opportunities for learners to examine and expose misunderstandings and misconceptions about how the "universe" operates; (ii) enable educators to reliably determine and report what is being learned and mastered against some set of standards that exist independently of the learning environment; and (iii) provide opportunities for educators to learn about how learning is occurring and to intervene (in real time if necessary) in ways that can alter and improve (either temporarily or permanently) the environment within which student interactions occur.
Decline in Accuracy of Automatic Speech Recognition as a Function of Time on Task: Fatigue or Voice Drift? BIBA 797-816
  Clive Frankish; Dylan Jones; Kevin Hapeshi
Recognition accuracy of speech recognition devices tends to decline during an extended period of continuous use. Although this deterioration in performance is commonly acknowledged, there has been little systematic observation of the phenomenon, and no clear account of its causes is available. The aim of the present study was to provide some indication of the magnitude and time course of this decline in performance, and to clarify the nature of the underlying changes in speech behaviour. Three experiments are described. Experiment 1 confirmed that there is a fall-off in recognition accuracy during a half-hour session of a data-entry task, and that this occurs for both naive and practised subjects. In Experiment 2, no recovery in recognition performance was observed when short rest breaks were scheduled, indicating that vocal fatigue was not a major factor. The effects of template retraining in mid-session were investigated in Experiment 3. This procedure was found to be effective in restoring recognition accuracy, and the retrained templates were relatively robust. The implications of these findings for the operational use of speech recognition devices are briefly discussed. For most applications, one-off template retraining is seen as a more appropriate solution to the problem of voice drift than more complex solutions based on adaptive templates.
Organizational Decision Support Systems BIBA 817-832
  Varghese S. Jacob; Hasan Pirkul
Decision support systems have traditionally been discussed within the context of individual or group decision making. In this paper we study decision support systems from an organizational perspective. We propose a framework for designing an organizational decision support system that is based on a network of knowledge-based systems. Nodes of this network interact with each other, as well as various other organizational systems, to provide comprehensive decision support. This network is also utilized to provide effective support for formal multi-participant decision making.
Feedback Strategies for Error Correction in Speech Recognition Systems BIBA 833-842
  W. A. Ainsworth; S. R. Pratt
In a noisy environment speech recognizers make mistakes. So that these errors can be detected, the system can synthesize the word it recognized, and the user can respond by saying "correction" when the word was not recognized correctly. The mistake can then be corrected.
   Two error-correcting strategies have been investigated. In one, repetition-with-elimination, when a mistake has been detected the system eliminates its last response from the active vocabulary and the user repeats the word that was misrecognized. In the other, elimination-without-repetition, the system suggests the next-most-likely word based on the output of its pattern-matching algorithm. It was found that the former strategy, with the user repeating the word, required fewer trials to correct recognition errors.
   A model which relates the average number of corrections to the recognition rate has been developed which provides a good fit to the data.
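The paper's model is not reproduced here, but the underlying intuition can be simulated under simple assumptions: if every attempt succeeds independently with probability p, corrections per word are geometric with mean (1 - p)/p, and eliminating each rejected word from the active vocabulary raises the success probability on every retry. A toy Monte Carlo comparison under those assumptions (not Ainsworth and Pratt's model):

    import random

    def avg_corrections(p, n, eliminate, trials=50_000):
        """Average spoken corrections per word: the recognizer returns the
        correct word with probability p, otherwise one of the n - 1 wrong
        words uniformly; with elimination, each rejected word leaves the
        active vocabulary and the odds are renormalized."""
        total = 0
        for _ in range(trials):
            wrong = n - 1
            while True:
                bad = (1 - p) * wrong / (n - 1)    # mass on wrong words
                if random.random() < p / (p + bad):
                    break                          # recognized correctly
                total += 1                         # say "correction", retry
                if eliminate and wrong > 0:
                    wrong -= 1
        return total / trials

    for label, elim in [("repetition only: ", False),
                        ("with elimination:", True)]:
        print(label, round(avg_corrections(0.8, 10, elim), 3))
    # without elimination E[corrections] = (1 - p) / p = 0.25; fewer with it
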
Nested IF-THEN-ELSE Constructs in End-User Computing: Personality and Aptitude as Predictors of Programming Ability BIBA 843-859
  Houn-Gee Chen; Robert P. Vecchio
Information technology has assumed an important role in organizations during the past decade. No longer the private preserve of small groups of computer specialists, end-user computing is placing information technology in the hands of employees at all levels and in virtually all functional areas. This study attempts to gain a fuller understanding of the factors associated with the ability to use IF-THEN-ELSE (or nested IF-THEN-ELSE) constructs in end-user computing. It proposes a model and summarizes empirical findings which suggest that performance on programming tasks can be partially accounted for by personality and aptitude constructs.
Diagrammatic Displays for Engineered Systems: Effects on Human Performance in Interacting with Malfunctioning Systems BIBA 861-895
  David Kieras
Computer graphics displays make it possible to display both the topological structure of a system, in the form of a schematic diagram, and information about its current state, using color-coding and animation. Such displays should be especially valuable as user interfaces for decision support systems and expert systems for managing complex systems. This report describes three experiments on the cognitive aspects of such displays. Two experiments involved both fault diagnosis and system operation using a very simple artificial system; one involved a complex real system in a fault diagnosis task. The major factors of interest concerned the topological content of the display -- principally, the extent to which the system's structural relationships were visually explicit, and the availability and visual presentation of state information. Displays containing a topologically complete diagram that presents task-relevant state information at the corresponding point on the diagram appear to be superior to displays that violate these principles. A short set of guidelines for the design of such displays is listed.

European Association for Cognitive Ergonomics: Book Reviews

"Human Factors and Typography for More Readable Programs," by R. M. Baecker and A. Marcus BIB 897-903
  T. R. G. Green
"Engineering in Complex Dynamic Worlds," edited by E. Hullnagel, G. Mancini and D. D. Woods BIB 897-903
  Jean-Michel Hoc
"Robotics, Control and Society," edited by N. Moray, W. R. Ferrell and W. B. Rouse BIB 897-903
  Veronique De Keyser
"Cognitive Aspects of Computer-Supported Tasks," by Y. Wærn BIB 897-903
  J. Richardson