IJMMS Tables of Contents: Volumes 17-37

International Journal of Man-Machine Studies 27

Editors: B. R. Gaines; D. R. Hill
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Links: Table of Contents
  1. IJMMS 1987 Volume 27 Issue 1
  2. IJMMS 1987 Volume 27 Issue 2
  3. IJMMS 1987 Volume 27 Issue 3
  4. IJMMS 1987 Volume 27 Issue 4
  5. IJMMS 1987 Volume 27 Issue 5/6

IJMMS 1987 Volume 27 Issue 1

Analogy and Other Sources of Difficulty in Novices' Very First Text-Editing BIBA 1-22
  Carl Martin Allwood; Mikael Eliasson
In this study, an analysis is made of different causes of novices' errors and inefficient commands in a text-editing task. Twenty-eight novices were taught a subset of a text-editing program and asked to perform some simple text-editing tasks on the computer. The results show that deficient analogical thinking may have contributed to a large part of the subjects' errors. When only subjects' errors were considered, analogies from the currently used program were more important than analogies based on knowledge about typewriters. However, when inefficient commands were included in the analyses, the two sources of analogies were of equal importance. High self-estimated typing speed was shown to be associated with a high error frequency. Other causes of errors, such as the unfortunate design of specific features in the text-editing program, were also suggested.
A Methodology of Design Knowledge Acquisition for Use in Learning Expert Systems BIBA 23-32
  Tomasz Arciszewski; M. Mustafa; Wojciech Ziarko
This paper presents an approach to design conceptual knowledge acquisition. The approach was developed primarily for knowledge acquisition in BRZDY1, a learning expert system for conceptual design currently under development. A formal identification of qualitative, conceptual design decisions, based on typology by coverings and the description of design problems by qualitative variables, is discussed. Also, we describe the fundamentals of the method of generating design rules from examples of decisions made by an expert. This method is based on the extended concept of rough sets. A comprehensive example of the application of this method to conceptual design of steel members under bending is provided at the end of the paper.
Creating Categories for Databases BIBA 33-63
  Baruch Fischhoff; Donald MacGregor; Lyn Blackshaw
The value of a database is bounded by the accessibility of the information it contains. The present studies provide a multifaceted approach to designing and evaluating entry-level menus using, as a case in point, the Statistical Abstract of the United States. They consider different ways of organizing material into categories, developing labels for those categories, and presenting them to users. As performance criteria, the studies consider both the transparency of the resulting system, how easily users can identify the location of items, and its metatransparency, how well users can assess the system's transparency. The latter criterion, which measures the realism of users' expectations regarding their success with the system, is relevant to how willing users are to attempt a search, how carefully they scrutinize its products, and how satisfied (or frustrated) they are with their progress. Aside from demonstrating a general method, these studies provide some potentially useful substantive results. One is the persistent superiority of the Statistical Abstract's 33 chapters as an entry-level menu, as compared with various attempts to create superordinate categories. A second is subjects' relatively poor ability to predict success in locating individual items. A third is the relatively good performance obtained with superordinate categories whose internal structure and labels were determined by individuals like the eventual users. These results replicate and amplify results using more restricted and artificial databases, and offer some promise for designing interfaces as well as some insight into subjective categorization processes.
On Matching Programmers' Chunks with Program Structures: An Empirical Investigation BIBA 65-89
  Iris Vessey
Expertise in a given domain is generally regarded as being manifested in the possession of a large body of knowledge stored as chunks or schemas in long-term memory. Recall experiments in a variety of domains have demonstrated that experts possess larger chunks of knowledge on meaningful tasks, while their performance falls to that of novices on non-meaningful tasks. Three experiments are reported, two recall and one construction, that were designed to provide information on programmers' (COBOL) knowledge structures. In the initial experiment, the chunking ability of computer programmers, as revealed by program recall, was less successful in predicting performance on a debugging task than were programmers' problem-solving processes. A second experiment sought to determine whether the lack of a match between programmers' chunks and the information structures in the program used for recall was responsible for the poor differentiation of programming skill afforded by the recall test. Although expert programmers recalled more than novice programmers, there were no qualitative differences in the types of structures the two groups recalled. A third experiment required expert programmers to construct a routine to accomplish a similar function to that of the program used for recall. The programmers constructed routines with diverse program structures. In general, the results show that both expert and novice programmers possess a wide variety of chunks of the kind incorporated into the recall program. It appears, however, that even professional programmers do not have well-formulated scripts for validation stored in long-term memory.
The Use of Hand-Drawn Gestures for Text Editing BIBA 91-102
  Catherine G. Wolf; Palmer Morrel-Samuels
This paper reports results from a paper and pencil study of the use of hand-drawn gestures for simple editing tasks. The use of gesture is of particular interest in an interface which allows the user to write directly on the surface of a display with a stylus. The results of the study provided encouragement for the development of gesture-driven user interfaces. There was very good intra-subject consistency in the spatial form of gestures used for an editing operation, and also, good agreement across subjects in the form selected for a particular operation. Subjects' reactions to the use of gesture indicated that gesture commands were perceived as easy to use and remember. Specific implications for the design of gestural interfaces are discussed.

IJMMS 1987 Volume 27 Issue 2

The KREME Knowledge Editing Environment BIBA 103-126
  Glenn Abrett; Mark H. Burstein
One of the major bottlenecks in large-scale expert-system development is the problem of knowledge acquisition: the construction, maintenance, and testing of large knowledge bases. This paper provides an overview of the current state of development of the KREME Knowledge Representation Editing and Modeling Environment. KREME is an extensible experimental environment for developing and editing large knowledge bases in a variety of representation styles. It provides tools for effective viewing and browsing in each kind of representational base, automatic consistency checking, macro-editing facilities to reduce the burdens of large-scale knowledge-base revision, and some experimental automatic generalization and acquisition facilities.
Knowledge Elicitation Using Discourse Analysis BIBA 127-144
  N. J. Belkin; H. M. Brooks; P. J. Daniels
This paper is concerned with the use of discourse analysis and observation to elicit expert knowledge. In particular, we describe the use of these techniques to acquire knowledge about expert problem solving in an information provision environment. Our method of analysis has been to make audio-recordings of real-life information interactions between users (the clients) and human intermediaries (the experts) in document retrieval situations. These tapes have then been transcribed and analysed utterance-by-utterance in the following ways: assigning utterances to one of the prespecified functional categories; identifying the specific purposes of each utterance; determining the knowledge required to perform each utterance; grouping utterances into functional and focus-based sequences. The long-term goal of the project is to develop an intelligent document retrieval system based on a distributed expert, blackboard architecture.
Acquisition of Uncertain Rules in a Probabilistic Logic BIBA 145-154
  John G. Cleary
The problem of acquiring uncertain rules from examples is considered. The uncertain rules are expressed using a simple probabilistic logic which obeys all the axioms of propositional logic. By using three truth values (true, false, undefined), a consistent expression of contradictory evidence is obtained. The logic is also able to express correlations between rules and to deal with uncertain rules; the probabilities of these correlations can be computed directly from examples.
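The flavour of a three-valued probabilistic representation can be suggested with a minimal sketch. This is a hypothetical encoding for illustration only, not Cleary's actual formalism: probability mass is split over true, false, and undefined, so contradictory examples accumulate as undefined mass rather than producing an inconsistency.

```python
from dataclasses import dataclass

@dataclass
class TruthValue:
    """Probability mass over {true, false}; the remainder is 'undefined'."""
    p_true: float
    p_false: float

    @property
    def p_undefined(self):
        return 1.0 - self.p_true - self.p_false

def from_examples(outcomes):
    """Estimate a TruthValue directly from example observations,
    where outcomes is a list of 'true' / 'false' / 'undefined' labels."""
    n = len(outcomes)
    return TruthValue(outcomes.count('true') / n,
                      outcomes.count('false') / n)

# Contradictory evidence yields undefined mass instead of an error.
tv = from_examples(['true', 'true', 'false', 'undefined'])
print(tv.p_true, tv.p_false, tv.p_undefined)  # 0.5 0.25 0.25
```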
Cognitive Biases and Corrective Techniques: Proposals for Improving Elicitation Procedures for Knowledge-Based Systems BIBA 155-166
  David A. Cleaves
Expert system output is only as good as the expert judgements on which the system is based. One component of expertise may be defined as the ability to distinguish causal from random occurrence. Judgements of expert and novice alike have been shown to reflect systematic biases in comparison with normative statistical logic. These biases may matter where accuracy, consistency, and coherence are important attributes of the required judgement. Several biases and their cognitive origins are discussed in the context of building knowledge-based systems for wildland fire control. Preliminary guidelines are offered for recognizing and correcting biases during the knowledge-elicitation process.
A Mixed-Initiative Workbench for Knowledge Acquisition BIBA 167-179
  Gary S. Kahn; Edwin H. Breaux; Peter DeKlerk; Robert L. Joseph
The TEST Development Environment (TDE) enables knowledge engineers and trained domain experts to interactively build knowledge bases representing troubleshooting knowledge. TEST is an application shell, providing a domain-independent diagnostic problem solver together with a library of schematic prototypes. TDE provides both system-directed interrogation and a user-directed editor for building up the knowledge base. Novice users of TDE rely heavily on the guidance offered by system prompts, while experienced users tend to use the direct manipulation of graphic items as a preferred method. This paper examines four facets of TDE: first, the core concepts of the underlying diagnostic system; second, the knowledge acquisition mechanism; third, workbench functions for knowledge-base modification; and finally debugging support within the workbench.
Generalization and Noise BIBA 181-204
  Yves Kodratoff; Michel Manago; Jim Blythe
This paper describes a research project which aims at applying machine learning (ML) techniques to ease knowledge acquisition (KA) for knowledge-based systems. Since noise in real life data has a drastic effect on ML, we examine in detail problems connected with noise. The learning system integrates two apparently distinct approaches: the numeric approach and the symbolic approach. It uses a filtering mechanism that is driven by statistical information and by comparison between several sources of knowledge (multi-expertise and expert users' "cross-examination" of input). The system also attempts to generate concepts which are resilient to noise and to improve the language of description. While it is usually thought that noise prevents using ML techniques in real applications, we attempt to show that, on the contrary, existing techniques can be stretched to cope with noise and to obtain better results than traditional KA techniques.
Analysis of the Performance of a Genetic Algorithm-Based System for Message Classification in Noisy Environments BIBA 205-220
  Elaine J. Pettit; Michael J. Pettit
The process of knowledge acquisition must occur continually in those knowledge-based systems which must operate in noisy, contextually rich environments. One very important application with this requirement involves inferring, from variably noisy sensor messages, the occurrence of events which cannot be exhaustively predefined. Our paper describes on-going basic research on the construction of an adaptive system which can perform high-level, rapid classification of sensor messages, possibly very noisy, concerning objects in its environment. The paper concentrates on experiments to determine optimal parameters for this bi-level, genetic algorithm-based system in low, medium, and high noise environments.
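The genetic component can be illustrated with a generic, single-level GA sketch under a noisy fitness function. This is not the authors' bi-level architecture; the target pattern, noise rate, and population parameters below are all hypothetical choices for illustration.

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical message pattern
NOISE = 0.1                          # probability a bit is misread

def noisy_fitness(candidate):
    # Each target bit is flipped with probability NOISE before comparison,
    # modelling a noisy sensor channel: fitness itself is a random variable.
    observed = [b ^ (random.random() < NOISE) for b in TARGET]
    return sum(c == o for c, o in zip(candidate, observed))

def evolve(pop_size=20, generations=60):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection on the noisy fitness scores.
        parents = sorted(pop, key=noisy_fitness, reverse=True)[:pop_size // 2]
        pop = parents[:]
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(TARGET))        # point mutation
            child[i] ^= (random.random() < 0.2)
            pop.append(child)
    return max(pop, key=noisy_fitness)

best = evolve()
```

Despite never seeing a noise-free evaluation, the population converges toward the underlying pattern, which is the basic premise of classifying messages in noisy environments.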

IJMMS 1987 Volume 27 Issue 3

Simplifying Decision Trees BIBA 221-234
  J. R. Quinlan
Many systems have been developed for constructing decision trees from collections of examples. Although the decision trees generated by these methods are accurate and efficient, they often suffer the disadvantage of excessive complexity and are therefore incomprehensible to experts. It is questionable whether opaque structures of this kind can be described as knowledge, no matter how well they function. This paper discusses techniques for simplifying decision trees while retaining their accuracy. Four methods are described, illustrated, and compared on a test-bed of decision trees from a variety of domains.
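The general idea behind such simplification can be sketched as reduced-error-style pruning: replace a subtree with a majority-class leaf whenever that does not hurt accuracy on held-out examples. This is a generic illustration of one of the family of methods the paper compares, not Quinlan's specific algorithms; the tree encoding and toy data are hypothetical.

```python
# A tree is either ('leaf', label) or ('node', feature, {value: subtree}).

def classify(tree, example):
    while tree[0] == 'node':
        _, feature, branches = tree
        tree = branches[example[feature]]
    return tree[1]

def majority(labels):
    return max(set(labels), key=labels.count)

def prune(tree, examples):
    """Bottom-up pruning; examples is a list of (features, label) pairs
    reaching this node (e.g. a held-out validation set at the root)."""
    if tree[0] == 'leaf' or not examples:
        return tree
    _, feature, branches = tree
    pruned = ('node', feature,
              {v: prune(sub, [e for e in examples if e[0][feature] == v])
               for v, sub in branches.items()})
    labels = [lbl for _, lbl in examples]
    leaf = ('leaf', majority(labels))
    err_tree = sum(classify(pruned, f) != l for f, l in examples)
    err_leaf = sum(l != leaf[1] for _, l in examples)
    # Prefer the simpler leaf when it is at least as accurate.
    return leaf if err_leaf <= err_tree else pruned

# Toy tree whose 'wind' split turns out to be noise on the held-out data.
tree = ('node', 'outlook', {
    'sun': ('node', 'wind', {'y': ('leaf', 'no'), 'n': ('leaf', 'yes')}),
    'rain': ('leaf', 'no'),
})
validation = [({'outlook': 'sun', 'wind': 'y'}, 'yes'),
              ({'outlook': 'sun', 'wind': 'n'}, 'yes'),
              ({'outlook': 'rain', 'wind': 'y'}, 'no')]
pruned = prune(tree, validation)
print(pruned)  # ('node', 'outlook', {'sun': ('leaf', 'yes'), 'rain': ('leaf', 'no')})
```

The spurious 'wind' test is collapsed into a single leaf, while the informative 'outlook' split survives.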
Creating the Domain of Discourse: Ontology and Inventory BIBA 235-250
  Stephen Regoczei; Edwin P. O. Plantinga
The paper describes the foundations of a methodology for natural language-based knowledge acquisition. It concentrates on a special type of context: the case in which an analyst interviews an informant who is a domain expert and the text of the discussion is carefully recorded. In this context the following paradox arises: the analyst is after knowledge, but all he gets are words. Matching concepts to the words -- or, more precisely, constructing conceptual structures which model the mental models of the informant -- is the task of the analyst. The conceptual structures are to be specified as sets of conceptual graphs.
   To carry out this task, the clear specification of the domain of discourse in terms of an ontology and an inventory becomes necessary. The discourse is considered to include not only the text of the discussion between the analyst and the informant, but also the ever-changing mental models of both parties. The mental models are construed as modelling some object domain "out there", but the domain of discourse is created through discourse.
   A step-by-step technique is given for specifying the domain of discourse with careful attention paid to version control. It is noted that different interviews about the "same" object domain may give rise to several different domains of discourse.
KITTEN: Knowledge Initiation and Transfer Tools for Experts and Novices BIBA 251-280
  Mildred L. G. Shaw; Brian R. Gaines
This paper gives a state-of-the-art report on the use of techniques based on personal construct psychology to automate knowledge engineering for expert systems. It presents the concept of knowledge support systems as interactive knowledge engineering tools, states the design criteria for such systems, and outlines the structure and key components of the KITTEN implementation. KITTEN includes tools for interactive repertory grid elicitation and entailment analysis that have been widely used for rapid prototyping of industrial expert systems. It also includes tools for text analysis, behavioral analysis and schema analysis, that offer complementary and alternative approaches to knowledge acquisition. The KITTEN implementation integrates these tools around a common database with utilities designed to give multiple perspectives on the knowledge base.
Knowledge Base Refinement by Monitoring Abstract Control Knowledge BIBA 281-293
  David C. Wilkins; William J. Clancey; Bruce G. Buchanan
An explicit representation of the problem solving method of an expert system shell as abstract control knowledge provides a powerful foundation for learning. This paper describes the abstract control knowledge of the HERACLES expert system shell for heuristic classification problems, and describes how the ODYSSEUS apprenticeship learning program uses this representation to semi-automate "end-game" knowledge acquisition. The problem solving method of HERACLES is represented explicitly as domain-independent tasks and metarules. Metarules locate and apply domain knowledge to achieve problem solving subgoals, such as testing, refining, or differentiating between hypotheses; and asking general or clarifying questions.
   We show how monitoring abstract control knowledge for metarule premise failures provides a means of detecting gaps in the knowledge base. A knowledge base gap will almost always cause a metarule premise failure. We also show how abstract control knowledge plays a crucial role in using underlying domain theories for learning, especially weak domain theories. The construction of abstract control knowledge requires that the different types of knowledge that enter into problem solving be represented in different knowledge relations. This provides a foundation for the integration of underlying domain theories into a learning system, because justification of different types of new knowledge usually requires different ways of using an underlying domain theory. We advocate the construction of a definitional constraint for each knowledge relation that specifies how the relation is defined and justified in terms of underlying domain theories.
A Conceptual Clustering Program for Rule Generation BIBA 295-313
  Edward Wisniewski; Howard Winston; Reid Smith; Michael Kleyn
We present an Interesting Situation Generator (ISG) that assists in the synthesis of interpretation rules from basic domain knowledge. The ISG is a hierarchical clustering program that discovers equivalence classes of situations (e.g. types of geological formations) that give rise to qualitatively distinct manifestations (e.g. different patterns of geophysical measurements corresponding to types of geological formations). The equivalence classes can be used by a rule generator to construct an initial set of interpretation rules of the form manifestation => situation.

IJMMS 1987 Volume 27 Issue 4

Pictorial Communication with Computers BIBA 315-336
  P. G. Barker; M. Najah; K. A. Manji
Human-computer interaction involves the movement of information between a human and a computer by means of suitably designed interface systems. Conventional interfaces for the transmission of text and other basic forms of data are now well established. Increasingly, various types of pictorial interface are being used to fabricate "user-friendly" dialogues with computers. This paper describes some approaches to human-computer communication via the use of conventional paper-based pictorial forms. Some attempts at evaluating end-user reactions to the use of this type of interface are described. The results of the evaluation are very encouraging. Our findings suggest that this interface system is acceptable, sound, robust, easy to use and learn, and is sufficiently expressive.
Program Design Languages: How Much Detail Should They Include? BIBA 337-347
  Deborah A. Boehm-Davis; Sylvia B. Sheppard; John W. Bailey
This experiment evaluated the effectiveness of using a program design language (PDL) specifically designed to aid in coding a corresponding programming language. PDLs were designed to reflect the constructs and level of detail of three particular programming languages (i.e. MACRO-11, FORTRAN and APL). We measured the performance of programmers coding from these various PDLs in MACRO-11 and FORTRAN. Each participant was presented with three programs in one of the two programming languages. Several lines had been deleted from each program. A participant's task, performed online, was to complete the code using the PDLs.
   For programmers coding in MACRO-11, the MACRO-like PDL was associated with the shortest coding times. Further, the participants said they found the MACRO-like PDL easiest to use, and they relied on it most heavily. For programmers coding in FORTRAN, the FORTRAN-like PDL was associated with the shortest coding times; the participants said they found the FORTRAN-like PDL easiest to use, and they relied on it most heavily. From these data we conclude that optimal use of a PDL requires that it be tailored to the target programming language in terms of type of construct and level of detail.
PRISM: An Algorithm for Inducing Modular Rules BIBA 349-370
  Jadzia Cendrowska
The decision tree output of Quinlan's ID3 algorithm is one of its major weaknesses. Not only can it be incomprehensible and difficult to manipulate, but its use in expert systems frequently demands that irrelevant information be supplied. This report argues that the problem lies in the induction algorithm itself and can only be remedied by radically altering the underlying strategy. It describes a new algorithm, PRISM, which, although based on ID3, uses a different induction strategy to induce rules which are modular, thus avoiding many of the problems associated with decision trees.
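A covering strategy of the kind PRISM uses can be sketched as follows. This is a simplified reconstruction from the abstract's description, not Cendrowska's published pseudocode, and the toy dataset is hypothetical: for each class, greedily add attribute-value tests that maximize the fraction of covered instances belonging to the class, then remove the instances the finished rule covers and repeat.

```python
def prism(instances, target_class, attributes):
    """Induce modular rules for one class; each rule is a list of
    (attribute, value) tests interpreted as a conjunction."""
    rules = []
    remaining = list(instances)
    while any(inst['class'] == target_class for inst in remaining):
        covered, rule, used = remaining, [], set()
        # Grow the rule until it covers only target-class instances
        # (or every attribute has been used).
        while (any(inst['class'] != target_class for inst in covered)
               and len(used) < len(attributes)):
            best = None  # ((accuracy, positives), attr, value)
            for attr in attributes:
                if attr in used:
                    continue
                for value in sorted({inst[attr] for inst in covered}):
                    subset = [i for i in covered if i[attr] == value]
                    pos = sum(i['class'] == target_class for i in subset)
                    key = (pos / len(subset), pos)
                    if best is None or key > best[0]:
                        best = (key, attr, value)
            _, attr, value = best
            rule.append((attr, value))
            used.add(attr)
            covered = [i for i in covered if i[attr] == value]
        rules.append(rule)
        # Remove instances covered by the finished rule.
        remaining = [i for i in remaining
                     if not all(i[a] == v for a, v in rule)]
    return rules

# Hypothetical toy data: class 'pos' holds when a == 'x' or b == '1'.
data = [
    {'a': 'x', 'b': '1', 'class': 'pos'},
    {'a': 'x', 'b': '2', 'class': 'pos'},
    {'a': 'y', 'b': '1', 'class': 'pos'},
    {'a': 'y', 'b': '2', 'class': 'neg'},
]
print(prism(data, 'pos', ['a', 'b']))  # [[('a', 'x')], [('b', '1')]]
```

Each induced rule is a self-contained module testing only the attributes it needs, in contrast to a decision tree path, which must test every attribute on the way down.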
Context-Fixing Semantics for Instructable Robots BIBA 371-400
  Colleen Crangle; Patrick Suppes
Instructable robots must be able to interpret a wide range of ordinary natural-language commands. This paper presents an approach to the interpretation of commands that takes into account the context in which the commands are given. It shows how the precise interpretation of many ordinary English words can be fixed only within their context of use and not before. It examines the role of the perceptual situation in fixing that interpretation, the role of the cognitive and perceptual functioning of the robot, and the role of the immediate linguistic surround. Our approach draws on the model-theoretic tradition in semantics in that it defines a set of models in terms of which the English commands to the robot are interpreted. At the same time, it uses a procedural semantics for the lexicon, thus addressing the question of how the robot can use the instruction it is given to perform the task described by that instruction. Examples are drawn primarily from instruction in elementary mathematics. Other examples come from our recent work with a robotic aid for the physically disabled.
Human-Computer Interaction in the Provision of an Interpersonal Communication Mechanism for the Nonvocal BIBA 401-412
  M. C. Fairhurst; M. Bonaventura; C. Stephanidis
Picture-symbol languages can provide an appropriate alternative communication mechanism for the non-vocal, and a microcomputer-based system to mediate pictographic linguistic expression has been developed. The design of such a system imposes specific requirements at the human-computer interface both directly, in relation to the ease of user interaction, and indirectly, in terms of the suitability of the symbol forms adopted to convey message information. Both aspects have been investigated, and it is shown that emphasizing the first of these areas gives rise to design criteria which can largely overcome problems arising in current practice within the second.
FOCUS as a Phenomenological Technique for Job Analysis: Its Use in Multiple Paradigm Research (MPR) BIBA 413-433
  John Hassard
This paper is an answer to the call for new, innovative applications of interactive systems -- it outlines how a "conversational" repertory grid package is used for collecting phenomenological data. First, we introduce a research programme analysing work systems in the British Fire Service from perspectives representative of Burrell and Morgan's (1979) four sociological paradigms (functionalist; interpretive; radical humanist; radical structuralist). Following this, we describe a methodology used in the second research stage -- Kelly's (1955) personal construct theory forming a basis for conducting phenomenological research consistent with an "interpretive" paradigm. Finally, Shaw's (1980) 2-way cluster analysis package, FOCUS, is used to identify the "recipes" (Schutz, 1967) subjects consult in making sense of work processes. The method is developed in combination with ethnographic analysis, the output representing a grounded, essentially anti-positivist, technique for job analysis.
Experiments with a Cognitive Industrial Robot BIBA 435-448
  Derek Partridge; Victor Johnston; Patricia Lopez
Input-expectation discrepancy reduction is a ubiquitous mechanism; it permeates the human nervous system. This mechanism thus appears to be a generic strategy underlying many aspects of intelligent behavior. We have applied this paradigm to the domain of industrial robotics. In addition, we have explored some applications of human perceptual mechanisms in the visual system of the robot; the general strategy employed yielded a trade-off between efficient, intelligent decisions and errors.
   The result is a cognitive industrial robot that exemplifies a novel view of the industrial robotics field and serves to cast some fundamental problems, of AI as well as of robotics, in a new light. In particular, we describe a concrete application of our ideas which can be contrasted with most AI projects, functioning as they do in purely abstract domains. The concrete application introduces subproblems such as inexact matching and uncertainty with respect to all interactions with the real world, problems that abstract applications of AI theories can, and often do, avoid.
Adapting to the Speaker in Automatic Speech Recognition BIBA 449-457
  Mike Talbot
Many automatic speech recognisers work on the principle of matching incoming utterances to a library of stored voice templates. There are two main shortcomings of this approach, which can potentially be overcome by careful interface design. Firstly, the templates, collected under strictly controlled conditions, are not necessarily representative of the speaker's normal voice. Secondly, although the speaker's voice is likely to alter during the course of using the speech recogniser, the templates representing that voice will remain unchanged. This will result in a gradual lessening of the similarity of template and utterance.
   In the context of an information-retrieval task using fully automatic speech recognition, attempts were made to overcome the above problems. It was found that a modified means of template formation, giving rise to more representative templates, could improve recognition figures, especially for female speakers. However, attempts at constantly updating the templates in accordance with drifts in the speaker's diction were ineffectual in this instance. This latter result conflicts with the results of earlier, comparable studies.

IJMMS 1987 Volume 27 Issue 5/6

Introduction BIB 459-461
  Giuseppe Mancini; Dave Woods; Erik Hollnagel

Cognitive Tools: Instruments or Prostheses?

Cognitive Aids in Process Environments: Prostheses or Tools? BIBA 463-470
  James Reason
Human fallibility in one form or another is the major contributor to catastrophic failures in complex and hazardous process environments. Few would disagree with this assertion, especially in the aftermath of Chernobyl. Nor would many quarrel with the claim that human operators need more help in operating such systems, particularly during disturbances. Where opinion divides, however, is on such questions as:
  • Why this help is needed,
  • Who should have it, and
  • What forms it should take.
How Can Computer-Based Visual Displays Aid Operators? BIBA 471-478
  V. De Keyser
In this paper I first assess how different information channels are utilized in control rooms, based on results from field investigations of operator activities in four different complex processes. Second, I use these findings to formulate some recommendations about how computer-based visual displays can be designed to support improved performance.
Human Interaction with an "Intelligent" Machine BIBA 479-525
  E. M. Roth; K. B. Bennett; D. D. Woods
In this paper we report the results of a study of technicians diagnosing faults in electro-mechanical equipment with the aid of an expert system. Technicians varying in level of experience and interactive style (active or passive) diagnosed faults varying in level of difficulty. The results indicate that the standard approach to expert system design, in which the user is assigned the role of data gatherer for the machine, is inadequate. Problem solving was marked by novel situations outside the machine's competence, special conditions, underspecified instructions, and error recovery, all of which required substantial knowledge and active participation on the part of technicians. We argue that the design of intelligent systems should be based on the notion of a joint cognitive system architecture: computational technology should be used to aid the user in the process of solving his problem. The human's role is to achieve total system performance as a manager of knowledge resources that can vary in kind and amount of "intelligence" or power.
Trust Between Humans and Machines, and the Design of Decision Aids BIBA 527-539
  Bonnie M. Muir
A problem in the design of decision aids is how to design them so that decision makers will trust them and therefore use them appropriately. This problem is approached in this paper by taking models of trust between humans as a starting point, and extending these to the human-machine relationship. A definition and model of human-machine trust are proposed, and the dynamics of trust between humans and machines are examined. Based upon this analysis, recommendations are made for calibrating users' trust in decision aids.
Operator Assistant Systems BIBA 541-554
  Guy A. Boy
This paper presents a knowledge-based system (KBS) methodology to study human-machine interactions and levels of autonomy in allocation of process control tasks, with a view to designing operational systems. In practice, operators are provided with operation manuals (paper KBS) to assist them in normal and abnormal situations. Unfortunately, operation manuals usually try to represent only the designer's understanding of the system to be controlled. The logic of the operator is often totally different. Operator logic integration is difficult, long, incomplete, and sometimes impossible. This paper focuses on a situational/analytical representation and a method for eliciting operator logic to refine a KBS shell called an operator assistant (OA). For the OA to be an efficient on-line aid, it is necessary to know what level of autonomy gives the optimal performance of the overall man-machine system. The optimal level of autonomy can be determined experimentally following an iterative process: testing a specific level of autonomy/building the corresponding level of explanation in the OA/experimental evaluation. The OA structure has been used to design a working KBS called HORSES (Human-Orbital Refueling System-Expert System). Protocol analysis of pilots interacting with this system has revealed that the a priori analytical knowledge becomes more structured with training and the situation patterns more complex and dynamic. This approach can improve our understanding of human and automatic reasoning, and their most efficient interactions.
Human Error Detection Processes BIBA 555-570
  Antonio Rizzo; Sebastiano Bagnara; Michele Visciola
The way in which humans detect their own errors has been a relatively neglected issue. The following study presents data on the relationship between types of errors and behavioural patterns of error detection, with the aim of defining the psychological mechanisms that allow the detection of errors. The results suggest that different kinds of psychological mechanisms are involved in the detection of different types of error. The effect of practice as a function of the distribution of attentional resources among levels of control of human behaviour is also discussed.
    Commentary: Cognitive Engineering in Complex and Dynamic Worlds BIB 571-585
      David D. Woods

    Models of Decision Makers in Accident Conditions

    Accidents at Sea: Multiple Causes and Impossible Consequences BIBA 587-598
      Willem A. Wagenaar; Jop Groeneweg
    Accidents are the consequences of highly complex coincidences, and among the multitude of contributing factors human error plays a dominant role. Prevention of human error is therefore a promising target in accident prevention. The present analysis of 100 accidents at sea shows that human errors were not recognizable as such before the accident occurred; a general increase in motivation or safety awareness will therefore not remedy the problem. The major types of human error that contribute to the occurrence of accidents are wrong habits, wrong diagnoses, lack of attention, lack of training, and unsuitable personality. These problems require specific preventive measures directed at changing undesired behaviours. Such changes should be achieved without requiring that people comprehend the relation between their actions and subsequent accidents.
    Modelling Operators in Accident Conditions: Advances and Perspectives on a Cognitive Model BIBA 599-612
      A. Amendola; U. Bersini; P. C. Cacciabue; G. Mancini
    In this paper the developments and issues identified in modelling humans and machines are discussed; the possibility of combining them through the system response analyser (SRA) methodology is presented as a balanced approach to the study of safe management of systems during abnormal sequences. Attention is then devoted to the human behaviour model implemented or being developed for SRA, and to the general frame in which manual and cognitive human activities interact with each other and are carried out by operators in controlling the evolution of transients in complex plants. The model considers two levels of cognitive process: high-level decision making, in which reasoning about the plant as a whole takes place, and low-level decision making, in which the actual control actions are carried out. An account is also given of the mechanisms of error making and recovery.
    Human Supervisor Modelling: Some New Developments BIBA 613-618
      Henk G. Stassen
    Three levels of human behaviour can be distinguished: skill-, rule- and knowledge-based behaviour. It has been stated before (Stassen, 1986) that modelling of this behaviour can only be successfully achieved at the skill- and rule-based levels; knowledge-based behaviour is difficult to model quantitatively. Recent developments in our research group have made the sharp boundary between what can and cannot be modelled more or less fuzzy.
       This is illustrated by some new developments on the application of fuzzy set theory to modelling the navigator's behaviour during the manoeuvring of large vessels, on the use of expert systems in diagnostic problems, on human reliability analyses, and on the influence of flow modelling on human supervisory control behaviour.
    Intelligent Aids, Mental Models, and the Theory of Machines BIBA 619-629
      Neville Moray
    The purpose of this paper is to establish an analytic theory of the content of an operator's mental models. Using Ashby's general theory of systems, it can be shown that a model can be regarded as a homomorph, rather than an isomorph, of the real system. Homomorphs provide a reasonable way to represent a system which is too complex, in all its details, to be understood. The mental model is probably a set of quasi-independent subsystems into which the total system can be decomposed. Analytic and empirical methods for identifying candidate homomorphs from the structure of the real system are proposed. It is suggested that a theory of design for intelligent displays and decision aids can be developed by regarding the mental model as a lattice, and the role of intelligent displays and aids as providing paths in the lattice which would otherwise be inaccessible to the operator. These proposals are related to recent work on induction.
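    The homomorph idea in the abstract above admits a compact illustration. Below is a minimal Python sketch (invented for this listing, not from Moray's paper): a four-state machine, an aggregation map h onto two coarse states, and a check that the coarse "mental model" commutes with the full machine's transitions.

```python
from itertools import product

# Full machine: a mod-4 counter with two inputs (a hypothetical example).
STATES = range(4)
INPUTS = ("step", "double")

def delta(s, a):
    """Full-machine transition function."""
    return (s + 1) % 4 if a == "step" else (s + 2) % 4

def h(s):
    """Aggregation map onto the coarse states {0, 1} (parity)."""
    return s % 2

def Delta(b, a):
    """Coarse-machine (homomorph) transition function."""
    return (b + 1) % 2 if a == "step" else b

# Homomorphism condition: h(delta(s, a)) == Delta(h(s), a) for all s, a.
# The coarse model never contradicts the full machine, despite discarding detail.
assert all(h(delta(s, a)) == Delta(h(s), a) for s, a in product(STATES, INPUTS))
print("h is a homomorphism: the 2-state model tracks the 4-state machine")
```

    A mental model in this sense trades resolution for tractability: the operator reasons over Delta while the plant runs delta.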
    Commentary: Models of the Decision Maker in Unforeseen Accidents BIB 631-639
      G. Mancini

    Reasoning and Intelligence

    Integrating Shallow and Deep Knowledge in the Design of an On-Line Process Monitoring System BIBA 641-664
      Massimo Gallanti; Luca Gilardoni; Giovanni Guida; Alberto Stefanini; Lorenzo Tomada
    Monitoring and malfunction diagnosis of complex industrial plants involve, in addition to shallow empirical knowledge about plant operation, deep knowledge about structure and function. This paper presents the results obtained in the design and experimentation of the PROP and PROP-2 systems, devoted to on-line monitoring and diagnosis of pollution phenomena in the cycle water of a thermal power plant. In particular, it focuses on the PROP-2 architecture, which encompasses a four-level hierarchical knowledge base including both empirical knowledge and a deep model of the plant. Shallow knowledge is represented by production rules and event-graphs (a formalism for expressing procedural knowledge), while deep knowledge is expressed using a representation language based on the concept of component. One major contribution of the proposed approach has been to show, in a running experimental system, that a suitable blend of shallow and deep knowledge can offer substantial advantages over a single paradigm.
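    The shallow/deep blend described above can be caricatured in a few lines of Python. The sketch below is hypothetical (component names, thresholds, and rules are invented; PROP-2's event-graphs and component language are far richer): cheap empirical rules are tried first, and a component model is consulted only when no rule fires.

```python
# Shallow layer: empirical production rules (sensor pattern -> diagnosis).
SHALLOW_RULES = [
    (lambda s: s["conductivity"] > 8.0 and s["ph"] < 6.5,
     "acid ingress in cycle water"),
]

# Deep layer: a component model with a local anomaly test per component,
# listed upstream-to-downstream (all names and thresholds are invented).
PLANT = {
    "condenser": {"feeds": "polisher", "anomalous": lambda s: s["conductivity"] > 8.0},
    "polisher":  {"feeds": "boiler",   "anomalous": lambda s: s["silica"] > 0.02},
    "boiler":    {"feeds": None,       "anomalous": lambda s: False},
}

def diagnose(sample):
    # Try the cheap empirical rules first.
    for condition, conclusion in SHALLOW_RULES:
        if condition(sample):
            return conclusion
    # Fall back to the deep model: report the most upstream anomalous component.
    for name, comp in PLANT.items():
        if comp["anomalous"](sample):
            return f"fault traced to component: {name}"
    return "no anomaly"

print(diagnose({"conductivity": 9.1, "ph": 7.0, "silica": 0.01}))
# -> fault traced to component: condenser (no shallow rule matched)
```

    The design point is that the deep model answers questions the rule base never anticipated, at higher computational cost.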
    Note: Errata for this article appear in Volume 29, Number 5, p. 612.
    Information and Reasoning in Intelligent Decision Support Systems BIBA 665-678
      Erik Hollnagel
    There are many formal theories of decision making, both for the process as a whole and for its separate aspects. Few of these, however, are sufficiently developed to serve as a basis for actually designing decision support systems, because they generally consider decision making under idealized rather than real circumstances and hence cope with only part of the complexity. Some of the unsolved problems concern the design of artificial reasoning mechanisms, the structure and representation of knowledge, and the use of information across the man-machine interface. This catalogue of "things we do not know" about intelligent decision support systems is described in the three main sections of this paper. The final section discusses the problems of validating the function of an artificial reasoning system, since validation is an important factor in determining both the applicability and the acceptability of such systems.
    The MGR Algorithm and its Application to the Generation of Explanations for Novel Events BIBA 679-708
      M. J. Coombs; R. T. Hartley
    This paper presents an algorithm for reasoning about novel events. Termed Model Generative Reasoning (MGR), the algorithm replaces deductive reasoning with an abductive procedure based on the generation of alternative, intensional domain descriptions (models) to cover problem assumptions; these models are then evaluated against domain facts as alternative explanations for the queried events. The algorithm is illustrated principally using a problem from process control.
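    Abductive generate-and-evaluate reasoning of the broad kind described above can be sketched as follows. The domain, causes, and facts are invented for illustration, and MGR itself builds far richer intensional models than these flat effect sets; the sketch only shows the generate-candidates-then-test-against-facts loop.

```python
from itertools import combinations

OBSERVATION = {"alarm"}
# Candidate causes and the effects each would explain (hypothetical domain).
CAUSES = {
    "pump_failure": {"alarm", "low_flow"},
    "sensor_fault": {"alarm"},
    "leak":         {"low_flow"},
}
FACTS = {"flow_normal"}                      # known facts to test models against
CONTRADICTS = {"low_flow": "flow_normal"}    # effects incompatible with a fact

def explanations(observation):
    """Generate cause-sets that cover the observation and survive the facts."""
    models = []
    for r in range(1, len(CAUSES) + 1):
        for combo in combinations(CAUSES, r):
            effects = set().union(*(CAUSES[c] for c in combo))
            covers = observation <= effects
            consistent = not any(CONTRADICTS.get(e) in FACTS for e in effects)
            if covers and consistent:
                models.append(set(combo))
    return models

print(explanations(OBSERVATION))  # -> [{'sensor_fault'}]
```

    Deduction would need a rule licensing the conclusion; abduction instead proposes every model that would make the observation unsurprising and lets the facts prune them.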
    A Comparison of Some Rules for Probabilistic Reasoning BIBA 709-716
      Paolo Garbolino
    Generalized Bayesian conditionals and Dempster-Shafer conditionals are considered as probability kinematics which hold under different conditions. In particular, generalized Bayesian conditioning can be applied whenever the available evidence allows one to partition the frame of reference. It is pointed out how, in this case, it is always possible to obtain a probability function from a belief function by means of minimum (relative) entropy kinematics.
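    Generalized Bayesian conditioning on a partition is Jeffrey's rule, which is also the minimum relative-entropy update under partition constraints. A small Python sketch (worlds, priors, and evidence masses invented for illustration):

```python
# Jeffrey (generalized Bayesian) conditioning on a partition of the frame.
prior = {"w1": 0.4, "w2": 0.2, "w3": 0.3, "w4": 0.1}
partition = {"E": {"w1", "w2"}, "notE": {"w3", "w4"}}
new_mass = {"E": 0.7, "notE": 0.3}   # evidence fixes the partition's probabilities

def jeffrey(prior, partition, new_mass):
    post = {}
    for block, worlds in partition.items():
        p_block = sum(prior[w] for w in worlds)
        for w in worlds:
            # Within each block, relative probabilities are preserved.
            post[w] = new_mass[block] * prior[w] / p_block
    return post

post = jeffrey(prior, partition, new_mass)
print(post)  # posterior sums to 1 and gives P(E) = 0.7 exactly
```

    Ordinary Bayesian conditioning is the special case where one block of the partition receives all the new mass.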
    A Tentative Comparison of Numerical Approximate Reasoning Methodologies BIBA 717-728
      Didier Dubois; Henri Prade
    A critical discussion of approximate reasoning methods in artificial intelligence is proposed. The focus is on numerical approaches based on certainty factors, probability, possibility, or evidence theory. The discussion is organized around three topics: knowledge representation, inductive versus deductive reasoning, and control strategies. The aim of the paper is to outline a tentative classification of emerging trends in uncertain reasoning, and to point out problems which are not yet solved or are sometimes overlooked by proponents of a single approach. The style of the paper is concise and assumes that the reader has some familiarity with the referenced material.
    Bayesian Theory and Artificial Intelligence: The Quarrelsome Marriage BIBA 729-742
      Paolo Garbolino
    The problem of knowledge-base updating is addressed from an abstract point of view, in an attempt to identify some general desiderata an updating mechanism should satisfy. These are recognized to be basically two: evaluating the local impact of new data on the individual items of knowledge already stored, and propagating this effect through the knowledge base while maintaining its global coherence. It is shown that Bayesian updating, though difficult to implement, satisfies both requirements simultaneously, whereas Dempster-Shafer updating, though easy to implement, does not satisfy the requirement of globally coherent propagation. I point out the existence of a trade-off between coherence and effectiveness in the methods for representing uncertainty currently proposed in AI. Two kinds of learning machines, Boltzmann machines and Harmonium, are discussed as first attempts to give a non-behavioral characterization of coherence in a cognitive agent, a characterization still consistent with the behavioral (probabilistic) definition.
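    For contrast with Bayesian conditioning, Dempster's rule of combination is easy to implement, which illustrates the effectiveness side of the trade-off the abstract describes. The masses below are invented for illustration; note how conflicting mass is simply renormalized away.

```python
from itertools import product

# Dempster's rule of combination on a three-element frame.
FRAME = frozenset({"a", "b", "c"})
m1 = {frozenset({"a"}): 0.6, FRAME: 0.4}   # evidence source 1
m2 = {frozenset({"b"}): 0.5, FRAME: 0.5}   # evidence source 2

def combine(m1, m2):
    raw, conflict = {}, 0.0
    for (x, p), (y, q) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:
            raw[inter] = raw.get(inter, 0.0) + p * q
        else:
            conflict += p * q          # mass landing on the empty set
    k = 1.0 - conflict                 # normalization discards the conflict
    return {s: v / k for s, v in raw.items()}, conflict

m12, conflict = combine(m1, m2)
print(conflict)  # -> 0.3: that share of the joint mass is renormalized away
```

    The local combination is cheap, but discarding the conflicting mass is exactly the kind of step that need not preserve global coherence across a whole knowledge base.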
    Commentary: Issues in Knowledge-Based Decision Support BIB 743-751
      Erik Hollnagel