
International Journal of Man-Machine Studies 26

Editors: B. R. Gaines; D. R. Hill
Dates: 1987
Volume: 26
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 47
Links: Table of Contents
  1. IJMMS 1987 Volume 26 Issue 1
  2. IJMMS 1987 Volume 26 Issue 2
  3. IJMMS 1987 Volume 26 Issue 3
  4. IJMMS 1987 Volume 26 Issue 4
  5. IJMMS 1987 Volume 26 Issue 5
  6. IJMMS 1987 Volume 26 Issue 6

IJMMS 1987 Volume 26 Issue 1

Editorial: Knowledge Acquisition BIB 1-2
  John Boose; Brian Gaines
Expertise Transfer and Complex Problems: Using AQUINAS as a Knowledge-Acquisition Workbench for Knowledge-Based Systems BIBA 3-28
  John H. Boose; Jeffrey M. Bradshaw
Acquiring knowledge from a human expert is a major problem when building a knowledge-based system. Aquinas, an expanded version of the Expertise Transfer System (ETS), is a knowledge-acquisition workbench that combines ideas from psychology and knowledge-based systems research to support knowledge-acquisition tasks. These tasks include eliciting distinctions, decomposing problems, combining uncertain information, testing incrementally, integrating data types, automatically expanding and refining the knowledge base, using multiple sources of knowledge, and providing process guidance. Aquinas interviews experts and helps them analyse, test, and refine the knowledge base. Expertise from multiple experts or other knowledge sources can be represented and used separately or combined. Results from user consultations are derived from information propagated through hierarchies. Aquinas delivers knowledge by creating knowledge bases for several different expert-system shells. Help is given to the expert by a dialog manager that embodies knowledge-acquisition heuristics.
   Aquinas contains many techniques and tools for knowledge acquisition; the techniques combine to make it a powerful testbed for rapidly prototyping portions of many kinds of complex knowledge-based systems.
KRITON: A Knowledge-Acquisition Tool for Expert Systems BIBA 29-40
  Joachim Diederich; Ingo Ruhmann; Mark May
A hybrid system for automatic knowledge acquisition for expert systems is presented. The system integrates artificial intelligence and cognitive science methods to construct knowledge bases employing different knowledge representation formalisms. For the elicitation of human declarative knowledge, the tool contains automated interview methods. The acquisition of human procedural knowledge is achieved by protocol analysis techniques. Textbook knowledge is captured by incremental text analysis. The target of the knowledge-elicitation methods is an intermediate knowledge-representation language on which frame, rule and constraint generators operate to build up the final knowledge bases. The intermediate knowledge representation level regulates and restricts the employment of the knowledge elicitation methods. Incomplete knowledge is exposed by pattern-directed invocation methods (the intermediate knowledge-base watcher), which trigger the elicitation methods to supplement the missing knowledge.
MOLE: A Tenacious Knowledge-Acquisition Tool BIBA 41-54
  Larry Eshelman; Damien Ehret; John McDermott; Ming Tan
MOLE can help domain experts build a heuristic problem-solver by working with them to generate an initial knowledge base and then detect and remedy deficiencies in it. The problem-solving method presupposed by MOLE makes several heuristic assumptions about the world, which MOLE is able to exploit when acquiring knowledge. In particular, by distinguishing between covering and differentiating knowledge and by allowing covering knowledge to drive the knowledge-acquisition process, MOLE is able to disambiguate an under-specified knowledge base and to interactively refine an incomplete knowledge base.
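The cover-and-differentiate distinction above is concrete enough to sketch. Below is a toy illustration in Python, not MOLE itself: the symptom and hypothesis tables are invented, and the point is only how covering knowledge proposes candidate explanations while differentiating knowledge prunes them.

```python
# Toy sketch of covering vs differentiating knowledge (hypothetical data,
# not MOLE's actual knowledge base or algorithm).

covers = {                       # symptom -> hypotheses that cover it
    "no-power":   {"dead-battery", "blown-fuse"},
    "dim-lights": {"dead-battery", "bad-alternator"},
}
differentiators = {              # hypothesis -> (question, confirming answer)
    "dead-battery":   ("battery older than 5 years?", "yes"),
    "blown-fuse":     ("fuse visibly broken?", "yes"),
    "bad-alternator": ("warning light on while driving?", "yes"),
}

def diagnose(symptoms, answers):
    # Covering knowledge drives acquisition: every symptom must be covered.
    candidates = set.intersection(*(covers[s] for s in symptoms))
    # Differentiating knowledge disambiguates among remaining candidates.
    if len(candidates) > 1:
        candidates = {h for h in candidates
                      if answers.get(differentiators[h][0]) ==
                         differentiators[h][1]}
    return candidates

print(diagnose({"dim-lights"},
               {"warning light on while driving?": "yes"}))
# -> {'bad-alternator'}
```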
Knowledge-Based Knowledge Acquisition for a Statistical Consulting System BIBA 55-64
  William A. Gale
Knowledge-based knowledge acquisition means restricting the domain of knowledge that can be acquired and developing a conceptual model of the domain. We have built a prototype knowledge-based knowledge acquisition system for the domain of data analysis. A critique of the prototype has led to a design for a possibly practical data analysis knowledge acquisition system.
KNACK -- Report-Driven Knowledge Acquisition BIBA 65-79
  Georg Klinker; Joel Bentolila; Serge Genetet; Michael Grimes; John McDermott
This paper describes a knowledge-acquisition tool that builds expert systems for evaluating designs of electro-mechanical systems. The tool elicits from experts (1) knowledge in the form of a skeletal report, (2) knowledge about a large collection of report fragments, only some of which will be relevant to any specific report, and (3) knowledge of how to customize the report fragments for a particular application. The tool derives its power from exploiting its understanding of two problem-solving methods and of the different roles that knowledge plays in those two methods.
Modelling Human Expertise in Knowledge Engineering: Some Preliminary Observations BIBA 81-92
  David C. Littman
This paper reports the results of an empirical analysis of the knowledge-engineering behavior of six people with extensive experience in artificial intelligence (AI). They were given the task of designing an AI program and were videotaped while they did so; during their 2-3 hour design sessions, they were asked to talk aloud about what they were doing and why. The paper identifies several recurrent behaviors common to all the AI designers. For example, several components of the designers' goal structures are identified, as is the importance of focusing on a "touchstone", or key issue, around which much of the designer's behavior revolves. Several potential implications of the research for the design of knowledge-engineering tools are explored.
Acquiring Domain Models BIBA 93-104
  Katharina Morik
Whereas a Learning Apprentice System stresses the generation and refinement of shallow rules for a performance program, presupposing a domain theory, BLIP is mainly concerned with constructing a domain theory as the first phase of the knowledge-acquisition process. This paper describes the BLIP approach to machine learning. The system design is presented, and the knowledge sources already implemented are described together with their formalisms and their functions in the learning process.
Use of a Domain Model to Drive an Interactive Knowledge-Editing Tool BIBA 105-121
  Mark A. Musen; Lawrence M. Fagan; David M. Combs; Edward H. Shortliffe
The manner in which a knowledge-acquisition tool displays the contents of a knowledge base affects the way users interact with the system. Previous tools have incorporated semantics that allow knowledge to be edited in terms of either the structural representation of the knowledge or the problem-solving method in which that knowledge is ultimately used. A more effective paradigm may be to use the semantics of the application domain itself to govern access to an expert system's knowledge base. This approach has been explored in a program called OPAL, which allows medical specialists working alone to enter and review cancer treatment plans for use by an expert system called ONCOCIN. Knowledge-acquisition tools based on strong domain models should be useful in application areas whose structure is well understood and for which there is a need for repetitive knowledge entry.

IJMMS 1987 Volume 26 Issue 2

A Formal Approach to Learning from Examples BIBA 123-141
  James P. Delgrande
A formal, foundational approach to learning from examples is presented. In the approach, it is assumed that a domain of application is describable as a set of facts, or ground atomic formulae. The task of a learning system is to form and modify hypothesised relations among the relations in the domain, based on a known finite subset of the ground atomic formulae. The subset of known ground atomic formulae is also assumed to grow monotonically, and so the set of hypotheses will require occasional revision.
   Formal systems are derived by means of which the set of potential hypotheses that can be formed is precisely specified. A procedure is also derived for restoring the consistency of a set of hypotheses after conflicting evidence is encountered. The framework is intended both as a basis for the development of autonomous systems that learn from examples, and as a neutral point from which such systems may be viewed and compared.
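A minimal sketch of the revision idea, under assumptions much simpler than the paper's formal systems: ground facts only accumulate, and a hypothesised rule "P(x) implies Q(x)" is retracted once the growing fact set contains a counterexample. Predicate and constant names are invented for illustration.

```python
# A minimal sketch (not Delgrande's formal system) of hypothesis revision:
# ground facts grow monotonically, and a hypothesised rule is retracted
# once the fact set contains a counterexample.

facts = {("bird", "tweety"), ("flies", "tweety")}
hypotheses = {("bird", "flies")}            # "everything that is a bird flies"

def consistent(hyp, facts):
    p, q = hyp
    has_p = {c for (pred, c) in facts if pred == p}
    has_q = {c for (pred, c) in facts if pred == q}
    # For brevity, a known P-object not known to be a Q-object counts as a
    # counterexample (a closed-world simplification of the paper's setting).
    return has_p <= has_q

def revise(hypotheses, facts):
    return {h for h in hypotheses if consistent(h, facts)}

print(revise(hypotheses, facts))            # hypothesis survives
facts.add(("bird", "opus"))                 # new evidence arrives...
print(revise(hypotheses, facts))            # ...counterexample: set()
```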
Design for Acquisition: Principles of Knowledge-System Design to Facilitate Knowledge Acquisition BIBA 143-159
  Thomas R. Gruber; Paul R. Cohen
The problem of knowledge acquisition is viewed in terms of the incongruity between the representational formalisms provided by an implementation (e.g. production rules) and the formulation of problem-solving knowledge by experts. The thesis of this paper is that knowledge systems can be designed to facilitate knowledge acquisition by reducing representation mismatch. Principles of design for acquisition are presented and applied in the design of an architecture for a medical expert system called MUM. It is shown how the design of MUM makes it possible to acquire two kinds of knowledge that are traditionally difficult to acquire from experts: knowledge about evidential combination and knowledge about control. Practical implications for building knowledge-acquisition tools are discussed.
Specification for Expertise BIB 161-181
  Paul E. Johnson; Imran Zualkernan; Sharon Garber
Heuristics for Expertise Transfer: An Implementation of a Dialog Manager for Knowledge Acquisition BIBA 183-202
  Catherine M. Kitto; John H. Boose
One of the most difficult and time-consuming activities in constructing an expert system is the process of knowledge acquisition. Our objectives are to identify a set of heuristics for expertise transfer and modeling based on our experience in knowledge acquisition for expert systems, and to formalize this knowledge as rules. Aquinas, a knowledge-acquisition workbench, contains tools that interview experts; analyse, test, and refine knowledge; and generate knowledge bases for expert system shells. A set of heuristics for knowledge acquisition has been defined and incorporated in the Dialog Manager subsystem of Aquinas to provide guidance in the knowledge acquisition process to domain experts and knowledge engineers.
   The implementation of the Dialog Manager is described, and an example transcript shows the interaction of Aquinas, the Dialog Manager, and the expert. A preliminary classification for knowledge acquisition heuristics is proposed. Finally, issues in formalizing strategies for knowledge acquisition and a plan for future research are presented.
Formal Thought and Narrative Thought in Knowledge Acquisition BIBA 203-212
  Jim Kornell
There are two different kinds of thought of interest to knowledge engineers. One is formal thought, as exemplified by logic and mathematics; the evaluation criterion for formal thought is truth. The second is narrative thought, as exemplified by metaphors, analogies, and gestalts; the evaluation criterion for narrative thought is verisimilitude. To build a knowledge system which operates in the realm of narrative thought, one must build a model of expert knowledge of the domain, and such a model must contain not only the facts and heuristics used by the expert, but also the patterns of reasoning and most importantly the kinds of reasoning used by the expert. Three representative knowledge acquisition tools -- RuleMaster, MOLE, and Aquinas -- are briefly reviewed. It is suggested that the tools discussed, and the members of the classes each represents, fail to effectively support identifying and acquiring the patterns of reasoning and especially the kinds of reasoning used by experts in narrative domains.
INFORM: An Architecture for Expert-Directed Knowledge Acquisition BIBA 213-230
  Eric A. Moore; Alice M. Agogino
This paper presents an architecture for INFORM, a domain-independent, expert-directed knowledge-acquisition aid for developing knowledge-based systems. The INFORM architecture is based on information requirements and modeling approaches derived from both decision analysis and knowledge engineering. It emphasizes accommodating cycles of creative and analytic modeling activity, and the assessment and representation of aggregates of information to represent domain expertise holistically. The architecture is best suited to heuristic classification problem-solving (Clancey, 1985), in particular to domains involving diagnosis or decision-making under uncertainty. Influence diagrams are used as the knowledge structure and computational representation. We present a set of information and performance requirements for expert-directed knowledge acquisition, and describe a synthesis of approaches for supporting the knowledge-engineering activity. We discuss potential applications of INFORM as a knowledge-engineering aid, specifically as an aid for helping its user develop insight into the domain being encoded.
Generic Tasks for Knowledge-Based Reasoning: The "Right" Level of Abstraction for Knowledge Acquisition BIBA 231-243
  Tom Bylander; B. Chandrasekaran
Our research strategy has been to identify generic tasks -- basic combinations of knowledge structures and inference strategies that are powerful for solving certain kinds of problems. Our strategy is best understood by considering the "interaction problem": representing knowledge for the purpose of solving some problem is strongly affected by the nature of the problem and by the inference strategy to be applied to the knowledge. The interaction problem implies that different knowledge-acquisition methodologies will be required for different kinds of reasoning, e.g. a different knowledge-acquisition methodology for each generic task. We illustrate this using the generic task of hierarchical classification. Our proposal and the interaction problem call into question many generally held beliefs about expert systems, such as the belief that the knowledge base should be separated from the inference engine.
The Knowledge Acquisition Grid: A Method for Training Knowledge Engineers BIBA 245-255
  Marianne LaFrance
This paper describes the Knowledge Acquisition Grid, developed to assist knowledge engineers in the manual transfer of expertise. The Grid is used in a knowledge-acquisition module which itself is part of a larger program designed to train people in knowledge engineering techniques offered by Digital Equipment Corporation. The Grid describes a two-dimensional space in which five forms of expert knowledge and six basic types of interview questions constitute the horizontal and vertical dimensions respectively. Description of the rationale, dimensions, components, and strategy for use of the Grid in the knowledge-acquisition component of building an expert system is provided along with discussion of the need for greater attention in general to the social psychology of expert interviewing.
Mapping Cognitive Demands in Complex Problem-Solving Worlds BIB 257-275
  David D. Woods; Erik Hollnagel

IJMMS 1987 Volume 26 Issue 3

On Comprehending a Computer Manual: Analysis of Variables Affecting Performance BIBA 277-300
  Donald J. Foss; Penny L. Smith-Kerker; Mary Beth Rosson
In two experiments novice computer users were taught a text editor from one of a set of manuals describing the commands and their functions. The manuals varied systematically in how information was presented. In Experiment II (the more extensive study) half of the 72 college subjects got the commands via an "abstract syntax", following the presentation style of the original manual for the system. For the other half of the subjects the commands were presented in a more concrete form. Crossed with the syntax variable was a manual-organization variable. For half the subjects the manual presented many alternative ways of accomplishing a sub-task as soon as that sub-task was introduced -- again following the original design of the manual. For the other half the alternatives were minimized at the introduction of the sub-task, but all were provided before the editing itself began. A third variable was also studied: presence vs absence of a surrogate model (a mental model) for the editor. Half of the subjects were presented with such a model and half were not.
   A time-stamped keystroke record was kept while the subjects tried to use the editor, and a variety of dependent variables involving accuracy and speed were measured. The results showed significant effects of the manual variables (syntax and organization), though the locus of the effects -- the dependent variables influenced -- varied between the two independent variables. The surrogate model had little effect. The results are discussed in terms of the planning and execution stages of novices' performance and how the independent variables affect these stages.
Representing and Using Metacommunication to Control Speakers' Relationships in Natural-Language Dialogue BIBA 301-319
  David L. Sanford; J. W. Roach
This paper is a report on a new theory and representation of dialogue, called Dialogue Structures. One of the marks of intelligent behavior among humans is the ability to use metaknowledge for making decisions. When two interactants engage in a dialogue, one necessary job is to use metacommunication to manage the relationship between the interactants. We identify and represent structural patterns that provide metaknowledge about the relationship of the interactants, also providing the usual representation of the content of natural language utterances. This paper presents an analysis "by hand" applying the theory to actual dialogues between human speakers. It explains a working program that implements the theory and presents dialogues produced by human users interacting with the program. The theory is so robust that, as a by-product, it interprets indirect requests correctly.
A Parallel Rule Firing Fuzzy Production System with Resolution of Memory Conflicts by Weak Fuzzy Monotonicity, Applied to the Classification of Multiple Objects Characterized by Multiple Uncertain Features BIBA 321-332
  William Siler; Douglas Tucker; James Buckley
A fuzzy production-system shell characterized by parallel rather than sequential rule firing is described. All fireable rules are, in effect, fired concurrently. Since there is no unfired-rule stack, no backtracking can take place and no rule-conflict algorithm is necessary; instead, a memory-conflict algorithm is invoked when more than one rule modifies the same datum. Memory conflicts are resolved by weakly monotonic fuzzy logic; i.e. the value or truth value of an attribute may be replaced if the new truth value is equal to or greater than the old truth value. The system depends heavily on the use of fuzzy logic; on confidence levels, fuzzy numbers and fuzzy sets as explicit data types; and on the generation of rules from a data base of expert knowledge. Fuzzy sets are used to store contradictory and ambiguous information and results. If a problem is suitable for parallel processing, substantial reductions in system overhead are achieved, together with substantial economy in the number of rules which must be written; if a problem is not suitable for parallel processing, no economy is achieved. We suggest that problems which yield to deductive reasoning constitute a class suitable for sequential rule firing, and problems which yield to inductive reasoning constitute a class suitable for parallel processing. A successful application of the system to the unsupervised analysis of a time sequence of noisy echocardiogram images is described.
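The memory-conflict rule is simple enough to sketch directly. The following is a minimal illustration of weak fuzzy monotonicity, assuming a hypothetical working-memory layout rather than the shell's actual design: a new (value, truth) pair replaces the old one only if its truth value is at least as great.

```python
# Minimal sketch of weakly monotonic memory-conflict resolution: when
# several concurrently fired rules write to the same attribute, a new
# (value, truth) pair wins only if its truth value is >= the old one.

memory = {}   # attribute -> (value, truth value in [0, 1]); hypothetical

def assert_value(attr, value, truth):
    old = memory.get(attr)
    if old is None or truth >= old[1]:   # weak fuzzy monotonicity
        memory[attr] = (value, truth)

# Two rules fire "in parallel" and both modify the same datum:
assert_value("object-class", "ventricle", 0.6)
assert_value("object-class", "atrium", 0.4)     # weaker: ignored
assert_value("object-class", "ventricle", 0.8)  # stronger: replaces
print(memory)   # {'object-class': ('ventricle', 0.8)}
```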
Using Computer Knowledge in the Design of Interactive Systems BIBA 333-342
  Merle P. Martin; William L. Fuerst
An experiment was conducted to determine whether interactive systems designed for a general range of user computer knowledge performed better than models designed for either novice or experienced users. The results showed that the experimental general audience models exhibited better performance on most experiment categories. The results reported in this paper are important for the designers of interactive systems and for those developing systems education curricula.
The Classification of Programming Languages by Usage BIBA 343-360
  J. R. Doyle; D. D. Stretch
Relationships between 16 programming languages have been investigated using data from 1062 U.K. software firms. The number of firms which use both of a given pair of languages is recorded for all pairings of the 16 languages. Above-average co-occurrence of a pair is taken as evidence of a relationship between the two languages. Alternatively, the number of firms which use neither of a given pair of languages is recorded for all pairs of languages. We call the two methods of deriving similarity matrices the AND analysis (relationship by co-occurrence) and the NOR analysis (relationship by co-absence), by analogy with the Boolean operators. The AND and NOR similarity matrices first undergo separate quasi chi-square fits to remove the size contributions; the residuals (observed minus expected values) are then used as the raw input to a simple hierarchical clustering algorithm. Separate AND and NOR analyses reveal a consistent picture of inter-language relationships. Subjectively labelled, the broadest dichotomy seems to be between traditional languages, quite often considered clumsy (such as BASIC, COBOL, FORTRAN, Assembler...) and more modern, elegant languages (such as the Algol family and APL). Business vs scientific seems to be a secondary dichotomy.
   Dependency and dominance relationships can be examined by an XOR analysis: counting when one language of a pair is used while the other is not. Relative dominance (when the size-effect has been removed) is modelled by a simple directed graph, with five sub-groups of languages as the nodes.
   Some other similarity measures that might be used to relate programming languages are discussed in the Introduction, any of which may contribute to similarity by usage. Finally, the general method of analysis is applicable to many different situations in which binary data about co-occurrence of events is gathered across a large number of elements.
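The AND analysis lends itself to a short sketch. The following uses a fabricated 0/1 firm-by-language usage matrix, with a plain independence fit standing in for the quasi chi-square size correction: it computes pairwise co-occurrence counts, removes the size contribution, and clusters the residuals.

```python
# Sketch of the AND analysis: count firms using both languages of each
# pair, remove the size contribution, and cluster the residuals.
# Toy data; the study used 1062 firms and 16 languages.

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
usage = rng.integers(0, 2, size=(100, 5))       # firm x language (toy data)
langs = ["BASIC", "COBOL", "FORTRAN", "Pascal", "APL"]

both = usage.T @ usage                          # AND counts for each pair
n = usage.shape[0]
p = usage.mean(axis=0)                          # marginal usage rates
residual = both - n * np.outer(p, p)            # observed minus expected

# Above-average co-occurrence = similarity, so invert for a distance.
dist = residual.max() - residual
condensed = dist[np.triu_indices(len(langs), k=1)]
tree = linkage(condensed, method="average")     # hierarchical clustering
dendrogram(tree, labels=langs, no_plot=True)    # inspect the groupings
```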
Structure of a Directory Space: A Case Study with a UNIX Operating System BIBA 361-382
  Omer Akin; Can Baykan; D. Radha Rao
The subjects of this study are the structure of directory spaces and users' search behaviors in a UNIX operating system environment. Protocol analyses, questionnaires and surveys were conducted with users of the CAD-VAX and SPICE-VAX computers in the Architecture and Computer Science departments at Carnegie Mellon University. Findings indicate that most directories studied were organized hierarchically (tree structure) but with few levels. Some nodes were specialized as file storage areas (the leaves of the tree) and others as switch nodes that were used primarily during search. Depth first search characterized both the organization of the directories and the behavior of the users. Single-step as opposed to multiple-step traversal of the directory tree was also prevalent. Recommendations for system friendliness in terms of reusability, orientation, robustness, and consistency are discussed.

IJMMS 1987 Volume 26 Issue 4

Taking Backtracking with a Grain of SALT BIBA 383-398
  Sandra Marcus
SALT is a knowledge-acquisition tool for generating expert systems that use a propose-and-revise problem-solving strategy. The method SALT assumes incrementally constructs an initial design by proposing values for design parameters, identifying constraints on design parameters as the design develops, and revising design decisions in response to constraint violations detected in the proposal. This problem-solving strategy provides the basis for SALT's knowledge representation. SALT guides its interrogation of the domain expert using its identification of weaknesses in the knowledge base.
The Use of Alternative Knowledge-Acquisition Procedures in the Development of a Knowledge-Based Media Planning System BIBA 399-411
  Andrew A. Mitchell
The knowledge-acquisition procedures used in developing a knowledge-based media planning system are discussed. Both the development approach and the resulting system have a number of unique characteristics. In developing the system, we first constructed what we call a decision frame: a system that structures the problem for the media planner and contains little expertise. We are currently adding expertise so that the final system will be able to operate both as a decision frame and as an expert system. A number of different knowledge-acquisition procedures are being used to obtain the requisite knowledge from media planners, including (1) elicitation procedures; (2) problem-sorting techniques; (3) protocol analysis; and (4) having experts use the decision-frame system to acquire knowledge and to assess the validity of the system.
Explanation-Based Learning for Knowledge-Based Systems BIBA 413-433
  Michael J. Pazzani
We discuss explanation-based learning for knowledge-based systems. First, we identify some potential problems with the typical means of acquiring a knowledge base: interviewing domain experts. Next, we review some examples of knowledge-based systems which include explanation-based learning modules and discuss two systems in detail: ACES (Pazzani, 1986a), which learns heuristics for fault diagnosis from device descriptions, and OCCAM (Pazzani, 1986b), which learns to predict the outcome of economic-sanction episodes from simple economic theories. We conclude that explanation-based learning is a promising approach to constructing knowledge-based systems when the required information is available but not in the form of heuristic rules. In this case, the role of explanation-based learning is to explicate heuristics which are only implicit in deep models.
Multiple-Problem Subspaces in the Knowledge-Design Process BIBA 435-452
  Alain Rappaport
Designing a knowledge base is viewed as a problem-solving task in which the skilled individual's knowledge and behavior must be mapped into the system, preserving the compiled knowledge acquired by experience. The expert's problem space is complex, but its breakdown into three major subspaces allows one to formalize this approach. Selective interfaces and high-level primitives, together with a flexible knowledge representation, both elicit knowledge and support the expert's learning of the design task. High-level programming, stressing the importance of psychological as well as physical descriptions, should allow the expert to bypass the current bottleneck of having to decompile the knowledge into a low-level language and then reconstruct the control structures to recover the expertise. Hence, knowledge design becomes a function available to domain experts themselves.
   The following reflections aim at the construction of a comprehensive theory of knowledge acquisition and transfer, in the context of a direct relation between the domain expert and the machine. This work is linked to the development and use of the NEXPERT hybrid knowledge-based system.
An Overview of Knowledge Acquisition and Transfer BIBA 453-472
  Brian R. Gaines
A distributed anticipatory system formulation of knowledge acquisition and transfer processes is presented which provides scientific foundations for knowledge engineering. The formulation gives an operational model of the notion of expertise and the role it plays in our society. It suggests that the basic cognitive system that should be considered is a social organization, rather than an individual. Computational models of inductive inference already developed can be applied directly to the social model. One practical consequence of the model is a hierarchy of knowledge transfer methodologies which defines the areas of application of the knowledge-engineering techniques already in use. This analysis clarifies some of the problems of expertise transfer noted in the literature, in particular, what forms of knowledge are accessible through what methodologies. The model is being used as a framework within which to extend and develop a family of knowledge-support systems to expedite the development of expert-system applications.
Ontological Analysis: An Ongoing Experiment BIBA 473-485
  James H. Alexander; Michael J. Freiling; Sheryl J. Shulman; Steven Rehfuss; Steven L. Messick
Knowledge engineering is a complex activity which is permeated with problems inherent in the difficulties of choosing the correct abstractions. Knowledge-level analysis has been suggested as a technique to help manage this complexity. We have previously presented a methodology, called ontological analysis, which provides a technique for performing knowledge-level analysis of a problem space. This paper presents the experiences we have gained with knowledge-level analysis. Our experiences are reported and the criteria for a formal knowledge-level analysis language are discussed.
Structured Analysis of Knowledge BIBA 487-498
  S. A. Hayward; B. J. Wielinga; J. A. Breuker
Traditional approaches to Expert Systems development have emphasized the use of exploratory programming techniques and moving as quickly as possible from conceptualization to code. Our research demonstrates that implementation independent domain modelling is feasible and useful within the context of a methodology which aims at supporting good software engineering principles as applied to Expert Systems.
Induction of Horn Clauses: Methods and the Plausible Generalization Algorithm BIBA 499-519
  Wray Buntine
We consider the problem of inducing uncertainty-free descriptions of concepts when arbitrary background knowledge is available, for instance to perform constructive induction. As an idealised context, we assume that descriptions and rules in the knowledge base are in the form of definite (Horn) clauses. Using a recently developed model of generality for definite clauses, we argue that some induction techniques are inadequate for the problem. We propose a framework in which induction is viewed as a process of model-directed discovery of consistent patterns (constraints and rules) in data, and describe a new algorithm, the Plausible Generalization Algorithm, that has been used to investigate the sub-problem of discovering rules. The algorithm raises a number of interesting questions: how can we identify irrelevance during the generalization process? How can our knowledge base answer queries of the form "What do (objects) X and Y have in common that is relevant to (situation) S?"
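As a grounding example of generalization over clauses, here is a Plotkin-style least general generalization (lgg) of two atoms, a classical building block for Horn-clause induction; it is not the Plausible Generalization Algorithm itself, and the term encoding is an assumption of this sketch.

```python
# Sketch of Plotkin-style least general generalization (lgg) over atoms.
# Terms are tuples ("f", arg1, ...) for compound terms and plain strings
# for constants; this encoding is illustrative only.

def lgg(s, t, table):
    # Identical constants generalize to themselves.
    if s == t and isinstance(s, str):
        return s
    # Same functor and arity: generalize argument-wise.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    # Otherwise introduce a variable, reusing it for repeated term pairs.
    return table.setdefault((s, t), f"V{len(table)}")

table = {}
a1 = ("parent", ("person", "ann"), ("person", "bob"))
a2 = ("parent", ("person", "eve"), ("person", "bob"))
print(lgg(a1, a2, table))
# -> ('parent', ('person', 'V0'), ('person', 'bob'))
```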
A Conceptual Framework for Knowledge Elicitation BIBA 521-531
  Chaya Garg-Janardan; Gavriel Salvendy
This paper states the knowledge-elicitation problem and argues that it is two-fold: to outline a conceptual framework and to develop and validate a knowledge-extraction methodology. Required and desirable attributes of a knowledge-elicitation methodology are discussed, and a conceptual framework from which such a methodology may be derived is outlined. The framework is established by extending Newell and Simon's (1972) problem-space concept and integrating it with Kelly's (1955) theory of personal constructs. It provides guidelines on the kind of knowledge to be elicited and on the sequence and format in which this should be done, and it supports eliciting knowledge that the expert uses subconsciously and in unique ways.
The Application of Psychological Scaling Techniques to Knowledge Elicitation for Knowledge-Based Systems BIBA 533-550
  Nancy M. Cooke; James E. McDonald
A formal knowledge-elicitation methodology that incorporates psychological scaling techniques to produce empirically derived knowledge representations is discussed. The methodology has been successfully applied in several domains and overcomes many of the difficulties of traditional knowledge-elicitation techniques. Research issues pertaining to the use of scaling techniques as knowledge-elicitation tools are outlined, and one issue in particular, the elicitation of levels of abstraction in knowledge representations, is discussed in detail. Results from a study on the elicitation of knowledge about levels of abstraction for a set of Unix commands from experienced Unix users indicated that the representations obtained using this methodology can be used to derive more abstract (i.e. categorical) representations of that knowledge.
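One family of scaling techniques used for this kind of elicitation derives a sparse network from pairwise relatedness judgements. A minimal sketch in that spirit, using a minimax-path criterion as in Pathfinder-style network scaling (the dissimilarity matrix below is fabricated, not the study's Unix-command data):

```python
# Minimal network-scaling sketch: keep a link only if no indirect path is
# "stronger" under the minimax criterion (Pathfinder-style). Toy ratings.

import numpy as np

d = np.array([[0., 1., 4., 5.],
              [1., 0., 2., 5.],
              [4., 2., 0., 1.],
              [5., 5., 1., 0.]])        # pairwise dissimilarity ratings

m = d.copy()                            # minimax path weights
n = len(m)
for k in range(n):                      # Floyd-Warshall, (min, max) algebra
    for i in range(n):
        for j in range(n):
            m[i, j] = min(m[i, j], max(m[i, k], m[k, j]))

edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if d[i, j] <= m[i, j]]         # direct link not beaten indirectly
print(edges)                            # -> [(0, 1), (1, 2), (2, 3)]
```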

IJMMS 1987 Volume 26 Issue 5

Propagation of Evidence in Rule-Based Systems BIBA 551-566
  Wei-Min Dong; Felix S. Wong
This paper extends Shafer's theory of evidence so that the concept of evidence/ignorance can be applied to inference in a rule-based framework. Evidence supporting the truth of the conditions (antecedents) of a rule is propagated together with the evidence supporting the truth of the rule itself to become the evidence supporting the hypothesis (consequent) of the rule. Propagation with respect to two groups of rules is discussed: algorithmic rules and conditional rules. The propagation procedure is analogous to Dempster's rule of evidence combination, and can be considered a generalization of Dempster's ideas. Ample examples are used to illustrate the procedure.
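Since the propagation procedure is presented as a generalization of Dempster's rule of combination, a compact reference implementation of that rule may help fix ideas. The frame of discernment and the mass numbers below are hypothetical.

```python
# Dempster's rule of combination. Mass functions map frozensets (subsets
# of the frame of discernment) to masses; example data is hypothetical.

from itertools import product

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                  # mass lost to contradiction
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

frame = frozenset({"flu", "cold"})
m1 = {frozenset({"flu"}): 0.6, frame: 0.4}       # evidence source 1
m2 = {frozenset({"cold"}): 0.5, frame: 0.5}      # evidence source 2
print(dempster(m1, m2))
# {'flu'}: 0.429, {'cold'}: 0.286, whole frame (ignorance): 0.286
```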
Formatting Alphanumeric CRT Displays BIBA 567-580
  B. Mustafa Pulat; Herbert H. Nwankwo
A two-phase experiment was carried out to test the effects of two CRT screen-formatting variables, information complexity and grouping, on performance. Two kinds of task were considered: information entry and retrieval. Results indicate a preference for medium complexity (169 bits) and four groups of information per frame. Significant effects of experience were also observed.
An Interactive Environment for Tool Selection, Specification and Composition BIBA 581-595
  James D. Arthur; Douglas E. Comer
This paper describes a high-level, screen-oriented programming environment that supports problem-solving by tool selection and tool composition. Each tool is a powerful parameterized program that performs a single high-level operation (e.g. sort a file). To solve a given problem, the user first interacts with the system to compose a task overview consisting of a sequence of generic operations. Such sequences are called compositions. Once an overview is established, a second part of the environment interacts with the user to help expand the generic operations into a corresponding sequence of parameterized tool calls. When a composition is expanded to include details such as parameterization and punctuation it is called a script. This script, when executed by the underlying runtime system, computes a solution to the specified user task.
   The current environment runs under the Unix operating system on a Vax 11/785, and uses a Bitgraph terminal with a 640 x 720 bitmap display and standard keyboard as the principal interface device.
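A toy sketch of the composition-to-script idea, assuming invented generic operations and standard Unix tools standing in for the environment's actual tool set: a sequence of generic operations is expanded into parameterized tool calls and executed as a pipeline.

```python
# Toy sketch: composition (generic operations) -> script (parameterized
# tool calls) -> execution. Tool names and parameters are illustrative.

import subprocess

composition = ["select", "sort", "count"]   # chosen during the overview

expansions = {                              # generic op -> tool call
    "select": ["grep", "error"],            # keep lines matching a pattern
    "sort":   ["sort"],                     # order the lines
    "count":  ["uniq", "-c"],               # count duplicate lines
}

script = [expansions[op] for op in composition]

# Execute the script as a Unix pipeline over some input text.
text = "error: disk\nok\nerror: disk\nerror: net\n"
for argv in script:
    proc = subprocess.run(argv, input=text, capture_output=True, text=True)
    text = proc.stdout
print(text)   # counted, sorted error lines
```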
Multi-Window Displays for Readers of Lengthy Texts BIBA 597-615
  J. Tombaugh; A. Lickorish; P. Wright
Two experiments explore whether it would help readers re-locate information in an "electronic book" if different windows on the screen were used to display specific sections of the text. Experiment 1, using a within-subject design, showed that reading and question answering were faster with a single window than with a multi-window display. Experiment 2, in which procedural skills were developed before starting the experiment and a between-subject design was used, showed that this advantage for the single-window display does not hold in general. The multi-window display was a significant help to readers relocating information once they were familiar with the procedures for manipulating the text. The studies suggest ways in which the display of lengthy electronic texts may be improved. They also illustrate the ease with which misleading results can be obtained in studies of human-computer interaction, and emphasize the need for establishing adequate levels of procedural skill before exploring display characteristics.
An Error Correcting Protocol for Medical Expert Systems BIBA 617-625
  Jerrold A. Landau; Kenneth H. Norwich; Stephen J. Evans; Bohdan Pich
The problem of user error in medical expert systems is discussed. Human users are often not accustomed to the rigour demanded by computers, while computers are generally intolerant of human imprecision. Incorrect or insufficient entry of data into an expert system will usually result in errors in deduction by the computer, and these errors in turn lead to user frustration. The "perturbation technique", which is introduced in this paper, is designed to detect and rectify critical errors and omissions in user input. Following a mathematical development of this technique, several concrete examples are given with respect to DIAG, an expert system for aiding in the diagnosis of skin diseases.
Menu Search: Random or Systematic? BIBA 627-631
  James MacGregor; Eric Lee
This paper questions the conclusion that menu search is random, not systematic. Three sources of evidence -- search times per target as a function of target position, eye movement patterns during search, and the cumulative probability of locating a target as a function of time -- cited in support of random search (Card, 1982, 1983) are re-examined and shown to be consistent with systematic, sequential search.
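The re-examination turns on what each search model predicts. A minimal sketch of the two standard predictions for the cumulative probability of locating a target in an n-item menu by fixation t, systematic (self-terminating, no revisits) versus random (with replacement):

```python
# Standard cumulative-probability predictions for menu search models.

def p_found_systematic(t, n):
    return min(t / n, 1.0)            # one new item examined per fixation

def p_found_random(t, n):
    return 1 - (1 - 1 / n) ** t       # items may be re-examined

n = 10
for t in (1, 5, 10, 20):
    print(t, round(p_found_systematic(t, n), 2),
             round(p_found_random(t, n), 2))
# Both curves rise with t, which is why cumulative data alone can appear
# consistent with either model.
```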
A Decision-Table-Based Processor for Checking Completeness and Consistency in Rule-Based Expert Systems BIBA 633-648
  Brian J. Cragun; Harold J. Steudel
This paper addresses the issues of completeness and consistency in rule-based expert systems. The approach presented uses decision tables, which have a close relationship to rule-based knowledge bases. A decision-table-supported processor is described which checks knowledge bases for completeness and consistency using this approach. It creates a large decision table from the rules of the knowledge base, splits the decision table into subtables with similar logic, checks each subtable for completeness and consistency, and reports any missing rules. This method is faster than enumeration-only methods of checking completeness, and provides implicit context determination of production rules.
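A brute-force sketch of the check itself (the enumeration baseline the processor improves on): enumerate all combinations of condition values, report combinations no rule covers (incompleteness) and combinations where rules disagree (inconsistency). The rule format is hypothetical.

```python
# Sketch of decision-table completeness/consistency checking by
# enumeration. Conditions, rules and actions are invented examples.

from itertools import product

conditions = {"temp": ["high", "low"], "pressure": ["high", "low"]}
rules = [  # (condition bindings, action); None means "don't care"
    ({"temp": "high", "pressure": None}, "open-valve"),
    ({"temp": "low",  "pressure": "high"}, "vent"),
    ({"temp": "high", "pressure": "low"}, "close-valve"),  # conflicts!
]

def matches(bindings, case):
    return all(v is None or case[k] == v for k, v in bindings.items())

for combo in product(*conditions.values()):
    case = dict(zip(conditions, combo))
    actions = {a for b, a in rules if matches(b, case)}
    if not actions:
        print("missing rule for", case)          # incompleteness
    elif len(actions) > 1:
        print("inconsistent:", actions, "for", case)
```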

IJMMS 1987 Volume 26 Issue 6

Cognitive Processing Differences Between Novice and Expert Computer Programmers BIBA 649-660
  Allan G. Bateson; Ralph A. Alexander; Martin D. Murphy
Research on cognitive processing differences between novice and expert groups has recently begun to focus on applied areas like computer programming. An often-used research paradigm has measured subjects' syntactic memory, their ability to recall briefly presented computer programs. This study demonstrates that expert programmers use semantic memory and high-level plan knowledge to direct their programming activities. Fifty subjects were divided into novice and expert groups based on the number of programming courses taken. Four tests were developed to measure syntactic memory, semantic memory, tactical skill, and strategic skill. Experts' performance was superior on all tests. Additionally, the best set of predictors of programmer expertise comprised semantic memory, tactical skill, and syntactic memory. Results from this and subsequent research may have implications for areas such as selection and training.
A Clinical Field Study of Eight Automated Psychometric Procedures: The Leicester/DHSS Project BIBA 661-682
  J. Graham Beaumont; Christopher C. French
Eight psychological tests were administered to 367 subjects at five clinical sites employing an automated testing system incorporating an optional touch-sensitive screen for patient response. The number of tests undertaken by any given subject varied, but the majority of subjects were tested in an alternate-form test-retest design. It was clearly demonstrated that it is possible to produce psychometrically parallel computerized versions of existing tests, but certain tests were found less amenable to computerization. Good reliabilities were obtained on the Mill Hill Vocabulary Scale, Standard Progressive Matrices and all scales of the Eysenck Personality Questionnaire with the exception of the P scale. Reliability on the Money Road Map Test was acceptable, but reliability on the two Differential Aptitude Tests was disappointing. Certain of the tests produced significantly different scores on the two versions of the tests. Two of the tests studied were completed more rapidly on the computer. Cautious recommendations are made for the continuing use of the computerized versions of the tests.
Procedural and Non-Procedural Query Languages Revisited: A Comparison of Relational Algebra and Relational Calculus BIBA 683-694
  Gary W. Hansen; James V. Hansen
The performance of a single group of subjects on four queries of varying degrees of difficulty, written in both relational algebra and in relational calculus, is measured. The experiment indicates a definite superiority of algebra over calculus in the formulation of complex queries. Varying approaches to query formulation in the two languages are discussed and analysed. Differences between the complexity levels of the languages are illustrated via three forms of a single query.
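To make the contrast concrete, here is one hypothetical query ("names of employees in department 10") phrased in both languages; it is an invented example, not one of the experiment's four queries.

```latex
\[
\textbf{Algebra:}\quad
\pi_{\mathit{name}}\bigl(\sigma_{\mathit{dept}=10}(\mathit{Emp})\bigr)
\qquad
\textbf{Calculus:}\quad
\{\, t.\mathit{name} \mid \mathit{Emp}(t) \wedge t.\mathit{dept}=10 \,\}
\]
```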
Predicting End-User Acceptance of Microcomputers in the Workplace BIBA 695-705
  Myron E. Hatcher; Thomas Ronald Diebert
Microcomputers are increasingly being used in today's offices. Because of the cost savings they offer, microcomputers are replacing old mainframe computers, and they are also being adopted in offices that have never used any type of computer system. This has brought about varied reactions from office personnel: many people are afraid of using microcomputers, and many are naturally resistant to change because of uncertainty about the benefits, which poses a problem when introducing microcomputers into the office.
   The purpose of this study is to develop a decision model that could be used by any office instituting a microcomputer for the first time. The model takes the form of a questionnaire, cross-validated by experts, together with decision rules. The questionnaire would be given to the workers, and the resulting aggregate score would tell management how accepting the office is of microcomputer usage. Individual scores can be used to see who will need more help and who could provide some of it. The underlying theme is that the success of a microcomputer's introduction depends on the time and resources devoted to implementation; how and where these are applied is the key to success or failure.
   Other concerns are the effects of age and education. Does the older employee resist change more? Would older employees dislike the idea of using the computer more than younger ones? Does educational level affect one's willingness to learn new things? These sociodemographic variables were tested and found not to be significant.
New Reasoning Methods for Artificial Intelligence in Medicine BIBA 707-718
  Benjamin Kuipers
The discovery and validation of knowledge representations for new types of reasoning is a vital step in artificial intelligence (AI) research. A clear example of this process arises in our recent study of expert physicians' knowledge of the physiological mechanisms of the body. First, we observed that the reliance on weighted associations between findings and hypotheses in first-generation medical expert systems made it impossible for them to express knowledge of disease mechanisms. Second, to obtain empirical constraints on the nature of this knowledge of mechanism in human experts, we collected and analysed verbatim transcripts of expert physicians solving selected clinical problems. This analysis led us to the key aspects of a qualitative representation for the structure and behavior of mechanisms. The third step required a computational study of the problem of inferring behavior from structure, and resulted in a completely specified and implemented knowledge representation and a qualitative simulation algorithm (QSIM). Within this representation, we built a structural description for the mechanism studied in the transcripts, and the simulation produced the same qualitative prediction made by the physicians. Finally, the system is validated in two ways. A mathematical analysis demonstrates the power and limitations of the representation and algorithm as a qualitative abstraction of differential equations. The medical content of the knowledge base is evaluated and refined using standard knowledge-engineering methodology. We believe that this combination of cognitive, computational, mathematical, and domain knowledge constraints provides a useful paradigm for the development of new knowledge representations in artificial intelligence.
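As a flavour of the qualitative representation involved, here is a generic sign-algebra toy, not the paper's medical model or the full QSIM algorithm: behavior is inferred from structure using signs (-, 0, +) instead of numbers, and ambiguity forces the simulator to branch.

```python
# Generic sign-algebra toy illustrating qualitative inference.

def sign_add(a, b):
    if a == b or b == "0":
        return a
    if a == "0":
        return b
    return "?"              # "+" plus "-" is ambiguous without magnitudes

inflow, outflow = "+", "-"  # signed contributions to a quantity's derivative
print("d(level)/dt:", sign_add(inflow, outflow))
# -> "?": the simulator must branch over rising/steady/falling behaviors
```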
Modelling Casework: A Case Study of the Development of Computer-Based Models for Judgemental Training BIBA 719-733
  P. B. Taylor; G. H. D. Royston
The use of computer-based simulation models for training staff in procedures is well established. This paper focuses on the development of computer-based models for judgemental training in an area of activity where procedural aspects are well understood and highly formalised. The need for improved judgements arises out of an increasing concern for cost-effectiveness in casework.
   The paper concentrates on the participative development of the models, which, in computer-realised form, create a highly interactive training aid. The model-development work itself provides an interesting case study in some aspects of knowledge engineering. The knowledge-elicitation and representation parts of the study took place almost entirely within a multi-disciplinary team combining modelling, training and domain expertise.