International Journal of Man-Machine Studies 29

Editors: B. R. Gaines; D. R. Hill
Dates: 1988
Volume: 29
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 41
Links: Table of Contents
  1. IJMMS 1988 Volume 29 Issue 1
  2. IJMMS 1988 Volume 29 Issue 2
  3. IJMMS 1988 Volume 29 Issue 3
  4. IJMMS 1988 Volume 29 Issue 4
  5. IJMMS 1988 Volume 29 Issue 5
  6. IJMMS 1988 Volume 29 Issue 6

IJMMS 1988 Volume 29 Issue 1

A Model of Fault Diagnosis Performance of Expert Marine Engineers BIBA 1-20
  T. Govindaraj; Yuan-Liang D. Su
Models of fault diagnosis by expert human operators are classified into two types: macro and micro. Macro models describe general problem-solving rules or strategies that are abstracted from observations of expert fault diagnostic behaviour. Micro models are concerned with the detailed knowledge and the mechanisms underlying the diagnostic actions. This paper proposes a micro model developed from observations of fault diagnosis performance on a marine powerplant simulator. Based on experimental data, including protocols and operator action sequences, two types of knowledge are identified: rule-based symptom knowledge and hierarchical system knowledge. The diagnostic process seems to proceed with frequent reference to these two types of knowledge. Characteristics of the diagnostic process are discussed. A conceptual entity called a hypothesis frame is employed to account for observed characteristics. The diagnostic process involves choosing an appropriate frame that matches the known symptoms and evaluating the frame against the system state. This model of fault diagnosis performance is employed to explain protocol data and operator actions.
An Application of a Computerized Fuzzy Graphic Rating Scale to the Psychological Measurement of Individual Differences BIBA 21-35
  Tim Hesketh; Robert Pryor; Beryl Hesketh
This paper aims to outline and evaluate a new approach to measurement within psychology. A computerized fuzzy graphic rating scale, an extension of the semantic differential, is described. The scale allows respondents to provide an imprecise rating and lends itself to analysis using fuzzy set theory. Respondents rated nine occupational stimuli, carefully chosen to represent three levels of prestige (Daniel, 1983) and three levels of sex-type (Shinar, 1975), on eight fuzzy graphic rating scales (five for prestige and three for sex-type). A single expected value was calculated for each fuzzy rating to permit correlations with the a priori values for the nine occupations. Various combinations of scales were obtained by forming the union of individual fuzzy ratings. Expected values based on the combined scales were calculated, and the results were also correlated with the a priori Daniel and Shinar scale values. Potential applications of the fuzzy graphic rating scale are outlined.
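
As a sketch of the kind of computation involved (illustrative only, not the authors' procedure): a fuzzy rating can be modelled as a triangular membership function over the rating axis, a single expected value obtained by centroid defuzzification, and scales combined by the pointwise-maximum union. All names and numbers below are our own; examples throughout this listing use Python.

# Minimal sketch: a fuzzy graphic rating as a triangular membership
# function, its centroid "expected value", and the pointwise-max union
# used to combine scales. Parameters are illustrative.
import numpy as np

x = np.linspace(0.0, 10.0, 101)          # discretized rating axis

def triangular(x, left, peak, right):
    """Membership grades of a triangular fuzzy rating."""
    up = (x - left) / (peak - left)
    down = (right - x) / (right - peak)
    return np.clip(np.minimum(up, down), 0.0, 1.0)

def expected_value(x, mu):
    """Centroid defuzzification: one scalar summarizing a fuzzy rating."""
    return np.sum(x * mu) / np.sum(mu)

rating_a = triangular(x, 4.0, 6.0, 9.0)    # an imprecise rating around 6
rating_b = triangular(x, 5.0, 7.0, 8.0)    # a second rating around 7
combined = np.maximum(rating_a, rating_b)  # fuzzy union of the two scales

print(expected_value(x, rating_a))         # about 6.33: centroid (4+6+9)/3
print(expected_value(x, combined))
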
The Effect of Different Conceptual Models Upon Reasoning in a Database Query Writing Task BIBA 37-62
  Cecilia Katzeff
This paper proposes that database query writing may be viewed as a hypothesis testing activity. The paper attempts to contribute to the knowledge of what constitutes an appropriate model for efficient use of a database query language. An empirical study was carried out in which subjects were divided into four conditions. One group received no model; one group received a table model; two groups received descriptions of the query language in terms of sets, one description also providing a general logical explanation. The subjects replied to questions by posing queries to a database system. Each subject was given questions with two types of linguistic structure -- one involving the intersection and the other the union of negative sets. Think-aloud data were collected, as well as logs of the subjects' keystrokes. Analyses of these data indicate that the two set models produced superior performance. The two set models facilitated the formulation of correct expected computer replies, and thus allowed for more efficient hypothesis testing. The table model proved as inadequate as no model at all. The linguistic structure involving the union of negative sets was more difficult to deal with than the intersection of negative sets.
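
To fix the terminology (our illustration, not an example from the study): a question involving the intersection of negative sets asks for elements in neither A nor B, while one involving their union asks for elements outside at least one of them; De Morgan's laws relate the two:

% Intersection of negative sets vs. union of negative sets:
\overline{A} \cap \overline{B} \;=\; \overline{A \cup B}
\qquad\text{and}\qquad
\overline{A} \cup \overline{B} \;=\; \overline{A \cap B}
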
Question Asking when Learning a Text-Editing System BIBA 63-79
  Carl Martin Allwood; Mikael Eliasson
This study concerns question asking in the context of learning a text-editing program. Twenty-eight computer novices were asked to read an instruction manual for a text-editing program. Next, the subjects performed four simple text-editing tasks on the computer and answered a questionnaire on central concepts in the text. Half of the subjects were instructed to ask questions on the content of the text after reading each of 17 sections in the manual. These questions were answered by the experimenter. All subjects rated the difficulty of the text after each of the 17 sections. The opportunity to ask questions did not improve performance on the computer interaction task or on the questionnaire, nor did it make the text easier to read, as evidenced by the subjects' difficulty ratings. The total number of questions asked was related to level of education but not to performance. Type of question content correlated with performance. A significant correlation between rating the text as difficult and poor performance on both performance tasks indicates that the subjects had some access to information which could have been utilized when deciding whether or not to ask a question.
Rough Sets: Probabilistic versus Deterministic Approach BIB 81-95
  Zdzislaw Pawlak; S. K. M. Wong; Wojciech Ziarko

IJMMS 1988 Volume 29 Issue 2

An Expert System for Solving Retrograde-Analysis Chess Problems BIBA 97-112
  B. E. P. Alden; M. A. Bramer
The concern here is with the solution of a specialized form of problem (closely connected with the game of chess) by means of heuristic methods. An expert system, RETRO, is described whose domain of application is retrograde-analysis chess problems. This type of problem, sometimes called a chess logic problem, differs from the conventional type of chess problem in that it is concerned only with the past history of the game and what may be deduced about it. Typically, a (human) solution proceeds by the solver asking himself a series of questions in the form of a Socratic dialogue until a solution emerges. RETRO makes use of a frame-like approach to determine the questions that must be asked to effect a solution. Although RETRO cannot solve every conceivable retrograde-analysis problem, the approach taken has been designed to be of general applicability.
Fuzzy Sets and T-Norms in the Light of Fuzzy Logic BIBA 113-127
  Vilem Novak; Witold Pedrycz
This paper discusses many-valued fuzzy logic, which is syntactico-semantically complete, and its impact on fuzzy set theory, in particular on operations with fuzzy sets. Arguments are given that all operations on membership grades must fulfil the so-called fitting condition. It follows that general t-norms are not suitable as a basis for operations on fuzzy sets. Some general classes of operations on membership grades are presented.
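
For orientation, a small illustration (the fitting condition itself is defined in the paper; this merely shows how t-norms can differ): three standard t-norms evaluated on equal membership grades, with a check that only the minimum is idempotent.

# Illustrative sketch: three standard t-norms and a quick check that only
# min satisfies T(a, a) == a (idempotence).
def t_min(a, b):          # Goedel / minimum t-norm
    return min(a, b)

def t_product(a, b):      # product t-norm
    return a * b

def t_lukasiewicz(a, b):  # Lukasiewicz t-norm
    return max(0.0, a + b - 1.0)

a = 0.7
for name, t in [("min", t_min), ("product", t_product),
                ("Lukasiewicz", t_lukasiewicz)]:
    print(f"{name:12s} T(0.7, 0.7) = {t(a, a):.2f}  idempotent: {t(a, a) == a}")
# min          T(0.7, 0.7) = 0.70  idempotent: True
# product      T(0.7, 0.7) = 0.49  idempotent: False
# Lukasiewicz  T(0.7, 0.7) = 0.40  idempotent: False
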
Managing Uncertainty in a Fuzzy Expert System BIBA 129-148
  J. J. Buckley
All data and rules in a fuzzy expert system are accompanied by degree-of-confidence values. This paper is concerned with processing these confidence values during one round of rule firing in a fuzzy expert system. Part I discusses determining the final confidence in the left-hand side of a rule, which includes: (1) pattern evaluation; (2) finding the confidence in the antecedent; and (3) combining rule and antecedent confidence. Part II discusses the maintenance of memory in a fuzzy expert production system, in both deductive reasoning systems (with sequential rule-firing schemes) and inductive reasoning systems (with rules fired in parallel).
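
One conventional scheme for steps (2) and (3) -- a sketch of a common choice, where the paper examines several alternatives -- takes the minimum over the matched conditions as the antecedent confidence and combines it with the rule's own confidence by product:

# Sketch of one round of rule firing under a common convention:
# antecedent confidence = min over matched conditions,
# conclusion confidence = rule confidence * antecedent confidence.
def antecedent_confidence(condition_confidences):
    """Confidence in the conjunctive left-hand side of a rule."""
    return min(condition_confidences)

def fire_rule(rule_confidence, condition_confidences):
    """Confidence attached to the conclusion after firing."""
    return rule_confidence * antecedent_confidence(condition_confidences)

# Rule: IF temperature_high (0.8) AND pressure_low (0.6) THEN valve_fault
print(fire_rule(0.9, [0.8, 0.6]))  # 0.9 * min(0.8, 0.6) = 0.54
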
The Evaluation of Verbal Models BIBA 149-157
  Rami Zwick
This paper proposes an operational method for evaluating verbal models. The method is based on a statistical technique in which the performance of the verbal model is compared to the performance of an alternative simple random choice model. The method is demonstrated by using experimental data to evaluate Yager's model (1978; 1984) of fuzzy probabilities.
Using Explanations for Knowledge-Base Acquisition BIBA 159-169
  Kenneth Silvestro
An expert system capable of 'intelligent behaviour' requires access to a large store of knowledge called a knowledge-base. A practical approach to knowledge-base evolution, addressing the complex task of hand-crafting domain-specific knowledge, has become necessary. One solution is computer acquisition of knowledge. User-supplied natural-language explanations provide our artificial intelligence acquisition tool with the information necessary to construct and develop a domain-specific knowledge-base for a target expert system. The theory of acquisition is founded upon a multi-process strategy that relates individual utterance 'types' to a unique acquisition process and to the underlying structural elements of explanations.
Using Concept Learning for Knowledge Acquisition BIBA 171-196
  Ian H. Witten; Bruce A. MacDonald
Although experts have difficulty formulating their knowledge explicitly as rules, they find it easy to demonstrate their expertise in specific situations. Schemes for learning concepts from examples offer the potential for domain experts to interact directly with machines to transfer knowledge. Concept learning methods divide into similarity-based, hierarchical, function induction, and explanation-based knowledge-intensive techniques. These are described, classified according to input and output representations, and related to knowledge acquisition for expert systems. Systems discussed include candidate elimination, version space, ID3, PRISM, MARVIN, NODDY, BACON, COPER, and LEX-II. Teaching requirements are also analysed.
A Knowledge Acquisition Environment for Scene Analysis BIBA 197-213
  Deborah Tranowski
This paper describes a knowledge acquisition environment under development to help capture expertise from domain experts involved in analysing scenes from aerial imagery. The research is important because automated image understanding systems are increasingly relying on expert knowledge to help analyse objects and control the analysis process. It is desirable to enable the domain experts to enter and manipulate the domain knowledge directly. The research described is based on the concept of an integrated knowledge acquisition environment (KAE). The goal is to integrate the domain inputs, the translation into internal representations and the actual execution and feedback. The KAE contains a collection of computer-based tools facilitating: viewing and editing domain knowledge in both textual and graphic format (analysts tend to be visually oriented), knowledge base execution and testing, and expert system performance analysis.
Knowledge Acquisition as Knowledge Assimilation BIBA 215-226
  Lawrence S. Lefkowitz; Victor R. Lesser
The assimilation of information obtained from domain experts into an existing knowledge base is an important facet of the knowledge acquisition process. Knowledge assimilation requires an understanding of how the new information corresponds to that already contained in the knowledge base and how this existing information must be modified so as to reflect the expert's view of the domain. This paper describes a system, KnAc, that modifies an existing knowledge base through a discourse with a domain expert. Using heuristic knowledge about the knowledge acquisition process, KnAc anticipates modifications to existing entity descriptions. These anticipated modifications, or expectations, provide a context in which to assimilate new domain information.

IJMMS 1988 Volume 29 Issue 3

Intelligent Computer-Assisted Instruction: A Theoretical Framework BIBA 227-255
  Federico Bumbaca
A theoretical and psychologically meaningful framework for the design of intelligent computer-assisted instruction (ICAI) systems is presented. It is argued that, to design more effective and robust ICAI systems, a thorough knowledge-level analysis of the problem should be performed before implementation issues are addressed. To this end, with the aid of artificial intelligence (AI) techniques and a knowledge-level analysis of the problem, an appropriate knowledge representation scheme and system architecture are proposed. It is further suggested that a suitable knowledge representation scheme should be psychologically valid and ideally modelled after a human tutor. One such representation, namely Schank and Abelson's memory structures, is chosen and shown to be well matched to the requirements of an intelligent tutoring system. It models the basic memory structures of both the student and tutor, upon which the goals, plans, and themes of these agents may be built. A blackboard control architecture, which provides an extremely flexible environment for intelligent systems, is also chosen, and is claimed to be in agreement with ICAI knowledge-level requirements. A limited example is finally detailed, demonstrating the applicability of this framework to ICAI systems.
A Knowledge-Based Approach to Computer-Aided Learning BIBA 257-285
  Geoffrey I. Webb
This paper describes a methodology for the creation of knowledge-based computer-aided learning lessons. Unlike previous approaches, the knowledge base is used only for restricted aspects of the lesson: the management of flow of control through a body of instructional materials and the evaluation of the student's understanding of the subject matter. This has many advantages. While the approach has lower developmental and operational overheads than alternatives, it is also able to perform far more flexible evaluations of the student's performance. As flow of control is managed by a knowledge-based component with reference to a detailed analysis of the student's understanding of the subject matter, lessons adapt to each student's individual understanding and aptitude within a domain.
Uses of Repertory Grid-Centred Knowledge Acquisition Tools for Knowledge-Based Systems BIBA 287-310
  John H. Boose
Repertory grid-centred knowledge acquisition tools are useful as knowledge engineering aids when building many kinds of complex knowledge-based systems. These systems help in rapid prototyping and in knowledge base analysis, refinement, testing, and delivery. These tools, however, are also being used as more general knowledge-based decision aids. Features such as the ability to prototype knowledge bases very rapidly for one-shot decisions and to combine and weigh various sources of knowledge quickly make these tools valuable outside of the traditional knowledge engineering process. This paper discusses the use of repertory grid-centred tools such as the Expertise Transfer System (ETS), AQUINAS, KITTEN, and KSSO. Dimensions of use are presented along with specific applications. Many of these dimensions are discussed within the context of ETS and AQUINAS applications at Boeing.
ASTEK: A Multi-Paradigm Knowledge Acquisition Tool for Complex Structured Knowledge BIBA 311-327
  Chris Jacobson; Michael J. Freiling
Knowledge-based systems can require large, highly complex and varied forms of knowledge. An effective knowledge acquisition tool to support such a system should allow the user to transfer and manipulate the different forms of knowledge in a manner that is clear and intuitive. ASTEK is a knowledge acquisition tool that provides multiple paradigms for knowledge editing while maintaining a single, consistent framework designed using natural language discourse concepts.
Validation in a Knowledge Support System: Construing and Consistency with Multiple Experts BIBA 329-350
  Mildred L. G. Shaw; J. Brian Woodward
At the previous workshop on Knowledge Acquisition for Knowledge-Based Systems in 1986, criteria for a knowledge support system were discussed, and a preliminary version of KITTEN (Knowledge Initiation and Transfer Tools for Experts and Novices) was described and demonstrated on Apollo workstations. This study is a continuation of the validation studies done by Shaw & Gaines (1983), and investigates a framework for knowledge acquisition evaluation and validation. KITTEN has been evaluated against the first stage of the model and the results are reported in the two domains of spatial interpolation techniques to produce contour maps and in trouble-shooting and maintenance of valves for oil and gas pipelines. Some preliminary results are described on validation experiments to show the extent to which experts agree with each other, with themselves at a later date, and with the results of the processing of their knowledge. Some of the questions asked were:
  • (1) To what extent does an expert find the generated rules meaningful?
  • (2) Do experts agree on their terminology in talking about a topic?
  • (3) To what extent do experts agree among themselves about a topic?
  • (4) Does an expert always use the same terminology?
  • (5) To what extent does each expert agree with the knowledge at a different time?

IJMMS 1988 Volume 29 Issue 4

An Expert System for Conceptual Schema Design: A Machine Learning Approach BIBA 351-376
  R. Yasdi; W. Ziarko
In this paper, we report the design specifications and design principles of EXIS, an expert system for conceptual schema design for an information system, currently under development. We focus on the machine learning aspects applicable to schema design. The main idea can be highlighted better when integrated with a complete framework of the design environment. Therefore, we first describe a conceptual database model consisting of a semantic model and an event model. We then present our approach to design knowledge acquisition and representation, which is based on inducing schema design rules from examples. We also present relevant aspects of the theory of Rough Sets and the learning method used in our system. Throughout the paper we discuss several concepts and techniques for expert system design which have proved very useful and can be adapted to other applications. We avoid ambiguity by using first-order logic to express our ideas.
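
As a pointer to the Rough Sets machinery the paper builds on, a minimal sketch (toy data, not EXIS's): the lower and upper approximations of a target concept under an indiscernibility partition.

# Rough Sets in brief: objects indiscernible by their attributes form a
# partition; a concept X is approximated from below (classes wholly inside X)
# and from above (classes touching X). Toy data only.
def lower_approximation(partition, X):
    """Union of indiscernibility classes wholly contained in X."""
    return set().union(*[B for B in partition if B <= X])

def upper_approximation(partition, X):
    """Union of indiscernibility classes that intersect X."""
    return set().union(*[B for B in partition if B & X])

partition = [{1, 2}, {3, 4}, {5}]  # indiscernibility classes
X = {1, 2, 3}                      # target concept
print(lower_approximation(partition, X))  # {1, 2}
print(upper_approximation(partition, X))  # {1, 2, 3, 4}; boundary is {3, 4}
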
A Structured Knowledge Elicitation Methodology for Building Expert Systems BIBA 377-406
  Chaya Garg-Janardan; Gavriel Salvendy
A key problem in building expert systems is the extraction of knowledge from human experts. This paper presents a conceptual framework and methodology for knowledge elicitation. The framework models, in a domain-independent manner, the structure of human problem solving knowledge and the context in which problems are solved. It defines the knowledge that should be elicited by the methodology and helps derive the procedure used by the methodology to extract knowledge. This framework is used to develop a structured multi-phase methodology that elicits knowledge in a domain-independent manner. This methodology is partially implemented as a computer program in Turbo-Pascal and was used to elicit knowledge from experts in a sample real-world setting. Reliability and validity evaluations performed on the elicited knowledge establish the validity of this approach.
Effects of Computer Programming Experience on Network Representations of Abstract Programming Concepts BIBA 407-427
  Nancy J. Cooke; Roger W. Schvaneveldt
The cognitive organization of a set of abstract programming concepts was investigated in subjects who varied in degree of computer programming experience. Relatedness ratings on pairs of the concepts were collected from naive, novice, intermediate, and advanced programmers. Both individual and group network representations of memory structure were derived using the Pathfinder network scaling algorithm. Not only did the four group networks differ, but they varied systematically with experience, providing support for the psychological meaningfulness of the structures. Additionally, an analysis at the conceptual level revealed that the four groups differed in the way concepts were represented. Furthermore, this analysis was used to classify concepts in the naive, novice, and intermediate networks as well-defined or misdefined. The identification of semantic relations corresponding to some of the links in the networks provided further information concerning differences in programmer knowledge at different levels of experience. Applications of this work to programmer education and knowledge engineering are discussed.
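
For readers unfamiliar with Pathfinder, a minimal sketch of one standard special case, PFNET(q = n-1, r = infinity): a link between two concepts survives only if no indirect path has a smaller maximum-weight ("minimax") cost. The distances below are illustrative, not the study's data.

# Pathfinder special case PFNET(q = n-1, r = infinity): compute minimax
# distances with a Floyd-Warshall pass (max in place of +), then keep an
# edge iff its direct distance is not beaten by any path.
import numpy as np

d = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 2.0],
              [4.0, 2.0, 0.0]])   # pairwise relatedness distances (toy)

m = d.copy()
n = len(m)
for k in range(n):
    for i in range(n):
        for j in range(n):
            m[i, j] = min(m[i, j], max(m[i, k], m[k, j]))

edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if d[i, j] <= m[i, j]]  # direct link beaten by no path
print(edges)                     # [(0, 1), (1, 2)]: the 0-2 link is pruned
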
A Formal Analysis of Machine Learning Systems for Knowledge Acquisition BIBA 429-446
  Valerie L. Shalin; Edward J. Wisniewski; Keith R. Levi; Paul D. Scott
Machine learning techniques can be of great value for automating certain aspects of knowledge acquisition. Given the potential of machine learning for knowledge acquisition, we have begun a systematic investigation of how one might map the functions of knowledge-based systems onto those machine learning systems that provide the required knowledge. The goal of our current research is to provide a general characterization of machine learning systems and their respective application domains.
Refining Problem-Solving Knowledge in Repertory Grids Using a Consultation Mechanism BIBA 447-460
  David B. Shema; John H. Boose
A general problem when modifying knowledge bases is that changes may degrade system performance. This is especially a problem when the knowledge base is large; it may be unclear how changing one item in a knowledge base containing thousands of items will affect overall system performance. Aquinas, a knowledge acquisition tool, uses knowledge elicitation and representation techniques and consultation review mechanisms to help alleviate this problem. The consultation review mechanisms are discussed here. We are experimenting with ways to use consultations and test cases to refine the information in an Aquinas knowledge base. The domain expert can use interactive graphics to specify the expected results. Modifications to the knowledge base may be tested against previous consultations; adjustments are suggested that make the results of all previous consultations as well as the current consultation correlate better with the expert's expectations. New traits are synthesized that would improve the performance of all previous consultations. New test cases are suggested that cover aspects missed by previous test cases. While we are just beginning to experiment with these techniques, they promise to provide help in improving problem-solving performance and gaining problem-solving insight.
Design Goals for Sloppy Modeling Systems BIBA 461-477
  Stefan Wrobel
The sloppy modeling paradigm regards knowledge acquisition as a cooperative process between user and system, in which the system's learning and structuring algorithms and the user act as partners in a common problem-solving activity. In this paper, we discuss the design goals that this paradigm entails for a knowledge acquisition system, with a special focus on the environment that needs to be presented to a user. We identify six criteria that such a sloppy modeling environment should meet. We then present the sloppy modeling system BLIP, its knowledge representation, its system architecture, and its user interface as an example of such an environment. After presenting a transcript of a sample session with the system, we evaluate BLIP against the established criteria.

IJMMS 1988 Volume 29 Issue 5

A Survey of Formal Tools and Models for Developing User Interfaces BIBA 479-496
  Mohammad U. Farooq; Wayne D. Dominick
This paper presents a survey of formal tools, methodologies, and models which have been proposed for developing user interfaces for interactive information systems. The treatment examines issues related to human engineering, human-computer interfacing, behavioural experiments, and user interface design aids. Particular emphasis is placed on user research studies, specification techniques for interactive language modeling, analytical studies of user-system interaction, user models (including cognitive models, conceptual models, and mental models), and user interface management systems. The paper concludes with a brief list of suggested future research directions based on the results of this survey.
Accuracy and Savings in Depth-Limited Capture Search BIBA 497-502
  Prakash Bettadapur; T. A. Marsland
Capture search, an expensive part of any chess program, is conducted at every leaf node of the approximating game tree. Often an exhaustive capture search is not feasible, and yet limiting the search depth compromises the result. Our experiments confirm that, for chess, a deeper capture search results in less error, and show that a shallow search does not provide significant savings. It is therefore better to perform a capture search of unlimited depth. If a depth limit is used for search termination, an odd depth is preferable.
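
The general shape of such a search is easy to sketch (our illustration, not the authors' program; the position interface -- evaluate, capture_moves, make, unmake -- is hypothetical): a negamax over capture moves only, where the depth limit forces a "stand pat" on the static evaluation.

# Sketch of a depth-limited capture search (quiescence-style negamax over
# captures only). The Position interface is hypothetical.
def capture_search(pos, depth):
    """Negamax over captures; at depth 0 we must stand pat on evaluate()."""
    stand_pat = pos.evaluate()        # static score for the side to move
    if depth == 0:
        return stand_pat              # the truncation the paper studies
    best = stand_pat                  # capturing is never forced
    for move in pos.capture_moves():
        pos.make(move)
        score = -capture_search(pos, depth - 1)
        pos.unmake(move)
        best = max(best, score)
    return best
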
Human Performance in Relational Algebra, Tuple Calculus, and Domain Calculus BIBA 503-516
  Gary W. Hansen; James V. Hansen
The results of two experiments are reviewed in which users' performance in developing query solutions using relational algebra, relational tuple calculus, and relational domain calculus was measured. Sample query solutions in domain and tuple calculus are analysed and compared for complexity. In terms of human factors, tuple calculus is apparently the weakest of the three languages, with domain calculus showing a decided improvement over tuple calculus. Query solution analysis indicates possible reasons for this result. Although users performed best in relational algebra, their performance in domain calculus was equal to that of relational algebra in three of four query classifications. Further investigation is needed to compare algebra and domain calculus in other query classifications, in order to more precisely determine their relative utility in solving complex queries.
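
To make the comparison concrete, here is one illustrative query (our example, not one drawn from the experiments) over a relation EMP(name, dept) -- "the names of employees in the Sales department" -- written in each of the three languages:

% Relational algebra:
\pi_{name}\bigl(\sigma_{dept = \text{'Sales'}}(EMP)\bigr)

% Tuple calculus (t ranges over tuples of EMP):
\{\, t.name \mid EMP(t) \wedge t.dept = \text{'Sales'} \,\}

% Domain calculus (n and d range over attribute domains):
\{\, n \mid \exists d \,(EMP(n, d) \wedge d = \text{'Sales'}) \,\}
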
Non-Monotonic Compatibility Relations in the Theory of Evidence BIBA 517-537
  Ronald R. Yager
A belief structure, m, provides a generalized format for representing uncertain knowledge about a variable. We suggest that the idea of one belief structure being more specific than another is related to the plausibility-certainty interval and, more fundamentally, to how well we know the probability structure. A compatibility relation provides a structure for obtaining information about one variable based upon a second variable. An inference scheme in the theory of evidence concerns itself with the use of a compatibility relation and a belief structure on one variable to infer a belief structure on the second variable. The problem of monotonicity in this situation can be related to the change in the specificity of the inferred belief structure as the antecedent belief structure becomes more specific. We show that the usual compatibility relations, type I, are always monotonic. We introduce type II compatibility relations and show that a special class of these, which we call irregular, is needed to represent non-monotonic relations between variables. We discuss a special class of non-monotonic relations called default relations.
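
As a minimal illustration of the machinery involved (our toy example, not the paper's): the belief and plausibility of a set A under a basic probability assignment m; the interval [Bel(A), Pl(A)] narrows as the belief structure becomes more specific.

# Belief and plausibility from a basic probability assignment m over a
# frame of discernment. Toy numbers only.
def belief(m, A):
    """Sum of masses of focal elements contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def plausibility(m, A):
    """Sum of masses of focal elements intersecting A."""
    return sum(v for B, v in m.items() if B & A)

# Frame {a, b, c}; masses on subsets (frozensets) sum to 1.
m = {frozenset("a"): 0.5, frozenset("ab"): 0.3, frozenset("abc"): 0.2}
A = frozenset("ab")
print(belief(m, A), plausibility(m, A))  # 0.8 1.0
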
User-Adaptive Computer Graphics BIBA 539-548
  Marek Holynski
Computer graphics presentations should be adapted to the user's needs and to the application at hand. This adjustment is provided by computer graphics systems that use artificial intelligence methodology to select automatically the presentation format for a data set, a format attuned to individual users. These systems can learn general principles of graphic presentation, application constraints, and users' preferences and knowledge, and incorporate them into display algorithms. The Adaptive Graphics Analyser, the system presented in this paper, uses machine learning techniques to discover standards of effective visual representation of data and incorporates them into a user-adaptive graphics package. Utilizing these standards, the system can generate, change and refine images interactively according to the user's requirements.
Protos: An Exemplar-Based Learning Apprentice BIBA 549-561
  E. Ray Bareiss; Bruce W. Porter; Craig C. Wier
Building Protos, a learning apprentice system for heuristic classification, has forced us to scrutinize the usefulness of inductive learning and deductive problem solving. While these inference methods have been widely studied in machine learning, their seductive elegance in artificial domains (e.g. mathematics) does not carry over to natural domains (e.g. medicine). This paper briefly describes our rationale in the Protos system for relegating inductive learning and deductive problem solving to minor roles in support of retaining, indexing, and matching exemplars. The problems that arise from "lazy generalization" are described along with their solutions in Protos. Finally, an example of Protos in the domain of clinical audiology is discussed.
MOLE: A Knowledge Acquisition Tool that Buries Certainty Factors BIBA 563-577
  Larry Eshelman
MOLE is a knowledge acquisition tool for helping experts build systems that do differential diagnosis. Diagnostic expert systems often have to rely upon inferences that involve some degree of uncertainty. Typically, the tentativeness of the rules of inference is represented by certainty factors or some other cardinal measure of support. Unfortunately, this information is difficult to acquire from the experts. This paper describes how MOLE is able to dispense with certainty factors. By integrating into its problem-solving method several heuristic assumptions about how evidence relates to hypotheses, and by including in its knowledge acquisition process a way of generalizing the expert's preferences, MOLE does not need to elicit certainty factors from the domain experts or to internally represent the degree of support of an inference rule with certainty factors. This facilitates knowledge acquisition with no loss of diagnostic performance.
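
For contrast, a sketch of the classical MYCIN-style rule for combining two certainty factors supporting the same hypothesis -- the kind of numeric bookkeeping MOLE is designed to avoid eliciting from experts. The formula is the standard published one; the numbers are illustrative.

# MYCIN-style combination of parallel evidence; results stay in [-1, 1].
def combine_cf(cf1, cf2):
    """Combine two certainty factors bearing on the same hypothesis."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

print(combine_cf(0.6, 0.5))   # 0.8: two positive pieces of evidence reinforce
print(combine_cf(0.6, -0.4))  # 0.33...: conflicting evidence partly cancels
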
Acquiring Strategic Knowledge from Experts BIBA 579-597
  Thomas R. Gruber
This paper presents an approach to the problem of acquiring strategic knowledge from experts. Strategic knowledge is used to decide what course of action to take when there are conflicting criteria to satisfy and the effects of actions are not known in advance. We show how strategic knowledge challenges the current approaches to knowledge acquisition: knowledge engineering, interactive tools for experts, and machine learning. We present a knowledge acquisition methodology embodied by an interactive tool that draws from each approach, automating much of what is currently performed by knowledge engineers, and synthesizing interactive and automatic learning techniques. The technique for eliciting strategic knowledge from experts and transforming it into an executable form addresses the technical problems of operationalization, encoding examples, biasing generalization, and the new terms problem.
Toward Automating Recognition of Differing Problem-Solving Demands BIBA 599-611
  Jeffrey Stout; Gilbert Caplain; Sandra Marcus; John McDermott
SALT provides a knowledge acquisition framework for the development of expert systems that use propose-and-revise as their problem-solving method. These systems incrementally construct a tentative design, identify constraints on the design, and revise design decisions in response to constraint violations. Because SALT has an understanding of the specific problem-solving method used to integrate the knowledge it acquires, it has previously been shown to possess a number of advantages over less restrictive programming languages. We have applied SALT to a new type of propose-and-revise task, and have identified areas where SALT was too restrictive to adequately permit acquisition of domain knowledge or efficient utilization of that knowledge. Addressing these problems has led to a more "general" SALT and to a better understanding of when it is an appropriate tool.

IJMMS 1988 Volume 29 Issue 6

Experimentation in Computer Science: An Empirical View BIBA 613-624
  Jeffrey Mitchell; Charles Welty
In many disciplines, scientific inquiry relies heavily on experimentation. Computer science is compared to other scientific disciplines in its use of experimentation by classifying articles in professional journals as experimental or non-experimental. The results of the classification suggest that experiments occur less frequently in computer science than in many other disciplines.
Knowledge-Based Successive Linear Programming for Linearly Constrained Optimization BIBA 625-636
  Shao-bo Wang; Yong-zai Lu
Combining knowledge engineering technology with operations research algorithms can yield novel, efficient optimization methods. As an example, this paper discusses a knowledge-based successive linear programming method for linearly constrained optimization problems. The new method uses both the traditional successive linear programming algorithm of operations research and a knowledge base constructed from the expertise of an optimization expert and valuable experiential data, so that the knowledge-based program can solve optimization problems somewhat like a human expert who is skilled in operations research and has extensive practical problem-solving experience. The improvement in problem-solving efficiency depends mainly on the skillful use of plausible reasoning based on incomplete experiential knowledge. In addition, man-machine interaction is used during the computation. Finally, two numerical examples illustrate that the proposed method is considerably more flexible and efficient than the traditional operations research algorithms concerned.
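
A sketch of the traditional successive-linear-programming half of such a method (the knowledge-based layer -- expert rules steering step sizes and starting points -- is omitted): linearize the objective at the current point, solve a linear program under a trust-region step bound, and iterate. Problem data and tolerances below are illustrative.

# Successive linear programming on a linearly constrained problem:
# minimize (x0-2)^2 + (x1-1)^2  s.t.  x0 + x1 <= 2, x >= 0.
import numpy as np
from scipy.optimize import linprog

def f(x):    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2
def grad(x): return np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])

A = np.array([[1.0, 1.0]]); b = np.array([2.0])
x = np.zeros(2)
step = 1.0                               # trust-region radius
for _ in range(30):
    # LP in the step d: minimize grad.d  s.t.  A(x+d) <= b, x+d >= 0, |d| <= step
    res = linprog(grad(x), A_ub=A, b_ub=b - A @ x,
                  bounds=[(max(-xi, -step), step) for xi in x])
    d = res.x
    if f(x + d) < f(x):
        x = x + d                        # accept the improving step
    else:
        step *= 0.5                      # shrink the trust region and retry
    if step < 1e-6:
        break
print(x)  # approaches the constrained optimum near (1.5, 0.5)
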
On the Representation and the Impact of Reliability on Expert System Weights BIBA 637-646
  Daniel E. O'Leary
Rule-based expert systems employ weighting schemas that associate weights with rules. In the development of an expert system, the reliability of the rules may be a critical variable. However, current weighting systems do not facilitate accounting for reliability. Accordingly, this paper demonstrates how to introduce reliability into one of the primary systems for weighting rules. After the introduction of reliability, a number of findings emerge. First, small changes in reliability can lead to substantial changes in the adjusted weights. Second, when reliability is completely uncertain, the weights become 0. Third, introducing reliability can change the signs of the revised weights. Fourth, it is unlikely that heuristics can be used effectively in place of an analytic approach.
Accommodating Individual Differences in Searching a Hierarchical File System BIBA 647-668
  Kim J. Vicente; Robert C. Williges
Individual differences among users of a hierarchical file system were investigated. The results of a previous experiment revealed that subjects with low spatial ability were getting lost in the hierarchical file structure. Based on the concept of visual momentum, two changes to the old interface were proposed in an attempt to accommodate the individual differences in task performance. The changes consisted of a partial map of the hierarchy and an analogue indicator of current file position. This experiment compared the performance of users with high and low spatial abilities on the old verbal interface and the new graphical interface. The graphical interface resulted in changes in command usage that were consistent with the predictions of the visual momentum analysis. Although these changes in strategy resulted in a performance advantage for the graphical interface, the relative performance difference between high and low spatial groups remained constant across interfaces. However, the new interface did result in a decrease in the within-group variability in performance.
The Mental Rotation and Perceived Realism of Computer-Generated Three-Dimensional Images BIBA 669-684
  Woodrow Barfield; James Sandford; James Foley
Two experiments were performed, one to investigate the effects of computer-generated realism cues (hidden surfaces removed, multiple light sources, surface shading) on the speed and accuracy with which subjects performed a standard cognitive task (mental rotation), the other to study the subjective perceived realism of computer-generated images. In the mental rotation experiment, four angles of rotation, two levels of object complexity, and five combinations of realism cues were varied as subjects performed "same-different" discriminations of pairs of rotated three-dimensional images. Results indicated that mean reaction times were faster for shaded images than for hidden-edge-removed images. In terms of speed of response and response accuracy, significant effects for object complexity and angle of rotation were shown. In the second experiment, subjective ratings of image realism revealed that wireframe images were viewed as less realistic than shaded images and that the number of light sources was more important in conveying realism than the type of surface shading. Implications of the results for analogue and propositional models of memory organization and for integral and non-integral characteristics of realism cues are discussed.
A Mathematical Programming Approach to Inference with the Capability of Implementing Default Rules BIBA 685-714
  Ronald R. Yager
We suggest solving the problem of logical inference via mathematical programming. We investigate how this programming approach can be used to reason in the face of default rules.
Knowledge Acquisition for Evaluation Systems BIBA 715-731
  Georg Klinker; Serge Genetet; John McDermott
KNACK is a specialized knowledge acquisition tool that generates expert systems for evaluating different classes of designs. An important goal in the development of KNACK is that it acquires knowledge from domain experts without presupposing knowledge engineering skills on their part. During knowledge acquisition KNACK gains power by exploiting a presupposed problem solving method and a domain model. This paper describes KNACK's approach to automating the acquisition of a domain model as part of its knowledge acquisition strategy. To build a model of a domain, general understanding about evaluation is incorporated into KNACK. In an initial questioning session with domain experts KNACK customizes that knowledge and builds a preliminary model of the domain. Critical for KNACK's performance is its general understanding about evaluation and its ability to refine the preliminary domain model into a detailed structural and functional model of a particular domain. To get a better understanding of the means of evaluation and how to derive a domain model, KNACK was used to create a series of application systems in different domains. The experience gained with these tasks resulted in some data describing KNACK's performance and scope.
Cognitive Primitives BIBA 733-747
  Alain T. Rappaport
This paper addresses the problem of the level of abstraction at which knowledge-based system computational primitives must be developed so as to facilitate the knowledge acquisition process. Low-level programming prevents rapid learning and development, while existing task-level methodologies lock the knowledge designer into rigid problem-solving paradigms. We explore the principles underlying the design of a compromise-level set of primitives called cognitive primitives. They are domain- and task-independent computational primitives which can be used to map an expert's behaviour into an artificial formalism and to integrate it into existing environments. Flexible task- or domain-level functions can emerge from working with these primitives. Examples are presented of the design and use of this computational approach. This new approach leads to the design of tools whose functions more closely match human expert knowledge, which is difficult to decompile and thus to represent in more classic formalisms.