| Guest Editorial and Conference Report | | BIB | 1-3 | |
| James C. Bezdek | |||
| NAFIP-1. Panel Discussion on Introduction of Fuzzy Sets to Undergraduate Engineering and Science Curricula | | BIB | 5-7 | |
| J. T. P. Yao | |||
| Potential Applications of Fuzzy Sets in Civil Engineering | | BIBA | 9-18 | |
| J. L. A. Chameau; A. Alteschaeffl; H. L. Michael; J. T. P. Yao | |||
| The authors are all civil engineers, with specialties in soil dynamics, foundation engineering, transportation, and structural engineering. Because they are colleagues on the same faculty, they have had many occasions to discuss potential applications of the theory of fuzzy sets in their respective fields. The different disciplines involved appear to have complemented each other in developing ideas for solving problems in the civil engineering profession using the mathematics of fuzzy sets. In this paper their ideas are summarized, and selected applications and future developments are discussed. | |||
| Experiments in Evidence Composition in a Speech Understanding System | | BIBA | 19-31 | |
| Lorenza Saitta | |||
| A method for composing partial evidences in pattern recognition problems is presented, and experimental results from speech understanding are also discussed.
The method is well suited to real-time problems, where speed and parallelism in decision making are fundamental requirements. The case study presented in the paper is a simple one, for the sake of clarity, but a generalization to complex production systems is easily obtained. | |||
| Information Content of an Evidence | | BIBA | 33-43 | |
| Philippe Smets | |||
| A measure of the information content of an evidence inducing a belief function or a possibility function is axiomatically defined. Its major property is to be additive for distinct evidences. | |||
| Fuzzy Sets and Generalized Boolean Retrieval Systems | | BIBA | 45-56 | |
| Donald H. Kraft; Duncan A. Buell | |||
| Substantial work has been done on the application of fuzzy subset theory to
information retrieval. Boolean query processing has been generalized to allow
for weights to be attached to individual terms, in either the document indexing
or the query representation, or both. Problems with the generalized Boolean
lattice structure have been noted, and an alternative approach using query
thresholds and appropriate document evaluation functions has been suggested.
Problems remain unsolved, however. The criteria generated for the query processing mechanism are inconsistent. The exact functional form and appropriate parameters of the query processing mechanism must be specified. Moreover, the generalized Boolean query model must be reconciled with the vector space approach, with newly suggested lattice structures for weighted retrieval, and with probabilistic retrieval models. Finally, proper retrieval evaluation mechanisms reflecting the fuzzy nature of retrieval are needed. | |||
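To make the generalized Boolean model concrete, here is a minimal sketch of the standard fuzzy-set reading that the abstract builds on, with AND as min, OR as max, and NOT as complement. The index weights and the query are illustrative assumptions, and the thresholded evaluation functions the authors propose are not modeled here.

```python
# Minimal sketch of fuzzy Boolean query evaluation (AND = min, OR = max,
# NOT = 1 - x), the standard fuzzy-set generalization the abstract builds on.
# Document indexing assigns each term a weight in [0, 1].

def AND(*scores): return min(scores)
def OR(*scores):  return max(scores)
def NOT(score):   return 1.0 - score

doc_index = {"fuzzy": 0.9, "retrieval": 0.6, "boolean": 0.3}  # hypothetical

def weight(term):
    return doc_index.get(term, 0.0)

# Query: fuzzy AND (retrieval OR boolean)
score = AND(weight("fuzzy"), OR(weight("retrieval"), weight("boolean")))
print(score)  # 0.6 -- the document's retrieval status value for this query
```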
| Issues in Fuzzy Production Systems | | BIBA | 57-71 | |
| Thomas Whalen; Brian Schott | |||
| The purpose of this study is to examine some critical issues in the
development of practical knowledge-based systems using fuzzy logic and related
techniques. Eight current and proposed implementations of such systems have
been selected for study in order to provide a context for examining the
following three general topic areas: the overall structure and application of
the system; the nature and locus of uncertainty; and the representation of the
logical implication (If-Then) operator. Our goal is not to determine which
treatment of these issues is best overall, but rather to draw from such
practical experience as currently exists to begin mapping out the advantages
and disadvantages of each option relative to the specific structure of the
application being addressed.
The eight fuzzy knowledge-based systems selected are not intended as a complete survey of a rapidly growing field, but only as a sampling of the wide variety of ways in which fuzzy logic can be used to represent and process various sorts of inexact knowledge. | |||
| Querying Knowledge Base Systems with Linguistic Information via Knowledge Trees | | BIBA | 73-95 | |
| Ronald R. Yager | |||
| We are interested in finding automated procedures for extracting information from knowledge bases which contain linguistic information. We use the concept of a possibility distribution to represent the information in a linguistic value. The theory of approximate reasoning is used to provide both a means for translating propositions into a machine-understandable form and a methodology for making inferences from this information. An algorithmic procedure, consisting of the development of a knowledge tree and an evaluation procedure that yields the desired information, is presented. In this paper we restrict ourselves to answering questions about the value of a variable from a knowledge base consisting of simple data statements and implication statements. | |||
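A minimal sketch of the approximate-reasoning step underlying such knowledge bases, assuming the common sup-min composition and a conjunctive (Mamdani-style) reading of the implication; the domains and membership values are hypothetical, and Yager's knowledge-tree algorithm itself involves more than this single inference.

```python
# Sketch of the core approximate-reasoning step: a linguistic value is a
# possibility distribution, and an implication "if X is A then Y is B" is a
# fuzzy relation R(x, y); the induced distribution on Y is the sup-min
# composition of the input distribution with R.

xs = [0, 1, 2, 3]          # hypothetical domain of X
ys = [0, 1, 2]             # hypothetical domain of Y

about_2 = {0: 0.0, 1: 0.5, 2: 1.0, 3: 0.5}    # datum "X is about 2"
A = {0: 0.0, 1: 0.6, 2: 1.0, 3: 0.6}          # antecedent "A"
B = {0: 0.2, 1: 1.0, 2: 0.4}                  # consequent "B"

# One common translation of the rule (conjunctive / Mamdani form):
def R(x, y):
    return min(A[x], B[y])

# sup-min composition: poss_Y(y) = max over x of min(poss_X(x), R(x, y))
poss_Y = {y: max(min(about_2[x], R(x, y)) for x in xs) for y in ys}
print(poss_Y)   # {0: 0.2, 1: 1.0, 2: 0.4}
```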
| An Adaptive FCV Clustering Algorithm | | BIBA | 97-104 | |
| Robert W. Gunderson | |||
| A modified fuzzy c-varieties (FCV) algorithm is proposed which is capable of seeking out a mixture of clusters of possibly different shapes. The algorithm adapts to the structure encountered during the computation and should thereby be more likely to detect the actual structure in the data and less likely to impose a structure of its own. | |||
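For context, a minimal sketch of the fuzzy c-means iteration, the point-prototype special case that the fuzzy c-varieties family generalizes to linear prototypes; this is background, not the adaptive algorithm proposed in the paper.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, eps=1e-6):
    """Minimal fuzzy c-means -- the point-prototype special case of the
    FCV family that the adaptive algorithm generalizes."""
    n = len(X)
    rng = np.random.default_rng(0)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)    # prototype update
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U_new = D ** (-2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=0)          # membership update
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return U, V

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]])
U, V = fuzzy_c_means(X, c=2)
print(np.round(V, 2))   # two prototypes near (0.05, 0.1) and (5.1, 5.0)
```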
| Trace Element Distribution in Yeast and Wort Samples: An Application of the FCV Clustering Algorithms | | BIBA | 105-116 | |
| Tove Jacobsen; Robert W. Gunderson | |||
| Certain trace elements have been shown to be essential growth factors for microorganisms. A minimum concentration of zinc has been considered necessary to obtain an optimal fermentation rate by brewer's yeast, but interactions between zinc and other elements have also been discussed.
Cluster analyses were performed with trace element data from 24 different yeast samples. The element compositions of the resulting yeast prototypes indicated an interaction between copper and zinc, as well as an essential effect of manganese for one yeast strain. The results further indicated that trace element levels in yeast fluctuate less than the element concentrations in the worts. | |||
| Precise Past -- Fuzzy Future | | BIBA | 117-134 | |
| Brian R. Gaines | |||
| This paper examines the motivation and foundations of fuzzy set theory, now some 20 years old, particularly possible misconceptions about its operators and its relation to probability theory. It presents a standard
uncertainty logic (SUL) that subsumes standard propositional, fuzzy and
probability logics, and shows how many key results may be derived within SUL
without further constraints. These include resolutions of standard paradoxes
such as those of the bald man and of the barber, decision rules used in pattern
recognition and control, the derivation of numeric truth values from the
axiomatic form of the SUL, and the derivation of operators such as the
arithmetic mean. The addition of the constraint of truth-functionality to a
SUL is shown to give fuzzy, or Lukasiewicz infinitely-valued, logic. The
addition of the constraint of the law of the excluded middle to a SUL is shown
to give probability, or modal S5, logic. An example is given of the use of the
two logics in combination to give a possibility vector when modelling
sequential behaviour with uncertain observations.
This paper is based on the banquet address with the same title given at NAFIP-1, the First North American Fuzzy Information Processing Group Workshop, held at Utah State University, May 1982. | |||
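For reference, the two specializations named in the abstract have familiar concrete forms; the following states the usual Lukasiewicz and probabilistic connectives in standard notation (background material, not quoted from the paper).

```latex
% Truth-functional case (Lukasiewicz infinitely-valued logic):
\neg p = 1 - p, \qquad
p \wedge q = \min(p, q), \qquad
p \vee q = \max(p, q), \qquad
p \rightarrow q = \min(1,\, 1 - p + q).

% With the law of the excluded middle the calculus behaves probabilistically:
P(p \vee \neg p) = 1, \qquad
P(p \vee q) = P(p) + P(q) - P(p \wedge q).
```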
| A Multilingual Input/Output Device for Indian Scripts | | BIBA | 137-146 | |
| T. Radhakrishnan; J. W. Atwood; S. G. Krishnamoorthy | |||
| A design is presented for a multilingual interactive input/output device for Indian languages. A keyboard is proposed which is based on the basic letter sounds of the Indian languages. Since the union of the sets of basic letter sounds for the major Indian languages contains about 76 sounds, a keyboard of reasonable size can be constructed which will be suitable for use throughout India. The basic letter sounds also provide a means for an unambiguous internal encoding which preserves the lexicographic order of the characters in the language(s) being used. A possible model is given for an output device that transforms the basic letter sound oriented internal representation of the characters into graphical symbols for human reading. The use of the basic letter sound representation allows a unified design which is adaptable to different regional needs. | |||
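A toy sketch of the order-preserving idea: if internal codes are assigned in the language's collation order, ordinary tuple comparison of encoded words is lexicographic. The sound inventory below is hypothetical, not the 76-sound set proposed in the paper.

```python
# Toy sketch (hypothetical sound inventory): assigning internal codes in the
# language's collation order makes plain tuple comparison lexicographic.

SOUNDS_IN_COLLATION_ORDER = ["a", "aa", "i", "ka", "kha", "ga"]  # hypothetical
CODE = {s: i for i, s in enumerate(SOUNDS_IN_COLLATION_ORDER)}

def encode(word_as_sounds):
    return tuple(CODE[s] for s in word_as_sounds)

words = [["ka", "a"], ["a", "ka"], ["ga", "i"]]
print(sorted(words, key=encode))   # sorted in the language's own order
```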
| Approximating System Relations from Partial Information | | BIBA | 147-162 | |
| Roger C. Conant | |||
| Methods are available by which one can deduce the structure of a system, presuming that a set-theoretic description of the global system relationship is at hand. However, if the information available about the system does not include such a description but consists of bits of partial knowledge, incomplete views, scattered facts regarding possible or impossible system configurations, knowledge of constraints on the system, and so forth, how should one proceed to deduce or approximate the system relationship? This article considers this question qualitatively and quantitatively and presents several guidelines for the effective and complete use of partial information. A measure for the amount of information provided by bits of partial knowledge and an additivity relationship for this measure are discussed. The implications of the above for the experimental method, in which the experimenter seeks to discover a system relationship by means of successive tests each of which reveals partial information about the system, are also discussed. The very powerful method of embedding information in the representation of a system is presented. Examples are offered to make the foregoing discussion concrete. | |||
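One standard measure consistent with the abstract's description (an assumption for illustration, not a quotation from the article) is the Hartley-style reduction of the set of possible configurations.

```latex
% If a bit of partial knowledge cuts the set of configurations still
% considered possible from N to n, a Hartley-style measure of its
% information is
I = \log_2 \frac{N}{n} \ \text{bits},
% and two successive bits of knowledge, cutting N \to n_1 and n_1 \to n_2,
% are additive:
\log_2 \frac{N}{n_1} + \log_2 \frac{n_1}{n_2} = \log_2 \frac{N}{n_2}.
```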
| Tools and Techniques for the Monitoring of Interactive Graphics Dialogues | | BIBA | 163-180 | |
| H. Hanusa | |||
| This article discusses several aspects of the man-machine interface in interactive graphics dialogue systems. Based on these considerations and on results of the Seillac II Workshop, a model of interaction and its evaluation is developed. As a result, tools and techniques for user-adaptive dialogue monitoring are presented and discussed in detail. This discussion is also based on experience with implementations of such a dialogue monitor. | |||
| Composition and Editing of Spoken Letters | | BIBA | 181-193 | |
| Robert B. Allen | |||
| Two studies examine issues relating to composing and editing spoken letters. In the first study, participants used a prototype speech editor. In the second study, participants composed letters with a tape recorder and later identified which parts of those letters they would like to edit. The data on the speech-editing system suggest that it is usable, but substantial improvements would be desirable. Together, the studies showed that revisions made to spoken letters were generally similar to the types of revisions found previously for written documents. However, previous findings on the details of planning times during the composition of spoken letters were contradicted. | |||
| Quantified Propositions in a Linguistic Logic | | BIBA | 195-227 | |
| Ronald R. Yager | |||
| We introduce two methodologies for interpreting quantifiers in binary logic.
We then extend these interpretations to the case where the quantifiers are
linguistic. We use the formalism of fuzzy subset theory to provide a framework
in which to interpret linguistic quantifiers. We discuss various methodologies
for measuring the cardinality of a fuzzy set including the concept of a fuzzy
cardinality.
Among the important questions we study in the paper is the problem of making quantified statements based upon the observation of a sample from a set of objects. | |||
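A minimal sketch of one cardinality measure commonly used for this purpose, the sigma-count, together with a monotone linguistic quantifier; the membership values and the shape of "most" are illustrative assumptions.

```python
# Sketch: sigma-count cardinality and a monotone linguistic quantifier.
# The membership values and the shape of "most" are illustrative assumptions.

tall = {"ann": 1.0, "bob": 0.8, "cal": 0.3, "dee": 0.6}   # fuzzy set "tall"

sigma_count = sum(tall.values())            # |tall| = 2.7
proportion = sigma_count / len(tall)        # 0.675

def most(r):
    """A piecewise-linear 'most': 0 below 0.3, 1 above 0.8."""
    return min(1.0, max(0.0, (r - 0.3) / 0.5))

print(most(proportion))   # truth of "most are tall" = 0.75
```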
| Introduction | | BIB | 229-230 | |
| Phil Hayes | |||
| Steps Toward Graceful Interaction in Spoken and Written Man-Machine Communication | | BIBA | 231-284 | |
| Philip J. Hayes; D. Raj Reddy | |||
| Natural language processing is often seen as a way to provide easy-to-use
and flexible interfaces to interactive computer systems. While natural
language interfaces typically perform well in response to straightforward
requests and questions within their domain of discourse, they often fail to
interact gracefully with their users in less predictable circumstances. Most
current systems cannot, for instance: respond reasonably to input not
conforming to a rigid grammar; ask for and understand clarification if their
user's input is unclear; offer clarification of their own output if the user
asks for it; or interact to resolve any ambiguities that may arise when the
user attempts to describe things to the system.
We believe that graceful interaction in these and the many other contingencies that can arise in human conversation is essential if interfaces are ever to appear co-operative and helpful, and hence be suitable for the casual or naive user, and more habitable for the experienced user. In this paper, we attempt to outline key components of graceful interaction, to identify major problems involved in realizing them, and in some cases to suggest the shape of solutions. To this end we propose a decomposition of graceful interaction into a number of relatively independent skills: skills involved in parsing elliptical, fragmented, and otherwise ungrammatical input; in ensuring robust communication; in explaining abilities and limitations, actions and the motives behind them; in keeping track of the focus of attention of a dialogue; in identifying things from descriptions, even if ambiguous or unsatisfiable; and in describing things in terms appropriate for the context. We suggest these skills are necessary for graceful interaction in general and form a good working basis for graceful interaction in a certain large class of application domains, which we define. None of these components appear individually much beyond the current state of the art, at least for suitably restricted domains of discourse. Thus, we advocate research into the construction of gracefully interacting systems as an activity likely to pay major dividends in improved man-machine communication in a relatively short time. | |||
| Graceful Interaction Through the COUSIN Command Interface | | BIBA | 285-306 | |
| Philip J. Hayes; Pedro A. Szekely | |||
| Currently available interactive command interfaces often fail to provide adequate error correction or on-line help facilities, leading to the perception of an unfriendly interface and consequent frustration and reduced productivity on the part of the user. The COUSIN project at Carnegie-Mellon University is developing command interfaces which appear more friendly and supportive to their users, using a form-based model of communication and incorporating error correction and on-line help. Because of the time and effort involved in constructing truly user-friendly interfaces, we are working on an interface system designed to provide interfaces to many different application systems, as opposed to separate interfaces for individual applications. A COUSIN interface system gets the information it needs to provide these services for a given application from a declarative description of that application's communication needs. | |||
| A View of Human-Machine Communication and Co-Operation | | BIBA | 309-333 | |
| Horst Oberquelle; Ingbert Kupka; Susanne Maass | |||
| Nowadays computers are increasingly used for communication purposes and less for mere calculation. User-friendly dialog design for users who are not computer professionals is becoming an important research issue. The discussion has already
shown that human-machine systems have to be studied as a whole: apart from the
machine and its users they include the designers and those persons responsible
for the system's application, as well. Computers just play a special role as
one element in a highly complex communication network with several human agents
linked in space and time.
In order to characterize communication between humans and machines the concept of formal communication is introduced and related to natural communication. Communicating behaviour and its determining factors are represented by a model which is based on psycholinguistic concepts of communication and which uses high-level Petri net interpretations. Formal communication can be observed among humans as well as with machines; often it is caused by delegation. Programming of computer systems can be conceived as a special form of delegation. The view of computer systems as communication media with formal communicating behaviour permits an explanation of problems arising from computer applications, especially at the human-machine interface, and shows directions for future research. | |||
| On the Use of Semantic Constraints in Guiding Syntactic Analysis | | BIBA | 335-357 | |
| Gregg C. Oden | |||
| It is argued that in natural language comprehension, the degree of sensibleness of the possible interpretations of sentences and parts of sentences is used to guide syntactic analysis by assisting in making the decisions about which interpretations should continue to be processed. Continuous sensibleness information is shown to be necessary for the successful resolution of many syntactically ambiguous sentences, that is, to obtain the meaning that is most likely to have been intended by the speaker. The computation of the degree of sensibleness of each interpretation is hypothesized to take place in parallel with the analysis of its syntactic structure and to be performed through the use of semantic constraints that may be more or less satisfied. A formulation of continuous semantic constraints is proposed and characteristics of such constraints are examined including how such semantic constraints may be derived directly from knowledge in a fuzzy propositional semantic memory. | |||
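A toy sketch of the pruning idea, assuming (purely for illustration) that sensibleness is the minimum over the degrees to which an interpretation satisfies its semantic constraints; the readings and degrees are invented.

```python
# Toy sketch: rank competing interpretations by continuous sensibleness,
# here the min over partially satisfied semantic constraints (the choice of
# min as the combination operator is an assumption for illustration).

interpretations = {
    # "time flies like an arrow": degrees to which each reading satisfies
    # hypothetical constraints (subject animacy, verb plausibility, ...)
    "time passes quickly":         [0.9, 0.8, 0.95],
    "'time flies' enjoy an arrow": [0.2, 0.1, 0.6],
}

def sensibleness(constraint_degrees):
    return min(constraint_degrees)

best = max(interpretations, key=lambda k: sensibleness(interpretations[k]))
print(best)   # the reading whose constraints are best satisfied overall
```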
| User Error or Computer Error? Observations on a Statistics Package | | BIBA | 359-376 | |
| Richard Davis | |||
| A detailed observational study of 11 psychologists using a statistics package, SPSS, to analyse their research work is presented. The shortcomings of the interaction were pinpointed by various error classifications. The use of an expert's macro-command facility by a casual user is discussed, highlighting the problems of changing existing interfaces. Various differences between batch and interactive processing are also reported. The results are then reviewed and discussed within the framework of existing MMIF guidelines. | |||
| On the Readability of COBOL Manuals | | BIBA | 377-389 | |
| Ronald S. Lemos | |||
| Comparative readability of COBOL language manuals from the top five
mainframe vendors is analyzed. A program to derive the Flesch Reading Ease
Index is used as the measure of text readability. This formula estimates
readability based upon sentence length (number of words) and word length
(number of syllables). Thirty text samples of approximately 100 words each are
systematically selected from each manual. The manuals represent both "user"
and "reference" manuals.
Results showed all manuals to be categorized as "difficult" reading, characteristic of academic journals. The COBOL manual from Sperry/Univac was found to have a significantly lower readability index than those of Burroughs and the CDC user manual. No significant differences were found between the two "user" manuals and the five "reference" manuals. This study shows the feasibility of applying readability indexes to COBOL manuals and provides readability measures for the manuals studied. These results can be used by customers, vendors, and technical writers in developing more effective documentation materials. | |||
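The Flesch Reading Ease formula itself is standard and easy to reproduce; a minimal sketch follows, in which the syllable counter is a crude vowel-group heuristic (an assumption; the study's actual program may count differently).

```python
import re

def count_syllables(word):
    """Crude vowel-group heuristic -- a stand-in for a real syllable counter."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Higher scores read more easily."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) \
                   - 84.6 * (syllables / len(words))

sample = "The record is read. The working-storage section is updated."
print(round(flesch_reading_ease(sample), 1))
```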
| Programming Problem Representation in Novice and Expert Programmers | | BIBA | 391-398 | |
| Mark Weiser; Joan Shertz | |||
| The representation of computer programming problems in relation to the organization of programming knowledge is examined. An experiment previously done for physics knowledge is replicated to examine differences in the categories used for problem representation by novice and expert programmers. Results from sorting tasks show that experts and novices begin their problem representations with distinctly different problem categories. Experts initially abstract an algorithm to solve a problem, whereas novices base their approach on a problem's literal features. A preliminary study of programming managers indicates an abstraction different from that used by programmers. | |||
| Editorial: Developments in Expert Systems | | BIB | 399-402 | |
| M. J. Coombs | |||
| Reasoning from First Principles in Electronic Troubleshooting | | BIBA | 403-423 | |
| Randall Davis | |||
| While expert systems have traditionally been built using large collections
of rules based on empirical associations, interest has grown recently in the
use of systems that reason "from first principles", i.e. from an understanding of the causality of the device being examined. Our work explores the use of such
models in troubleshooting digital electronics.
In discussing troubleshooting we show why the traditional approach -- test generation -- solves a different problem and we discuss a number of its practical shortcomings. We consider next the style of debugging known as discrepancy detection and demonstrate why it is a fundamental advance over traditional test generation. Further exploration, however, demonstrates that in its standard form discrepancy detection encounters interesting limits in dealing with commonly known classes of faults. We suggest that the problem arises from a number of interesting implicit assumptions typically made when using the technique. In discussing how to repair the problems uncovered, we argue for the primacy of models of causal interaction, rather than the traditional fault models. We point out the importance of making these models explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems. We report on progress to date in implementing this approach and demonstrate the diagnosis of a bridge fault -- a traditionally difficult problem -- using our approach. | |||
| Deep versus Compiled Knowledge Approaches to Diagnostic Problem-Solving | | BIBA | 425-436 | |
| B. Chandrasekaran; Sanjay Mittal | |||
| Most current-generation expert systems use knowledge which does not represent a deep understanding of the domain, but is instead a collection of "pattern -> action" rules corresponding to the problem-solving heuristics of the expert in the domain. There has thus been some debate in the field about the need for and role of "deep" knowledge in the design of expert systems. It is often argued that this underlying deep knowledge will enable an expert system to solve hard problems. In this paper we consider diagnostic expert systems and argue that, given a body of underlying knowledge that is relevant to diagnostic reasoning in a medical domain, it is possible to create a diagnostic problem-solving structure which has all the aspects of the underlying knowledge needed for diagnostic reasoning "compiled" into it. It is argued that this compiled structure can solve all the diagnostic problems in its scope efficiently, without any need to access the underlying structures. We illustrate such a diagnostic structure by reference to our medical system MDX. We also analyze the use of these knowledge structures in providing explanations of diagnostic reasoning. | |||
| Diagnostic Expert Systems Based on a Set Covering Model | | BIBA | 437-460 | |
| James A. Reggia; Dana S. Nau; Pearl Y. Wang | |||
| This paper proposes that a generalization of the set covering problem can be used as an intuitively plausible model for diagnostic problem solving. Such a model is potentially useful as a basis for expert systems in that it provides a solution to the difficult problem of multiple simultaneous disorders. We briefly introduce the theoretical model and then illustrate its application in diagnostic expert systems. Several challenging issues arise in adapting the set covering model to real-world problems, and these are also discussed along with the solutions we have adopted. | |||
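The core intuition can be sketched as plain greedy set cover: find a small set of disorders whose known manifestations jointly cover all observed symptoms. The paper's generalized model is richer than this, and the disorder data below are invented.

```python
# Sketch of the core idea only: choose a small set of disorders whose known
# manifestations jointly cover every observed symptom (plain greedy set
# cover; the paper's generalized model is richer than this).

causes = {                      # disorder -> manifestations it can produce
    "d1": {"fever", "cough"},
    "d2": {"rash"},
    "d3": {"fever", "rash"},
}
observed = {"fever", "cough", "rash"}

uncovered, explanation = set(observed), []
while uncovered:
    best = max(causes, key=lambda d: len(causes[d] & uncovered))
    if not causes[best] & uncovered:
        break                   # some symptom no known disorder explains
    explanation.append(best)
    uncovered -= causes[best]

print(explanation)              # ['d1', 'd2'] covers all three symptoms
```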
| On the Application of Expert Systems | | BIBA | 461-477 | |
| Andrew Basden | |||
| Expert systems have recently been arousing much interest in industry and
elsewhere: it is envisaged that they will be able to solve problems in areas
where computers have previously failed, or indeed, never been tried. However,
although the literature in the field of expert systems contains much on their
construction, on knowledge representation techniques, etc., relatively little
has been devoted to discussing their application to real-life problems.
| This article seeks to bring together a number of issues relevant to the application of expert systems by discussing their advantages and limitations, their roles and benefits, and the influence that real-life applications might have on the design of expert systems software. Part of the expert systems strategy of one major chemical company is outlined. Because it was in constructing one particular expert system that many of these issues became important, this system is described briefly at the start of the paper and used to illustrate much of the later discussion. It is of the plausible-inference type and has application in the field of materials engineering. The article is aimed as much at the interested end-user who has a possible application in mind as at those working in the field of expert systems. | |||
| Adapting a Consultation System to Critique User Plans | | BIBA | 479-496 | |
| Curtis P. Langlotz; Edward H. Shortliffe | |||
| A predominant model for expert consultation systems is one in which a computer program simulates the decision making processes of an expert. The expert system typically collects data from the user and renders a solution. Experience with regular physician use of ONCOCIN, an expert system that assists with the treatment of cancer patients, has revealed that system users can be annoyed by this approach. In an attempt to overcome this barrier to system acceptance, ONCOCIN has been adapted to accept, analyze, and critique a physician's own therapy plan. A critique is an explanation of the significant differences between the plan that would have been proposed by the expert system and the plan proposed by the user. The critique helps resolve these differences and provides a less intrusive method of computer-assisted consultation because the user need not be interrupted in the majority of cases -- those in which no significant differences occur. Extension of previous rule-based explanation techniques has been required to generate critiques of this type. | |||
| Towards an Understanding of the Role of Experience in the Evolution from Novice to Expert | | BIBA | 497-518 | |
| Janet L. Kolodner | |||
| Two major factors seem to distinguish novices from experts. First, experts generally know more about their domain. Second, experts are better than novices at applying and using that knowledge effectively. Within AI, the traditional approach to expertise has concentrated on the first difference. Thus, "expert systems" research has revolved around extracting the rules experts use and developing problem solving methodologies for dealing with those rules. Unlike these systems, human experts are able to introspect about their knowledge and learn from past experience. It is this view of expertise, based on the second distinguishing feature above, that we are exploring. Such a view requires a reasoning model based on organization of experience in a long-term memory, and incremental learning and refinement of both reasoning processes and domain knowledge. This paper will present the basis for this view, the reasoning model it implies, and a computer program which begins to implement the theory. The program, called SHRINK, models psychiatric diagnosis and treatment. | |||
| Reasoning and Natural Explanation | | BIBA | 521-559 | |
| J. A. Goguen; J. L. Weiner; C. Linde | |||
| Reasoning occupies a central position in studies of both artificial and natural intelligence. This paper presents a precise and computationally effective model of the structure of human explanation, based upon the careful analysis of transcripts of actual, naturally occurring conversations between human beings involved in explanation. Explanations are represented by trees whose internal nodes correspond to the three major types of justification which we have found offered for assertions (giving a reason, giving examples, and eliminating alternatives); the real-time process of explanation production is represented by a sequence of transformations on such trees. Focus of attention in explanation is represented by pointers in the tree, and shifts of focus of attention by pointer movement. The ordering and embedding of explanations are considered; we explain why some orderings are easier to understand than others, and we demonstrate that some forms of explanation require multiple pointers, some of which are embedded in others. A fairly complex example is analyzed in detail. The paper emphasizes the view that explanation is a social process, i.e. it occurs in particular contexts involving particular people who have particular assumptions and predispositions which significantly influence what actually occurs. Implications of the results of this paper for the design of systems for explanation production and for artificial reasoning are discussed. | |||
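A minimal sketch of the tree representation the abstract describes: nodes pair an assertion with one of the three justification types, and focus of attention is a pointer into the tree. Field names and the example content are hypothetical.

```python
# Minimal sketch of the explanation-tree representation described in the
# abstract: nodes carry an assertion plus one of the three justification
# types, and focus of attention is a pointer to a node (names hypothetical).

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExplanationNode:
    assertion: str
    justification: Optional[str] = None   # "reason" | "examples" | "eliminate"
    children: List["ExplanationNode"] = field(default_factory=list)

root = ExplanationNode(
    "You should take the bus",
    "reason",
    [ExplanationNode("Parking downtown is scarce"),
     ExplanationNode("Driving was ruled out", "eliminate",
                     [ExplanationNode("The car is in the shop")])],
)

focus = root.children[1]        # a shift of focus = moving this pointer
print(focus.assertion)
```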
| Task Analysis and User Errors: A Methodology for Assessing Interactions | | BIBA | 561-574 | |
| Richard Davis | |||
| A methodology is presented for integrating observational data on specific human errors with a theoretical task analysis of the man-machine interface. Error data collected during sessions using a semi-interactive statistics package are superimposed on a Command Language Grammar [a task analysis technique developed by Moran (1981)]. The match/mismatch between the user's goals and the implementation requirements of the computer system at various levels of analysis is described. The mismatch boundaries are discussed, and recommendations are derived for the interface and for the methodological approach. | |||
| Readability Measurements of Palantype Transcription for the Deaf | | BIBA | 575-594 | |
| A. C. Downton; R. G. Baker; S. M. Lewis; P. J. Cooper | |||
| This paper describes an experiment designed to compare the readability of
normal English orthography with that of text produced by a Palantype
transcription system designed as an aid for the severely and profoundly deaf.
The Palantype transcription system has been described in detail in an earlier
paper in this Journal. To provide guidance for future development work, two
artificially produced enhancements of the transcription system are also
compared with the normal transcription system output and with English
orthography. The experimental subjects are members of local deaf and hard of
hearing clubs.
The experiment is based upon the Cloze technique: words are deleted from the original text and subjects are asked to replace these missing words. Standard texts, normally used for measuring reading ability, form the experimental material. The experiment uses a modified Latin squares design. Statistical analysis of the results shows that there are significant differences between the different text treatments, and some attempt is made to quantify these differences in terms of measures of reading ability commonly used in the educational field. | |||
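A toy sketch of the Cloze technique itself: delete every n-th word and score exact replacements; n = 5 is a common choice assumed here, not necessarily the interval used in the experiment.

```python
# Toy sketch of the Cloze technique: delete every n-th word, then score a
# reader's replacements (n = 5 is a common choice, assumed here).

def make_cloze(text, n=5):
    words = text.split()
    blanks = {i: words[i] for i in range(n - 1, len(words), n)}
    shown = [w if i not in blanks else "_____" for i, w in enumerate(words)]
    return " ".join(shown), blanks

def score(blanks, answers):
    return sum(answers.get(i, "").lower() == w.lower()
               for i, w in blanks.items()) / len(blanks)

passage = "The quick brown fox jumps over the lazy dog near the quiet river bank today"
cloze, key = make_cloze(passage)
print(cloze)
print(score(key, {4: "jumps", 9: "near", 14: "today"}))   # 1.0
```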
| A Lexical Analysis of Keywords in High Level Programming Languages | | BIBA | 595-607 | |
| C. M. Eastman | |||
| Lexical characteristics of nine high level programming languages, Ada, APL, BASIC, COBOL, FORTRAN, LISP, Pascal, PL/I, and SNOBOL, are discussed. The properties considered include keyword number and length, keyword relationship to English words, and similarities among languages in the use of keywords. Some broader implications of these lexical characteristics are addressed. | |||