
International Journal of Man-Machine Studies 20

Editors: B. R. Gaines; D. R. Hill
Dates: 1984
Volume: 20
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 32
Links: Table of Contents
  1. IJMMS 1984 Volume 20 Issue 1
  2. IJMMS 1984 Volume 20 Issue 2
  3. IJMMS 1984 Volume 20 Issue 3
  4. IJMMS 1984 Volume 20 Issue 4
  5. IJMMS 1984 Volume 20 Issue 5
  6. IJMMS 1984 Volume 20 Issue 6

IJMMS 1984 Volume 20 Issue 1

Editorial: Developments in Expert Systems BIB 1-2
  M. J. Coombs
Strategic Explanations for a Diagnostic Consultation System BIBA 3-19
  Diane Warner Hasling; William J. Clancey; Glenn Rennels
This article examines the problem of automatic explanation of reasoning, especially as it relates to expert systems. By explanation we mean the ability of a program to discuss what it is doing in some understandable way. We first present a general framework in which to view explanation and review some of the research done in this area. We then focus on the explanation system for NEOMYCIN, a medical consultation program. A consultation program interactively helps a user to solve a problem. Our goal is to have NEOMYCIN explain its problem-solving strategies. An explanation of strategy describes the plan the program is using to reach a solution. Such an explanation is usually concrete, referring to aspects of the current problem situation. Abstract explanations articulate a general principle, which can be applied in different situations; such explanations are useful in teaching and in explaining by analogy. We describe the aspects of NEOMYCIN that make abstract strategic explanations possible -- the representation of strategic knowledge explicitly and separately from domain knowledge -- and demonstrate how this representation can be used to generate explanations.
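   A minimal sketch of the separation the authors describe (illustrative only; NEOMYCIN itself is a Lisp system and its task vocabulary is simplified here): strategic knowledge is stored apart from domain rules, so the same strategy can be explained abstractly or instantiated concretely.
      # Illustrative sketch, not NEOMYCIN code: domain rules and strategic
      # (meta-level) knowledge are kept separate, so explanations can be
      # given abstractly (the strategy alone) or concretely (the instance).
      domain_rule = {'if': 'stiff neck and fever',
                     'then': 'suggest meningitis'}
      strategy = {'task': 'test-hypothesis',
                  'principle': 'ask for evidence that discriminates among '
                               'the current hypotheses'}

      def explain(abstractly):
          if abstractly:
              return f"Task {strategy['task']}: {strategy['principle']}."
          return (f"Asking about '{domain_rule['if']}' because it bears on "
                  f"'{domain_rule['then']}' ({strategy['task']}).")

      print(explain(abstractly=False))   # concrete, problem-specific
      print(explain(abstractly=True))    # abstract, reusable principle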
Expert Systems: An Alternative Paradigm BIBA 21-43
  Mike Coombs; Jim Alty
There has recently been a significant effort by the A.I. community to interest industry in the potential of expert systems. However, this has resulted in far fewer substantial applications projects than might be expected. This article argues that this is because human experts are rarely required to perform the role that computer-based experts are programmed to adopt. Instead of being called in to answer well-defined problems, they are more often asked to assist other experts to extend and refine their understanding of a problem area at the junction of their two domains of knowledge. This more properly involves educational rather than problem-solving skills.
   An alternative approach to expert system design is proposed, based upon guided discovery learning. The user is provided with a supportive environment for a particular class of problem, the system predominantly acting as an advisor rather than directing the interaction. The environment includes a database of domain knowledge, a set of procedures for its application to a concrete problem, and an intelligent machine-based advisor to judge the user's effectiveness and advise on strategy. The procedures focus upon the use of user-generated "explanations" both to promote the application of domain knowledge and to expose understanding difficulties. Simple database PROLOG is being used as the subject material for the prototype system, which is known as MINDPAD.
Knowledge Reorganization and Reasoning Style BIBA 45-61
  Christopher K. Riesbeck
To study the learning of expertise, two closely related stages of expertise in economics reasoning are analyzed and modelled, and a mechanism for going from the first to the second is proposed. Both stages share the same basic concepts and generate plausible economic scenarios, but reasoning in the first stage oversimplifies by focusing on how the goals of a few actors are affected. Reasoning in the second stage produces better arguments by taking into account how all the relevant parts of the economy might be affected. The first stage is modelled by highly interconnected goal forests and very selective, story understanding search heuristics. The second stage is modelled with more explicit links between economic quantities and a more appropriate set of search heuristics. The learning mechanism is a failure-driven process that not only records better arguments as they are seen, but also records the failure of existing inference rules to find these arguments on their own. The collected failures are used to determine which search heuristics work best in which situations.
On the Application of Rule-Based Techniques to the Design of Advice-Giving Systems BIBA 63-86
  Peter Jackson; Paul Lefrere
This article attempts to assess how much is known about building systems whose advice actually benefits users. We review current approaches to the provision of on-line help, and suggest that the most promising are those which represent a user's intentions explicitly. Following this lead, we examine recent work on speech acts, planning and meta-level inference for clues as to how a user's inputs could be interpreted in the context of his current aims and activities. We conclude that the appropriate context of interpretation for an input is supplied by hypotheses concerning the current state of a user's plan. Next we suggest that the techniques developed in rule-based systems could be used to implement an advisor capable of generating and revising plan hypotheses, and outline what we consider to be the outstanding problems of control associated with such an implementation. Finally, we show how such a system might help a user to structure his activity, so that he can iterate towards his goal while avoiding common errors.
Expert Systems and Information Retrieval: An Experiment in the Domain of Biographical Data Management BIBA 87-106
  Gian Piero Zarri
The RESEDA project is concerned with the construction of Artificial Intelligence (AI) management systems working on factual databases consisting of biographical data; this data is described using a particular Knowledge Representation language ("meta-language") based on the Artificial Intelligence understanding of a "Case Grammar" approach. The "computing kernel" of the system consists of an inference interpreter. Where it is not possible to find a direct response to the (formal) question posed, RESEDA tries to answer indirectly by using a first stage of inference procedures ("transformations"). Moreover, the system is able to establish new causal links automatically between the statements represented in the base, on the grounds of "hypotheses", of a somewhat general nature, about the class of possible relationships. The result of these inference operations can thus modify, at least in principle, the original content of the database.
Distributed Architecture and Parallel Non-Directional Search for Knowledge-Based Cartographic Feature Extraction Systems BIBA 107-120
  Barbara A. Lambird; David Lavine; Laveen N. Kanal
Expert or knowledge-based system approaches are currently being viewed with great interest for their potential to handle the many difficult problems encountered in image understanding and cartographic feature extraction from remotely sensed imagery. This article presents an overview of the many types of knowledge that must be modeled in remote sensing and cartography, and discusses architectural and control aspects deemed important for cartographic expert systems. A distributed architecture and a control structure based on a parallel non-directional search algorithm are described and open problems are mentioned.
A Model for the Interpretation of Verbal Predictions BIBA 121-134
  Alf C. Zimmer
There is a marked gap between the demands on forecasting and the results that numerical forecasting techniques can usually provide. It is suggested that this gap can be closed by incorporating experts' qualitative predictions into numerical forecasting systems. A formal analysis of these predictions can then be integrated into quantitative forecasts.
   In the framework of possibility theory, a model is developed which accounts for the verbal judgments in situations where predictions are made or knowledge is updated in the light of new information. The model translates verbal expressions into elastic constraints on a numerical scale. This numerical interpretation of qualitative judgments can then be incorporated into numerical forecasting procedures.
   The applicability of this model was tested experimentally. The results indicate that the numerical predictions from the model agree well with the actual judgments and the evaluation behaviour of the subjects.
   The applicability of this model is demonstrated in a study where bank clerks had to predict exchange rates. The analysis of qualitative judgments according to this model provided significantly more information than numerical predictions.
   A general framework for an interactive forecasting system is suggested for further development.
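   A hedged sketch of the central step (the membership curve below is invented, not one of Zimmer's fitted constraints): a verbal prediction is translated into an elastic constraint, i.e. a graded possibility distribution over the numerical scale.
      # Invented illustration: a verbal prediction such as "slightly above
      # 2.00" becomes an elastic constraint on the numerical scale,
      # modelled here as a trapezoidal possibility distribution.
      def trapezoid(a, b, c, d):
          def mu(x):                      # degree of compatibility in [0, 1]
              if x <= a or x >= d:
                  return 0.0
              if b <= x <= c:
                  return 1.0
              return (x - a) / (b - a) if x < b else (d - x) / (d - c)
          return mu

      slightly_above_2 = trapezoid(2.00, 2.02, 2.06, 2.10)
      for rate in (1.99, 2.03, 2.08, 2.12):    # candidate exchange rates
          print(rate, slightly_above_2(rate))  # ~0.0, 1.0, 0.5, 0.0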

IJMMS 1984 Volume 20 Issue 2

The Application of Human Factors to the Needs of the Novice Computer User BIBA 137-156
  Anne Lee Paxton; Edward J. Turner
In this article the literature on the application of human factors to the needs of the novice or inexperienced computer user was reviewed. The need for research in the area was illustrated by the fact that an increasing number of people who are not computer professionals are using computers routinely in their jobs. A need for the development of man-computer systems that are maximally suited to the users' needs and preferences was indicated. The needs of the manager as a naive or novice computer user were described as a case in point. Methods to assist members of the university community in obtaining maximum benefit from computer facilities were also reviewed. The importance of applying human factors to software design as well as the overall design of the man-computer interface was discussed in the literature, along with recommendations for specific design and other techniques that would aid the novice in effective use of the computer. Research on the application of human factors to text editing for the novice was reviewed, and the results indicated that novices work best with an inflexible, natural-language-based text editor. An examination of the literature provided support for designing help facilities for the novice, such as a help key. Anxiety, attitude, and closure were also discussed in the literature as affecting the learning and performance of the novice computer user. The application of human factors to the training of the novice computer user was another area covered in the review. Literature on programming in the future home was discussed, which included recommendations for making computers more useful in that environment. Various implications for future research were presented, including methods to treat computer anxiety as well as design techniques to assist the novice.
Type-Checking in an Untyped Language BIBA 157-167
  Allan Ramsay
It is argued that typed variables and functions are inappropriate for languages which allow functional arguments, data types defined by predicates, and conditional expressions that test data types. However, it is still possible to do some compile-time type-checking for such languages. This paper presents a technique for inferring data types in an untyped language, and a program that uses this technique to show where type constraints are obeyed or violated and where run-time checks are needed.
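   A small sketch in the spirit of the paper (our illustration, not Ramsay's program): infer what can be inferred at compile time, flag outright violations, and mark the residue for run-time checks.
      # Our toy illustration, not Ramsay's program: a tiny expression
      # language with no declared types.  'unknown' marks places where no
      # compile-time decision is possible and a run-time check is needed.
      def infer(expr, env, checks):
          kind = expr[0]
          if kind == 'lit':
              return 'bool' if isinstance(expr[1], bool) else 'int'
          if kind == 'var':
              return env.get(expr[1], 'unknown')
          if kind == 'add':                      # both operands must be int
              for sub in expr[1:]:
                  t = infer(sub, env, checks)
                  if t == 'unknown':
                      checks.append(('int check needed', sub))
                  elif t != 'int':
                      raise TypeError(f'+ applied to {t}')   # violation
              return 'int'
          if kind == 'if':                       # condition must be bool
              t = infer(expr[1], env, checks)
              if t == 'unknown':
                  checks.append(('bool check needed', expr[1]))
              elif t != 'bool':
                  raise TypeError('non-boolean condition')
              t1, t2 = infer(expr[2], env, checks), infer(expr[3], env, checks)
              return t1 if t1 == t2 else 'unknown'   # branches may disagree
          raise ValueError(kind)

      checks = []
      prog = ('if', ('var', 'p'), ('lit', 1), ('add', ('var', 'x'), ('lit', 2)))
      print(infer(prog, {'x': 'int'}, checks), checks)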
A Methodology for Interactive Evaluation of User Reactions to Software Packages: An Empirical Analysis of System Performance, Interaction, and Run Time BIBA 169-188
  Avi Rushinek; Sara F. Rushinek; Joel Stutz
This study deals with identifying primary factors which determine the usefulness of computer-assisted instruction (CAI). Usefulness is defined as user performance score. The study establishes linear models which explain the effects of CAI modifications designed to improve the utilization of computer facilities as well as users' performance.
Instructional Manipulation of Users' Mental Models for Electronic Calculators BIBA 189-199
  Piraye Bayman; Richard E. Mayer
A mental model for a calculator or computer refers to the user's conception of the "invisible" information processing states and transformations that occur between input and output. This paper explores the following ideas concerning users' mental models for electronic calculators: (1) users differ greatly in their mental models in spite of similar "hands-on" experience; (2) users often tend to develop either impoverished or incorrect models; (3) users can be encouraged to develop more useful mental models through instructional intervention.
The Depth/Breadth Trade-Off in the Design of Menu-Driven User Interfaces BIBA 201-213
  John I. Kiger
An experiment is reported investigating the role of depth and breadth of menus and tree structures in user interfaces for information-retrieval systems. The results suggest minimizing the depth of tree structures by providing menus of eight or nine selections each. This limit on menu breadth is consistent with known limits on human information processing capacity.
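   The arithmetic behind the trade-off (our worked example, not the paper's data): with breadth b, a menu tree of depth d can index b**d targets, so broader menus shrink depth quickly.
      # Worked example (ours): reaching n targets with menus of breadth b
      # needs the smallest d with b**d >= n; breadth 8 reaches 4096 items
      # in 4 levels, where breadth 2 would need 12.
      def depth_needed(n, b):
          d, reach = 0, 1
          while reach < n:
              reach *= b
              d += 1
          return d

      for b in (2, 4, 8, 16):
          print(b, depth_needed(4096, b))   # 2->12, 4->6, 8->4, 16->3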
The Influence of Rule-Generated Stress on Computer-Synthesized Speech BIBA 215-226
  David L. McPeters; Alan L. Tharp
This paper reports on an investigation to understand better the effects of adding rule-generated stress to the output of a text-to-speech translator. An experiment was conducted in which subjects listened to questions and declaratives formed by computer-synthesized speech and responded with an answer to a question or a repetition of a declarative. To determine the effects of stress on the intelligibility of sentences, the subjects were divided into three groups. One group listened to unstressed output; a second group listened to output to which rule-generated stress had been added, while a third group listened to manually-polished synthesized speech.
   The results of the experiment are reported together with an analysis and a discussion of their relevance to the improvement of computer-synthesized speech.

IJMMS 1984 Volume 20 Issue 3

Editorial: Developments in Expert Systems (Extension) BIB 227
  M. J. Coombs
A Rational Reconstruction of the MYCIN Consultation System BIBA 229-317
  J. Cendrowska; M. A. Bramer
This article presents a detailed analysis and partial reconstruction in POP-2 of the control structure of MYCIN, arguably the best-known and most significant Expert System currently in existence. The aim is to aid the development of theory in the field and to assist those who wish to build their own working systems. Attention is focused, inter alia, on the production rules, the goal-directed backward chaining of rules that comprises the control structure and the parameters and context types employed, together with the data structures created during a consultation, and the system's use of "certainty factors" to handle uncertain information. A detailed account is given of how a typical consultation proceeds and some variants that can occur are considered. Developments to MYCIN since its original implementation in 1976 and its generalized version, EMYCIN, are also briefly described. The article identifies a number of gaps in the original reporting of MYCIN and presents a critique of a number of its features and an appraisal of the value of a MYCIN-like approach as a starting point for further Expert Systems development.
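   The "certainty factors" mentioned combine according to the published MYCIN/EMYCIN functions; the snippet below is our Python paraphrase, not the POP-2 reconstruction.
      # Standard MYCIN/EMYCIN certainty-factor combination (our Python
      # paraphrase, not code from the article).  CFs lie in [-1, 1]; the
      # mixed-sign case is the revised EMYCIN-era formula.
      def cf_combine(x, y):
          if x >= 0 and y >= 0:
              return x + y * (1 - x)              # confirming evidence
          if x <= 0 and y <= 0:
              return x + y * (1 + x)              # disconfirming evidence
          return (x + y) / (1 - min(abs(x), abs(y)))   # conflicting

      print(cf_combine(0.4, 0.6))    # 0.76: two supporting rules reinforce
      print(cf_combine(0.7, -0.3))   # ~0.57: conflict pulls the CF down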
Using an Expert System in Merging Qualitative and Quantitative Data Analysis BIBA 319-330
  J. G. Ganascia
Reasoning modelling and efficient numerical computation seem a priori incompatible: on the one hand, Expert Systems are usually designed to deal with symbolic data. They are able to solve problems in domains where computers failed before, but they are not able to perform sophisticated computation or complex numerical data analysis efficiently. On the other hand, many disciplines regularly face the need to analyse large volumes of numerical data, and many efficient algorithms have been developed in the past. Nevertheless, experience and specialized knowledge are required in numerical processing, either to understand output or to adjust input parameters. Our aim is to show that Expert Systems can be used to achieve these tasks, i.e. both to interpret symbolic features extracted from numerical data by classical techniques and to adjust input parameters according to automatic expert diagnosis. Expert Systems may therefore be used not only to deal with symbolic data, but also to merge symbolic data with numerical data. We present in this article the overall LITHO project, a numerical processing chain that includes an Expert System, describing the Expert System itself and the whole chain surrounding it. Moreover, we show that despite their success and their suitability for inclusion in numerical data processing, diagnostic systems need to be complemented by a diagnostic system environment containing tools both for interpreting results and for preventing contradictions in the Knowledge Base and in the Fact Base.

IJMMS 1984 Volume 20 Issue 4

An Algorithm for an Intelligent Arabic Computer Terminal BIBA 331-341
  A. J. Al-Khalili
A novel method of eliminating unnecessary keys at the Arabic terminal is suggested. Intelligence is provided for the terminal in such a way that the shape of a letter inputted by the operator is determined by the terminal rather than by the operator, as is customary now.
   By using the algorithm developed, it is possible to reduce the number of keys on the terminal by 18% while at the same time increasing the number of symbols used by 72%.
   The suggested method will increase the keying speed by as much as 100% since a major part of deciding the shape of the letter is shifted from the operator's work to the intelligence within the terminal.
   This method is applicable to all Arabic text-inputting machines, from ordinary Arabic electric typewriters to word processors, lithographs, etc. The method can also be applied to other terminals of the same language family, such as Persian, Urdu, Pashto, etc.
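   A hypothetical sketch of the kind of rule involved (not Al-Khalili's algorithm): the terminal, rather than the typist, chooses each letter's contextual form from its neighbours.
      # Hypothetical sketch (not the paper's algorithm): pick each
      # letter's form from context.  Six Arabic letters join only to the
      # right, which forces the following letter out of a joined form.
      NON_JOINING = set('ادذرزو')        # alif, dal, dhal, ra, zay, waw

      def shape(word):
          forms = []
          for i, ch in enumerate(word):
              joins_prev = i > 0 and word[i - 1] not in NON_JOINING
              joins_next = i < len(word) - 1 and ch not in NON_JOINING
              if joins_prev and joins_next:
                  form = 'medial'
              elif joins_prev:
                  form = 'final'
              elif joins_next:
                  form = 'initial'
              else:
                  form = 'isolated'
              forms.append((ch, form))
          return forms

      print(shape('كتاب'))   # the alif breaks the join, so ب is isolated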
Holophrasted Displays in an Interactive Environment BIBA 343-355
  Scott R. Smith; David T. Barnard; Ian A. Macleod
This article presents a general algorithm for prettyprinting and holophrasting interactively created objects. It is shown how an environment with knowledge of the entities it manipulates can automatically produce cathode-ray tube displays with more helpful contextual information for the user than a traditional display of contiguous lines from a source file.
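   A minimal sketch of holophrasting (ours, not the authors' algorithm): structure below a depth threshold is elided to a placeholder so the display keeps the surrounding context visible.
      # Our minimal illustration of holophrasting: nodes deeper than a
      # threshold collapse to a placeholder, keeping context on screen.
      def show(node, depth=0, limit=2):
          name, children = node
          pad = '  ' * depth
          if children and depth >= limit:
              print(f'{pad}{name} <...>')     # holophrast: detail suppressed
              return
          print(pad + name)
          for child in children:
              show(child, depth + 1, limit)

      tree = ('proc', [('decls', [('var x', [])]),
                       ('body', [('if', [('cond', []), ('then', [])])])])
      show(tree)   # 'if' prints as "if <...>"; expanding it would recurse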
An Investigation of the Utility of Flowcharts During Computer Program Debugging BIBA 357-372
  D. J. Gilmore; H. T. Smith
An experiment was performed to investigate whether flowcharts improved the speed and efficiency of computer program debugging. Twenty-four subjects were given six problems, each a program containing one error. The errors could be located by studying the behaviour of the program. The subjects were divided into three groups of eight and were given the programs either as a listing, a standard notation flowchart or as a Bowles structure diagram. No significant differences were found between these three conditions for any of the three dependent variables, but there were differences in performance between problems. The analysis of performance variation across conditions and problems implies that flowchart usefulness may not be a clear-cut issue. The results suggest that both the nature of the task and the individual programmer characteristics are important determinants of flowchart utility. A framework is presented which emphasizes these factors and which is generalizable to other aspects of programming performance.
Natural Artificial Languages: Low Level Processes BIBAK 373-419
  Gary Perlman
An artificial language is one created for concise and precise communication within a limited domain such as mathematics. A natural artificial language is one that people find easy to learn and use. I discuss low level properties of natural artificial languages, especially those in which names are chosen for concepts, and symbols are chosen for names, a class of artificial languages I call linguistically mediated artificial languages. These properties include choosing mnemonic symbols for names, and suggestive names for concepts, and using both internally and externally consistent syntax. I outline a model of processing linguistically mediated artificial language and present results from experiments in support of the model. The results of the experiments are applied to the design of a user interface to a programming system, demonstrating their practicality along with their theoretical interest. The research shows the trade-offs in designing natural artificial languages: naturalness in a specific domain is gained at the cost of generality for other domains.
Keywords: user interfaces, cognitive psychology, human factors, systems development

IJMMS 1984 Volume 20 Issue 5

Modelling Degrees of Item Interest for a General Database Query System BIBA 421-443
  Neil C. Rowe
Many databases support decision-making. Often this means choices between alternatives according to partly subjective or conflicting criteria. Database query languages are generally designed for precise, logical specification of the data of interest, and tend to be awkward in these circumstances. Information retrieval research suggests several solutions, but there are obstacles to generalizing these ideas to most databases.
   To address this problem we propose a methodology for automatically deriving and monitoring "degrees of interest" among alternatives for a user of a database system. This includes (a) a decision theory model of the value of information to the user, and (b) inference mechanisms, based in part on ideas from artificial intelligence, that can tune the model to observed user behavior. This theory has important applications to improving efficiency and cooperativeness of the interface between a decision-maker and a database system. We have performed some preliminary experiments with it.
Experimental Study of a Two-Dimensional Language vs Fortran for First-Course Programmers BIBA 445-467
  Melvin Klerer
The variability in programming performance for a group of novice Fortran programmers was measured over a set of problems from an introductory programming text. A wide variation was observed despite the elementary nature of each problem and the relatively homogeneous subject group. The implications of these results are examined. Another experiment measured the comparative performance of programming novices using Fortran and the Klerer-May two-dimensional (2-D) language. The results indicated that the 2-D language was much more economically efficient than Fortran for the subject groups in the areas of scientific/engineering application programming.
Rough Classification BIBA 469-483
  Zdzislaw Pawlak
This article presents a new concept of approximate data analysis, based on the idea of a "rough" set. The notion of an approximate (rough) description of a set is introduced and investigated. An application to medical data analysis is shown as an example.
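   The core definitions are standard (the toy data below is ours): a set is approximated from below by the indiscernibility classes it wholly contains, and from above by those it merely touches.
      # Standard rough-set approximations (toy data ours).  'partition'
      # holds the indiscernibility classes of the universe.
      def rough(partition, X):
          lower, upper = set(), set()
          for block in partition:
              if block <= X:
                  lower |= block       # certainly in X
              if block & X:
                  upper |= block       # possibly in X
          return lower, upper

      partition = [{1, 2}, {3, 4}, {5}]
      lo, up = rough(partition, {1, 2, 3})
      print(lo, up, up - lo)   # {1, 2}  {1, 2, 3, 4}  boundary {3, 4}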
Expressive Power of Knowledge Representation Systems BIBA 485-500
  Ewa Orlowska; Zdzislaw Pawlak
In this article we attempt to clarify some aspects of expressive power of knowledge representation systems. We show that information about objects provided by a system is given up to an indiscernibility relation determined by the system and hence it is incomplete in a sense. We discuss the influence of this kind of incompleteness on definability of concepts in terms of knowledge given by a system. We consider indiscernibility relations as a tool for representing expressive power of systems, and develop a logic in which properties of knowledge representation systems related to definability can be expressed and proved. We present a complete set of axioms and inference rules for the logic.
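   The indiscernibility relation at the heart of this analysis has the standard Pawlak form (our restatement in the usual notation): objects x and y are indiscernible with respect to an attribute set B of an information system when every attribute in B assigns them the same value,
      x \,\mathrm{IND}(B)\, y \iff \forall a \in B \colon f(x, a) = f(y, a),
   so a concept is definable from the system's knowledge exactly when it is a union of IND(B)-classes; anything else can only be approximated.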
Epistemological Questions Raised by the Metasystem Paradigm BIBA 501-509
  John P. van Gigch
The metasystem paradigm attempts to resolve epistemological issues which have existed from the beginning of time. These issues are recalled here to provide readers and scholars with a list to which they can refer when working in quest of possible answers.
Preserving the Integrity of the Medium: A Method of Measuring Visual and Auditory Comprehension of Electronic Media BIBA 511-517
  Mary Alice White; Barbara Sandberg; Eda Behar; Jean Mockler; Elizabeth Perez; Janine Pollack; Kenneth Rosenblad
This study reports the development of two techniques to assess comprehension of television which "respect the integrity of the medium". A cut, defined as that segment of the videotape corresponding to one particular camera angle, was the unit of measurement. Random cuts were made in the videotape of a television program and presented to 99 fourth- and fifth-grade boys and girls. The subjects watched the videotape with 20 cuts blanked out; half of the subjects had to select one of five visual choices for each visual blank cut, while the other half had to choose one of five auditory choices for each auditory blank cut. The choices, including the correct choice, were taken from the videotape and varied in appropriateness. The children were able to perform the two tasks, receiving mean scores of 54% and 61% correct, which compares favorably with comprehension scores for reading.
   These two techniques are described as tools for assessing comprehension of electronic media in the same medium in which the information was received. The significance of this for further research is suggested.

IJMMS 1984 Volume 20 Issue 6

An Experimental Evaluation of Delimiters in a Command Language Syntax BIBA 521-535
  M. L. Schneider; K. Hirsh-Pasek; S. Nudelman
This article examines the effects of delimiters on the cognitive processes involved in command construction and interpretation. After a theoretical analysis of the impact of delimiters on commands, two techniques of delimiter usage were examined: single-level delimiters (space) and two-level hierarchical delimiters (space and comma). Using a within-subject experimental design with 20 subjects, we found that hierarchical delimiters improved the ability of individuals to distinguish fields and identify the grammatical correctness of the commands. This study suggests that the ability to organize elements of commands mentally is enhanced when fields are separated by one delimiter and items within fields by another. Other factors, not examined in this study, should have an impact on production. They may include the name of the command, the order in which operands are presented, and the experience level of the user.
On the Complexity of Recursion in Problem-Solving BIBA 537-544
  M. C. Er
The importance of paying attention to the complexity of recursion in problem solving is stressed. Many ill-founded beliefs and doctrines on constructing recursive algorithms are challenged. The Tower of Hanoi problem and its variant are used as concrete examples for illustrating that many seemingly correct recursive algorithms are, indeed, invalid or non-optimal. A simple context-free grammar for generating strings of balanced parentheses is then used to show the difficulty of programming recursive algorithms in block-structured languages. Other factors contributing to the difficulty in understanding recursive algorithms implemented in block-structured languages are also identified. It is suggested that more research needs to be done to foster the science of recursive programming.
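   For reference, the standard recursive Tower of Hanoi solution the paper takes as its point of departure (the code is ours):
      # The classic recursive solution (our code): move n-1 discs aside,
      # move the largest, then restack.  2**n - 1 moves, provably optimal.
      def hanoi(n, src, dst, spare, moves):
          if n == 0:
              return
          hanoi(n - 1, src, spare, dst, moves)   # clear the way
          moves.append((src, dst))               # move the largest disc
          hanoi(n - 1, spare, dst, src, moves)   # restack on top of it

      moves = []
      hanoi(3, 'A', 'C', 'B', moves)
      print(len(moves), moves[0])   # 7 moves; the first is ('A', 'C')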
An Experimental Comparison of Tabular and Graphic Data Presentation BIBA 545-566
  Matthew Powers; Conda Lashley; Pamela Sanchez; Ben Shneiderman
We present the results of an experiment designed to test the hypothesis that more usable information can be conveyed using a combination of graphical and tabular data than by using either form alone. Our independent variables were memory (recall and non-recall) and form (tables, graphs, or both). Comprehension was measured with a multiple-choice exam consisting of three types of questions (retrieve, compare, or compare/calculate answers). Both the non-recall and tabular treatments significantly increased comprehension. Combinations of graphs and tables produced slower but more accurate performance. An executive should use the form with which he/she is most familiar and comfortable.
A Description of Structural Change in a Central Place System: A Speculation Using Q-Analysis BIBA 567-594
  John R. Beaumont
It is argued that Q-analysis, a language of structure, offers a more useful description of central place systems than the traditional approaches based on multivariate statistical procedures. More specifically, it is possible to consider explicitly the reciprocal symbiosis between the structure of a central place system and the behaviour of consumers and producers over time. To illustrate the ideas, a description of the Lincolnshire central place system for the period 1863-1913 is presented, and the results are compared with those generated by a principal components analysis. Finally, it is argued that links with bifurcation theory may help to provide an improved description of structural change in a central place system.
Cluster Analysis and Q-Analysis BIBA 595-604
  S. M. Macgill
In this article the correspondence between a characteristic algorithm of Q-analysis and the single-link method of cluster analysis is noted. Implications of the correspondence are discussed and the more important differences between the approaches of Q-analysis and cluster analysis are brought out.
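   A sketch of the correspondence being noted (our illustration; the data and threshold are invented): linking two simplices whenever they share at least q+1 vertices and taking connected components mirrors single-link clustering at level q.
      # Our illustration of the noted correspondence: q-connected
      # components of a family of vertex sets behave like single-link
      # clusters, with shared-vertex count as the similarity level.
      def q_components(simplices, q):
          comps = []
          for name, verts in simplices.items():
              linked = [c for c in comps
                        if any(len(verts & simplices[m]) >= q + 1 for m in c)]
              merged = {name}
              for c in linked:
                  merged |= c
                  comps.remove(c)
              comps.append(merged)
          return comps

      simplices = {'A': {1, 2, 3}, 'B': {2, 3, 4}, 'C': {7, 8}}
      print(q_components(simplices, 1))   # A, B share 2 vertices: [{A, B}, {C}]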