IJMMS Tables of Contents: 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20

International Journal of Man-Machine Studies 10

Editors: B. R. Gaines; D. R. Hill
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Links: Table of Contents
  1. IJMMS 1978 Volume 10 Issue 1
  2. IJMMS 1978 Volume 10 Issue 2
  3. IJMMS 1978 Volume 10 Issue 3
  4. IJMMS 1978 Volume 10 Issue 4
  5. IJMMS 1978 Volume 10 Issue 5
  6. IJMMS 1978 Volume 10 Issue 6

IJMMS 1978 Volume 10 Issue 1

Guest Editorial BIB 1-2
  P. Hajek
The GUHA Method -- Its Aims and Techniques (Twenty-Four Questions and Answers) BIBA 3-22
  Petr Hajek; Tomas Havranek
Basic notions and facts concerning the GUHA method of mechanized hypothesis formation are explained in a concise and comprehensible form. The paper contains a full bibliography of the GUHA method.
Some Remarks on Computer Realizations of GUHA Procedures BIBA 23-28
  Jan Rauch
Experience from the work on a computer realization of the GUHA method (implicational and associational version) is summarized and some general observations are presented.
An Application of the GUHA Method in Medicine BIBA 29-35
  Z. Renc; K. Kubat; J. Kourim
In a group of Prague children followed up regularly from birth to 6 years of age, the relationships between their health, anthropological and social characteristics were investigated. The data obtained were processed by the GUHA method. A significant association was revealed between more frequently ill children (7 or more acute diseases in the follow-up period) and their stay in collective institutions (i.e. crèches, kindergartens), regardless of its length. By contrast, no relationship was found between a higher number of diseases and a thin, slight body build. Further, a relationship was found between the length of breast-feeding on the one hand and the number of diseases and body constitution (characterized by the weight ratio) on the other. The somatic and health development of children breast-fed for a longer time (over 60 days) was, on average, more harmonious. The development of children breast-fed briefly (less than 60 days) showed extreme values of the properties investigated (i.e. values exceeding the standard deviations).
   These apparently plain conclusions are the result of a detailed investigation and evaluation, by the GUHA method, of all data obtained, i.e. of all conjunctions of the properties investigated for which a relationship with higher morbidity or with the length of breast-feeding was confirmed. The GUHA method, based on processing conjunctions of individual properties, makes it possible to place the basic results in a broader context.
   The study showed the suitability of the GUHA method for medical disciplines. It has two advantages. First, it gives detailed answers to questions concerning possible relations between the properties followed up. Second, some of the responses it produces are unexpected to such a degree that they call for deeper explanation, and thus indicate directions for further investigation.
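The associations reported above can be pictured with a toy version of GUHA's core loop: enumerate conjunctions of dichotomous properties and evaluate each against a target property via its fourfold table. The sketch below is a minimal Python illustration using invented property names (`in_creche`, `often_ill`, `breast_fed_long`), not the authors' implementation, and it substitutes a simple confidence threshold for GUHA's statistically founded quantifiers.

```python
from itertools import combinations

def fourfold(data, antecedent, target):
    """Fourfold-table counts for the hypothesis 'antecedent => target'.

    data: list of dicts mapping property name -> bool.
    Returns (a, b): a = cases satisfying the antecedent and the target,
    b = cases satisfying the antecedent but not the target.
    """
    a = b = 0
    for case in data:
        if all(case[p] for p in antecedent):
            if case[target]:
                a += 1
            else:
                b += 1
    return a, b

def founded_implications(data, properties, target, min_support=2, min_conf=0.9):
    """Keep conjunctions whose confidence a/(a+b) meets the threshold --
    a toy stand-in for a GUHA implicational quantifier."""
    hits = []
    for r in (1, 2):  # conjunctions of one or two properties
        for ant in combinations(properties, r):
            a, b = fourfold(data, ant, target)
            if a >= min_support and a / (a + b) >= min_conf:
                hits.append((ant, a, b))
    return hits
```

Real GUHA procedures replace the confidence test with quantifiers backed by statistical tests and prune the enumeration with deduction rules, as the preceding papers in this issue describe.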
On Interpretation of GUHA Results BIBA 37-46
  Z. Renc
By interpretation of the results obtained by means of the GUHA method we mean their structuring and mutual comparison with the aid of adopted criteria of importance. The purpose is to become well oriented in the problems under examination and to make it possible to select the most important results for postprocessing. In the present paper, we confine ourselves to the most typical starting situation met in interpreting results obtained by means of the GUHA method with implicational and associational quantifiers. Two ways of processing the given set of observational statements are described in greater detail and illustrated by examples. These can serve as a basis for building up a system of automatic interpretation or, more generally, automatic postprocessing of the results.
GUHA-Style Processing of Mixed Data BIBA 47-57
  Tomas Havranek; Dan Pokorny
In the present paper some suggestions concerning the GUHA-style processing of mixed data are presented. Mixed data are data obtained by simultaneous observation of dichotomous, multinomial and real-valued quantities. In the first part of the paper, attention is focused on mixed dichotomous and multinomial data, and in the second part on mixed dichotomous and real-valued data. In both cases reasonable sets of relevant questions (hypotheses) are defined and, in the first case, algorithms for evaluating these sets in data are given. More complex questions are investigated in the following two papers.
Enumeration Calculi and Rank Methods BIBA 59-65
  Tomas Havranek
An observational characterization of two classes of rank methods is given. This characterization enables one to analyse some GUHA procedures that are based on statistical rank methods and intended for use in the treatment of mixed two-valued and rational-valued data. Some remarks concerning the notion of the r-problem and its solution are included.
A GUHA Procedure with Correlational Quantifiers BIBA 67-74
  T. Havranek; J. Vosahlo
A GUHA procedure with correlational quantifiers is presented. The quantifiers used, the deduction rules and the algorithms are described, and the possibilities for computer implementation are discussed.
The GUHA Method and Desk Calculators BIBA 75-86
  Dan Pokorny
A general discussion concerning the advantages of direct access to computing devices for the iterative use of GUHA procedures is presented (section 1). Section 2 gives a brief survey of GUHA procedures implemented on desk calculators. Section 3 contains an example of a desk-calculator-oriented GUHA procedure, including an application in physiology. The procedure concerns a new method of identifying sources of dependence in a two-way contingency table. Its program implementation is described in the Appendix.
Statistics of Multidimensional Contingency Tables and the GUHA Method BIBA 87-93
  Tomas Havranek
The statistics of multidimensional contingency tables based on two-valued data are discussed. The possibility of analysing data having 30-50 qualities is considered and the role of the GUHA method in the present context is elucidated.

IJMMS 1978 Volume 10 Issue 2

Manual Optimization of Ill-Structured Problems BIBA 95-111
  James R. Buck; Walton M. Hancock
This paper describes an empirical study on human operators optimizing ill-structured problems over a variety of problem conditions. Performance and exploratory characteristics of the operators were examined as a function of these conditions relative to the random automatic optimization method. Manual optimization performance exceeded that of the automatic method under most conditions. In those problems containing more controls to be optimized and where there were few trials available, manual optimization was far more effective. Operator performance was impaired in solving problems which contained noise in the reported pay-off. Exploratory characteristics of these operators changed with the problem conditions. Based upon these characteristics, manual optimization may be described as a low-order gradient optimizer with adaptation to different problem conditions.
A Syntactic View of Semantic Networks BIBA 113-119
  D. Partridge
So-called "semantic networks" are notations which make explicit much of the implicit syntax of, say, English. The mapping from an English sentence to a portion of the network, often claimed to constitute "understanding" of the sentence, is a mapping from one notation (the English language) to another (the "semantic network"). This viewpoint in no way belittles the achievements that such a mapping, in general terms, may represent, nor does it detract from the potential value of the "semantic network" notation. What it does do is, first, highlight the customary arbitrariness with which the terms "syntax" and "semantics", and hence "understanding", are applied, and lead to a more systematic application of these concepts. Second, it raises the question of why one notation (grammar, syntax) is more useful than another, and again suggests an answer. Thus we can formulate rather differently the goals of a "semantic network" notation, in contrast to the usual goal of explicating the "semantic" content of language constructs, a goal that tends to lack either realism or substance, since "semantics" stubbornly resists attempts at an absolute, formal circumscription that is both conceptually adequate and logically consistent.
Schenker's Theory of Tonal Music -- Its Explication Through Computational Processes BIBA 121-138
  R. E. Frankel; S. J. Rosenschein; S. W. Smoliar
It has been possible to depict the constructs of a theory of tonal music developed by Heinrich Schenker (1868-1935) as a system of computational processes. The basis of Schenker's theory involves the existence of musical proto-structures which expand into tonal compositions through a well-defined set of rewriting rules. On the basis of these rewriting rules, it becomes possible to describe musical structure in terms of tree transformations. Using the programming language LISP, we have implemented a data structure that models the process by which the hierarchy is created. To demonstrate our system, we present structural descriptions of excerpts of tonal music as LISP programs. While our data and control structures help to clarify many of Schenker's ideas, we shall also focus upon those aspects of the theory found during the course of this study to be problematic.
FOCUS on Education -- An Interactive Computer System for the Development and Analysis of Repertory Grids BIBA 139-173
  Mildred L. G. Shaw; Laurie F. Thomas
Most teachers and tutors would agree that they achieve their best results when they "start from where the learner is". However, the techniques offered by psychology to help the teacher, such as attitude scales, personality tests and questionnaires, are less than satisfactory. The Kelly repertory grid is a tool recently being used more extensively in education to raise the learner's awareness of the learning process, but many users have found the analysis of the grid difficult and unhelpful, and its structure too rigid.
   This paper describes two BASIC computer programs to elicit and analyse grids easily and clearly. FOCUS uses a two-way cluster-analytic method to re-order the constructs and the elements so as to highlight similarities and differences in the grid, and displays the focused results together with tree diagrams of the similarities among elements and constructs. PEGASUS is an interactive program which conversationally elicits a grid, processes it, and offers real-time feedback commentary on the results.
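The heart of FOCUS, re-ordering a grid so that similar rows end up adjacent, can be suggested by a toy greedy pass. The Python sketch below (the paper's programs were in BASIC) is far cruder than FOCUS's actual two-way cluster analysis; the ratings are invented, and applying the same function to the transposed grid would reorder the elements as well as the constructs.

```python
def cityblock(u, v):
    """City-block distance between two rating rows."""
    return sum(abs(a - b) for a, b in zip(u, v))

def focus_order(rows):
    """Greedy reordering: start from row 0 and repeatedly append the
    remaining row closest to the one just placed, so similar rows
    become neighbours -- a toy stand-in for FOCUS's clustering."""
    order = [0]
    remaining = set(range(1, len(rows)))
    while remaining:
        last = rows[order[-1]]
        nxt = min(remaining, key=lambda i: cityblock(rows[i], last))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

For a grid whose rows are constructs and whose columns are elements rated on a 1-5 scale, `focus_order` returns the row permutation; FOCUS additionally builds the tree diagrams from the pairwise similarities.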
An Economic Basis for Certain Methods of Evaluating Probabilistic Forecasts BIBA 175-183
  Judea Pearl
This paper deals with the question of selecting an appropriate measure of compatibility (also called a scoring rule) between probabilistic models and empirical data. It is natural to require that if the model predicts the occurrence of an observed event with probability p < 1, then the compatibility measure should reflect the actual economic damage caused to the user who acts as though the event (which is about to happen) has only a p < 1 chance of occurring.
   The paper establishes relations between the compatibility measure and the user's distribution of future pay-offs and shows that each of the commonly used measures (e.g. logarithmic, spherical, quadratic) represents a natural payoff emanating from an economic environment of a specific character. Using these relations the problem of selecting an appropriate measure of compatibility reduces to that of characterizing the economic impact of the forecast at hand.
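For reference, the three measures named above have standard closed forms. The sketch below states them for a forecast vector p over mutually exclusive outcomes, with i the index of the outcome that occurred (the notation is ours, not the paper's).

```python
import math

def log_score(p, i):
    # Logarithmic score: the log of the probability assigned to the
    # outcome that actually occurred.
    return math.log(p[i])

def quadratic_score(p, i):
    # Quadratic (Brier-type) score: 2*p_i - sum_j p_j^2.
    return 2 * p[i] - sum(q * q for q in p)

def spherical_score(p, i):
    # Spherical score: p_i divided by the Euclidean norm of the forecast.
    return p[i] / math.sqrt(sum(q * q for q in p))
```

All three are proper scoring rules: a forecaster maximizes expected score by reporting true beliefs, which is what lets Pearl tie each rule to a distribution of user pay-offs.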
Analysis of Subjective Assessments of Structural Failures BIBA 185-195
  D. I. Blockley
A classification of structural failures is discussed and then expanded into a set of parameter statements which could be used in a process for predicting the likelihood of future structural failures. The parameter statements are subjectively assessed for 23 major structural accidents and then analysed using fuzzy sets. The failures are ranked in order of "inevitability". Human errors of one form or another proved to be the dominant reasons for the failures considered. A tentative method for assessing the safety of a structure allowing for the possibility of human error is outlined.
Empirical and Formal Language Design Applied to a Unified Control Construct for Interactive Computing BIBA 197-216
  David W. Embley
This paper reports the results of the application of both empirical and formal methods to the design of a proposed unified control construct for interactive computing. The empirical method is based on the use of controlled experiments on programming behavior to investigate alternative language constructs. The formal method requires the precise specification of language constructs in order to expose strengths, weaknesses, and inconsistencies. Insights gained that were not obvious from mere observation and introspection are reported, and supporting data for design decisions are presented.

Book Reviews

"Representation and Understanding: Studies in Cognitive Science," edited by D. G. Bobrow and A. Collins BIB 217-220
  S. Soule
"Mechanisms of Speech Recognition," by W. A. Ainsworth BIB 217-220
  W. Jassem

IJMMS 1978 Volume 10 Issue 3

Editorial: Man-Computer Conference Issue BIBA 221-223
  D. R. Hill; Brian Gaines
This issue is made up of invited papers selected from two major conferences of 1977: Man-Computer Communications at Calgary in May, and Applied General Systems Research at Binghamton in August (papers from its Artificial Intelligence Symposium).
Man-Computer Communication -- What Next? BIB 225-232
  B. R. Gaines
CHOREO: An Interactive Computer Model for Dance BIBA 233-250
  G. J. Savage; J. M. Officer
In this paper, the need for literacy in dance is established and an introduction to dance notation systems is given: specifically, the Massine notation method and the Labanotation method are described in brief. The use of interactive computer graphics as a tool for both learning and interpreting dance notation is introduced. The operation of a computer graphics model of a dancer is outlined and two methods of dancer-machine interaction, for purposes of describing a dance to the computer and obtaining the resultant simulation, are described. The first method of interaction is based on the Massine method of notation where the dancer describes a dance to the computer in the language of dance by means of a system of menus and a ten-key function box. The second method of interaction is based on the Labanotation method where the dancer describes the dance to the computer by selecting the actual Laban symbols from a menu using an acoustic pen.
The Computer as an Artistic Tool BIBA 251-262
  Lynn Sveinson
This paper is based on the belief that science and art are not necessarily "two cultures", but can be combined successfully as a result of their inescapable influences upon each other. A brief justification of this position is presented, with some emphasis on artistic investigations using the computer. The machine's merits are viewed not only as a means of mass communication or of producing pictures, but also as a potential model-builder for the formalization of aesthetic concepts.
   The remainder of the paper details recent and current research on such uses of the machine -- in particular the use of the computer as an artistic tool. The work completed was concerned with the possibilities inherent in using computer-plotted output to solve problems similar to those an artist solves in manual drawings or paintings, particularly the problem known as "readability of surface". An account of the program parameters and methods used to produce the plotted drawings is given and the results are discussed. Current research is then outlined. This new work involves an attempt to algorithmize basic design concepts, working first with the idea of "balance". The concepts and the methods to be used in experimenting with balance are explained.
   It is important to note that all the work is being carried out in co-operation with a professional artist. The hope is that the artist will be stimulated, by the many alternatives provided by the computer, in the exploration of design. Furthermore, it is hoped that eventually it will prove possible to formulate clearly some of the traditionally vague design concepts. The results to date are encouraging in their support of the interrelationship between science and art.
Computer Graphic Aided Music Composition BIBA 263-271
  Theo Goldberg; Guenther F. Schrack
Computer-aided design systems need not be restricted to technical tasks. They can be useful in a much larger area including, in particular, the fine arts. Computers as a tool have been used for quite some time to aid artists in producing graphics and to aid composers in producing music.
   The project described has been designed to unite both visual and audio compositions, to be presented simultaneously in multi-media performances.
   Using a high-level graphics programming language, an applications program was written to the specifications of the composer.
   The program allows the creation of aesthetically pleasing graphics interactively. The images serve two purposes. They represent an unconventional notation for the composition of electronic music and, at the same time, form the basis for slides for the visual presentation. The images, created with the aid of the computer graphics program, represent numerical information which is converted by a computer sound synthesis and composition program into electronic music. The images are called tendency masks which represent the macro-form of the composition, i.e. its frequency-time distribution and the density of sound.
   A general description of the design system and the graphics program will be given, and an evaluation of the difficulties in man-machine and man-man communications will be attempted. A recently completed multi-media work, composed with the aid of the system, will be described.
Template Mode -- An Aid for the Interactive Graphic User BIBA 273-283
  U. G. Lama
The continuing spread of low-cost interactive graphics from scientific and engineering fields into less technical areas has increased the demand for application- and user-oriented software that requires little or no knowledge of programming. The conundrum presented to the systems designer is thus how to tailor a module to a particular application and yet at the same time make the system flexible enough to be applicable to different fields. One solution which has proven effective can be called "User Template Mode". In this system, a user initially selects all the desired parameters and options from a set of generalized building blocks by means of a simple catechism. Thereafter, he need only invoke the template to obtain his customized interactive display automatically. This concept has been applied to a diverse range of problems, from program forecasting to log-sawing simulation.
A Program Structure for Event-Based Speech Synthesis by Rules within a Flexible Segmental Framework BIBA 285-294
  David R. Hill
A program structure based on recently developed techniques for operating system simulation has the required flexibility for use as a speech synthesis algorithm research framework. Synthesis is possible with less rigid time and frequency-component structure than with simpler schemes, and it allows much of the speech knowledge required for synthesis to be removed from the main driving structure and embodied as tables and procedures that may easily be modified or replaced. The program also meets real-time operation and memory-size constraints. The resulting view of speech structure, at the acoustic-segmental level, is that of time-ordered, perceptually relevant events, and is related to that used in the author's work on automatic speech pattern discrimination. The flexibility of the scheme for synthesis, and the excellent mutual independence of the many processes, with differing objectives, that must be run for realistic approximations to real speech variation, have proved a welcome release from earlier problems.
Trends in Artificial Intelligence BIBA 295-299
  Patrick Hayes
This paper discusses the foundations of Artificial Intelligence as a science and the types of answer that may be given to the question, "What is intelligence?" It goes on to compare the paradigms of Artificial Intelligence and general systems theory, and suggests that the links of general systems theory are closer to "brain science" than they are to Artificial Intelligence.
A Teachable Machine in the Real World BIBA 301-312
  Peter M. Andreae; John H. Andreae
The design of learning machines which can be taught how to solve problems in the real world is of importance in industrial automation and in other fields. This paper outlines a new approach to this problem and describes PURR-PUSS, a working system embodying this approach. The two key features of the system are "multiple context" and "novelty". We discuss the way in which these two features contribute to the teachability of PURR-PUSS.
MYCIN: A Knowledge-Based Consultation Program for Infectious Disease Diagnosis BIBA 313-322
  William Van Melle
MYCIN is a computer-based consultation system designed to assist physicians in the diagnosis of and therapy selection for patients with bacterial infections. In addition to the consultation system itself, MYCIN contains an explanation system which can answer simple English questions in order to justify its advice or educate the user. The system's knowledge is encoded in the form of some 350 production rules which embody the clinical decision criteria of infectious disease experts. Much of MYCIN's power derives from the modular, highly stylized nature of these decision rules, enabling the system to dissect its own reasoning and allowing easy modification of the knowledge base.
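The "modular, highly stylized" rules the abstract mentions can be suggested in a few lines. The sketch below shows a rule representation and MYCIN's certainty-factor arithmetic; the organisms, premises and numbers are invented placeholders, not rules from MYCIN's actual knowledge base, and real MYCIN drove its rules by backward chaining rather than the single forward pass used here.

```python
# Facts: proposition -> certainty factor in [0, 1].
# Rules: (premises, conclusion, rule_cf), read as
#   IF all premises THEN conclusion, with the rule's attached certainty.
RULES = [
    (("gram_negative", "rod_shaped", "anaerobic"), "bacteroides", 0.6),
    (("foul_smelling",), "bacteroides", 0.5),
    (("gram_negative", "rod_shaped"), "enterobacteriaceae", 0.4),
]

def infer(facts, rules, threshold=0.2):
    """One pass over MYCIN-style rules.

    A rule fires with cf = rule_cf * min(cf of its premises), provided
    the weakest premise exceeds the threshold. Evidence from several
    rules for the same conclusion combines as
        cf = cf1 + cf2 * (1 - cf1),
    MYCIN's combining function for positive certainty factors.
    """
    derived = dict(facts)
    for premises, conclusion, rule_cf in rules:
        weakest = min(derived.get(p, 0.0) for p in premises)
        if weakest <= threshold:
            continue
        cf = rule_cf * weakest
        old = derived.get(conclusion, 0.0)
        derived[conclusion] = old + cf * (1.0 - old)
    return derived
```

Because each rule is a self-contained datum, the system can display the chain of fired rules to justify a conclusion, which is the basis of MYCIN's explanation facility.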
A Paradigmatic Example of an Artificially Intelligent Instructional System BIBA 323-339
  John Seely Brown; Richard R. Burton
This paper describes research directed at understanding and designing artificially intelligent instructional systems that use their knowledge bases and problem-solving expertise to aid the student in several ways. A system is described that derives from the world of manipulatory mathematics based on attribute blocks and examples are given of interaction with it. The system can answer student questions and evaluate his theories as well as criticize his solution paths. It can also form structural models of his reasoning strategies to identify his fundamental misconceptions and determine when and how to provide remediation, hints or further instruction.

IJMMS 1978 Volume 10 Issue 4

Communication between Two Self-Organizing Systems Modelled by Controlled Markov Chains BIBA 343-366
  John S. Nicolis; E. N. Protonotarios; M. Theologou
The interaction between two self-organizing systems each possessing two hierarchical levels is dealt with. The dynamics of the lower levels Q,Q' of the systems involved are emulated as finite state controlled Markov chains; their transitional probability matrices are parameterized on control variables which are related to the probabilities of "pay-off" in an underlying two-agent game which simulates collectively all hierarchical levels below Q and Q', respectively. The higher levels W,W' are modelled by semi-Markov chains, whose mean holding times are inversely proportional to the degree of organization (or redundancy) of the respective lower level(s) Q,Q'.
   The higher hierarchical levels W,W' receive afferently from the respective lower levels Q,Q' collective properties which measure (a) the percentage of occupancy of a state selected a priori (on the lower level(s)) as a "homeostatic" one, and (b) the cross correlation(s) between the state sequences of the levels (Q,W') or (Q',W), respectively. The transition probabilities for the semi-Markov chains at the higher levels W,W' are functions of the above-mentioned quantities (a) and (b).
   The communication process between the two hierarchical systems is pursued as a bidirectional information transaction where the lower levels play the role of receivers and the higher levels play the role of transmitters -- standing for "experience" and "behavior", respectively.
   The inter-system communication becomes adaptive by built-in control efferent (feedforward) mechanisms exercised intra-systematically from the higher levels W,W' to their respective lower levels Q,Q' via the controlling parameters of the underlying games.
   In this paper an extensive computer program is undertaken in order to simulate such an adaptive communication process. The evolution with time of the behavioral mode (state) switching at the levels W and W' is computed for the specified control laws, randomly selected from all possible feedforward mechanisms, whose number increases exponentially with the number of quantization levels of the control variables. As far as the selection of the control algorithm is concerned, the criterion is the maximization of a "joint figure of merit": the long-term average of an appropriately weighted sum of the conflicting terms, namely the probabilities of the homeostatic states and the inter-systemic cross-correlations between the levels (W,Q') and (W',Q). The present work is inspired by contemporary psycho-physiological and biological research.
   The simulation on the computer of the mathematical model presented here may contribute to a better understanding of certain aspects of communication between organisms and suggest the design of new neurophysiological experiments.
   The purpose of this work is to propose cybernetical models for a comprehensive presentation of an extensive range of phenomena related to communication between biological, i.e. self-organizing, systems.
On Dealing with Quantification in Natural Language Utterances BIBA 367-394
  Peter B. Sheridan
This paper presents some preliminary and still quite rudimentary ideas growing out of on-going studies into the problem of representation of quantified information in English, in terms of linked and nested "attribute-value" structures. Specifically, we indicate how such representation can be accomplished within the kind of attribute-value framework provided by Heidorn's NLP (Natural Language Processor) system.
   A scheme for representing well-formed formulas of a sorted first-order logic (with descriptions) is first presented and serves as a bare conceptual framework for understanding the sequel. Then we indicate how certain commonly encountered forms of English language quantificational utterances may be reduced to canonical forms representable within such an underlying scheme. A few of many still outstanding logico-linguistic issues (viz.: ambiguities connected with 'a', 'any' and 'some'; multiple predication; and anonymous quantification) are noted and discussed. Numerous examples are included.
PRUF -- A Meaning Representation Language for Natural Languages BIBA 395-460
  L. A. Zadeh
PRUF -- an acronym for Possibilistic Relational Universal Fuzzy -- is a meaning representation language for natural languages which departs from the conventional approaches to the theory of meaning in several important respects.
   First, a basic assumption underlying PRUF is that the imprecision intrinsic in natural languages is, for the most part, possibilistic rather than probabilistic in nature. Thus, a proposition such as "Richard is tall" translates in PRUF into a possibility distribution of the variable Height(Richard), which associates with each value of the variable a number in the interval [0,1] representing the possibility that Height(Richard) could assume that value. More generally, a proposition, p, translates into a procedure, P, which returns a possibility distribution, Πp, with P and Πp representing, respectively, the meaning of p and the information conveyed by p. In this sense, the concept of a possibility distribution replaces that of truth as a foundation for the representation of meaning in natural languages.
   Second, the logic underlying PRUF is not a two-valued or multivalued logic, but a fuzzy logic, FL, in which the truth-values are linguistic, that is, are of the form true, not true, very true, more or less true, not very true, etc., with each such truth-value representing a fuzzy subset of the unit interval. The truth-value of a proposition is defined as its compatibility with a reference proposition, so that given two propositions p and r, one can compute the truth of p relative to r.
   Third, the quantifiers in PRUF -- like the truth-values -- are allowed to be linguistic, i.e. may be expressed as most, many, few, some, not very many, almost all, etc. Based on the concept of the cardinality of a fuzzy set, such quantifiers are given a concrete interpretation which makes it possible to translate into PRUF propositions exemplified by "Many tall men are much taller than most men," "All tall women are blonde is not very true," etc.
   The translation rules in PRUF are of four basic types: Type I -- pertaining to modification; Type II -- pertaining to composition; Type III -- pertaining to quantification; and Type IV -- pertaining to qualification and, in particular, to truth qualification, probability qualification and possibility qualification.
   The concepts of semantic equivalence and semantic entailment in PRUF provide a basis for question-answering and inference from fuzzy premises. In addition to serving as a foundation for approximate reasoning, PRUF may be employed as a language for the representation of imprecise knowledge and as a means of precisiation of fuzzy propositions expressed in a natural language.
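Zadeh's central example can be made concrete in a few lines. The membership function for "tall" below is an illustrative choice, not one taken from the paper; the hedges "very" (squaring the membership) and "more or less" (taking its square root) follow standard fuzzy-set practice.

```python
def tall(height_cm):
    """Membership of 'tall': an illustrative linear ramp from
    160 cm to 190 cm, not a function from the paper."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

def possibility(proposition_mu, value):
    # "Richard is tall" induces a possibility distribution over
    # Height(Richard): Poss(Height(Richard) = h) = tall(h).
    return proposition_mu(value)

def very(mu):
    # Type I (modification) rule, standard form: 'very' squares membership.
    return lambda x: mu(x) ** 2

def more_or_less(mu):
    # 'More or less' takes the square root, broadening the predicate.
    return lambda x: mu(x) ** 0.5
```

Linguistic truth-values and quantifiers receive the same treatment in PRUF: each is a fuzzy subset of the unit interval (or of cardinalities), so "very true" and "most" are computed with, rather than merely named.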
A Q-Analysis of Television Programmes BIBA 461-479
  J. H. Johnson
This paper documents an experiment in the application of R. H. Atkin's methodology of Q-analysis in the field of television. Relations are defined between programme types and time intervals for each of three British television channels, Anglia, BBC1 and BBC2, and the associated simplicial complexes are compared by Q-analysis. The structures are similar when each channel is compared with itself over two consecutive weeks, but quite different when one channel is compared with another. A theoretical discussion attempts to illustrate the relationships between the programme-time-interval backcloth and the viewing patterns it can support. The paper demonstrates how Q-analysis may be applied to programme schedules and suggests that this may be a fruitful new area of application.
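The comparison of simplicial complexes can be made concrete with a small sketch. Taking rows of a binary incidence matrix as simplices (programme types) and columns as vertices (time intervals), two simplices are q-near when they share at least q+1 vertices, and the first structure vector counts the q-connected components at each level. This is a generic illustration of Atkin's method with a made-up relation, not the paper's computation over the Anglia/BBC schedules.

```python
def q_analysis(incidence):
    """First structure vector of a binary relation.

    incidence: list of 0/1 rows (simplices over a common vertex set).
    Returns [(q, number of q-connected components)] from the top
    dimension down to q = 0.
    """
    n = len(incidence)
    # shared[i][j] = number of vertices simplices i and j have in common.
    shared = [[sum(a and b for a, b in zip(incidence[i], incidence[j]))
               for j in range(n)] for i in range(n)]
    top_q = max(shared[i][i] for i in range(n)) - 1
    structure = []
    for q in range(top_q, -1, -1):
        # Union-find over simplices of dimension >= q, linked when q-near.
        alive = [i for i in range(n) if shared[i][i] >= q + 1]
        parent = {i: i for i in alive}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for i in alive:
            for j in alive:
                if i < j and shared[i][j] >= q + 1:
                    parent[find(j)] = find(i)
        structure.append((q, len({find(i) for i in alive})))
    return structure
```

Comparing two schedules then amounts to comparing their structure vectors: similar vectors indicate similarly connected backcloths, which is the pattern the paper finds for a channel against itself week to week.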

Book Review

"Humanized Input: Techniques for Reliable Keyed Input," by Tom Gilb and Gerald M. Weinberg BIB 481
  Bruce Anderson

IJMMS 1978 Volume 10 Issue 5

Linguistic Models and Fuzzy Truths BIBA 483-494
  Ronald R. Yager
In this paper the linguistic value of truth is used to derive a procedure for the validation of models. The procedure uses fuzzy set theory to obtain the compatibility between observed data and model-generated data. Of significance is the fact that the data can be in linguistic form.
An Experimental Study of the Effects of Data Display Media on Decision Effectiveness BIBA 495-505
  Kenneth A. Kozar; Gary W. Dickson
This paper reports the results of an experimental study of the dependence of decision effectiveness on computer data display media. The experiment compared the cost and time performance of two groups of graduate business administration students making decisions in a computer-simulated production environment. Each group received the same information, one on a paper hard copy medium and the other on the screen of a cathode ray tube (CRT). The CRT users took a statistically significantly longer time than the hard copy users to make their decisions. Although costs were lower for the hard copy users, the differences in costs were not significant. No difference in the confidence placed in the decisions was detected.
Why an Examination was Slower On-Line than on Paper BIBA 507-519
  Wilfred J. Hansen; Richard Doring; Lawrence R. Whitlock
Although most interactive systems seem to facilitate user work, our system -- an interactive examination system -- increased the time required by as much as 100%. To explore this phenomenon we videotaped four students taking an exam interactively and on paper. Analysis showed the extra time went to system overhead, think time and trouble understanding what to do. We were able to explain the latter two categories as the result of four varieties of "uncertainty". Evidence included changes in review behavior and working habits, hesitation before leaving a page and the "context switch" time required to recognize even a familiar image. With subsequent improvements the increased time has been largely eliminated.
Adaptive Training of Perceptual-Motor Skills: Issues, Results, and Future Directions BIBA 521-551
  Gavan Lintern; Daniel Gopher
This review has two aims: the first is to assess adaptive training (AT) as a method for teaching control skills, and the second is to establish a conceptual framework that will allow a detailed analysis of adaptive manipulations and their influence on skill acquisition. The major studies in adaptive training research are described and evaluated. A critical examination of the various experiments reveals that there is less support for the application of adaptive manipulations to applied motor skill training than is generally believed. Some apparently favorable experiments have methodological and interpretive flaws that seriously weaken their conclusions. Other experiments that provide tenable support have characteristics that are unique in adaptive training research, so that the generality of their data is in doubt. The limitations of the data prevent any firm conclusions being drawn about the efficiency of adaptive training. However, a detailed analysis of motor skill theory and research indicates that some adaptive manipulations could be effective. Methodological and conceptual issues that are critical to successfully testing those manipulations are clarified in a discussion of the adaptive training concept. In addition, that discussion outlines several empirical tests that are needed to enable a more effective analysis of adaptive training.
The Performance of College Freshmen and Seniors During Co-Operative Problem Solving BIBA 553-568
  Robert N. Parrish
Teams of two college freshmen, two college seniors, or one freshman and one senior solved factually-oriented problems co-operatively. Subjects on a team worked in adjoining rooms and exchanged information by means of handwritten notes. Performance was assessed on time required to solve problems and on behavioural measures of activity. Problem solution times did not differ significantly among groups, though teams solved a geographic orientation problem significantly faster than an equipment assembly problem. Analyses of behavioural measures revealed significant differences among groups for two categories, between problems in nine, and between job roles in five. Interactions showed that subjects at different educational levels allocated their problem solution times in different ways. Information retrieval activities were affected by the nature of the problem as well as by educational levels.
A Text-to-Speech Translation System for Italian BIBA 569-591
  Leonardo Lesmo; Marco Mezzalama; Piero Torasso
This paper describes a text-to-speech translation system for the Italian language. It is based on a formal translator which converts a graphemic text into the corresponding phonemic representation and on a suprasegmental features descriptor. The translator is implemented according to the formal techniques developed for lexical and syntactical analysis of artificial languages. The suprasegmental features descriptor determines pause positions, phoneme durations and pitch contour on the sole basis of the punctuation marks and the accent marks present in the text.
Andre Tchaikovsky Meets the Computer: A Concert Pianist's Impromptu Encounter with a Musicianship Teaching Aid BIBA 593-602
  Martin R. Lamb
A brief account is given of the reaction of a professional musician to an experimental computer system designed to transcribe music played at a keyboard into a notation immediately comprehensible to a musician.
Interactive Instruction in Solving Fault Finding Problems BIBA 603-611
  J. B. Brooke; K. D. Duncan; E. C. Marshall
A training program is described which provides, during fault diagnosis, additional information about the relationship between the remaining faults and the available indicators. Since this information changes throughout the process of fault diagnosis, an interactive computer system seems to be the most suitable training implementation. The computer program developed for this purpose and the first results of experimental training are described.

IJMMS 1978 Volume 10 Issue 6

A State-Space Description of Transfer Effects in Isomorphic Problem Situations BIBA 613-623
  George F. Luger
Previous research has shown significant transfer for subjects solving two problems of isomorphic structure. The state-space representation of these problems is used to characterize their structure, and in particular, the state-space decomposition modulo isomorphic sub-problems is used to define "stages" within the problem's solution. Changes within these "stages" provide a framework for a more complete description and understanding of the transfer phenomenon.
Two Experimental Comparisons of Relational and Hierarchical Database Models BIBA 625-637
  Margaret Brosey; Ben Shneiderman
The data model is a central feature which characterizes and distinguishes database management systems. This paper presents two experimental comparative studies of two prominent data models: the relational and hierarchical models. Comprehension, problem solving and memorization tasks were performed by undergraduate subjects. Significant effects were found for the data model, presentation order, subject background and tasks.
A Procedure for Predicting Program Editor Performance from the User's Point of View BIBA 639-650
  D. W. Embley; M. T. Lan; D. W. Leinbaugh; G. Nagy
A trace-driven model is derived for predicting the terminal session times for program editing tasks performed on interactive computer systems. The external parameters of the model are user think time, user typing rate and the computer response time. Possible applications of the model, including the selection of the most suitable subset of a program editor for a given task or user profile, are discussed. By way of example, the model is applied to command subsets suitable for novice and intermediate programmers on two program editors at the University of Nebraska. Statistically significant results are obtained for the difference between expected session durations on the two systems.
A Design-Interpretation Analysis of Natural English with Applications to Man-Computer Interaction BIBA 651-668
  John C. Thomas, Jr.
Many behavioral scientists and most designers of man-computer interfaces view communication in a certain way. This viewpoint includes the implicit belief that communication from system A to system B essentially involves the encoding of some internal state in system A into an external statement for transmission to system B. System B decodes this message and changes its internal state. Communication is considered "good" to the extent that there is an isomorphism between the internal states of the two systems after the message has been sent. This paper argues that this view is inadequate both for an understanding of communication between two persons and as a theoretical foundation for any kind of man-computer interaction, particularly in natural language. Empirical results supporting this proposition are reported. In addition, an alternative view of the communication process is outlined. This view stresses the game-theoretic aspects of communication, the importance of viewing message-building as a constructive (rather than translational) process, the importance of metacomments, the multiplicity of channels involved in human natural language communication, and stresses that, under certain conditions, the "vagueness", "fuzziness" and ambiguity of natural language are assets, not liabilities. The paper concludes by discussing some ways these ideas could serve as possible guidelines for the design of man-computer interfaces. A major purpose of the paper is to encourage the expression of alternative views on these issues.
Fast Text-to-Speech Algorithms for Esperanto, Spanish, Italian, Russian and English BIBA 669-692
  Bruce Arne Sherwood
Fast, simple algorithms are presented for producing good-quality speech from a phonemic synthesizer in five different languages: Esperanto, Spanish, Italian, Russian and English. The Esperanto and Spanish algorithms can operate on standard written texts. For Italian the typist must occasionally mark the stressed syllable in a word, and for Russian the typist must mark all stressed syllables. For English, the typist must use a phonetic spelling and mark the stressed syllables: situations where this approach is viable include the addition of speech output to computer-based educational materials. A particularly advantageous phonetic spelling scheme for English is presented. Comparisons are made among the five languages as to the ease with which text-to-speech conversions can be made.
A Rule-Based Approach to Knowledge Acquisition for Man-Machine Interface Programs BIBA 693-711
  D. A. Waterman
The development of computer programs, called agents, that act as man-machine interfaces for computer users is described. These programs are written in RITA: the Rule-directed Interactive Transaction Agent system, and are organized as sets of IF-THEN rules or "production systems." The programs, or "personal computer agents," are divided into two main categories: those that interface the user to computer systems he wishes to use and those that interact with the user to acquire the knowledge needed to create these interface programs. The relationship between the interface program and the knowledge acquisition program is that of parent-offspring. Three types of parent-offspring RITA agent pairs are described: (1) an exemplary programming agent that watches a user perform an arbitrary series of operations on the computer and then writes a program (a task agent) to perform the same task; (2) a tutoring agent that watches an expert demonstrate the use of an interactive computer language or local operating system and then creates a teaching agent that can help naive users become familiar with the language or system demonstrated by the expert; and (3) a reactive-message creating agent which elicits text from a user (the sender) and from it creates a new RITA agent which is a reactive message. The reactive message is sent to some other user (the recipient) who interacts with it. During the course of the interaction a record of the recipient's responses is sent back to the sender.