
International Journal of Man-Machine Studies 15

Editors: B. R. Gaines; D. R. Hill
Dates: 1981
Volume: 15
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 26
Links: Table of Contents
  1. IJMMS 1981 Volume 15 Issue 1
  2. IJMMS 1981 Volume 15 Issue 2
  3. IJMMS 1981 Volume 15 Issue 3
  4. IJMMS 1981 Volume 15 Issue 4

IJMMS 1981 Volume 15 Issue 1

Editorial: Special Issue on "The Semantics and Syntax of Human-Computer Interaction" BIB 1-2
  Thomas P. Moran
The Command Language Grammar: A Representation for the User Interface of Interactive Computer Systems BIBA 3-50
  Thomas P. Moran
This article introduces and discusses a specific grammatical structure -- the Command Language Grammar (CLG) -- as a representational framework for describing the user interface aspects of interactive computer systems. CLG partitions a system into a Conceptual Component (tasks and abstract concepts), a Communication Component (command language), and a Physical Component (display, keyboard, etc.). The components are further stratified into distinct Levels -- a Task Level, a Semantic Level, a Syntactic Level, and an Interaction Level -- each Level being a complete description of the system at its level of abstraction. Each Level's description contains procedures for accomplishing the tasks addressed by the system in terms of the actions available at that Level. That is, the system is described by progressive refinement. An extensive example, a small message-processing system, is described at all Levels in the CLG notation.
   CLG is discussed from three points of view. The Linguistic View sees CLG as elaborating the structure of the system's user interface and of the communication between the user and the system; the principal goal of CLG in this view is to lay out the space of command language systems. The Psychological View sees CLG as describing the user's mental model of the system; the main concern in this view is with the psychological validity of the CLG description. The Design View sees CLG as a series of representations for specifying the design of a system. CLG proposes a top-down design process in which the conceptual model of the system is first specified and then a command language is created to communicate with it.
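   To make the stratification concrete, the sketch below organizes a CLG-style description as data, using the four Level names from the abstract and a fragment of a message system as the example. The Python layout and the particular entries are illustrative assumptions, not Moran's actual CLG notation.

```python
# A minimal sketch of a CLG-style stratified description, assuming a plain
# data layout; CLG's real notation is a symbolic language, not Python.
clg_description = {
    "Task Level": {
        # User tasks the system addresses, stated independently of the system.
        "tasks": ["read incoming messages", "file a message", "reply"],
    },
    "Semantic Level": {
        # Abstract objects and operations that accomplish those tasks.
        "objects": ["message", "mailbox"],
        "operations": ["show", "file", "reply"],
    },
    "Syntactic Level": {
        # Command-language forms realizing the semantic operations.
        "commands": {"show": "SHOW <message>", "file": "FILE <message> <mailbox>"},
    },
    "Interaction Level": {
        # Physical actions (keystrokes, pointing) realizing the syntax.
        "actions": {"SHOW": ["type 'S'", "press RETURN"]},
    },
}

def refine(description):
    """Walk the Levels top-down, mirroring CLG's progressive refinement:
    each Level re-describes the whole system at a finer grain."""
    for level in ["Task Level", "Semantic Level",
                  "Syntactic Level", "Interaction Level"]:
        print(level, "->", sorted(description[level]))

refine(clg_description)
```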
The Machine Inside the Machine: Users' Models of Pocket Calculators BIBA 51-85
  Richard M. Young
For an interactive device to be satisfactory, its intended users must be able to form a "conceptual model" of the device which can guide their actions and help them interpret its behaviour. Three designs of pocket calculator are analysed for the models implied by their behaviour. Two different kinds of model are illustrated, oriented primarily to answering different questions about the calculator's behaviour. Implied register models, aimed at predicting how the calculator will respond to a given sequence of inputs, provide a simple "cover story" for how the calculator works. Task/action mapping models, aimed more at deriving an appropriate input sequence to achieve a given task, focus on the relations between the actions a user performs and the task the calculator carries out. The "core" of these relations acts as the conceptual model. Complexities in the model, for example, give rise to corresponding difficulties in the use of the calculator. Three applications of mapping models are discussed, the analysis in each case yielding empirically testable consequences for the user's behaviour.
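   As a concrete illustration of an implied register model, the sketch below simulates an immediate-execution four-function calculator with two registers and a pending operator, a "cover story" from which a user could predict the display after any key sequence. The register set and update rules are illustrative assumptions, not a model taken from the paper.

```python
# A minimal implied register model of an immediate-execution calculator:
# the user's cover story is two registers plus a held operator. Assumed
# behaviour, not one of the three calculator designs analysed in the paper.
def calculator(keys):
    display = 0.0      # what the user sees
    accumulator = 0.0  # running result held internally
    pending = None     # operator awaiting its second operand
    entering = False   # mid-way through keying a number?
    for key in keys:
        if key.isdigit():
            display = display * 10 + int(key) if entering else float(key)
            entering = True
        elif key in "+-*/=":
            if pending:  # apply the held operator to the two registers
                ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
                       "*": lambda a, b: a * b, "/": lambda a, b: a / b}
                display = accumulator = ops[pending](accumulator, display)
            else:
                accumulator = display
            pending = None if key == "=" else key
            entering = False
    return display

print(calculator("12+34="))  # the model predicts 46.0 on the display
```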
Consistency and Compatibility in Human-Computer Dialogue BIBA 87-134
  P. J. Barnard; N. V. Hammond; J. Morton; J. B. Long
To tackle problems of human-computer interaction, the traditional scope of human-machine studies needs extending to include the complex cognitive skills of understanding, communication and problem solving. This extension requires a fusion of the conceptual and empirical tools of human factors with those of cognitive psychology. A methodological approach to this fusion is outlined as a background for three studies of structured human-computer dialogue. The studies involved a task in which secret messages were decoded in a number of discrete steps corresponding to computer commands. Each "command" required two numeric arguments. The studies investigated underlying variables using questionnaire techniques in addition to measuring user performance in an interactive version of the task. Three factors concerning the order of arguments in a command string were investigated: the consistent positioning of a recurrent argument, the relationship between argument entry order and the order of the corresponding words in natural language, and the relationship between argument entry order and the position of argument values on a VDU. In Study I software specialists were asked to design command structures for the task and to give reasons for their choices. In Study II naive subjects were asked to choose between telegrams in which alternative argument orders were expressed in terms of alternative word orders. In the interactive version of the task, used in Study III, positionally consistent systems were most readily learned, but this depended on having the recurrent argument in the first position. With positionally inconsistent systems there were reliable effects due to the position of the direct object of individual command verbs.

Book Review

"Communicating with Microprocessors," by I. H. Witten BIB 135-136
  E. M. Scharf

IJMMS 1981 Volume 15 Issue 2

Mechanisms of Human Facial Recognition BIBA 137-178
  Robert J. Baron
This paper presents an extension and refinement of the author's theory of human visual information processing, which is then applied to the problem of human facial recognition. Several fundamental processes are implicated: encoding of visual images into neural patterns, detection of simple facial features, size standardization, reduction of the dimensionality of the neural patterns, and finally correlation of the resulting sequence of patterns with all visual patterns already stored in memory. In the theory presented here, this entire process is automatically "driven" by the storage system in what amounts to a hypothesis verification paradigm.
   Neural networks for carrying out these processes are presented, and syndromes resulting from damage to the proposed system are analyzed. A correspondence between system components and brain anatomy is suggested, with particular emphasis on the role of the primary visual cortex in this process. The correspondence is supported by structural and electrophysiological properties of the primary visual cortex and other related structures.
   The logical (computational) role suggested for the primary visual cortex has several components: size standardization, size reduction, and object extraction. The result of processing by the primary visual cortex, it is suggested, is a neural encoding of the visual pattern at a size suitable for storage. (In this context, object extraction is the isolation of regions in the visual field having the same color, texture, or spatial extent.) It is shown in detail how the topology of the mapping from retina to cortex, the connections between retina, lateral geniculate bodies and primary visual cortex, and the local structure of the cortex itself may combine to encode the visual patterns. Aspects of this theory are illustrated graphically with human faces as the primary stimulus. However, the theory is not limited to facial recognition but pertains to Gestalt recognition of any class of familiar objects or scenes.
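   A toy sketch of the proposed pipeline follows: size standardization, dimensionality reduction, and correlation of the reduced pattern against every stored pattern, the best match acting as the verified hypothesis. The array sizes, the block-averaging reduction and the correlation measure are illustrative assumptions, not the paper's neural model.

```python
# A toy standardize -> reduce -> correlate pipeline; the numbers and the
# nearest-neighbour rescaling are assumptions made for illustration only.
import numpy as np

def standardize(image, size=16):
    """Crudely rescale a 2-D intensity array to size x size."""
    rows = np.linspace(0, image.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, image.shape[1] - 1, size).astype(int)
    return image[np.ix_(rows, cols)]

def reduce_pattern(image):
    """Reduce dimensionality by 2x2 block averaging, then flatten and normalize."""
    small = image.reshape(8, 2, 8, 2).mean(axis=(1, 3)).ravel()
    return (small - small.mean()) / (small.std() + 1e-9)

def recognize(probe, memory):
    """Correlate the reduced probe with every stored pattern; best match wins."""
    v = reduce_pattern(standardize(probe))
    return max(memory, key=lambda name: float(np.dot(v, memory[name])) / len(v))

rng = np.random.default_rng(0)
faces = {name: rng.random((32, 32)) for name in ("alice", "bob")}
memory = {n: reduce_pattern(standardize(img)) for n, img in faces.items()}
print(recognize(faces["alice"] + 0.05 * rng.random((32, 32)), memory))  # "alice"
```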
A Computer System for Axiomatic Investigation BIBA 179-200
  Rosalind L. R. Ibrahim
This paper describes a computer system that can be used for informal axiomatic investigation of mathematics. A formal mathematics-like language is defined which enables a mathematician to write informal mathematical statements and proofs. Each step of a proof is checked for logical validity, and a user may develop and retain a system of axioms, theorems, dependencies, and proofs written in this language. The intended user initially is a college mathematics student, who would use this system to develop proof-writing skills.
   Some examples of proofs are given.
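   The sketch below illustrates, at a much smaller scale, the kind of step-by-step validity checking described: a propositional proof is accepted only if every line is a premise or follows from earlier lines by modus ponens. The tuple representation of formulas and the single inference rule are illustrative assumptions, far simpler than the paper's mathematics-like language.

```python
# A minimal proof-step checker, assuming formulas are atoms or ('->', p, q)
# tuples and modus ponens is the only rule; not the system in the paper.
def follows_by_mp(formula, earlier):
    """Is there an earlier pair p and (p -> formula)?"""
    return any(e == ("->", p, formula) for e in earlier for p in earlier)

def check_proof(premises, lines):
    accepted = list(premises)
    for i, formula in enumerate(lines, 1):
        if formula in accepted or follows_by_mp(formula, accepted):
            accepted.append(formula)  # step is justified; retain it
        else:
            return f"step {i} is not justified"
    return "proof accepted"

# Premises: A, A -> B, B -> C.  Prove C in two modus ponens steps.
premises = ["A", ("->", "A", "B"), ("->", "B", "C")]
print(check_proof(premises, ["B", "C"]))   # proof accepted
print(check_proof(premises, ["C"]))        # step 1 is not justified
```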
Some Problems Concerning the Construction of Algorithms of Decision-Making in Fuzzy Systems BIBA 201-211
  Ernest Czogala; Witold Pedrycz
The fuzzy set theory established by L. A. Zadeh opened a new period in the formalization of decision-making processes in ill-defined systems, in which a human being (operator) is an important element of the control loop. The control algorithm used here is based on a generalized fuzzy version of modus ponens (the compositional rule of inference), where a set of decision-making rules forming a control algorithm is given. The aim of this paper is to present some problems which may appear in the initial stage of designing a decision-making algorithm and to discuss a method of their formulation. We introduce notions such as the completeness of an algorithm and the interactivity and competitivity of control rules, and consider indices illustrating these design aspects.
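   For readers unfamiliar with the compositional rule of inference, here is a minimal numeric sketch over small discrete universes: a rule set "IF A_i THEN B_i" is encoded as a single relation R, and an observed fuzzy input A' yields a decision by max-min composition with R. The universes, membership values and the min/max operators are illustrative assumptions.

```python
# Compositional rule of inference on toy universes; all numbers are made up.
import numpy as np

# Two fuzzy rules: antecedents over a 5-point universe X,
# consequents over a 4-point universe Y.
A = np.array([[1.0, 0.7, 0.3, 0.0, 0.0],   # "error is small"
              [0.0, 0.0, 0.3, 0.7, 1.0]])  # "error is large"
B = np.array([[1.0, 0.6, 0.2, 0.0],        # "action is gentle"
              [0.0, 0.2, 0.6, 1.0]])       # "action is strong"

# Relation of the whole rule set: pointwise max over per-rule min relations.
R = np.max(np.minimum(A[:, :, None], B[:, None, :]), axis=0)   # shape (5, 4)

A_prime = np.array([0.9, 0.8, 0.4, 0.1, 0.0])              # observed fuzzy input
B_prime = np.max(np.minimum(A_prime[:, None], R), axis=0)  # max-min composition
print(B_prime)   # the inferred fuzzy decision over Y
```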
Towards a Reconciliation of Fuzzy Logic and Standard Logic BIBA 213-220
  John Fox
Haack (1979) has questioned the need for fuzzy logic on methodological and linguistic grounds. However, three possible roles for fuzzy logic should be distinguished: as a requisite apparatus -- because the world poses fuzzy problems; as a prescriptive apparatus -- the only proper calculus for the manipulation of fuzzy data; and as a descriptive apparatus -- some existing inference system demands description in fuzzy terms. Haack does not examine these distinctions. It is argued that recognition of these different roles for fuzzy logics strengthens the pragmatic case for their development, but that their formal justification remains somewhat exposed to Haack's arguments. An attempt is made to reconcile pragmatic pressures and theoretical issues by introducing the idea that fuzzy operations should be carried out on subjective statements about the world, leaving standard logic as the proper basis for objective computations.
Classification in Medical Diagnostics: On Some Limitations of Q-Analysis BIBA 221-237
  Vaclav Pinkava
This paper gives a critical evaluation of Atkin's Q-analysis in its application to problems of medical diagnosis. The basic procedure of Q-analysis is first explained in simple terms, followed by the theory of classification by binary features. This is done in a generally accessible way for the benefit of those likely to use the method for diagnostics. The main points are illustrated by a small Q-analysis of artificial data. It is shown that Q-analysis is not a suitable method for research in diagnostics.
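   The basic procedure can be sketched briefly: from a binary incidence matrix (rows as items/simplices, columns as features/vertices), two items are q-near when they share q+1 features, and q-connected components are obtained by chaining q-nearness. The toy data below, and the simplification of the full structure-vector analysis to a component listing, are illustrative assumptions.

```python
# A toy Q-analysis: q-connected components from a binary incidence matrix.
import numpy as np

incidence = np.array([[1, 1, 1, 0, 0],   # item 0 and its features (made up)
                      [1, 1, 0, 0, 0],   # item 1
                      [0, 0, 1, 1, 1],   # item 2
                      [0, 0, 0, 1, 1]])  # item 3

shared = incidence @ incidence.T          # counts of shared features

def q_components(shared, q):
    """q-connected components: chain items sharing >= q+1 features.
    Only items with at least q+1 features of their own exist at level q."""
    n = len(shared)
    live = [i for i in range(n) if shared[i, i] >= q + 1]
    seen, comps = set(), []
    for start in live:
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            comp.append(i)
            stack += [j for j in live if j != i and shared[i, j] >= q + 1]
        comps.append(sorted(comp))
    return comps

for q in range(3):
    print(q, q_components(shared, q))   # components coarsen as q decreases
```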
A Critique of the Paper "Classification in Medical Diagnostics: On Some Limitations of Q-Analysis" by V. Pinkava BIBA 239-248
  J. H. Johnson
The limitations of Q-analysis in the field of psychology suggested in Pinkava's paper seem to boil down to the following: (A) an innate inability to consider "negative features"; (B) that a Q-analysis is based on numbers of shared features as opposed to the features themselves; (C) that a definition of "classification" stated in the language of algebraic logic eludes Q-analysis; and (D) an example of Q-analysis failing to discover a "hidden" classification. In this reply a review of the concept of anti-vertex answers (A), the distinction between the procedure "Q-analysis" and the "Methodology of Q-analysis" illuminates (B), a construction using anti-vertices answers (C), while a more relevant Q-analysis answers (D). Insofar as Pinkava's criticisms are directed against the computer algorithm called "Q-analysis", it is correct to say that a blind application of it alone will not automatically give useful q-connected components for the purpose of classification. To exploit the proven use of Q-analysis in set definition and classification it is necessary to appeal to the wider "Methodology of Q-analysis". The latter is concerned with all the kinematics associated with a topological representation of a relation (between finite sets), whilst the former is a technique for finding some of the global properties of such a representation.

IJMMS 1981 Volume 15 Issue 3

Introduction BIB 251
  Petr Hajek
The Present State of the GUHA Software BIBA 253-264
  Tomas Havranek
This paper presents the current state of the software realizing the GUHA method of mechanized hypothesis formation in exploratory analysis of categorical data. It builds on the paper by Hajek & Havranek (1978b), which describes the basic principles of the GUHA method.
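   As background, GUHA procedures systematically generate candidate hypotheses over Boolean attributes and test each against the data via its fourfold (2x2) table. The sketch below uses the founded-implication quantifier (a >= BASE and a/(a+b) >= p); the data, the thresholds and the restriction to single-attribute antecedents are illustrative assumptions, as real GUHA software searches a far richer hypothesis space.

```python
# A toy GUHA-style search: test every ordered attribute pair against a
# founded-implication quantifier. Data and thresholds are made up.
from itertools import permutations

# Rows = observed objects, columns = Boolean attributes.
data = [
    {"smoker": 1, "cough": 1, "sport": 0},
    {"smoker": 1, "cough": 1, "sport": 0},
    {"smoker": 1, "cough": 0, "sport": 1},
    {"smoker": 0, "cough": 0, "sport": 1},
    {"smoker": 0, "cough": 0, "sport": 1},
]

def fourfold(rows, ant, suc):
    """The fourfold table (a, b, c, d) of antecedent versus succedent."""
    a = sum(1 for r in rows if r[ant] and r[suc])        #  ant &  suc
    b = sum(1 for r in rows if r[ant] and not r[suc])    #  ant & ~suc
    c = sum(1 for r in rows if not r[ant] and r[suc])    # ~ant &  suc
    d = len(rows) - a - b - c                            # ~ant & ~suc
    return a, b, c, d

BASE, P = 2, 0.6   # minimal support and confidence thresholds (illustrative)
for ant, suc in permutations(data[0], 2):
    a, b, c, d = fourfold(data, ant, suc)
    if a >= BASE and a / (a + b) >= P:
        print(f"{ant} ==> {suc}   (a={a}, b={b})")
```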
The GUHA Method in the Context of Data Analysis BIBA 265-282
  Tomas Havranek
In the present paper, questions concerning the relation of the GUHA method to contemporary methods of exploratory data analysis are discussed. Some knowledge of the tutorial paper on the GUHA method by Hajek & Havranek (1978b) is assumed.
Main Problems and Further Possibilities of the Computer Realization of GUHA Procedures BIBA 283-287
  Jan Rauch
The first part of this article concerns certain anticipated trends in the development of the GUHA method. Particular attention is paid to the calculus of open observational formulae as a means of making it possible to declare derived quantities as parameters of GUHA procedures.
   In the second part of the article, the problems that must be solved to provide the software and to develop the GUHA method along these trends are discussed. The main features of the GUHA-DBS database system, a means of solving these problems, are outlined. The database system GUHA-DBS is described in more detail in Pokorny & Rauch (1981).
   It is assumed that the reader is familiar with the works of Hajek & Havranek (1977, 1978a) and Rauch (1978).
The GUHA-DBS Data Base System BIBA 289-298
  Jaroslav Pokorny
This paper presents the concept of the GUHA-DBS Data Base System intended for the users of the GUHA method and for GUHA procedure programmers. The architecture of the system as a whole is described and the features of its principal components are presented.
An Application of the GUHA Method to Chemical Engineering BIBA 299-307
  V. Vlcek
An application of the GUHA method of automated hypothesis formation to chemical engineering is described and evaluated. Programs written for this purpose, but capable of general use, are also described.
Some Examples of Transforming Ordinal Data to an Input for GUHA-Procedures BIBA 309-318
  J. Ivanek
Some applications of the GUHA method of mechanized hypothesis formation in pedagogy, economics and sports are described, with emphasis on the construction of two-valued data models. The adequacy of the procedures used is discussed partly on the basis of concrete results and partly by comparison with other statistical methods.
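   A common construction of such two-valued models is to cut each ordinal scale at chosen thresholds, yielding one Boolean attribute per cut, as in the sketch below. The scale, the cut-points and the attribute naming are illustrative assumptions.

```python
# Dichotomizing an ordinal scale into Boolean columns for GUHA-style input.
grades = [1, 2, 2, 3, 5, 4, 1]   # ordinal values, e.g. school marks 1..5

def dichotomize(values, cuts):
    """One Boolean column per cut-point: the attribute 'x >= c' for each c."""
    return {f">={c}": [int(v >= c) for v in values] for c in cuts}

for name, column in dichotomize(grades, cuts=(2, 4)).items():
    print(name, column)
# >=2 [0, 1, 1, 1, 1, 1, 0]
# >=4 [0, 0, 0, 0, 1, 1, 0]
```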
Complexity of Hypothesis Formation Problems BIBA 319-332
  Frederic Springsteel
This paper reviews some results concerning the computational complexity of the processes of mechanizing hypothesis formation in the framework of GUHA methods.
Formal Systems for Mechanized Statistical Inference BIBA 333-350
  Tomas Havranek
In the present paper we discuss some formal systems corresponding to current theories of statistical inference and oriented towards mechanized statistical inference as considered, for example, in GUHA methods of mechanized hypothesis formation. The syntax and semantics of theoretical sentences are presented in relation to functor calculi and to the corresponding observational sentences evaluable on data.
Decision Problems of Some Statistically Motivated Monadic Modal Calculi BIBA 351-358
  Petr Hajek
Logical calculi corresponding to the theoretical level of statistical inference (as understood, for example, in the foundations of GUHA-style hypothesis formation) may be described as generalized monadic modal predicate calculi. It is shown that various such calculi are undecidable when endowed with general semantics (arbitrary probabilistic structures) but that, roughly, all reasonable such calculi become decidable when the semantics is restricted to independently and identically distributed structures.

IJMMS 1981 Volume 15 Issue 4

Planning and Direction of Problem Solving in Structured Programming: An Empirical Comparison between Two Methods BIBA 363-383
  Jean Michel Hoc
The results of an empirical comparison between two methods of computer programming are presented, both of which are based on principles of structured programming. Their principal difference lies in the approach proposed for problem solving. One approach is prospective: the program structure is derived from the structure of the data. The other is retrospective: one must, on the contrary, start from the structure of the results. A relatively complex management program produced by students trained in each of the methods (29 and 17 subjects, respectively) was analysed. Three main results were observed: (a) the type of approach has quite a clear effect on the global program structure and (b) on certain categories of errors, and (c) subjects adopted a prospective approach (regardless of the method learned) to construct a difficult component of the program. We conclude with a recommendation for the design of programming methods.
The IF THEN ELSE Statement and Interval-Valued Fuzzy Sets of Higher Type BIBA 385-455
  Ellen Hisdal
A new fuzzy relation which represents an IF THEN ELSE (abbreviated to "ITE") statement is constructed. It is shown that a relation which (a) always gives correct inference and (b) does not contain false information which is not present in the ITE statement must be of a higher degree of fuzziness than the antecedents and consequents of the ITE statement. Three different ways of increasing the fuzziness of the relation are used here: (1) the fuzzy relation is of higher type; (2) it is interval-valued; (3) it contains a BLANK or "don't know" component. These three types of fuzziness come about naturally because the relation is a restriction of an initial relation which represents the second approximation to the state of complete ignorance. There exist successive approximations to the state of complete ignorance, each of them being an interval-valued fuzzy set BLANK of one type higher than the previous one. Similar representations of the zeroth and first approximations to the state of ignorance have been used in the theory of probability, though in a rather heuristic fashion. The assignment of a value to a variable is represented as a complete restriction of the BLANK state of type N, the "value" being any pure (non-interval-valued) fuzzy set of type N.
   With the new relation, the inferred set is a superposition of the consequents of the ITE statement and of the BLANK state, each of these components being multiplied by an interval-valued coefficient. In the case of modus ponens inference, the component with the highest coefficient (determined from a specially defined ordering relation for interval values) is always the correct consequent, provided that the original ITE statement is logically consistent. A mathematical test for logical consistency of the (possibly fuzzy) ITE statement is given. Disjointness of fuzzy sets is defined and connected with logical consistency. The paradoxes of implication in mathematical logic disappear in the fuzzy set treatment of the ITE statement. Subnormal fuzzy singletons find their natural interpretation: when used as an antecedent in an ITE statement, such a singleton does not have enough strength to induce the consequent with complete certainty; instead it induces a superposition of an interval-valued fuzzy set and the BLANK state.
   New definitions for union, intersection and complementation of fuzzy sets of higher type are suggested. A new interpretation of an interval-valued fuzzy set of type N as a collection of fuzzy sets of type N, not as a type N + 1 set, is given. There exist two types of union, intersection and complementation for interval-valued fuzzy sets, called the fuzzy and the crisp operations, respectively. It is suggested that negation be represented by an interval-valued fuzzy set. We conclude that increased fuzziness in a description means increased ability to handle inexact information in a logically correct manner.
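   The problem motivating the construction can be seen numerically: with an ordinary type-1 relation for "IF A THEN B ELSE C", max-min inference from the exact antecedent A need not reproduce B. The relation R(x, y) = max(min(A(x), B(y)), min(1 - A(x), C(y))) and the membership values below are illustrative; they are not Hisdal's higher-type construction.

```python
# Demonstrating over-fuzzy modus ponens with a type-1 ITE relation;
# all membership values are made up for illustration.
import numpy as np

A = np.array([1.0, 0.6, 0.1])        # antecedent over a 3-point universe X
B = np.array([0.2, 1.0, 0.3])        # THEN-consequent over a 3-point universe Y
C = np.array([1.0, 0.1, 0.0])        # ELSE-consequent over Y

R = np.maximum(np.minimum(A[:, None], B[None, :]),
               np.minimum((1 - A)[:, None], C[None, :]))

B_inferred = np.max(np.minimum(A[:, None], R), axis=0)   # modus ponens on A
print("B          :", B)             # [0.2 1.  0.3]
print("B inferred :", B_inferred)    # [0.4 1.  0.3] -- >= B pointwise, not equal
```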
The Shomotopy Bottle of Q-Analysis BIBA 457-460
  J. H. Johnson
Atkin's concept of shomotopy (pseudo-homotopy) extends the concept of homotopy to the q-connectivity of simplicial complexes. Analogues of the generators of the fundamental group are defined as "q-holes" or "objects", but to date no satisfactory method of computing them exists. An attempt to achieve this by contracting q-loops failed because of "bulges" in the structure, which act like the shoulder of a bottle after the neck. The existence of "shomotopy bottles" and "neck loops" reveals another type of structural disconnection, one which could have a kind of focusing effect for q-transmitted changes in patterns of numerical values (q-forces).
Domains of Interest in Fuzzy Sets BIBA 461-468
  Ernest A. Edmonds
The notion of fuzzy sets whose characteristic function is defined over a proper subset of the universal set is discussed. Arising out of this, operations on fuzzy sets over restricted domains of interest are defined, and the implications of using fuzzy domains of interest are explored. Domain shift operations, which yield a new domain of interest as well as operating on the fuzzy set, are introduced and used to define sequences of operations that terminate when the domain of interest is empty. Some of the ideas are used to extend image processing techniques and hence generalize certain raster graphics operations from the binary image case to grey-scale and colour images.
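   A minimal sketch of such sets follows, representing a fuzzy set as a membership table keyed only by its domain of interest. Taking union over the union of domains, intersection over their intersection, and modelling a domain shift as a predicate that shrinks the domain are illustrative assumptions about the intended semantics.

```python
# Fuzzy sets over restricted domains of interest, sketched as dicts whose
# keys are the domain. The operation semantics are assumed, not the paper's.
def f_union(a, b):
    """Union, defined over the union of the two domains of interest."""
    return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in a.keys() | b.keys()}

def f_intersection(a, b):
    """Intersection, defined only where the two domains of interest overlap."""
    return {x: min(a[x], b[x]) for x in a.keys() & b.keys()}

def domain_shift(a, predicate):
    """Shrink the domain of interest to elements satisfying the predicate;
    iterating such shifts terminates once the domain becomes empty."""
    return {x: m for x, m in a.items() if predicate(x, m)}

warm = {18: 0.2, 21: 0.7, 24: 1.0}    # defined only on 18..24 degrees
humid = {21: 0.4, 24: 0.9, 27: 1.0}   # defined only on 21..27 degrees
print(f_union(warm, humid))           # domain of interest 18..27
print(f_intersection(warm, humid))    # domain of interest 21..24
print(domain_shift(warm, lambda x, m: m >= 0.5))  # domain shrinks to {21, 24}
```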
Why Interactive Computer Systems are Sometimes Not Used by People Who Might Benefit from Them BIBA 469-483
  Raymond S. Nickerson
Several reasons are considered why some people who might benefit from using computer systems do not use them. The discussion is organized around examples of several classes of complaints that abstainers and dissatisfied users have been known to make regarding various aspects of the design and operation of specific computer-based systems.