
International Journal of Man-Machine Studies 16

Editors: B. R. Gaines; D. R. Hill
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Links: Table of Contents
  1. IJMMS 1982 Volume 16 Issue 1
  2. IJMMS 1982 Volume 16 Issue 2
  3. IJMMS 1982 Volume 16 Issue 3
  4. IJMMS 1982 Volume 16 Issue 4

IJMMS 1982 Volume 16 Issue 1

Fuzzy Design of Military Information Systems BIBA 1-38
  John T. Dockery
On the supposition that military operations are by nature fuzzy endeavours, some fuzzy set theory concepts are applied to the analysis of combat. More specifically, the military command and control function, which directs combat operations, is examined from the perspective of a supporting management information system (MIS). Since a military commander's perspective of the situation can be as important as the facts of the situation, a partitioned data base to support the MIS is introduced. One part contains data which may be either sharp or fuzzy; the other part, information relating to context. It is suggested that probability concepts apply to the former, and that possibility concepts apply to the latter. Data fusion is defined as the joining of the two partitions to provide information in context to the relevant field commander and his staff.
   Based on a simple military scenario, the design implications of this fusion are explored. A number of examples using recent fuzzy set results are applied to a conceptual fuzzy data base. In particular, both probabilistic and possibilistic entropy appear to operate within the data base and the structure it serves. Both entropy concepts lead to the introduction of information context regeneration along the command network serviced by the fuzzy MIS. The idea of fuzzy containment in a real system is discussed. It is asserted that both the classical hierarchical structure and the centralized control structures of the military serve to keep fuzziness within bounds. The concept of contextual noise is introduced.
   Throughout, fuzzy set theory is advanced as a conceptual framework for design rather than a detailed design tool. This suggestion is based partially on the evidence that the universe of fuzzy data and information is much larger than that of sharp data. Moreover, in military situations, there may only be fuzzy information.
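The probability/possibility split behind the partitioned data base can be made concrete with a toy sketch (not from the paper; the grid cells and numbers are invented). Probabilities over disjoint alternatives combine additively and normalize to 1, while possibilities combine by maximum and need only peak at 1:

```python
def prob_union(p_a, p_b):
    """Probability of A or B for disjoint events: additive."""
    return p_a + p_b

def poss_union(pi_a, pi_b):
    """Possibility of A or B: maxitive (Zadeh's possibility theory)."""
    return max(pi_a, pi_b)

# "Sharp or fuzzy data" partition: relative frequencies of a unit's
# reported position (hypothetical grid cells).
prob = {"grid_A1": 0.6, "grid_A2": 0.3, "grid_B1": 0.1}

# "Context" partition: how plausible each position is given terrain.
# Possibilities need not sum to anything; they only peak at 1.
poss = {"grid_A1": 1.0, "grid_A2": 0.9, "grid_B1": 0.2}

assert abs(sum(prob.values()) - 1.0) < 1e-9   # probabilities normalize
assert max(poss.values()) == 1.0              # possibilities peak at 1
assert poss_union(poss["grid_A2"], poss["grid_B1"]) == 0.9
```

"Data fusion" in the abstract's sense would then pair each sharp/fuzzy datum with its contextual possibility before presenting it to the commander.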
The Lexical, Syntactic and Semantic Processing of a Speech Recognition System BIBA 39-63
  Silvano Rivoira; Piero Torasso
This work describes the lexical, syntactic and semantic processing of a system that recognizes meaningful sentences spoken in Italian. A Transition Network grammar models, in an integrated way, the syntactic and semantic knowledge sources of the robot command protocol used in the experiments.
   The mechanisms of emission, prediction and verification of lexical hypotheses are coordinated by a parsing strategy according to the hypothesize-and-test paradigm. The most promising alternative interpretations of an input sentence are developed in parallel by the parser which has the capability of recovering errors due to mis-classifications and extra or missing words.
Methodology and Experimental Research in Software Engineering BIBA 65-87
  Thomas Moher; G. Michael Schneider
There has been a very rapid increase in the use of psychological experimentation as a method for addressing problems in the area of software engineering. However, there is a total lack of understanding of how to use this technique effectively so that the results of the experiment will have both validity and wide applicability. This paper describes what we feel is a major new research area which aims at formalizing the use of controlled group experimentation in programming language design and software engineering. The paper describes the growth of the problem and the critical need for this new research area, and finally categorizes the problems that this research must address. An annotated bibliography contains literature references (through 1978) to the use of controlled group experimentation in software engineering and related areas of Computer Science.
Plan Creation, Plan Execution and Knowledge Acquisition in a Dynamic Microworld BIBA 89-112
  Gordon I. McCalla; Larry Reid; Peter F. Schneider
This paper discusses a planning system that works in a dynamic geographic microworld (an abstraction of a small city). The planning system simulates a taxi driver who must take a customer to a destination in this small city, despite the possible presence of unpredictable dynamic obstacles such as other cars, red lights, pedestrians, etc. At the heart of the system is a knowledge base where all knowledge of the city is uniformly represented in routes, one for each trip which the simulated taxi driver has previously completed.
   A plan for a new trip is formulated by appropriately splicing together routes. The plan is then augmented to handle the dynamic features of the world and is executed. After successful execution, the plan is integrated into the knowledge base as a new route.
   One key aspect of this approach to planning is that plans are hierarchical, thus allowing the appropriate scoping of "demons" to handle the dynamic features of the microworld. Another key aspect is the similarity in structure of both routes and plans which facilitates the route-splicing approach to planning. Both aspects are useful when acquiring new routes, the similarity in structure allowing successfully executed plans to be straightforwardly added to the collection of routes and the hierarchy providing a framework for abstracting and generalizing information gleaned during plan execution. The most important overall lesson to be learned from this research is the usefulness of taking an integrated view of planning, execution and acquisition.
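The route-splicing idea can be sketched in a few lines (a hypothetical, flattened version; the paper's routes and plans are hierarchical, and this sketch only shows the splice step). A new plan follows one known route up to the first intersection it shares with another, then switches:

```python
def splice(route_a, route_b):
    """Join two known routes at their first shared intersection.

    Routes are ordered lists of intersections.  The spliced plan follows
    route_a until it meets a node that also lies on route_b, then follows
    route_b from that node onward.  Returns None if no splice exists.
    """
    shared = set(route_b)
    for i, node in enumerate(route_a):
        if node in shared:
            j = route_b.index(node)
            return route_a[:i] + route_b[j:]
    return None

# Two previously completed trips in the microworld (invented names):
home_to_market = ["home", "1st_and_main", "2nd_and_main", "market"]
station_to_park = ["station", "2nd_and_main", "2nd_and_oak", "park"]

# A new trip from home to the park reuses pieces of both routes:
plan = splice(home_to_market, station_to_park)
assert plan == ["home", "1st_and_main", "2nd_and_main", "2nd_and_oak", "park"]
```

In the paper's terms, a successfully executed `plan` would then be stored back into the knowledge base as a new route.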

IJMMS 1982 Volume 16 Issue 2

Towards a Conceptualization of Evolution in Environmental Systems BIBA 113-145
  John R. Beaumont
There are many difficulties facing those who have interests both in the natural environment and in the man-made environment together with the problems of their interfacing. One of the currently most intractable is that, having been imbued with the ecosystem model, with the emphasis on balance, equilibrium, cycling and stability, scholars are increasingly faced with the methodological necessity of also accommodating active control involving the impelling of systems on time trajectories through sequences of state, each different, probably non-recoverable and presumably ever more adapted to the evolving needs of man in society.
   Bennett & Chorley (1978, p. 471)
Writing and Speaking Letters and Messages BIBA 147-171
  John D. Gould
Twelve participants composed written and spoken letters under various conditions. In the first five experiments they were told which letters to write and which letters to speak. In the last three experiments they could choose their method of composition. Results showed that speaking required only 35-75% of the time that writing did. Written and spoken letters were rated as about equally effective, being characterized more by their similarities than by their differences. When participants could choose a method, they did not always select the method that they said they would under those circumstances. A key theoretical result was that time spent planning was not a constant amount in both methods, but rather a constant ratio. Planning time was about two-thirds of total composition time, regardless of letter complexity.
Aspects of Human Performance in an Intensive Speech Task BIBA 173-181
  Nickolai G. Zagoruiko; Yuri Tambovtsev
This paper describes a study of the mean reading speed of native Russian speakers (both male and female) in reading technical texts. Both "sight-unseen" and prepared (previously read) conditions were treated. The influence of fatigue on speed and reliability is discussed, and tables of reading times giving individual speaker characteristics are presented. The paper also recommends guidelines for the total volume of material to be read aloud by an operator during a working day and discusses the operators' sociological attitudes to a selection of work roles.
Uniformity in User-Computer Interaction Languages: A Compromise Solution BIBA 183-210
  Siegfried Treu
To alleviate the user problems associated with the diverse implementations of user-computer interaction languages, the use of an intermediary processor to "uniformize" the languages is presented. Such processing is characterized in terms of required intermediary actions and logical capabilities. The supportive NBS facilities are portrayed and an example application, using a common command language to access five bibliographic retrieval systems, is described.
Character Level Ambiguity: Consequences for User Interface Design BIBA 211-225
  Harold Thimbleby
Certain user interface functions require single- or few-character interactions, and in some systems the number of functions made available exceeds the number of suitable key combinations. Hence modes are introduced: keys can be given different interpretations in different modes. But this is a source of user interface ambiguity; if there are too many frequently-used modes then the user can make errors all too easily. Specific user interface techniques, which are discussed, can be chosen to increase ease of use and user satisfaction: for instance, by reducing the number of necessary modes or the consequences of user typing errors. To make an interface consistent and predictable requires considerable effort, even if only at this level of single-character semantics.
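The mode ambiguity Thimbleby describes can be illustrated with a toy key interpreter (the bindings are invented for illustration, not taken from the paper): the same keystroke maps to different functions depending on the current mode, so a user who misjudges the mode gets an unintended, possibly destructive, action.

```python
# Hypothetical per-mode key bindings, in the style of a modal editor.
BINDINGS = {
    "command": {"d": "delete-line", "i": "enter-insert-mode"},
    "insert":  {"d": "self-insert 'd'", "i": "self-insert 'i'"},
}

def interpret(key, mode):
    """Resolve a keystroke to a function in the given mode."""
    return BINDINGS[mode].get(key, "undefined")

# The same character is ambiguous across modes:
assert interpret("d", "insert") == "self-insert 'd'"
assert interpret("d", "command") == "delete-line"   # destructive!

# Keys whose meaning differs between modes are the error-prone ones.
ambiguous = {
    k for k in BINDINGS["command"]
    if BINDINGS["insert"].get(k) not in (None, BINDINGS["command"][k])
}
assert ambiguous == {"d", "i"}
```

Reducing the number of such ambiguous, frequently-used keys (or confirming before the destructive interpretation) is one concrete form of the techniques the paper discusses.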

Book Review

"Computer Systems for Human Systems," by A. Demb BIB 227
  C. Higgins

IJMMS 1982 Volume 16 Issue 3

Editorial BIB 229
  Ernest Edmonds
The Man-Computer Interface: A Note on Concepts and Design BIBA 231-236
  Ernest Edmonds
The notion of identifying a part of a computer system, the man-computer interface, that can be seen as representing the user's model of the system is explored. A particular classification of the components of an interface is presented. It is suggested that the design of the man-computer interface is central to the design of an interactive system. Certain design problems are discussed and problems requiring further research identified.
An Evaluation of Published Recommendations on the Design of Man-Computer Dialogues BIBA 237-261
  Martin Maguire
It is recognized that the design of an effective man-computer dialogue is crucial to the success of an interactive system. A review of the literature on the subject identifies a number of important aspects of dialogue design upon which much sound advice is given. However, there is a tendency for recommendations based upon a limited field of experience to be offered as more general advice, causing apparent conflicts between suggestions emanating from different sources. Such conflicts can frequently be resolved by regarding each suggestion as an alternative strategy to be applied in a particular situation. It is concluded, then, that the definition of a framework for the classification of dialogue design recommendations would simplify the designer's task of retrieving those most relevant to his needs.
The Use of Software Tools for Dialogue Design BIBA 263-285
  Stephen P. Guest
The benefit of software tools to aid the production of software has long been established. At Leicester Polytechnic there are two such tools which have been developed mainly as design aids for man-computer interfaces. Both have been used for a wide range of applications, but have been received differently by designers. Why should one type of tool appeal to a designer while he finds the other impossible to use? This paper will look at the format of the design languages required by these two tools to produce a dynamic interface. One has an input structure based on production rules and the other is related to transition networks. The uses to which some designers have put these tools are described and any comments given by them in their attempts to use these packages are reported.
Towards Self-Adaptive Interface Systems BIBA 287-299
  P. R. Innocent
This paper follows a trend towards more user oriented design approaches to interactive computer systems. The implicit goal in this trend is the development of more "natural" systems. Design approaches should aim at a system capable of continuous change by means of suitable agents. The term "soft facade" is used to describe a modifiable interface to a system. Facades vary in softness and agents for change can be the systems designer, the user or an expert system in the facade. In the latter case, the system is called a self-adaptive interface system. The conditions where a self-adaptive interface system is desirable are briefly considered and discussed in relation to a simple example. Recent research in artificial intelligence is mentioned as a possible source of proposed components for a self-adaptive interface system.
The Interactive Manipulation of Unstructured Images BIBA 301-313
  Stephen A. R. Scrivener
Conventional approaches to interactive computer graphics do not always seem appropriate for certain kinds of two-dimensional design (e.g. art, graphics). This paper discusses an approach to computer graphics in which the interaction between the man and the machine is viewed as a process of communicating interpretations.
   The task for the man is to describe the structure perceived in the displayed image. The task for the machine is to derive an interpretation consistent with the user's perception by utilizing the image (the bitmap of a raster display) and the description of it provided by the user. Examples of techniques used in this approach are discussed, including those for handling figure/ground perceptions. It is argued that by using such techniques it is possible for the user to manipulate unstructured images interactively.
Extracting Shapes from Grey-Scale Images BIBA 315-326
  Dominic Boreham; Ernest Edmonds
In considering the interactive manipulation of computer generated raster images, particular attention has been paid to the problem of enabling the human to communicate to the computer about his or her perceived structures in the image. The paper considers some of the issues involved, particularly where they relate to the extraction from an image of a single perceived region. An algorithm that operates on grey-scale images is discussed.
The Acquisition of Linguistic Knowledge from Visible Speech Spectrograms of Ordinary Speech: A Proposal BIBA 327-332
  Julius J. Guzy
A feasibility study into direct speech input to machines (Connolly, Edmonds & Hashim, 1980) is currently nearing completion at Leicester Polytechnic. This paper makes a number of tentative observations which have arisen out of that study, and in the light of those observations proposes a method for the acquisition of linguistic knowledge from the observation and analysis of ordinary human conversation.
A Study in the Use of a Computer as an Aid to English Teaching BIBA 333-339
  Linda Candy; Ernest A. Edmonds
A preliminary study has been conducted concerning the use of a computer to aid the learning of children with particular basic language difficulties in relation to their group norm. The example chosen involved a combined spelling and vocabulary exercise. A record of the dialogues between the students and the computer was kept and discussed with them individually. The students were also asked for their general views and were pre- and post-tested for spelling. For the most part the results were favorable. The technique used for program implementation is one that would allow individual teachers to generate their own material readily.
An Expert System for the Medical Diagnosis of Brain Tumours BIBA 341-349
  K. Wills; D. Teather; P. Innocent; G. H. Du Boulay
This paper describes the current state of a project to establish an expert system for the diagnosis of brain tumours. Expert systems for the diagnosis of diseases have proved their worth in various specialist fields (notably jaundice and abdominal pain).
   The basis of this system is a data base consisting of coded Computed Tomography (CT) scans of patients with known disease. For the diagnosis of a new patient the CT scan is coded and used in conjunction with the data base.
   The current system is a prototype which has been developed on a microprocessor system and is being evaluated against the expert radiologist for diagnostic accuracy.

IJMMS 1982 Volume 16 Issue 4

Q-Transmission in Simplicial Complexes BIBA 351-377
  J. H. Johnson
Q-analysis assumes a qualitative difference between q-connectivity and (q-1)-connectivity which is significant in the way changes in pattern values can be propagated over the backcloth. The way such changes (t-forces) move through the simplicial complexes of the backcloth is defined as q-transmission. It is shown there is a simple derived network structure for q-transmission, and this will facilitate computation in applications. Simplicial complexes are shown to possess various quantitative characteristics which can be expressed as transmission numbers. The concept of time which underlies q-transmission is considered and the relationship between q-transmission and Atkin's concept of p-event is investigated. This allows an algebraic description of prediction and suggests the possibility of relative social time.
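The q-connectivity that q-transmission rests on can be made concrete (a minimal sketch, not from the paper): two simplices are q-near when they share a face of dimension q, i.e. q+1 common vertices, and a chain of pairwise q-near simplices makes them q-connected. The shared-face dimension bounds which pattern changes can pass between neighbours.

```python
def q_nearness(simplex_a, simplex_b):
    """Dimension of the shared face: |A ∩ B| - 1, or None if disjoint."""
    shared = set(simplex_a) & set(simplex_b)
    return len(shared) - 1 if shared else None

# Simplices named by their vertex sets (an invented example backcloth):
s1 = {"a", "b", "c", "d"}   # a 3-simplex
s2 = {"b", "c", "d", "e"}   # shares the 2-face {b, c, d} with s1
s3 = {"e", "f"}             # shares only the vertex e with s2

assert q_nearness(s1, s2) == 2     # 2-near: changes at q <= 2 can pass
assert q_nearness(s2, s3) == 0     # only 0-near: a bottleneck for q >= 1
assert q_nearness(s1, s3) is None  # disjoint: no direct transmission
```

A t-force requiring transmission at level q is then blocked wherever every chain between source and target drops below that q, which is what the derived network structure in the paper makes computable.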
Stereotyped Program Debugging: An Aid for Novice Programmers BIBA 379-392
  Harald Wertz
This paper presents a system (PHENARETE) which understands and improves incompletely defined LISP programs, such as those written by students beginning to program in LISP. This system takes, as input, the program without any additional information. In order to understand the program, the system meta-evaluates it, using a library of pragmatic rules, describing the construction and correction of general program constructs, and a set of specialists, describing the syntax and semantics of the standard LISP functions. The system can use its understanding of the program to detect errors in it, to eliminate them and, eventually, to justify its proposed modifications. This paper gives a brief survey of the working of the system, emphasizing some commented examples.
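A drastically reduced analogue of the detect/propose-correction loop can be sketched (this is an invented toy, far simpler than PHENARETE's meta-evaluation with pragmatic rules and specialists): scan a student's LISP text for unbalanced parentheses and propose the mechanical fix.

```python
def check_parens(src):
    """Report the first parenthesis imbalance in a LISP source string."""
    depth = 0
    for i, ch in enumerate(src):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return f"extra ')' at position {i}"
    if depth > 0:
        return f"missing {depth} ')' at end; propose appending them"
    return "balanced"

assert check_parens("(defun f (x) (car x))") == "balanced"
assert check_parens("(defun f (x) (car x)") == \
    "missing 1 ')' at end; propose appending them"
```

PHENARETE goes much further, reasoning about the intent of general program constructs, but the shape is the same: detect a defect, propose a modification, and be able to justify it.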
Driving the Votrax Speech Synthesizer from a Wide Phonetic Transcription with High-Level Prosodic Markers BIBA 393-403
  Ian H. Witten
Previously-developed procedures for controlling the pitch and rhythm of synthetic speech by markers in the input phonetic text have been adapted for the Votrax ML-I speech synthesizer. The aim is not necessarily to advocate these procedures, but rather to show that existing knowledge about prosodic features of speech can be adapted to the Votrax device. Also described are rules for translating International Phonetic Alphabet transcriptions into Votrax sound segments, which incorporate many of the hints for allophone adjustment that are given in the Votrax manual.
Creative Names for Personal Files in an Interactive Computing Environment BIBA 405-438
  John M. Carroll
The names with which people refer to their personal computer files in an interactive computing environment were analyzed as a case study of purposeful creative naming behavior. Staff members at a research laboratory were asked to annotate a listing of their filenames by appending descriptive exegeses. Overwhelmingly, the very form of the filenames organized them into structured paradigms, coextending with clusterings of the files by conceptual and functional content (as revealed by examination of the rendered descriptive exegeses). The pervasive existence of such paradigmatic structure in spontaneously created names has implications both for traditional and current philosophical analyses of names (where non-paradigmatic names, such as "Aristotle", have been taken to be typical) and, more specifically, for the potential utility and design of filenaming facilities in computing systems.
   Part of speech and abbreviation strategies were also analyzed and compared with prior laboratory research. They were shown to correlate with filetype classification, indicating this as a further relevant parameter for the design of filename facilities.
Pattern-Based Representations of Knowledge in the Game of Chess BIBA 439-448
  M. A. Bramer
The focus of recent Artificial Intelligence research into computer chess has been on endgames. These afford the possibility of controlled experimentation, whilst retaining much of the complexity of the full game of chess. This paper discusses some of the specific reasons for complexity in the endgame and considers its effects on human chess-playing strategy, textbook descriptions and the development of programs. In programming the endgame the researcher is faced with a range of decisions concerning the quality of play to be aimed at, the balance between knowledge and search to be adopted and the degree to which the playing strategy should be understandable to human chess-players. A model for representing pattern-knowledge is described which has enabled the development of algorithms to play a number of endgames. Three algorithms representing different levels of performance for the endgame King and Pawn against King are compared, in order to discuss the tradeoff between complexity and completeness, on the one hand, and compactness and comprehensibility, on the other. Finally, the role of search in reducing the amount of knowledge to be memorized is considered and an extension to the basic model to incorporate deeper search is discussed.
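One classical piece of pattern-knowledge for the King-and-Pawn-against-King endgame is the "rule of the square", and it illustrates the compact, human-comprehensible representations the paper is concerned with. The predicate below is a sketch of that textbook pattern only (it deliberately ignores the attacking king and the pawn's initial double step, and is not taken from the paper's model):

```python
def defender_catches_pawn(pawn, king, black_to_move=True):
    """Rule of the square for a white pawn racing to promote.

    Coordinates are (file, rank) with rank 8 the promotion rank.  The
    black king catches the pawn iff its Chebyshev distance to the
    promotion square is at most the number of moves the pawn needs
    (one fewer if White moves first).
    """
    pf, pr = pawn
    kf, kr = king
    pawn_moves = 8 - pr                            # moves to promote
    king_moves = max(abs(kf - pf), abs(8 - kr))    # Chebyshev distance
    budget = pawn_moves if black_to_move else pawn_moves - 1
    return king_moves <= budget

# White pawn on h5 = (8, 5); black king on e6 = (5, 6).
assert defender_catches_pawn((8, 5), (5, 6), black_to_move=True)        # in the square
assert not defender_catches_pawn((8, 5), (5, 6), black_to_move=False)   # one tempo short
```

A handful of such predicates, ordered by priority, is roughly the flavour of a pattern-based playing strategy that trades exhaustive search for memorizable knowledge.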
Learning a First Computer Language: Strategies for Making Sense BIBA 449-486
  M. J. Coombs; R. Gibson; J. L. Alty
It is a common observation that people differ greatly in their ability to make use of computers. In controlled experiments on the writing and debugging of programs, for example, large discrepancies in performance have been found even at the professional level, and in universities it is often noted that some individuals make more effective use of facilities than others who have undergone the same training and whose needs are just as great. This paper reports a study in which individual differences found in the learning of FORTRAN as a first computer language by a university population are used as a source of information on the nature of computing skills.
   The study employed two classes of task: a "target" task consisting of tests of programming skill and an "indicator" task being a measure of learning style devised by Pask. Novice programmers completed these tasks following a standard introductory FORTRAN course. Comparison of performance by each subject on the two tasks was then used to draw inferences on the nature of successful strategies for learning a first programming language. Successful learners worked from "inside" the language, paying close attention to the procedural representation of logical relations between individual language structures. Less successful learners sought to determine important structural detail with reference to factors external to the program language itself, e.g. features of the local machine, and to represent this knowledge in descriptive rather than procedural terms.