
International Journal of Man-Machine Studies 24

Editors: B. R. Gaines; D. R. Hill
Dates: 1986
Volume: 24
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 41
Links: Table of Contents
  1. IJMMS 1986 Volume 24 Issue 1
  2. IJMMS 1986 Volume 24 Issue 2
  3. IJMMS 1986 Volume 24 Issue 3
  4. IJMMS 1986 Volume 24 Issue 4
  5. IJMMS 1986 Volume 24 Issue 5
  6. IJMMS 1986 Volume 24 Issue 6

IJMMS 1986 Volume 24 Issue 1

Editorial BIB i
  Brian Gaines
From Timesharing to the Sixth Generation: The Development of Human-Computer Interaction. Part I BIBA 1-27
  Brian R. Gaines; Mildred L. G. Shaw
The human-computer interface is increasingly the major determinant of the success or failure of computer systems. It is time that we provided foundations for engineering human-computer interaction (HCI) as explicit and well-founded as those for hardware and software engineering. Computing technology has progressed through a repeated pattern of breakthroughs in one technology, leading to its playing a key role in initiating a new generation. The basic technologies of electronics, virtual machines, and software have gone through cycles of breakthrough, replication, empiricism, theory, automation and maturity. HCI entered its period of theoretical consolidation at the beginning of the fifth generation in 1980. The lists of pragmatic dialog rules for HCI in the fourth generation have served their purpose, and effort should now be directed to the underlying foundations. The recently announced sixth-generation computer system (SGCS) development program is targeted on these foundations and the formulation of knowledge science. This paper surveys the development of HCI and related topics in artificial intelligence: their history, foundations, and relations to other computing disciplines. The companion paper surveys topics relating to future developments in HCI.
An Experimental Comparison of a Mouse and Arrow-Jump Keys for an Interactive Encyclopedia BIBA 29-45
  John Ewing; Simin Mehrabanzad; Scott Sheck; Dan Ostroff; Ben Shneiderman
This paper reports on an experiment conducted to examine the relative merits of using a mouse or arrow-jump keys to select text in an interactive encyclopedia. Timed path traversals were performed by subjects using each device and were followed by subjective questions. The personality and background of the subjects were recorded to see whether those attributes would affect device preference and performance. The arrow-jump keys were found to have the quickest traversal times for paths with either short or long target distances. The subjective responses indicated that the arrow-jump method was overwhelmingly preferred over the mouse method. Personality type was not found to play a critical role.
The User's Mental Model of an Information Retrieval System: An Experiment on a Prototype Online Catalog BIBA 47-64
  Christine L. Borgman
An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a database of bibliographic records. The research was based on the mental models theory which proposes that people can be trained to develop a "mental model" or a qualitative simulation of a system which will aid in generating methods for interacting with the system, debugging errors, and keeping track of one's place in the system. It follows that conceptual training based on a system model will be superior to procedural training based on the mechanics of the system. We performed a laboratory experiment with two training conditions (model and procedural), and with each condition split by sex. Forty-three subjects participated in the experiment, but only 32 were able to reach the minimum competency level required to complete the experiment. The data analysis incorporated time-stamped monitoring data, personal characteristics variables, affective variables, and interview data in which subjects described how they thought the system worked (an articulation of the model). As predicted, the model-based training had no effect on the ability to perform simple, procedural tasks, but subjects trained with a model performed better on complex tasks that required extrapolation from the basic operations of the system. A stochastic process analysis of search-state transitions reinforced this conclusion. Subjects had difficulty articulating a model of the system, and we found no differences in articulation by condition. The high number of subjects (26%) who were unable to pass the benchmark test indicates that the retrieval tasks were inherently difficult. More interestingly, those who dropped out were significantly more likely to be humanities or social science majors than science or engineering majors, suggesting important individual differences and equity issues. The sex-related differences were slight, although significant, and suggest future research questions.
Fuzzy Cognitive Maps BIBA 65-75
  Bart Kosko
Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.
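   The propagation scheme summarized above is easy to illustrate. Below is a minimal Python sketch, assuming the common bivalent FCM update rule (a concept switches on when the signed, weighted influence of the active concepts exceeds a threshold); the concept names and edge weights are invented for the example, and the paper's fuzzy causal algebra is more general than this.
```python
import numpy as np

# Invented concepts and causal edge weights in [-1, 1] (not from the paper).
concepts = ["traffic", "accidents", "police presence"]
W = np.array([
    [ 0.0,  0.8,  0.3],   # traffic promotes accidents and police presence
    [ 0.0,  0.0,  0.6],   # accidents promote police presence
    [-0.5, -0.7,  0.0],   # police presence inhibits traffic and accidents
])

def step(state, threshold=0.0):
    # One round of causal propagation: threshold the weighted influences.
    return (state @ W > threshold).astype(float)

# Forward chaining: iterate until an activation pattern repeats.
state, seen = np.array([1.0, 0.0, 0.0]), []
while state.tolist() not in seen:
    seen.append(state.tolist())
    state = step(state)
print(seen)   # the trajectory ends in a fixed point or limit cycle
```
Because the state space is finite, repeated application of the update must settle into a fixed point or limit cycle, which is what makes this style of forward chaining tractable.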
Using Planning Techniques in Intelligent Tutoring Systems BIBA 77-98
  Darwyn R. Peachey; Gordon I. McCalla
This paper proposes an architecture for building better Computer-Assisted Instruction (CAI) programs by applying and extending Artificial Intelligence (AI) techniques which were developed for planning and controlling the actions of robots. A detailed example shows how programs built according to this architecture are able to plan global teaching strategies using local information. Since the student's behavior can never be accurately predicted, the pre-planned teaching strategies may be foiled by sudden surprises and obstacles. In such cases, the planning component of the program is dynamically reinvoked to revise the unsuccessful strategy, often by recognizing student misconceptions and planning a means to correct them. This plan-based teaching strategy scheme makes use of global course knowledge in a flexible way that avoids the rigidity of earlier CAI systems. It also allows larger courses to be built than has been possible in most AI-based "intelligent tutoring systems" (ITSs), which seldom address the problem of global teaching strategies.
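   The plan-and-revise control structure described above can be sketched compactly. The fragment below is only an illustration, with an invented prerequisite network and a random stand-in for the student's responses; the paper's planner and student model are far richer.
```python
import random

# Invented prerequisite links between course concepts (not from the paper).
prereqs = {"fractions": ["division"], "division": ["multiplication"],
           "multiplication": []}

def plan(goal, known):
    """Build a global teaching plan from local prerequisite links:
    unmastered prerequisites are ordered before the goal itself."""
    if goal in known:
        return []
    steps = []
    for p in prereqs[goal]:
        steps += [s for s in plan(p, known) if s not in steps]
    return steps + [goal]

def teach(concept):
    print("teaching", concept)
    return random.random() > 0.3   # stand-in for the student's response

known = set()
agenda = plan("fractions", known)
while agenda:
    concept = agenda.pop(0)
    if teach(concept):
        known.add(concept)
    else:
        agenda = plan("fractions", known)   # strategy foiled: replan
```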

IJMMS 1986 Volume 24 Issue 2

Foundations of Dialog Engineering: The Development of Human-Computer Interaction. Part II BIBA 101-123
  Brian R. Gaines; Mildred L. G. Shaw
The human-computer interface is increasingly the major determinant of the success or failure of computer systems. It is time that we provided foundations of engineering human-computer interaction (HCI) as explicit and well-founded as those for hardware and software engineering. Through the influences of other disciplines and their contribution to software engineering, a rich environment for HCI studies, theory and applications now exists. Many principles underlying HCI have systemic foundations independent of the nature of the systems taking part and these may be analysed control-theoretically and information-theoretically. The fundamental principles at different levels may be used in the practical design of dialog shells for engineering effective HCI. This paper surveys the development of styles of dialog through generations of computers, the principles involved, and the move towards integrated systems. It then systematically explores the foundations of HCI by analysing the various analogies to HCI possible when the parties are taken to be general systems, equipment, computers or people.
Comparison of Decision Support Strategies in Expert Consultation Systems BIBA 125-139
  Peretz Shoval
Different strategies of decision support can be identified in a consultation process and implemented in an expert system. This paper concentrates on experiments that were carried out with an expert system that was developed in the area of information retrieval, to perform the job of an information specialist who assists users in selecting query terms for database searches. Three different support strategies are utilized in the system. One is a "participative" strategy, in which the system performs a search within its knowledge base and during which there is interaction between the system and the user, whereby the system informs the user of intermediate findings and the user judges their relevancy and directs the search. The second is a more "independent" support strategy, in which the system performs a search and evaluates its findings without informing the user before the search is completed. The third is a "conventional" strategy (not an expert system) in which the system only provides information according to the user's request, but it does not make judgments/decisions; the user himself is expected to evaluate and to decide.
   Three main questions are examined in the experiments: (a) which of the three support strategies or systems is more effective in suggesting the appropriate query terms; (b) which of the approaches is preferred by users; and (c) which of the expert systems is more efficient, i.e. more "accurate" and "fast" in performing its consultation job. The experiments reveal that the performance of the system with the first two strategies is similar, and it is significantly better than the performance with the third strategy. Similarly, users generally prefer these two strategies over the "conventional" strategy. Between the first two, the more "independent" system behaves more "intelligently" than the more "participative" one.
On the Suitability of Fuzzy Models: An Evaluation Through Fuzzy Integrals BIBA 141-151
  Siegfried Gottwald; Witold Pedrycz
The paper deals with the problem of evaluating the properties of a system on the basis of the corresponding fuzzy model, its properties, and its quality measured by a performance index. It is shown that a grade of satisfaction for a property of the system may be calculated by means of a fuzzy integral with respect to a fuzzy measure, where the latter corresponds to a quantitative representation of the quality of the model constructed. Two properties of wide significance in system analysis, controllability and predictability, are studied in detail.
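   For readers unfamiliar with the machinery: in its standard Sugeno form (the paper's exact formulation may differ in detail), the grade of satisfaction of a property with membership function $f$ over states $X$, with respect to a fuzzy measure $g$ encoding the quality of the model, is the fuzzy integral
$$S(f) \;=\; \sup_{\alpha \in [0,1]} \min\bigl(\alpha,\; g(\{x \in X : f(x) \ge \alpha\})\bigr),$$
so that a property can score highly only if it holds to a high degree on a set of states that the measure regards as significant.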
ADDS -- A Dialogue Development System for the Ada Programming Language BIBA 153-170
  A. Burns; J. Robinson
A dialogue development system for the Ada programming language is described. The system supports the production of multi-level adaptable interfaces and provides the following features: input validation, user recovery/backtracking, in-depth help facilities, user-performance monitoring and a variety of user interface specification languages. In addition, development tools are provided that enable dialogue specific software to be automatically generated from the constituent specifications. Considerations are given to the use of multi-level interfaces and the employment of such systems in studying user adaptability and learning. The architecture of ADDS is described and some simple examples of its use are given. Although ADDS is designed for, and implemented in, Ada, it is structured in a manner that will make the features it supports more widely applicable.
Training by Exploration: Facilitating the Transfer of Procedural Knowledge Through Analogical Reasoning BIBA 171-192
  Anita L. Kamouri; Joseph Kamouri; Kirk H. Smith
This study compared exploration-based training and instruction-based training as methods of acquiring and transferring procedural device knowledge, and examined whether any differences in learning outcomes could be explained by the trainees' use of analogical reasoning from either abstract or concrete representations of devices in memory. The exploration trainees experimented with three analogous simulated devices in order to discover the procedures governing their operations, whereas the instructed trainees followed procedural examples contained in manuals. After a 2-day post-training delay, trainees were exposed to a novel transfer device, which was either analogous or disanalogous to the three training devices. Performance on the novel device, subjects' perceptions of the similarity among devices' functions and subjects' recall (written and behavioral) of the three training devices' operations, all provided data indicating that exploration-based training promoted the use of analogical reasoning in knowledge transfer and facilitated the induction of abstract device representations (schemas). No such claim could be made for instruction-based training. Implications for the future of exploration as a training method and suggestions for future research are discussed.

IJMMS 1986 Volume 24 Issue 3

A Taxonomy of User-Oriented Functions BIBA 195-292
  James A. Carter, Jr.
This paper presents a taxonomy of user-oriented data functions. The taxonomy comprises a hierarchy of user-oriented functions generally required in data-processing systems, together with command names suggested for those functions by a group of potential users. The taxonomy can be used for commercial system design and evaluation and as a basis for further research.
Stars, Maximal Rectangles, Lattices: A New Perspective on Q-Analysis BIBA 293-299
  J. H. Johnson
A new star-hub structure of binary relations is discussed in the context of the methodology of Q-analysis, and parallels are drawn with maximal rectangles and Galois lattice structures. Although these structures generalize those of Q-analysis, there remain problems due to the very large number of star-hub pairs generated by fairly modest data sets. It is argued that more theory is necessary, and some possibilities are discussed. It is suggested that the criteria for defining new structures will come most fruitfully from the study of the relationship between backcloth and the ways it constrains traffic. Finally, it is argued that these combinatorial structures are still not sufficient to fully describe complex systems and that for this one needs to consider polyhedra in the context of N-ary relations.
A Virtual Protocol Model for Computer-Human Interaction BIBA 301-312
  Jakob Nielsen
A model of computer-human interaction is presented which views the interaction as a hierarchy of virtual protocol dialogues. Each virtual protocol realizes the dialogue on the level above itself and is in turn supported by a lower-level protocol. The model is inspired by the OSI model for computer networks from the International Organization for Standardization (ISO). The virtual dialogue approach enables the technical features of new devices (e.g. a mouse or a graphical display) to be separated from conceptual features (e.g. menus or windows). It also makes it possible to analyse error messages and other feedback as part of the different protocols.
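   The layering idea translates directly into code. The sketch below is a minimal Python illustration (the layer names and framing scheme are invented, not Nielsen's): each virtual protocol handles its own level of the dialogue and is realized by the layer beneath it, exactly as in the OSI stack.
```python
class Layer:
    """One virtual protocol level; `below` is the layer that realizes it."""
    def __init__(self, name, below=None):
        self.name, self.below = name, below

    def send(self, message):
        framed = f"[{self.name}:{message}]"   # this level's contribution
        return self.below.send(framed) if self.below else framed

# Conceptual dialogue (menus, windows) on top, device handling at the bottom.
device     = Layer("device")
lexical    = Layer("lexical", below=device)
conceptual = Layer("conceptual", below=lexical)
print(conceptual.send("open-document"))
# -> [device:[lexical:[conceptual:open-document]]]
```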

IJMMS 1986 Volume 24 Issue 4

Mode Errors: A User-Centered Analysis and Some Preventative Measures Using Keying-Contingent Sound BIBA 313-327
  Andrew Monk
It is often claimed that the user interfaces of advanced integrated systems are mode-free. However, if one applies the user-centered analysis developed in this paper, it is clear that almost any system of realistic complexity will have modes of some kind. The analysis also makes it possible to identify the situations in which modes are likely to give rise to errors and those in which they will not. Various measures for preventing mode errors are suggested. One of these is to signal mode by generating sounds contingent on the user's actions. The experimental work presented shows that this mode-dependent keying-contingent sound can be an effective way of making users aware of mode changes: mode errors were reduced to a third of the number observed with a control group.
A Review and Synthesis of Recent Research in Intelligent Computer-Assisted Instruction BIBA 329-353
  Christopher Dede
Educational devices incorporating artificial intelligence (AI) would "understand" what, whom and how they were teaching and could therefore tailor content and method to the needs of an individual learner without being limited to a repertoire of prespecified responses (as are conventional computer-assisted instruction systems). This article summarizes and synthesizes some of the most important research in the development of stand-alone intelligent computer-assisted instruction (ICAI) systems; a review of passive AI-based educational tools (e.g. microworlds, "idea processors", empowering environments) would require a separate discussion. ICAI tutors and coaches have four major components: a knowledge base, a student model, a pedagogical module and a user interface. Major current themes of research in the knowledge base include studies of expert cognition, the transfer of meaning, and the sequencing of content. Student-modelling issues focus on alternative ways to represent a pupil's knowledge, errors and learning. Pedagogical strategies used by ICAI devices range over presenting increasingly complex concepts or problems, simulating phenomena, Socratic tutoring with correction of pupil misconceptions and modelling of expert problem solving via coaching; the central theme of research is finding overarching paradigms for explanation. Language comprehension and generation topics which have special relevance to intelligent tutors and coaches are also briefly reviewed. Overall, increasing availability, decreasing cost and growing commercial interest in AI-based educational devices are enhancing the development of ICAI systems. Limits on the sophistication of user interfaces, on the scope of subject domains and on current understanding of individual learning are all constraining the effectiveness of computer tutors and coaches. The explicitness required for constructing intelligent devices makes their evolution more difficult and time consuming, but enriches the theoretical perspective which emerges. In brief, the computational and economic enabling of ICAI is proceeding more rapidly than are its empirical and cognitive foundations, but significant overall progress is being made.
An Empirical Comparison of Model-Based and Explicit Communication for Dynamic Human-Computer Task Allocation BIBA 355-363
  Joel S. Greenstein; Lynn Y. Arnaut; Mark E. Revesman
When both a human and a computer in a system are capable of performing the same tasks, task responsibilities may be allocated between them dynamically. This study compared two methods of human-computer communication for dynamic task allocation: explicit and model-based communication. With explicit communication the human directed the computer and the computer did not perform any actions on its own. With model-based communication the computer employed a model of the human which predicted the human's actions and the computer used this model to work on its own. Subjects performed a process monitoring task using both of these allocation methods. In addition, in half the trials subjects had knowledge of the computer's actions and in the other half they did not. The results indicated that overall system performance was always better under model-based communication, although human performance alone was better with explicit communication. In addition, overall system and human performance were higher when the human had knowledge of the computer's actions.
An Experimental Evaluation of Prefix and Postfix Notation in Command Language Syntax BIBA 365-374
  Joan M. Cherry
Commands in a command language usually consist of a verb plus object(s). A designer must decide the word order in which these elements appear in the command string. Many existing command languages use the verb-object format, often referred to as prefix notation; another feasible syntax is object-verb, referred to as postfix notation. The objective of this study was to determine whether one of these notations facilitates user performance. Two command languages for a text editor were designed, one based on the natural word order of English (verb-object) and the other on the reverse word order (object-verb). Sixty subjects, all native speakers of English, were blocked into three groups: novices, experienced subjects who normally used a line editor, and experienced subjects who normally used a screen editor. Subjects were randomly assigned to one of the command languages. They learned the command language by self-instruction and were then given 20 min to perform a manuscript editing task, which was deliberately made too long for any subject to finish. The measures of performance used were percentage of task completed, percentage of erroneous commands, and editing efficiency. A two-way ANOVA was performed on each of the three dependent variables. Contrary to intuitive expectations, there were no significant differences in performance between subjects who used the prefix command language and those who used the postfix command language. The novices differed significantly from the experienced subjects on percentage of task completed and percentage of erroneous commands, as expected; however, there was no significant difference between novices and experienced subjects on the measure of editing efficiency. Variables correlated with percentage of task completed strongly enough to be useful as covariates were typing speed (all subjects) and contact hours with text editors (experienced subjects).
On-Line Recognition of Pitman's Handwritten Shorthand -- An Evaluation of Potential BIBA 375-393
  C. G. Leedham; A. C. Downton
This paper describes a number of evaluation experiments designed to establish the potential of Pitman's handwritten shorthand as an input medium for computer transcription to text. Such a system would have applications in verbatim reporting, the electronic office, and as an aid for the deaf. The experiments compare the performance of a proposed computer transcription system for shorthand (described previously in this journal) with the benchmark performance obtained using human transcription. In addition, measurements on typical Pitman shorthand data are used to estimate potential performance limits. It is concluded that the poor overall performance of the proposed computer transcription system is due to a combination of three factors: first, the simplified nature of the recognition algorithms used compared with the knowledge-based techniques used by human shorthand readers; secondly, ergonomic deficiencies of the data input devices used; and finally, the writers' lack of familiarity with the system and its capabilities. A strategy for improving the performance of the system by attention to the first two of these deficiencies is proposed.
The Use of Q-Analysis and Q-Factor Weightings to Derive Clinical Psychiatric Syndromes BIBA 395-407
  P. N. Cowley
The application of q-analysis to define clinical syndromes is described. 139 psychiatric inpatients were rated on a 65-item symptom checklist using a 0-3 scale. The patients and their symptom ratings formed a matrix, which was sliced at greater than one to transform it into a binary matrix composed of the most significant and persistent symptoms. Standard q-analysis applied to this matrix demonstrated the predominance of q-connected depressive symptoms.
   The method of applying a weighted relation on the shared-face matrix, based on the mean connectivity of each simplex, is described. The various subcomponents, derived by different q-factor weightings, could be seen to represent different clinical syndromes. The emergence of these subcomponents was not fully apparent from the standard q-analytic output. The use of q-factor weightings to explore further the pattern of q-connectivity within a simplicial complex is discussed.
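   The slicing and connectivity computations involved are compact enough to sketch. In the Python fragment below the ratings are invented, not the study's data: slicing at "greater than one" yields the binary incidence matrix, and the shared-face matrix counts, for each pair of patients, the symptoms they share (in q-analysis, two simplices sharing k+1 vertices are k-connected).
```python
import numpy as np

# Invented 0-3 symptom ratings for three patients (not the study's data).
ratings = np.array([
    [2, 3, 0, 1],
    [2, 2, 1, 0],
    [0, 3, 2, 2],
])
B = (ratings > 1).astype(int)   # slice at "greater than one"

shared = B @ B.T                # (i, j): number of symptoms i and j share
q = shared - 1                  # dimension of the shared face
print(B)
print(q)                        # diagonal: each patient's own dimension
```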

IJMMS 1986 Volume 24 Issue 5

Rough Classification of Patients After Highly Selective Vagotomy for Duodenal Ulcer BIBA 413-433
  Zdzislaw Pawlak; Krzysztof Slowinski; Roman Slowinski
The concept of "rough" sets is used to approximate the analysis of an information system describing 77 patients with duodenal ulcer treated by highly selective vagotomy (HSV). The patients are described by 11 attributes. The attributes concern sex, age, duration of disease, complication of duodenal ulcer and various factors of gastric secretion. Two values of sex and age are distinguished, five values of complications and three or four values of secretion attributes, according to norms proposed in this paper. For each patient, the result of treatment by HSV is expressed in the Visick grading which corresponds to four classes. Using the method of rough classification it is shown that the given norms ensure a good classification of patients. Afterwards, some minimum sets of attributes significant for high-quality classification are obtained. Upon analysis of values taken by attributes belonging to these sets a "model" of patients in each class is constructed. This model gives indications for treatment by HSV.
On the Purpose and Analysis of EDP User Systems BIBA 435-452
  Jurgen Pilgrim
In general, behavioral data on the EDP user are not sufficient for planning and organizing highly efficient man-computer interaction. It is therefore proposed to extend the customary user concept and to quantify the influence of the user. Proceeding from the interpretation of the user as a "user system", a method for the valuation of user systems is presented and discussed. The method rests on detecting the indicators of a user system and valuing them according to the level of "complicatedness" attained, from the point of view of the designer or operator of computers or EDP systems. The adequacy of the method has been tested in a practical investigation of program developments in a biomedical research center, based on user and expert inquiries.
Discrimination of Words in a Large Vocabulary Using Phonetic Descriptions BIBA 453-473
  A. Giordana; L. Saitta; P. Laface
This paper analyses the results of several experiments performed with the aim of selecting a suitable representation of words for effective lexical access.
   A large vocabulary comprising the most frequent Italian words has been taken into account.
   Lexical access is performed in a bottom-up phase on the basis of broad phonetic information, in order to reduce the number of vocabulary words that must be verified. In a subsequent top-down phase, the constraints imposed by the phonemic structure of this set of words select and schedule the (context-dependent) sensory procedures most appropriate for performing detailed phoneme verification analyses, in delimited signal intervals, in order to determine, among the candidates, the word actually spoken.
   Access to the lexicon was performed using several different classes of phonetic descriptions of words, ranging from a very rough one to others quite close to the phonemic form, in order to substantiate the relationship between the inaccuracy of the phonetic description and the confusability of the words in the lexicon. Experiments have been performed both to access isolated words and to simulate a model of lexical access in continuous speech. The phonetic descriptions of words were obtained from the orthographic form by means of a set of translation rules, also taking into account the possible degradations that can occur in a real system. The results show that, using a phonetic description which can reasonably be obtained by means of feasible acoustic processors, the number of words to be verified can be reduced, on average, to about 17 for isolated words and to 260 for continuous speech.
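   The bottom-up reduction is easy to illustrate. In the Python sketch below, the broad classes, the letter-to-class table (a crude orthographic stand-in for real phonetic transcription) and the mini-lexicon are all invented: indexing words by a coarse description means a noisy broad-class hypothesis retrieves only a small cohort for detailed top-down verification.
```python
# Invented letter-to-broad-class table (an orthographic stand-in).
BROAD = {"p": "STOP", "t": "STOP", "k": "STOP", "c": "STOP", "b": "STOP",
         "d": "STOP", "s": "FRIC", "f": "FRIC", "v": "FRIC",
         "a": "VOW", "e": "VOW", "i": "VOW", "o": "VOW", "u": "VOW",
         "m": "NAS", "n": "NAS", "r": "LIQ", "l": "LIQ"}

def broad_class(word):
    return tuple(BROAD.get(ch, "OTHER") for ch in word)

lexicon = ["pane", "cane", "dane", "sane", "mane", "lane", "pine", "vino"]
index = {}
for w in lexicon:
    index.setdefault(broad_class(w), []).append(w)

# Bottom-up phase: the coarse description of "pane" retrieves its cohort;
# a top-down phase would then verify each candidate in detail.
print(index[broad_class("pane")])   # ['pane', 'cane', 'dane', 'pine']
```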
A Comparative Analysis of Methods for Expert Systems BIBA 475-499
  Connie Loggia Ramsey; James A. Reggia; Dana S. Nau; Andrew Ferrentino
Given the current widespread interest in expert systems, it is important to examine the relative advantages and disadvantages of the various methods used to build them. In this paper we compare three important approaches to building decision aids implemented as expert systems: Bayesian classification, rule-based deduction, and frame-based abduction. Our critical analysis is based on a survey of previous studies comparing different methods used to build expert systems as well as our own collective experience over the last five years. The relative strengths and weaknesses of the different approaches are analysed, and situations in which each method is easy or difficult to use are identified.

IJMMS 1986 Volume 24 Issue 6

Editorial: Constructing User Interfaces BIB 501
  Brian Gaines
A Three-Level Human-Computer Interface Model BIBA 503-517
  A. A. Clarke
A unified abstract model of the human-computer interface is presented. Examples from the existing literature that support various aspects of the model are offered. Some other models of the human-computer interface are discussed. The model is used to examine an existing workstation interface. The outcome of the examination is a structured series of questions that could form the basis for a future interface requirement. The potential productivity, application, and development of the model are identified.
Icon-Based Human-Computer Interaction BIBA 519-543
  David Gittins
This paper is concerned with the use of icons in human-computer interaction (HCI). Icons are pictographic representations of data or processes within a computer system, which have been used to replace commands and menus as the means by which the computer supports a dialogue with the end-user. They have been applied principally to graphics-based interfaces to operating systems, networks and document-processing software.
   The paper attempts to provide a more systematic treatment of icon interfaces than has hitherto been made, and to create a classification which it is hoped will be of use to the dialogue designer. The characteristics, advantages and disadvantages of icon-based dialogues are described. Metaphors, design alternatives, display structures and implementation factors are discussed, and there is a summary of some icon design guidelines drawn from a variety of sources. Some mention is also made of attempts by researchers to measure the effectiveness of icon designs empirically.
On Methods for Interface Specification and Design BIBA 545-568
  J. N. J. Richards; H. E. Bez; D. T. Gittins; D. J. Cooke
In this paper we analyse a subsystem, MINICON, of the UNICON interface to the UNIX operating system, using two well-known formal methods: Reisner's Formal Grammar and Moran's Command Language Grammar. The contribution each technique is able to make towards a complete specification of interface systems is then identified and discussed.
Fuzzy Prolog BIBA 569-595
  C. J. Hinde
Various methods of representing uncertainty are discussed including some fuzzy methods. Representation and calculation of fuzzy expressions are discussed and a symbolic representation of fuzzy quantities coupled with axiomatic evaluation is proposed. This is incorporated into the PROLOG language to produce a fuzzy version. Apart from enabling imprecise facts and rules to be expressed, a natural method of controlling the search is introduced, making the search tree admissible.
   Formal expression of heuristic information in the same language, FUZZY PROLOG, as the main problem language follows naturally and therefore allows the same executor to evaluate in both "problem" space and "heuristic" space.
   In addition, the use of variable functors in the specification of bidirectional logic is discussed. The paper shows two areas of application of higher order fuzzy predicates. As an introduction Warren's examples are outlined and used with variable functors to illustrate their use in describing some relatively conventional applications.
   Translation of English into Horn clause format is described and is used to illustrate the simplicity of representation using variable functors. Alternative formulations are also explored, typically the use of the "meta-variable" in MICRO-PROLOG and the use of the "univ" operator.
   Representation of rule generation and inference is addressed. Examples are given where the expression of meta-rules in standard PROLOG are compared with the expression of the same rules using "variable" predicate symbols. Some meta-rules illustrated are clearly not universally valid and this leads to the addition of fuzzy tokens.
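   Hinde's symbolic representation and axiomatic evaluation are beyond a short sketch, but the min/max reading commonly given to fuzzy rule evaluation conveys the flavour of computing with imprecise facts and rules. Everything in the Python fragment below (predicates, truth degrees, combination operators) is an illustrative assumption, not the paper's actual scheme.
```python
# Invented fuzzy facts (truth degrees in [0, 1]) and one rule.
facts = {"tall(tom)": 0.7, "strong(tom)": 0.4}
rules = {"athletic(tom)": [["tall(tom)", "strong(tom)"]]}

def truth(goal):
    if goal in facts:
        return facts[goal]
    # Disjunction over alternative rules is max; conjunction within a
    # rule body is min (the usual fuzzy-logic reading).
    return max((min(truth(g) for g in body)
                for body in rules.get(goal, [])), default=0.0)

print(truth("athletic(tom)"))   # 0.4: limited by the weakest condition
```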
Negative Knowledge Toward a Strategy for Asking in Logic Programming BIBA 597-600
  Ernest Edmonds
Rule-based systems that ask the user and that also allow a not operator to be used in the rules have existed for some time; see, for example, the work of Duda, Gaschnig, Hart, Konolige, Reboh, Barrett & Slocum (1978). This paper briefly explores an idea of Edmonds (1984) for bringing together such recent developments within logic programming in order to provide a logic-based system with an integral, automatic strategy for asking. The discussion shows that a simple and natural interpretation of PROLOG can provide a step towards logic-based human-computer co-operation.
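   The asking strategy can be illustrated with a minimal backward chainer that consults the user whenever no rule concludes a goal, treating not by negation as failure. The rule format and dialogue in this Python sketch are invented for the example; the paper works in PROLOG itself.
```python
# Invented rules: a goal maps to alternative bodies (lists of conditions).
rules = {"carry_umbrella": [["raining"]],
         "stay_home":      [["raining", ("not", "urgent")]]}
known = {}

def prove(goal):
    if isinstance(goal, tuple) and goal[0] == "not":
        return not prove(goal[1])              # negation as failure
    if goal in known:
        return known[goal]                     # answers are cached
    for body in rules.get(goal, []):
        if all(prove(g) for g in body):
            return True
    if goal not in rules:                      # askable: nothing concludes it
        known[goal] = input(f"Is '{goal}' true? (y/n) ").strip() == "y"
        return known[goal]
    return False

print(prove("stay_home"))   # asks about 'raining' and 'urgent', then answers
```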
Support for Tentative Design: Incorporating the Screen Image, as a Graphical Object, into PROLOG BIBA 601-609
  Andre Schappo; Ernest A. Edmonds
The design process is a prime exemplar of a creative task in which humans often change their minds. The design process considered is that of creating pictures. It is argued that in order to accommodate tentative design of pictures it is necessary to develop tools that maintain and have access to a complete description of the picture being created. Extensions to PROLOG are proposed that would serve as a basis for the development of such a tool. The functioning of these extensions, which include human-computer interaction rules, is shown to relate to the functioning of the design process.
Automatic Speech Recognition Based on Spectrogram Reading BIBA 611-621
  J. H. Connolly; E. A. Edmonds; J. J. Guzy; S. R. Johnson; A. Woodcock
An approach to the problem of automatic speech recognition based on spectrogram reading is described. First, the process of spectrogram reading by humans is discussed, and experimental findings are presented which confirm that it is possible to learn to carry out such a process with some success. Second, a knowledge-engineering approach to the automation of the linguistic transcription of spectrograms is described and some results are presented. It is concluded that the approach described here offers the promise of progress towards the automatic recognition of multi-speaker continuous speech.
Testing Functional Grammar Placement Rules Using PROLOG BIBA 623-632
  John H. Connolly
This paper is concerned with the testing of grammatical rules by computer. The rules concerned are those which govern the order of nuclear elements in the English clause, formulated in accordance with the principles of functional grammar. The first part of the paper deals with the form and content of these rules. Next, a procedure is described for the testing of these rules by means of a program written in PROLOG. Finally, it is shown how the use of the testing procedure leads to improvements in the formulation of the rules.
Constructing 3-D Object Models Using Multiple Simulated 2.5-D Sketches BIBA 633-644
  A. Sharma; S. A. R. Scrivener
Many applications involve the construction of 3-D object models from which images, often requiring a high degree of realism, are later produced. Constructing such models frequently involves considerable human intervention, even in cases where a physical model or the actual object to be modelled exists. This paper describes an approach to the automatic construction of 3-D object models using images of scenes. This method employs a representation of the visible surfaces in a scene called the 2.5-D sketch and a model construction process is described that utilizes multiple simulated 2.5-D sketches.
Studying Depth Cues in a Three-Dimensional Computer Graphics Workstation BIBA 645-657
  J. D. Waldern; A. Humrich; L. Cochrane
A Three-Dimensional Interactive Graphics Workstation has been constructed within the Human-Computer Interface Research Unit. The principal accomplishment of this workstation has been to provide a tool that enables a user to interact with a computer-generated image perceived in three dimensions. This image is perceived by workstation users to exist in free space forward of a VDU screen. A pilot experiment has been conducted in which subjects interacted both with a simple model of a cube and with a three-dimensional computer-generated representation of the cube. The results indicate a significant positive correlation between performance using the cube model and performance using the 3-D representation.
A Multi-Purpose System for Alpha-Numeric Input to Computers via a Reduced Keyboard BIBA 659-667
  M. Roberts; H. Rahbari
A software package (CIPHERWRITER) is described which functions in a variety of ways to permit alpha-numeric input using a substantially restricted subset of the keys available on a conventional keyboard. It is presented in the joint contexts of background research, which suggests that such a device could conceivably benefit computer-naive personnel, and of certain specific requirements of the physically disabled.