
International Journal of Man-Machine Studies 11

Editors: B. R. Gaines; D. R. Hill
Dates: 1979
Volume: 11
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 44
Links: Table of Contents
  1. IJMMS 1979 Volume 11 Issue 1
  2. IJMMS 1979 Volume 11 Issue 2
  3. IJMMS 1979 Volume 11 Issue 3
  4. IJMMS 1979 Volume 11 Issue 4
  5. IJMMS 1979 Volume 11 Issue 5
  6. IJMMS 1979 Volume 11 Issue 6

IJMMS 1979 Volume 11 Issue 1

Editorial: Intelligent Tutoring Systems BIB 1-3
  D. H. Sleeman; J. S. Brown
An Investigation of Computer Coaching for Informal Learning Activities BIBA 5-24
  Richard R. Burton; John Seely Brown
Computer-based tutoring/coaching systems have the promise of enhancing the educational value of gaming environments by guiding a student's discovery learning. This paper provides an in-depth view of (i) the philosophy behind such systems, (ii) the kinds of diagnostic modeling strategies required to infer a student's shortcomings from observing his behavior and (iii) the range of explicit tutorial strategies needed for directing the Tutor to say the right thing at the right time. Examples of these issues are drawn from a computer-based coaching system for a simple game -- How the West was Won. Our intention in writing this paper is to make explicit the vast amounts of tutorial knowledge required to construct a coaching system that is robust, friendly and intelligent enough to survive in home or classroom use. During the past three years, we have witnessed how subtle the computer-based coaching problem really is. We hope this paper conveys some of these subtleties -- many of which continue to resist general solution.
Tutoring Rules for Guiding a Case Method Dialogue BIBA 25-49
  William J. Clancey
The first version of an "intelligent computer-aided instruction" program built on MYCIN-like expert systems has been implemented. This program, named GUIDON, is a case method tutor in which the problem-solving and tutorial dialogue capabilities are distinct. The expertise to be taught is provided by a rule-based consultation program. The dialogue capabilities constitute teaching expertise for helping a student solve a case.
   In this paper we describe the rule-based formalism used by MYCIN-like programs, and then argue that these programs are not sufficient in themselves as teaching tools. We have chosen to develop a mixed-initiative tutor that plays an active role in choosing knowledge to present to a student, based on his competence and interests. Furthermore, we argue that it is desirable to augment the domain expertise of MYCIN-like programs with other levels of domain knowledge that help explain and organize the domain rules. Finally, we claim that it is desirable to represent teaching expertise explicitly, using a flexible framework that makes it possible to easily modify tutorial strategies and communicate them to other researchers.
   The design of the GUIDON program is based on natural language studies of discourse in AI. In particular, our framework integrates domain expertise into tutorial dialogues via explicit, modular tutoring rules that are controlled by a communication model. This model is based on consideration of the student's knowledge and interests, as well as the tutor's plans for the case session. This paper discusses interesting examples of tutoring rules for guiding discussion of a topic and responding to a student's hypothesis based on the evidence he has collected.
The Genetic Graph: A Representation for the Evolution of Procedural Knowledge BIBAK 51-77
  Ira P. Goldstein
I shall describe a model of the evolution of rule-structured knowledge that serves as a cornerstone of our development of computer-based coaches. The key idea is a graph structure whose nodes represent rules, and whose links represent various evolutionary relationships such as generalization, correction, and refinement. I shall define this graph and describe a student simulation testbed which we are using to analyze different genetic graph formulations of the reasoning skills required to play an elementary mathematical game.
Keywords: Information processing psychology, Learning, Knowledge representation, CAI, ICAI, AI
A Structured Planning and Debugging Environment for Elementary Programming BIBA 79-95
  Mark L. Miller
How could an appropriately structured environment facilitate the acquisition of programming skills? Significant theoretical strides are needed before human-quality performance can be expected from a computer-based programming tutor. As an intermediate step, a system has been implemented which serves primarily as an editing language and diligent clerk. However, it differs from conventional programming environments in two crucial ways: (1) it interacts with the student using a vocabulary of concepts about planning and debugging, derived from an explicit model of the design process; and (2) it actively prompts the student with a menu of design alternatives, within the overall framework of a mixed-initiative dialogue. The current system is not a tutor; but the process of implementing and testing it has been instrumental in refining our model of the design process, thereby bringing us a step closer to realizing a computer-based programming tutor.
A Self-Improving Quadratic Tutor BIBA 97-124
  Tim O'Shea
A self-improving quadratic tutor comprising two principal components is described. One component is an adaptive teaching program where the teaching strategy is expressed as a set of production rules. The second component performs the self-improving function of the system by making experimental changes to the set of production rules. This component employs a deduction procedure which operates on a theory of instruction expressed as a set of modally qualified assertions. These assertions relate educational objectives to modifications which can be made to the teaching strategy. The cycle of operations proposed for the system is as follows -- select an educational objective, make an experimental change in teaching strategy, statistically evaluate the resulting performance, and update both the set of production rules and set of assertions.
   The tutor taught the solution of quadratic equations by the discovery method. The tutor was used by 51 students, and made five experimental changes to its teaching strategy. This trial demonstrated that it was capable of improving its performance as a result of experimentation. Its limitations include a vulnerability to problems of local optima during "hill-climbing" and to a variant of the frame problem.
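   The cycle described in the abstract can be sketched roughly as follows (an illustrative outline only; the function names and the greedy accept/reject step are assumptions, not details of O'Shea's system):

    # Rough sketch of the self-improvement cycle summarized above.
    # `evaluate` stands for the statistical evaluation of student performance and
    # `propose_change` for an experimental modification suggested by the modally
    # qualified assertions; both are placeholders supplied by the caller.
    import random

    def improvement_cycle(rules, objectives, evaluate, propose_change, trials=5):
        for _ in range(trials):
            objective = random.choice(objectives)         # select an educational objective
            candidate = propose_change(rules, objective)  # experimental change to the strategy
            if evaluate(candidate, objective) > evaluate(rules, objective):
                rules = candidate                         # keep only changes that improve performance
        return rules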
ACE: A System which Analyses Complex Explanations BIBA 125-144
  D. H. Sleeman; R. J. Hendley
This paper discusses a Problem Solving Monitor which has been implemented to provide a supportive environment for students solving a non-deterministic task, the interpretation of nuclear magnetic resonance spectra. In particular, this paper discusses the facility which allows the student to give an explanation in Natural Language and which comments on this. The explanations considered here are complex as they involve a series of arguments, which in turn consist of a series of facts and a deduction. The protocols which were collected from various student problem solving sessions are analysed in some detail and the inconsistent and incomplete nature of the dialogues is stressed. A system which is able to cope with these deficient dialogues is presented.
Note: Errata on this
Misconceptions in Student's Understanding BIBA 145-156
  Albert Stevens; Allan Collins; Sarah E. Goldin
Tutorial dialogues can be analyzed as an interaction in which a tutor "debugs" a student's knowledge representation by diagnosing and correcting conceptual misunderstandings. In this paper, we outline some tentative steps toward a theory which describes tutorial interactions. We outline the goal structure of a tutor, describe types of conceptual bugs that students have in their understanding of physical processes and discuss some of the representational viewpoints necessary to diagnose and correct these bugs.

IJMMS 1979 Volume 11 Issue 2

A Display Oriented Programmer's Assistant BIBA 157-187
  Warren Teitelman
This paper continues and extends previous work by the author in developing systems which provide the user with various forms of explicit and implicit assistance, and in general co-operate with the user in the development of his programs. The system described in this paper makes extensive use of a bit map display and pointing device (a mouse) to significantly enrich the user's interactions with the system, and to provide capabilities not possible with terminals that essentially emulate hard copy devices. For example, any text that is displayed on the screen can be pointed at and treated as input, exactly as though it were typed, i.e. the user can say "use this expression" or "that value", and then simply point. The user views his programming environment through a collection of display windows, each of which corresponds to a different task or context. The user can manipulate the windows, or the contents of a particular window, by a combination of keyboard inputs or pointing operations. The technique of using different windows for different tasks makes it easy for the user to manage several simultaneous tasks and contexts, e.g. defining programs, testing programs, editing, asking the system for assistance, sending and receiving messages, etc. and to switch back and forth between these tasks at his convenience.
A Measurement-Informational Discussion of Fuzzy Union and Intersection BIBA 189-200
  Ronald R. Yager
The question of obtaining fuzzy membership grades is discussed. It is then shown that Zadeh's max and min operations are the only possible extensions of the classic union and intersection operations which are meaningful in the face of ordinal information on degrees of membership. A discussion of operations that are meaningful for ratio-type information is also presented.
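   As a point of reference (an illustrative note, not part of the abstract), Zadeh's operations amount to taking the pointwise maximum and minimum of the membership grades; a minimal sketch over a small discrete universe:

    # Zadeh's fuzzy union and intersection as pointwise max and min of membership grades.
    def fuzzy_union(mu_a, mu_b):
        return {x: max(mu_a[x], mu_b[x]) for x in mu_a}

    def fuzzy_intersection(mu_a, mu_b):
        return {x: min(mu_a[x], mu_b[x]) for x in mu_a}

    A = {"x1": 0.2, "x2": 0.8}
    B = {"x1": 0.5, "x2": 0.4}
    print(fuzzy_union(A, B))         # {'x1': 0.5, 'x2': 0.8}
    print(fuzzy_intersection(A, B))  # {'x1': 0.2, 'x2': 0.4}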
A Bottom-Up and Top-Down Approach to Using Context in Text Recognition BIBA 201-212
  Rajjan Shinghal; Godfried T. Toussaint
Existing approaches to using contextual information in text recognition tend to fall into two categories: dictionary look-up methods and Markov methods. Markov methods use transition probabilities between letters and represent a bottom-up approach to using context which is characterized by being very efficient but exhibiting mediocre error-correcting capability. Dictionary look-up methods, on the other hand, constrain the choice of letter sequences to be legal words and represent a top-down approach characterized by impressive error-correcting capabilities at a stiff price in storage and computation. In this paper, a combined bottom-up top-down algorithm is proposed. Exhaustive experimentation shows that the algorithm achieves the error-correcting capability of the dictionary look-up methods at half the cost.
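   A toy illustration of the combination (the probabilities, dictionary and scoring below are invented, not the authors' algorithm): bigram transition probabilities supply the bottom-up ranking, while a dictionary supplies the top-down constraint that candidates be legal words:

    # Toy combination of the two approaches: a Markov (bigram) score ranks candidate
    # corrections; a dictionary restricts them to legal words. All values are invented.
    bigram_logp = {("t", "h"): -0.5, ("h", "e"): -0.4, ("t", "n"): -4.0, ("n", "e"): -2.0}
    dictionary = {"the", "tne"}  # both admitted here purely to show the ranking step

    def score(word):
        return sum(bigram_logp.get(pair, -6.0) for pair in zip(word, word[1:]))

    def correct(candidates):
        legal = [w for w in candidates if w in dictionary]   # top-down constraint
        return max(legal, key=score) if legal else None      # bottom-up ranking

    print(correct(["the", "tne"]))  # -> "the"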
New Ideas in Decision Theory BIBA 213-234
  Maria Nowakowska
Three different decision models are presented: an analysis of risk-taking behaviour which encompasses both the subjective expected utility model and Atkinson's motivation model; a model concerned with the social behaviour of the interaction between individual and group actions which leads to a new class of decision models capable of accounting for such phenomena as the intransitivity of preferences; and a model of the formal structure of cognitive problems involved in choice based on a linguistic representation of motivation. Finally, an outline is presented of the application of formal action theory to pre-decisional situations.
When Do Diagrams Make Good Computer Languages? BIBA 235-261
  M. Fitter; T. R. G. Green
It is obvious that some diagrammatic notations are better than others, less obvious why. We list some requirements for a good notation with examples and empirical findings. Central requirements are to give the user the useful information (relevance) using a clear perceptual code for the underlying processes (representation); moreover the notation should restrict the writer to "good" structures. Important information in symbolic codes should be redundantly recoded in a perceptual code as well. Unfortunately these principles, especially the last, tend to make extra work if the diagram has to be modified, conflicting with the requirement of revisability unless software aids can be devised. Notation designers cannot turn to behavioural science for detailed guidance, but they could well make more use of empirical evaluations than at present.
Models of the Process Operator BIBA 263-284
  I. G. Umbers
The process control literature is reviewed for evidence on the following aspects of the process operator: characteristics of human control behaviour, development of process control skills, individual differences between process operators, task factors that affect performance, and the organization of operator control behaviour. The various theoretical constructs which have been proposed to model these aspects of operator behaviour are described and discussed. Since the majority of the models are based on an analysis of verbal report data, a discussion of some of the methodological problems in using verbal protocols is presented. The review concludes that an information processing approach based on protocol data seems to be the most fruitful technique for modelling the human process controller.

Book Review

"Advances in Computer Chess, Volume 1," edited by M. R. B. Clarke BIB 285-286
  S. Soule

IJMMS 1979 Volume 11 Issue 3

Experiments Concerned with Reading "Newscaster"-Style Displays BIBA 287-300
  A. F. Newell; P. J. Brumfit
A communication aid for the speech impaired has been designed. This used a short "newscaster"-style display and was operated by a keyboard. This paper describes the reading tests which were used to obtain design information for the display. The readability of various types of "newscaster" display was compared. Displays with lengths varying from one to twelve displayed letters were assessed; in addition comparisons were made between "rolling" and "walking" displays. The relationships between readability and display length were obtained, and it was also found that a rolling display could be more easily read than a walking display. In addition the results showed that a display length of only five characters gave adequate readability. This display length was thought to be a good balance between cost and readability for a communication aid of this type. A number of communication aids based on this design have been built and are being successfully used by speech impaired people.
Inducing Explanations for Errors in Computer-Assisted Instruction BIBA 301-324
  M. A. Jones; F. D. Tuggle
This paper categorizes error recognition schemes in Computer-Assisted Instruction (CAI) programs and introduces a top-down method of error detection, classification and treatment. Particular attention is given to CAI drill programs in which the set of incorrect responses given by the student is systematically derivable through well-defined error strategies. Since this set of "reasonable" incorrect responses may potentially be much larger than possible in a multiple choice format, relatively free-form answers must be allowed. The top-down method is demonstrated in a CAI program (HELPERR -- HELP with ERRors) to provide specific diagnoses of errors in arithmetic drill exercises in addition, subtraction, multiplication and division. In limited program testing of HELPERR utilizing student protocols, 81% of the observed systematic errors were successfully identified and an overall success rate of 72% was obtained.
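   One well-known example of the kind of systematic error such a diagnosis targets (illustrative only, not HELPERR's actual rule set) is the "smaller-from-larger" subtraction bug, in which the student subtracts the smaller digit from the larger in every column instead of borrowing:

    # Illustrative buggy procedure: subtract the smaller digit from the larger in each
    # column, ignoring borrowing. Matching a student's wrong answers against such buggy
    # procedures conveys the general flavour of top-down error diagnosis.
    def buggy_subtract(a, b):
        width = max(len(str(a)), len(str(b)))
        a_digits, b_digits = str(a).zfill(width), str(b).zfill(width)
        return int("".join(str(abs(int(x) - int(y))) for x, y in zip(a_digits, b_digits)))

    print(buggy_subtract(52, 38))  # 26, rather than the correct 14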
A Top-Down Language Analyzer BIBA 325-338
  Jeffrey W. Smith; Alan L. Tharp
Parsing is the process which matches a given input against prestored possibilities (a grammar) to discover the structure of acceptable candidates. It is relevant to both natural and artificial languages -- both English and command or query languages. Although parsing is necessary to computer processing of all languages, and the parser described in this paper (PARSYN) is a general-purpose top-down parser, the focus of this paper is its applicability as a computationally efficient processor of natural language. In natural language, a grammar is taken to cover a subset of the language; parsing proceeds both top-down looking for constituents and bottom-up replacing the words by syntactic categories for processing. Since the procedural formulation is equivalent to an ATN, complete computational coverage of all natural language parsers can be obtained with template processing and an ATN. PARSYN is a computational accelerator for parsing which provides this coverage. The algorithm of PARSYN is a Recursive Transition Network (RTN) with interrupt capability; the combination can serve as an ATN or template processor. Yet, PARSYN is simple enough to be attached to a microprocessor based English-language processing system (µBE, "microBE", for micro-processor-Based English). It is flexible enough to provide pattern-matching, template processing and parsing all with the same formulation, bringing these facilities to the micro-system level efficiently and economically.
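   A minimal recursive transition network recognizer conveys the flavour of the formalism (the grammar here is invented and the interrupt capability is omitted; this is not PARSYN itself):

    # Minimal RTN recognizer: each non-terminal names a small network whose arcs are
    # either word categories (terminals) or calls to other networks (recursion).
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["det", "noun"], ["noun"]],
        "VP": [["verb", "NP"], ["verb"]],
    }

    def parse(symbol, words, pos):
        """Return the set of input positions reachable after recognizing `symbol`."""
        if symbol not in GRAMMAR:                 # terminal: match a word category
            return {pos + 1} if pos < len(words) and words[pos][1] == symbol else set()
        results = set()
        for production in GRAMMAR[symbol]:        # try each arc sequence in the network
            positions = {pos}
            for sub in production:
                positions = {q for p in positions for q in parse(sub, words, p)}
            results |= positions
        return results

    tagged = [("the", "det"), ("dog", "noun"), ("chased", "verb"), ("cats", "noun")]
    print(len(tagged) in parse("S", tagged, 0))   # True: the sentence is accepted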
Towards More "Natural" Interactive Systems BIBA 339-350
  M. Fitter
It is argued that to obtain the maximum benefit from interactive computer systems, principles of program and dialogue design are needed. It is unlikely that natural languages such as English will provide a suitable basis for designing a man-computer dialogue. The objective should be to model the task domain in a way that will be comprehensible to the user and to provide an explicit "image" of the "underlying processes". Design principles are discussed both for general purpose programming languages and "bespoke" languages intended as tools for a specific purpose. It is concluded that AI research may eventually provide intelligent tools with the inferential powers necessary for a genuine dialogue, but that for the time being it is better to make the underlying mechanisms and their limitations as explicit as possible.
A Model of Fuzzy Reasoning Through Multi-Valued Logic and Set Theory BIBA 351-380
  J. F. Baldwin; B. W. Pilsworth
Various interpretations of conditional propositions are considered, which include relational definitions using the Lukasiewicz logical implication rule and Zadeh's maximin rule. Theorems are presented which describe the relationship between the interpretations.
   An example of reasoning in ordinary set theory is presented as a special case of the method used for approximate reasoning with fuzzy propositions.
   Models of reasoning from multiple conditional propositions of high dimensional state are constructed and theorems for reducing dimensionality are presented. Problems of dimensionality using the Lukasiewicz implication rule are discussed and an alternative method based on fuzzy logic is indicated briefly.
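   For orientation (an illustrative note, not part of the abstract), the two implication rules mentioned above act pointwise on truth values a, b in [0, 1]; the formulations usually cited in the fuzzy-reasoning literature are:

    # Lukasiewicz implication and Zadeh's maximin rule, shown only to fix notation.
    def lukasiewicz_implication(a, b):
        return min(1.0, 1.0 - a + b)

    def maximin_implication(a, b):
        return max(1.0 - a, min(a, b))

    print(lukasiewicz_implication(0.7, 0.4))  # 0.7
    print(maximin_implication(0.7, 0.4))      # 0.4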
Fuzzy Logic and Approximate Reasoning for Mixed Input Arguments BIBA 381-396
  J. F. Baldwin
In this paper we extend the method of approximate reasoning based upon fuzzy logic as proposed by Baldwin (1978) to arguments of a more complex nature, namely those with mixed inputs. Two approaches are given, both of which have their analogies in ordinary two valued logic.
On the Satisfaction of a Fuzzy Relation by a Set of Inputs BIBA 397-404
  J. F. Baldwin; N. C. F. Guild
Some applications of Fuzzy Set theory require an indication of how well a set of fuzzy sets satisfies a relation, where the relation represents a connection between the spaces on which the fuzzy sets are defined, perhaps in the form of a hypothesis. Two measures of such satisfaction, one a scalar and one a function, are compared in this paper, and the derivation of the scalar from the function is given. The relevance of these measures to applications, approximate reasoning in particular, is indicated.

IJMMS 1979 Volume 11 Issue 4

Editorial BIB 405-406
  E. H. Mamdani
Report on the Panel Session at the Workshop on Fuzzy Reasoning BIB 407-409
  B. S. Sembi; N. J. C. Baker; J. Efstathiou
Verbal Reports as Evidence of the Process Operator's Knowledge BIBA 411-436
  Lisanne Bainbridge
Verbal reports are usually collected with the aim of understanding mental behaviour. As it is not possible to observe mental behaviour directly we cannot test for a correlation between report and behaviour, and cannot assume one. Verbal data cannot therefore be used to test theories of mental behaviour. Verbal data may be produced by a separate report generating process which may give a distorted account. The data can be useful for practical purposes if these distortions are minimal. This paper attempts to assess the conditions in which this is the case. Several methods of obtaining verbal reports are surveyed: system state/action state diagram, questionnaire, interview, static simulation and verbal protocol. Techniques for collecting and analysing the data are described. In each case the small amount of data available on the correlation between reports and observed behaviour is reviewed. The results are not clear. Some verbal data are evidently misleading. Others, however, are sufficiently good to encourage the search for more information about factors affecting their validity.
Do We Need "Fuzzy Logic"? BIBA 437-445
  Susan Haack
This paper gives a critical appraisal of "fuzzy logic" from the viewpoint of a logician and concludes that no acceptable case has been made for the need for it.
Fuzzy Truth Definition of Possibility Measure for Decision Classification BIBAK 447-463
  J. F. Baldwin; B. W. Pilsworth
A new definition of possibility measure is presented which is calculated on truth space and is shown to be equivalent to Zadeh's original definition. This alternative formulation is shown to be the more natural in the context of decision classification because it clearly demonstrates the need to determine, in a selection criterion, both the possibility of a category and the possibility of its negation.
   A number of useful possibility theorems are presented and their application to decision classification is demonstrated in a simplistic medical diagnosis problem, which also employs entropy measure as an additional parameter.
   The truth space formulation of possibility measure is shown to be of further value in problems of high dimensional state and an important possibility theorem relating to such problems is presented.
Keywords: Approximate reasoning, Decision classification, Entropy, Fuzzy logic, Fuzzy sets, Medical diagnosis, Possibility measure
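   For context (an illustrative note, not drawn from the paper), Zadeh's original possibility measure of a fuzzy event A under a possibility distribution pi is the sup-min combination, which on a finite universe reduces to a max over min:

    # Poss(A) = max over x of min(mu_A(x), pi(x)), on a small discrete universe.
    def possibility(mu_a, pi):
        return max(min(mu_a[x], pi[x]) for x in mu_a)

    mu_A = {"x1": 0.3, "x2": 0.9, "x3": 0.6}
    pi   = {"x1": 1.0, "x2": 0.5, "x3": 0.7}
    print(possibility(mu_A, pi))  # 0.6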
Fuzzy Logic and Fuzzy Reasoning BIBA 465-480
  J. F. Baldwin
The concepts of truth value restriction and fuzzy logical relation are used to give a general approach to fuzzy logic and also fuzzy reasoning involving propositions with imprecise or vague description.
Logical Foundations for Database Systems BIBA 481-500
  Brian R. Gaines
Database systems originated as mechanisms for storing and retrieving information. Codd's relational formalism generalized and made far more flexible the forms of data structure and retrieval specification allowed. However, available implementations of relational databases are in terms of hard, static, deterministic relations; whereas in real-world applications data is often imprecise, inherently dynamic and non-deterministic. In recent years there has been a range of developments concerned with representing and using data that can only be represented in these "softer" terms. Some of the work has been explicitly concerned with database systems, but much of it, whilst highly relevant, has been in other application areas.
   This paper classifies and surveys work on a variety of logical systems in the context of its relevance to database systems. The objective is to show through illustrations what can be incorporated into relational database systems to allow for a wider range of real-world requirements and closer man-computer interaction. The current state-of-the-art in natural language interaction with databases is illustrated and discussed. The possibility of paradoxes leading to oscillations in database states is demonstrated. The roles of modal, multi-valued and fuzzy logics in databases are described and discussed.
A General Approach to Linguistic Approximation BIBA 501-519
  F. Eshragh; E. H. Mamdani
This paper describes a technique by which a fuzzy subset can be linguistically labelled. The technique involves the separation of a given fuzzy set into a certain number of specific subsets. The labelling is based on assignment of labels to these specific subsets and their concatenation with connectives "AND" and "OR".
   The technique allows the user to specify, up to a certain number, his own primary subsets and their respective names. The input subset is also freely specified and properties like normality of input subsets do not constitute any constraint.
On Fuzzy Sets, Subjective Measurements and Utility BIBA 521-545
  Ole I. Franksen
Fuzzy reasoning is founded on subjective measurements specified as grades of membership of property categories called fuzzy sets. These membership gradings, it is assumed, may be expressed numerically by functions or corresponding discrete representations the values of which submit to the conventional arithmetic operations. This paper raises the question as to the empirical justification of these assumptions. That is, what empirical support can be established for this approach, considering the properties of subjective measurements in psychophysics and those of utility in modern microeconomics or management science? Based on a presentation of the evidence demonstrated in these disciplines, a power function seems to be the tentative form of the membership gradings of fuzzy sets representing a large variety of psychophysical continua and the corporate utility under risk. However, practically no empirical evidence was found to support the submission of such power function representations to arithmetic operations. Hence, there is an urgent need to establish more empirical facts on the assignments of subjective membership gradings and, in particular, the combinations of such gradings.

IJMMS 1979 Volume 11 Issue 5

Reflective Analysis BIBA 547-584
  P. J. Boxer
The paper describes a method of computer assisted reflective learning capable of being used by managers. The method enables managers to explore the value of their past experience in relation to a particular problem context; to consider how their own experience relates to that of other managers; and finally to create design criteria for strategic options within a problem context capable of commanding a consensus between the managers. The paper concludes that the method represents a new departure in the use of computers for supporting strategic management.
A Microcomputer-Based Speech Synthesis-by-Rule System BIBA 585-620
  I. H. Witten; J. Abbess
This paper describes a system for generating speech by rule from a phonetic representation, using a resonance analogue speech synthesizer driven by parameters which are computed in real time by a microcomputer. Input to it, in the form of a phonetic transcription with additional markers to control rhythm and intonation, can come from a terminal attached to the microcomputer, or from a host computer linked to it by a serial line.
   A novel feature of the implementation is the use of a high-level structured programming language, "C", which is compiled on the host and transmitted to the microcomputer as object code. This allows changes to be made quickly to the segmental and suprasegmental synthesis routines, which are still under development, and combines flexibility with ease of use in man-machine applications requiring speech output. To the host, the microcomputer/synthesizer system is simply a character-oriented low-data-rate output device.
Conversational Heuristics for Eliciting Shared Understanding BIBA 621-634
  Mildred L. G. Shaw
A conversational method is necessary for experimenter and subject to collaborate in the exploration of the world of human beings. Individuals cannot be treated as objects, or be instructed how to take part in an experiment, without the recognition of the autonomy of each person and the invitation to participate jointly in co-operative exploration of the nature of man. An individual can be seen as a personal scientist who forms theories about the world and tests these theories against his personal experience of reality, adapting his theories for a more effective anticipation of events and hence a more competent interaction with his environment.
   A suite of computer programs (PEGASUS, FOCUS, MINUS, CORE, ARGUS and SOCIOGRIDS) has been developed, each one acting as a cybernetic tool to enhance man's capabilities to understand both himself and his relationships with other perspectives of the world. PEGASUS is described, including PEGASUS-BANK which can be used to explore the relationship of an individual with another individual (or group). The CORE program can be used to chart change in a person over time, and to find the level of understanding and agreement between two people. Shared understanding within small groups can be investigated using the SOCIOGRIDS program which produces a mapping of the intra-group relationships, and the subject content which shows the extent of agreement in the group.
   A study involving the exchange of subjective standards in human judgement is briefly described, and an analogy drawn to the understanding of different perspectives in the treatment of a medical or clinical patient.
A Model for the Representation of Pattern-Knowledge for the Endgame in Chess BIBA 635-649
  M. A. Bramer; M. R. B. Clarke
The endgame in chess has proved surprisingly difficult to program satisfactorily, even in the most elementary cases. This paper presents a model aimed at facilitating the construction of simple algorithms based closely on the chessplayer's knowledge of significant patterns of pieces.
   The use of pattern-knowledge is one important aspect of human chess skill; another is the ability to learn from experience.
   A methodology is accordingly given by which algorithms constructed using the model can be improved in the light of their practical performance by a systematic process of iterative refinement, whilst retaining the properties of simplicity, compactness and close correspondence to human pattern-knowledge.
A Methodological Comparison of the Science, Systems and Metasystem Paradigms BIB 651-663
  John P. van Gigch

IJMMS 1979 Volume 11 Issue 6

Editorial: Palantype Shorthand Transcription to Aid the Deaf BIB 665
  A. F. Newell
An Assessment of Palantype Transcription as an Aid for the Deaf BIBA 667-680
  A. C. Downton; A. F. Newell
This paper discusses the development of a simultaneous speech transcription system for the deaf based upon a Palantype shorthand machine. An initial investigation of such a system presented in an earlier paper is summarized and development stemming from this work is detailed. The design considerations of two hard-wired logical transcription systems are discussed and the results of trials of these systems with deaf subjects under practical conditions are presented. Methods of improving the output text quality by the application of dictionary search techniques are also discussed.
   The results of emulations of several possible systems using limited dictionaries are presented and compared with the results obtainable using a very large dictionary. The text quality produced by these emulations is illustrated by transcripts of each emulation produced from a single recording of a text from Hansard. Finally, design criteria for a portable and relatively inexpensive microprocessor-based transcription system which would provide a good quality transcript are specified.
Voice Input to English Text Output BIBA 681-691
  A. W. Booth; M. S. Barnden
It is reasonably well known that mechanical shorthand machines can be used to record verbatim proceedings of conferences, committees and law courts etc. The American system for achieving this is known as Stenotype and the British system is called Palantype. A system has been developed at Leicester Polytechnic, the input to which is the output from mechanical shorthand machines. The shorthand code is transcribed to produce high quality English text output, as long as the machine operator has not made any mistakes. This paper outlines the system at Leicester and describes more fully the method used to transcribe shorthand code into English text. One application for the system, which is presently being investigated by the B.B.C., is the automatic subtitling of television to benefit the deaf. A shorthand machine operator could record verbatim from the sound track of the film, and the English text be transmitted to the television simultaneously. The transcription process runs at speeds of greater than 250 words per minute and is more than adequate to deal with fast speech and the speediest of shorthand machine operators.
Subtitling "Live" Television Programmes for the Hearing Impaired BIB 693-699
  A. F. Newell; P. R. Hutt
The Development and Use of an Electric Keyboard for Television Subtitling by Palantype BIBA 701-710
  W. R. Hawkins; R. N. Robinson
In addition to providing a news and information service, the B.B.C. Ceefax teletext system is an ideal means of transmitting subtitles for deaf viewers without impairing normal pictures. Experiments have recently been carried out in the B.B.C. using Palantype machine shorthand for the rapid transcription of subtitles. For this work, a Palantype keyboard was linked to a computer which translated the phonetic shorthand into English subtitles. Two approaches to the problem of deriving electrical signals from the keyboard were investigated: the modification of an existing Palantype printer, and the development of a new keyboard. Tests have shown that a new keyboard designed to duplicate the profile of the original printer provides a very satisfactory solution.
   Although real-time subtitling suffers from the disadvantages of not allowing editing of the text and imposing a delay in presentation of the subtitle, live demonstrations of the Palantype system have shown its potential. Further work on improving the accuracy of translation of the shorthand, and improved methods of presentation of subtitles on the screen, may produce an economic method of subtitling where verbatim reporting is essential.
A Profoundly Deaf Businessman's Views on the Palantype Speech Transcription System BIBA 711-715
  G. Hayward
The prototype Palantype Speech Transcription System developed at Southampton University is being used full time by a profoundly deaf businessman. In this paper the businessman describes how the equipment has made it possible for him to continue working in a managerial capacity and highlights the benefits and limitations. A comparison is made with other methods tried by the author prior to the availability of the Palantype System.
Perceptual and Memory Factors in Simulated Machine-Aided Speaker Verification BIBA 717-728
  Mark Haggard; Quentin Summerfield
Speaker verification by machine alone may be more accurate than by a human listener, but it is slower and demands powerful programs and peripherals. Simple recording devices can juxtapose a claimant utterance with a stored sample to provide rapid verification by human judgment, but this raises the question of how to optimize the sample size between insufficient information and an overload of auditory memory. To identify the processes at work in such judgements, a simulation was conducted of the situation where a human operator verifies claimant speakers against stored samples of a standard utterance. Realism was incorporated by restricting signals to telephone frequency bandwidth while both control and a stringent level of difficulty were incorporated by the selection of five better-than-average imposters and five more-than-averagely imitable male speakers. Naive, unselected listeners participated. With a 9-syllable sentence lasting about 2 seconds, correct acceptances varied from 92% to 100% and false acceptances from 54% to 21%. Conditions in which the length of the sample was reduced in various ways gave lower performance. The major factor differentiating the performance of individual subjects was a bias factor -- the degree to which "same" responses predominated over "different" responses. Despite this, the different sample conditions tended to produce a fixed percentage of acceptance responses rather than a proportion varying with the available sensitivity in the fashion of an optimal decision-maker. The data justify several conclusions. (1) Listeners can integrate speaker information over periods as long as 2 seconds and probably longer. (2) Improvement in performance can result from increasing the length of either the claimant utterance or the stored sample even when the other cannot be increased. Thus it appears that listeners are extracting and storing parameters characterising the style of a speaker rather than matching a raw sound image. (3) Speaker verification by skilled listeners should be able to reach levels of sensitivity which, in combination with manipulations of the acceptance criterion, would ensure tolerably low false acceptance rates. (4) Training of the listener in speaker verification should involve training of acceptance criteria as well as perceptual discrimination training.
Memory and Decision in Speaker Recognition BIBA 729-742
  Roger Brown
Two categorizations are presented of aspects of the speaker recognition field. The first examines the memory systems involved in experimental tasks and is based on a critical account of the taxonomy proposed by Bricker & Pruzansky (1976). The second deals with the decisions which listeners are required to make in the experimental situation. Finally, the differences between the experimental situation and the real world are examined.