
International Journal of Man-Machine Studies 35

Editors: B. R. Gaines; D. R. Hill
Dates: 1991
Volume: 35
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 55
Links: Table of Contents
  1. IJMMS 1991 Volume 35 Issue 1
  2. IJMMS 1991 Volume 35 Issue 2
  3. IJMMS 1991 Volume 35 Issue 3
  4. IJMMS 1991 Volume 35 Issue 4
  5. IJMMS 1991 Volume 35 Issue 5
  6. IJMMS 1991 Volume 35 Issue 6

IJMMS 1991 Volume 35 Issue 1

A Connectionist and Symbolic Hybrid for Improving Legal Research BIBA 1-33
  Daniel E. Rose; Richard K. Belew
The task of legal research is complex, requiring an understanding of multiple interpretations of legal language, an awareness of the relationships between judicial decisions, and an ability to use analogical reasoning to find relevant documents. Attempts to automate parts of the process with computers have met with limited success. We describe a system called SCALIR which attempts to remedy these problems by combining connectionist and symbolic artificial intelligence approaches. This hybrid representational scheme gives SCALIR the ability to make both associative and deductive inferences. The system also provides an alternative to the traditional view of computer-assisted legal research by using a direct-manipulation style interface, making searches into an interactive process, and by employing user feedback to improve its performance over time. In addition, we suggest a further application of SCALIR to a part of the analogy task, and argue that this approach is complementary with case-based reasoning techniques.
PROLEXS: Creating Law and Order in a Heterogeneous Domain BIBA 35-67
  R. F. Walker; A. Oskamp; J. A. Schrickx; G. J. Van Opdorp; P. H. van den Berg
This article defines a heterogeneous domain as a domain in which typical problems can only be solved by combining several distinctive knowledge sources. The legal domain, in this view, must be considered heterogeneous, since classical rule-based knowledge sources such as legislation cooperate with expertise and case-law, both possibly represented quite differently, to produce useful results. The article continues with the description of the architecture of PROLEXS, an expert system built to operate in such domains. An example dialogue is added to sketch the problems of building knowledge-based systems in heterogeneous domains and the way PROLEXS approaches those problems. Finally, the current PROLEXS research involving neural networks and case-based reasoning is introduced.
Hierarchical Formalizations BIBA 69-93
  Tom Routen; Trevor Bench-Capon
This paper examines the prospects for using logic to represent legislation. This is important since it offers, via the technology of logic programming, a straightforward way of constructing knowledge-based systems in the legal domain. We suggest requirements which an ideal logical representation would satisfy and find that there is an apparent tension between two of them. Specifically, the need to produce a logically correct representation can appear to work against the need to produce a representation which is also easy to validate and maintain. This conflict, and other tensions which the use of logic is seen to create, have led some researchers to advocate the abandonment of logic. However, we argue that the tensions are created by the assumption that a logically correct representation will be one in which no meta-level features are represented. This assumption is encouraged by previous practice but is erroneous. Relaxing the assumption not only permits software engineering considerations to be respected but eases representational difficulties. The penalty is that the resulting formal representations can no longer serve as simple logic programs.
An Abductive Theory of Legal Issues BIBA 95-118
  Thomas F. Gordon
A normative theory of legal issues, argument moves and clear cases is presented in which abduction, rather than deduction, is of central importance. The theory is a refinement of Fiedler's constructive view of legal reasoning. Like legal positivism, the theory elaborates the concept of a clear case, but here clearness is defined relative to a set of competing interpretations of the law, rather than a single consistent set of "valid" rules. A computational model for the theory is described, which uses an ATMS reason maintenance system to implement abduction. Finally, the theory is compared with Anne Gardner's program for spotting issues in offer and acceptance law school examination questions.

IJMMS 1991 Volume 35 Issue 2

A Full-Speed Listening Typewriter Simulation BIBA 119-131
  A. F. Newell; J. L. Arnott; R. Dye; A. Y. Cairns
For automatic speech recognition applications such as a listening typewriter, there is a pressing need to evaluate speech input to machines. Unfortunately, current recognition technology is not adequate for such evaluation, and thus simulation must be used. Some simulations have been performed in which the conversion from speech to orthography was carried out by a typist, but these were restricted by the speed at which the typist could input data. This paper describes a simulation based on a Palantype shorthand machine and a commercially available transcription system. The use of a shorthand machine rather than a QWERTY keyboard means that speech rates can be much greater, and thus the simulation need not impose unrealistic speed limitations on the speaker.
Clinical Assessment and Computerized Testing BIBA 133-150
  Irene Styles
It is well known in traditional testing practice in psychology that recognizing and controlling personal factors such as misunderstandings, anxiety, loss of concentration and lack of motivation are vital in obtaining reliable and valid data. To achieve such data, the importance of having persons trained in psychology as test administrators is recognized, and many tests require a trained, registered psychologist for their administration. With the advent of computerized testing, the emphasis in related research has been on its advantages over traditional testing, especially in efficiency and reliability, with suggestions that the need for a psychologist in testing can be reduced. This paper demonstrates the substantial potential effects of personal factors in computerized testing, and makes the case that, if validity is to be retained, such testing requires a trained psychologist at least as much as traditional testing does. The demonstration is based on the observations of a psychologist who was present throughout the testing of 189 children, ranging in age from 9.5 to 15 years, as they responded individually to a computerized version of Raven's Progressive Matrices on two occasions 6 months apart.
The Use of Grounded Theory for Conceptual Analysis in Knowledge Elicitation BIBA 151-173
  Nick F. Pidgeon; Barry A. Turner; David I. Blockley
In many practical knowledge engineering contexts, interview data is the commonest form in which information is obtained from domain experts. Having obtained interview data, the knowledge engineer is then faced with the difficult task of analysing what is initially relatively unstructured and complex material. It is argued that the knowledge engineer's task of analysing interview data conceptually as part of the knowledge elicitation process is similar to that of the social scientist analysing qualitative data. One implication of this is that a range of methods originally developed by social scientists for the analysis of unstructured and semi-structured qualitative material will be of assistance to the knowledge engineer. The background philosophical issues linking qualitative social science research and knowledge elicitation are outlined; both are characterized as fundamentally creative, interpretative processes. "Grounded Theory", a social science methodology for the systematic generation of conceptual models from qualitative data, is described in detail. An example is presented of the use of Grounded Theory for the analysis of expert interview transcripts, drawn from a knowledge engineering project in civil engineering. The discussion focuses upon the processes used to move from an initial unstructured interview transcript to a core set of interrelated concepts, memos and models that fully describe the data.
Signal Flow Graphs vs Fuzzy Cognitive Maps in Application to Qualitative Circuit Analysis BIBA 175-186
  M. A. Styblinski; B. D. Meyer
The Fuzzy Cognitive Maps (FCM) introduced by Kosko represent a novel way of fuzzy causal knowledge processing, using a net rather than the traditional tree knowledge representation. In this paper, similarities between Fuzzy Cognitive Maps and Signal Flow Graphs are pointed out, and the inference process used in Fuzzy Cognitive Maps is compared and paralleled with a fixed-point iterative solution of the equations describing the Signal Flow Graph. Then, applications to qualitative circuit analysis for a class of feedback amplifiers and active resistive circuits, using a combination of Signal Flow Graph and Fuzzy Cognitive Map concepts, are discussed. Several examples are given.
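The fixed-point reading of FCM inference is easy to sketch. The following minimal Python illustration (the three-concept weight matrix, the tanh squashing function and the clamping of the input concept are our own assumptions, not taken from the paper) iterates the state update until it stops changing:

    # Minimal sketch of Kosko-style FCM inference as fixed-point iteration.
    # Weights, squashing function and concept set are illustrative only.
    import numpy as np

    def fcm_infer(W, state, clamped=None, steps=100, tol=1e-6):
        """Iterate state <- f(W @ state) until a fixed point is reached.
        W[i, j] is the causal weight of concept j on concept i; concepts
        in `clamped` are held fixed, acting as external inputs."""
        clamped = clamped or {}
        for _ in range(steps):
            new_state = np.tanh(W @ state)          # squash combined influence
            for i, value in clamped.items():
                new_state[i] = value                # re-assert external inputs
            if np.max(np.abs(new_state - state)) < tol:
                return new_state                    # fixed point reached
            state = new_state
        return state

    # Three concepts: input signal, gain stage, output signal.
    W = np.array([[0.0, 0.0, 0.0],
                  [0.8, 0.0, -0.4],   # gain driven by input, damped by feedback
                  [0.0, 0.9, 0.0]])   # output driven by the gain stage
    print(fcm_infer(W, np.array([1.0, 0.0, 0.0]), clamped={0: 1.0}))

The parallel drawn in the paper is that the same loop, with linear node equations in place of the squashed update, is exactly a fixed-point solution of a Signal Flow Graph's equations.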
Error in Natural Language Dialogue between Man and Machine BIBA 187-217
  Jean Veronis
A great deal of research has been done in the field of error processing, but the difficulty of the problem and the diverse ways in which it has been approached have made synthesis and methodological reflection necessary at this point. The present article begins by proposing a typology of errors in natural language dialogue between man and machine. The discussion of possible correction strategies that follows shows that the correction of user competence errors should take precedence over the correction of performance errors. The discussion also emphasizes that error prevention, although generally neglected, is a major way to avoid dialogue dead-ends, and should be taken into account in natural-language interface design. It is proposed that if dialogue systems cannot as yet be adapted to users, then they must be transparent and consistent enough so that users can adapt to them. As an illustration, the paper gives an overview of various methods developed from these methodological principles for correcting and preventing errors at the lexical, syntactic and semantic levels.
Emergent Complexity and Person-Machine Systems BIBA 219-234
  Kenyon B. De Greene
The effectiveness of human factors research for design has recently received renewed interest. Human factors and ergonomics may represent, after all, fundamentally different approaches, and ergonomics may be better fitted to provide an emergent new paradigm in keeping with the increased complexity of person-machine systems. Evolutionary ergonomics is therefore introduced as an integrated design/operations concept better fitted to the realities of complex person-machine systems. Several recent failures of complex systems are then briefly reviewed in terms of person-generated contributions to failure that transcend customary design considerations. The study of person-computer interaction is synopsized. Advanced computer technologies, applied to manage complexity, can constrain and channel human cognition. Artificial intelligence and expert systems are reviewed in this context. Recent developments in systems theory further suggest the inadequacy of existing approaches and the fundamental need for a paradigm shift.
Informing HCI Design through Conversation Analysis BIBA 235-250
  M. A. Norman; P. J. Thomas
HCI encompasses a variety of disciplines to provide knowledge of the user and seeks to make this knowledge available to designers in a variety of practical ways. In seeking to enrich the multidisciplinary base of HCI, to provide designers with enhanced knowledge of the user and to facilitate interface design, the authors are currently exploring conversation analysis, a social-scientific approach to the investigation of interaction. Conversation analysis provides a methodology, a set of analytic constructs and a collection of established findings about interaction which will prove important in the investigation and design of human-computer interaction in a way that transcends technological issues. The paper discusses issues concerned with the applicability of conversation analysis to HCI, and provides examples of studies carried out on a corpus of video-recorded human-computer interaction. These examples demonstrate that the findings of conversation analysis can inform the design of interactive systems, and that its methods may be used productively in the investigation of human-computer interaction.
Cluster Analysis as a Technique to Guide Interface Design BIBA 251-265
  Scott Lewis
One difficulty with designing an effective user interface for a hardware or software system is that the designer frequently does not have specific information about the user's model of the system. In this paper, a methodology to address this problem is demonstrated. The method is presented as a tool to help constrain the choices that the interface designer can make in the construction of a usable human-system interface. The method involves the use of a specific cluster analysis technique to characterize the models of a domain held by one or more user groups. I discuss how the specific information gained from this technique may be applied to help guide interface construction.

European Association for Cognitive Ergonomics: News, Reviews and Reports

Editorial BIB 267-273
  T. R. G. Green

European Association for Cognitive Ergonomics: Book Reviews

"Cognitive Ergonomics and Human-Computer Interaction," edited by J. Long and A. Whitefield BIB 267-273
  A. Sasse
"Cognition in Practice," by Jean Lave; "Plans and Situated Actions," by Lucy A. Suchman BIB 267-273
  R. J. Anderson
"Visual Programming," by N. C. Shu BIB 267-273
  T. R. G. Green
"Programming the User Interface. Principles and Examples," by Judith R. Brown and Steve Cunningham BIB 267-273
  David Ackermann
"Text, ConText, and HyperText: Writing with and for the Computer," edited by Edward Barrett BIB 267-273
  D. Raymond

IJMMS 1991 Volume 35 Issue 3

Display-Based Action at the User Interface BIBA 275-289
  Stephen J. Payne
This paper examines the hypothesis that information flow, from device to user, is a vital part of skilled activity in human-computer interaction. Two studies are reported. The first study questions users of keyboard-driven word processors about the effects of cursor-movement, find and word-deletion commands in various contexts. The second study questions users of the Apple Macintosh-based systems, MacWrite and Microsoft Word, about the behaviour of the menu-driven find command. In both studies it is discovered that users often do not know the precise effects of frequently-used actions, such as the final position of the cursor, even though these effects are vital for future planning. It is concluded that even experienced users must acquire the information they need from the device's display during interactions, and that they do not necessarily remember regular details that are available in this way. This conclusion conflicts with those current models of user psychology that assume routine skill relies on complete mental specifications of methods for performing tasks.
Symbolic-Neural Systems and the Use of Hints for Developing Complex Systems BIBA 291-311
  Steven C. Suddarth; Alistair D. C. Holden
Neural network systems can be made to learn faster and generalize better through the addition of knowledge. Two methods are investigated for adding this knowledge: (1) decomposition of networks; and (2) rule-injection hints. Both of these approaches play a role similar to adding rules or defining algorithms in symbolic systems. Analyses explain two important points: (1) an analysis of the effect of learning monotonic functions shows which functions are easy to learn, as well as which functions make effective hints; (2) a set-theoretic and functional-entropy analysis shows for what kinds of systems hints are useful. The approaches have been tested in a variety of settings, and an example application using a lunar lander game is discussed.
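As a rough illustration of the rule-injection idea (our own toy construction, not the authors' code): a hint can be supplied as an auxiliary output trained on a known sub-function, so that the shared hidden layer is pushed toward a useful decomposition of the harder target:

    # Sketch: inject a "hint" as an auxiliary training target. The toy task
    # (a conjunction of two XORs) and the hint (one of the XORs) are invented.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(400, 4)).astype(float)
    target = ((X[:, 0] != X[:, 1]) & (X[:, 2] != X[:, 3])).astype(float)
    hint = (X[:, 0] != X[:, 1]).astype(float)       # easy-to-learn sub-function

    # The hidden layer must serve both outputs, so the hint steers it
    # toward representing the intermediate XOR that the target needs.
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X, np.column_stack([target, hint]))
    print(np.round(net.predict(X[:4]), 2))          # [target, hint] estimates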
Information Relationships in PROLOG Programs: How Do Programmers Comprehend Functionality? BIBA 313-328
  David Bergantz; Johnette Hassell
Within the context of software development, psychological complexity is a measure of the difficulty a programmer experiences when interacting with a program. To date there has been little research to investigate models of program comprehension as a basis for developing psychological complexity metrics. The purpose of this study is to identify information relationships that reflect the organization of programmers' cognitive models during the comprehension of unfamiliar PROLOG programs. An analysis of frequency and temporal ordering of subject protocols provides support for a two-model theory of PROLOG comprehension. During comprehension, programmers construct both a program model based on the detection of data structure relationships and a domain or real-world model based on the detection of function relationships.
Hierarchical Model-Based Diagnosis BIBA 329-362
  Igor Mozetic
Model-based reasoning about a system requires an explicit representation of the system's components and their connections. Diagnosing such a system consists of locating those components whose abnormal behavior accounts for the faulty system behavior. In order to increase the efficiency of model-based diagnosis, we propose a model representation at several levels of detail, and define three refinement (abstraction) operators. We specify formal conditions that have to be satisfied by the hierarchical representation and emphasize that the multi-level scheme is independent of any particular single-level model representation. The hierarchical diagnostic algorithm which we define turns out to be very general. We show that it emulates the bisection method, and can be used for hierarchical constraint satisfaction. We apply the hierarchical modeling principle and diagnostic algorithm to a medium-scale medical problem. The performance of a four-level qualitative model of the heart is compared to other representations in terms of diagnostic efficiency and space requirements. The hierarchical model does not reach the time/space performance of dedicated diagnostic rules, but it speeds up the diagnostic efficiency of a one-level model by a factor of 20.
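The efficiency gain comes from the same source as in bisection: a subsystem exonerated at a coarse level is never refined. A minimal sketch under our own assumptions (a toy component tree and a probe oracle standing in for the paper's model-based consistency check):

    # Sketch: diagnose at a coarse level first, refining only suspect parts.
    def leaves(c):
        parts = c.get("parts", [])
        return [c["name"]] if not parts else [n for p in parts for n in leaves(p)]

    def diagnose(component, is_abnormal):
        """Return the faulty primitive components under `component`."""
        if not is_abnormal(component):
            return []                        # whole subsystem exonerated at once
        parts = component.get("parts", [])
        if not parts:
            return [component["name"]]       # primitive component: a diagnosis
        return [f for p in parts for f in diagnose(p, is_abnormal)]

    # Toy two-level system with one hidden fault.
    system = {"name": "circuit", "parts": [
        {"name": "stage1", "parts": [{"name": "R1"}, {"name": "Q1"}]},
        {"name": "stage2", "parts": [{"name": "R2"}, {"name": "Q2"}]}]}
    faulty = {"Q2"}
    probe = lambda c: any(n in faulty for n in leaves(c))
    print(diagnose(system, probe))           # -> ['Q2']; stage1 is never opened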
Effects of Icon Design on Human-Computer Interaction BIBA 363-377
  Sven Blankenberger; Klaus Hahn
After subjects practised using a pointing device (two-button mouse) for selecting icons on a computer screen, the effect of "articulatory distance" (i.e. the difference between a picture and its meaning) on performance in menu-selection tasks was analysed. Three icon sets with different articulatory distances and one text set were constructed, validated and tested in a "search and select" experiment with icon positions randomized on the screen. This was contrasted with an experiment in which icons were to be selected from fixed screen positions. Results indicate that articulatory distance indeed had an effect on reaction time in the first design, but not in the latter. A recognition task was finally given to decide whether articulatory distance could influence memory for icons. The fact that subjects were able to recode icon meanings to screen positions after some training supports the everyday experience that icon design has little influence on the performance of advanced users. Icon-oriented interfaces are aimed, however, at the computer novice.
The Influence of Interface Style on Problem Solving BIBA 379-397
  Gunnvald B. Svendsen
According to a recent theory by Hayes and Broadbent (1988), learning of interactive tasks can proceed in one of two different learning modes. One learning mode, called S-mode, has characteristics not unlike what has traditionally been called "insight learning". The other mode, called U-mode, is in some respects like trial-and-error learning. When extended to human-computer interaction, the theory predicts different problem-solving strategies for subjects (Ss) using command and direct manipulation interfaces. Command interfaces should induce S-mode learning, while direct manipulation should not. The theory was supported by two experiments involving the Tower of Hanoi problem. Ss with a command interface made the fewest errors, met criterion in the fewest trials and used the most time per trial. They were also better able to verbalize principles governing the solution of the problem than Ss using a direct manipulation interface. It is argued that the theory may explain the "feeling of directness" that goes with good direct manipulation interfaces. Further, the results indicate that user friendliness, as it is traditionally measured, may in some cases reduce the users' problem-solving ability.
Optimum Display Arrangements for Presenting Status Information BIBA 399-407
  D. Scott; J. M. Findlay
A study utilizing an automated human-computer interface monitoring system is described which investigated the efficacy of two methods of displaying state (insert vs type-over) information within a text-editing system. Information presented only at the cursor position resulted in faster overall performance time than when information was presented only on a status line. The results are discussed in terms of visual processing and cognitive factors.
Hypotheses Testing by Fundamental Knowledge BIBA 409-427
  X. Liu
The use of both experiential and fundamental knowledge to do diagnosis has recently received much attention. Such diagnostic programs have been developed in many application areas. To support this approach, we attempt to develop a computational theory of diagnosis which facilitates the use of these two kinds of knowledge. In this paper, we outline the main issues involved in this development and explore one of them in depth: the use of fundamental knowledge to test fault hypotheses. In particular, we address the problems of which kinds of fundamental knowledge should be represented and how they may be used to test fault hypotheses. These problems are examined in the context of using first-order logic to describe the knowledge and resolution theorem-proving techniques to do the testing.

IJMMS 1991 Volume 35 Issue 4

Coping with the Complexities of Multiple-Solution Problems: A Case Study BIBA 429-453
  Philip J. Smith; Deborah Galdes; Jane Fraser; Thomas Miller; Jack W. Smith, Jr.; John R. Svirbely; Janice Blazina; Melanie Kennedy; Sally Rudmann; Donna L. Thomas
A model is proposed to account for the expertise of a skilled immunohematologist in solving multiple-solution problems. These problems, which he must deal with daily, are concerned with ensuring the safe transfusion of blood into patients. This model suggests that he copes with this difficult class of problems by: (1) using patterns in the data to simplify the problem, hypothesizing the number of primitive solutions necessary to account for the test results and, when possible, decomposing the problem into a set of less complex, single-solution problems, such decompositions then enabling more powerful reasoning processes; (2) making use of a mixture of data-driven and hypothesis-driven processes in order to reduce the chances that heuristic (and therefore error-prone) methods and cognitive biases will lead away from critical data; (3) relying on a mixture of confirmatory and rule-out processes to provide converging evidence, thus reducing the chances of error; (4) uncovering his own errors through the use of "error models" that identify the conditions where one of his processes is likely to make an error (similar to the use of student models by expert tutors to diagnose mistakes made by students).
Designer Integration in CIM: A Motivational Approach BIBA 455-466
  L. J. Guggenheim; T. W. A. Whitfield
An important movement in the area of design and manufacture is the incorporation of CAD and CAM to create a computer integrated manufacturing system. Within this system it is essential that interaction is straightforward for the designer. Research in this area has indicated that no simple way of defining user requirements exists and that probably the most difficult task is not programming the computer, but matching the computer to users' requirements. The present research has focussed upon user requirements at the initial design stage via an investigation of the designer in action. Attention has been given to the design process (what the designer does) and the design products (what the designer produces). The approach has differed from earlier work, first, by its reliance upon the methods of applied psychology and, second, by its adoption of a theoretical framework derived from motivation theory.
A Theory of Consolidation for Reasoning about Devices BIBA 467-489
  Tom Bylander
Given a collection of components connected in a certain way, how can the behavioral descriptions of the components be composed into a behavioral description of the collection as a whole? We propose a theory of consolidation based on a conceptual representation of behavior. The behaviors of components are represented using a small number of primitive types of behaviors. The behavior of a device is inferred using rules of composition that describe how one type of behavior can arise from a structural combination of other types of behaviors.
Design of a Hypermedia Interface Translating between Associative and Formal Representations BIBA 491-515
  Francis Heylighen
It is argued that in order to tackle complex problems efficiently, the user and the support system should interact intimately, complementing each other's weaknesses. The strengths and limitations of human intelligence and of computer intelligence can be derived from the mechanisms of associative and "chunk-based" memory, respectively. A good interactive interface should hence allow one to translate between associative (context-dependent) and chunk-based (formal) representations. Associative knowledge can be expressed more explicitly through hypermedia, consisting of a network of linked chunks. Different tools for supporting the creation of networks are reviewed: check-lists, outlining, graphic representations, search functions, ... Knowledge can be automatically structured by looking for "closed" subnetworks, which define higher-order chunks and link types that can be used to guide inferences. A prototype implementation of an interactive interface, the CONCEPTORGANIZER, is sketched, and some potential applications in the areas of idea processing, knowledge elicitation, decision support, and CSCW are outlined.
The Initial Stage of Program Comprehension BIBA 517-540
  Susan Wiedenbeck
Beacons are stereotypical segments of code which serve as typical indicators of the presence of a particular programming structure or operation. Four experiments were carried out to study the role of beacons in programmers' initial formation of knowledge of program function. It was found that the presence of a beacon made a program easier for experienced programmers to comprehend on initial study. This was found to be true even when the specific program containing the beacon was previously unfamiliar to the programmer. Also, beacons which were inappropriately placed in a program where they did not belong led to "false comprehension" of the program's function. If a strong beacon for some operation was present, programmers tended to use it to form their initial idea of a program's function and to largely ignore other information which contradicted it. As a whole, the results of the experiments suggest that beacons may play an important role in the initial high-level comprehension of programs.
Interactive Videodisc Instruction: The Influence of Personality on Learning BIBA 541-552
  Khalil F. Matta; Gary M. Kern
This paper reports the results of an experiment that addresses the influence of personality characteristics on the effectiveness of an interactive videodisc instructional package. The general hypothesis investigated is that the way individuals judge, perceive and desire to interact with others may influence their learning performance via computer-assisted instruction. It was found that certain personality characteristics, as measured by the Myers-Briggs Type Indicator, are correlated with learning success when using this new technology. These results are of relevance to the designers of computer-assisted instructional packages and educators who may consider the use of such packages.
Attitudes Toward Microcomputers: Development and Construct Validation of a Measure BIBA 553-573
  Magid Igbaria; Saroj Parasuraman
This paper reports the results of two studies involving the development and construct validation of a measure of attitudes toward microcomputers. The instrument was factor analytically derived and tested in two separate field studies of employed adults working in a variety of organizations. The composite 20-item attitude measure encompasses cognitive, affective, and behavioral components, and was found to have acceptable levels of internal consistency reliability in both the development and validation studies. The hypothesized nomological network of relationships of attitudes toward microcomputers with both antecedent variables (age, gender, computer experience, user training, and organizational support) as well as outcome variables such as system usage and user satisfaction was confirmed, providing further evidence of the construct validity of the instrument.
Information Search with Dynamic Text vs Paper Text: An Empirical Comparison BIBA 575-586
  Susan H. Gray; C. Bradford Barber; Dennis Shasha
Dynamic text is an information retrieval system that redisplays text fragments so that they always appear to be in a linear sequence, even though the user is following links. We compared information retrieval tasks using dynamic text and paper text. Dynamic text was better than paper text when answering difficult questions. We also found practice effects in the use of dynamic text. A personality measure, locus of control, was part of the study. Internal locus of control users were faster than external locus of control users.
Fuzzy Set Evaluation of Inspection Performance BIBA 587-596
  Mao-Jiun J. Wang; Joseph Sharit; Colin G. Drury
Large individual differences in inspection performance are among the most consistent findings in inspection studies. This is a major factor that has contributed both to undermining the development of valid inspector selection tests and to complicating the training process for inspectors. This study evaluated cognitive factors that could account for a large part of these differences. A fuzzy set approach, formulated as a multi-criteria decision making problem, was used to determine whether inspectors can correctly judge the importance of these factors, as defined by objective examination of inspection activity. Results indicated a close correspondence between the subjective and objective approaches, suggesting the possibility of integrating the individual's subjective appraisal of the relative importance of cognitive factors into the design, selection and training process for inspection tasks.

IJMMS 1991 Volume 35 Issue 5

A Practical Graphical Tracer for Prolog BIBA 597-631
  Mike Brayshaw; Marc Eisenstadt
We describe a practical and enhanced implementation of a graphical Prolog tracer which not only provides a faithful (slow-motion) representation of the inner workings of the Prolog interpreter, but also allows a high-speed visual overview of execution for rapidly homing in on buggy code. The current work extends our original "Transparent Prolog Machine" in the following ways: (a) complex unification histories for given variables can be displayed; (b) cross-variable dependencies (sharing) across widely-dispersed sections of code can be highlighted; (c) an earlier defect, wherein a given user could write code which defeated the speed/size of the current fastest/largest display capability (i.e. a "horizon effect"), is dealt with; (d) users of textual (Byrd Box) tracers are provided with an upward-compatible migration pathway; (e) code can be traced either "live" or "retrospectively" at different grains of detail. We distinguish among four different ways of manipulating the "navigational space" produced by large Prolog programs: (a) by granularity, i.e. coarse-grained vs fine-grained; (b) by scale, i.e. close-up vs far away; (c) by compression, i.e. the use of a single compact display region or symbol to indicate "additional territory" at the same granularity and scale; (d) by abstraction, i.e. a movement away from the raw Prolog code and towards a representation closer to the programmer's own plans and intentions. The paper includes detailed examples of the tracer in action.
TQ: A Language for Specifying Programming Problems BIBA 633-656
  Fernando Gomez; Viva Wingate
A central problem in designing a specification language for programming problems is making it easy to use for people with very little or no knowledge of programming. This paper describes a specification language which has been designed with that idea in mind. The language has sufficient power to represent a large class of programming problems, is easy to use, and problems represented in it are directly legible. The language combines surface forms with semantic primitives underlying natural language. This results in a representation that, although far from natural language surface forms, is easily legible even by people who have had only minimal exposure to the language.
A Rule-Based Approach to the Ergonomic "Static" Evaluation of Man-Machine Graphic Interface in Industrial Processes BIBA 657-674
  Christophe Kolski; Patrick Millot
This paper describes a rule-based approach to the "static" ergonomic evaluation of man-machine graphic interfaces in industrial processes. The first part presents research works which directly or indirectly contribute to the ergonomic design or evaluation of man-machine interfaces. The second part proposes an ergonomic methodology for designing man-machine interfaces. In this methodology, the presentation of information is evaluated and improved through the use of an expert system, SYNOP, described in the third part. The last part outlines the possible benefits of such a rule-based approach.
A Formal Approach to User Models in Data Retrieval BIBA 675-693
  Andre J. Kok
The interaction between a data base system and its users searching for interesting information can be improved by applying user modelling techniques. Currently existing techniques are, however, hard to compare and evaluate, because the models they generate do not have rigorously defined semantics. This paper contains a formal approach to user models in data retrieval applications, with an emphasis on retrieval systems using a browsing interaction style. A first-order logic is defined formalizing interests of a user in the availability of data meeting certain constraints. Next, the logic is augmented with a modal operator, analogous to the possibility operator of the standard Kripke possible-worlds semantics, to express the predictive qualities of a user model. Applications of the formalism are included to illustrate the viability of the approach, and the extension of the method to deal with non-browsing systems is discussed.
Critics: An Emerging Approach to Knowledge-Based Human-Computer Interaction BIBA 695-721
  Gerhard Fischer; Andreas C. Lemke; Thomas Mastaglio; Anders I. Morch
We describe the critiquing approach to building knowledge-based interactive systems. Critiquing supports computer users in their problem solving and learning activities. The challenges for the next generation of knowledge-based systems provide a context for the development of this paradigm. We discuss critics from the perspective of overcoming the problems of high-functionality computer systems, of providing a new class of systems to support learning, of extending applications-oriented construction kits to design environments, and of providing an alternative to traditional autonomous expert systems. One of the critiquing systems we have built -- JANUS, a critic for architectural design -- is used as an example for presenting the key aspects of the critiquing process. We then survey additional critiquing systems developed in our and other research groups. The paper concludes with a discussion of experiences and extensions to the paradigm.
Writing with a Computer: A Longitudinal Study of Writers of Technical Documents BIBA 723-749
  Kerstin Severinson Eklundh; Carin Sjoholm
A longitudinal study has been conducted of the use of computers for writing in an academic setting. Approximately 70 writers employed at a technical university in Sweden participated in two surveys with a 2-5 year interval; 15 of them were also interviewed personally. The results show that the participants who had previously written without a computer viewed the use of a computer for writing mainly as a positive change in their working situation. The most frequently reported advantages of computer-supported writing were that the text can easily be revised at any point in composition, and that the writer has control of the whole process of document production. Only a minority of the participants maintained that the use of a computer has caused them to save time when writing a document. The use of computers has led to new writing strategies, including increased revision of texts. Many writers report it as an advantage that texts do not have to be planned in detail when using the computer; however, others find risks of decreased quality associated with on-screen composition, partly because of the lack of a global view of the text. In the second survey, a majority of participants used a word processor on a personal computer for writing, whereas earlier, many had used text editors on mainframe computers. A majority of participants still found the restricted view of the text on the screen to be a problem in computer-based writing. In spite of this, a greater proportion of texts were composed directly on the screen, without a manuscript, and the general tendency is that less prepared manuscripts are used. This can be seen as a potential conflict in computer-based writing which poses a challenge for system design as well as for writers in their choice of strategies.

IJMMS 1991 Volume 35 Issue 6

Control Aspects of Active Above-Knee Prosthesis BIBA 751-767
  Dejan Popovic; Rajko Tomovic; Dejan Tepavac; Laszlo Schwirtlich
A methodology for control of an active above-knee prosthesis (AKP) is described. This approach is called Artificial Reflex Control (ARC), and depends on the use of production rules, so that the controller may be thought of as a leg movement expert. This control strategy is applicable to a variety of different gait modes. Automatic adaptation, according to the environment, and to the gait mode required, is based on heuristics related to human motor control.
Machine Recognition of Printed Arabic Text Utilizing Natural Language Morphology BIBA 769-788
  Adnan Amin; Sabah Al-Fedaghi
We describe a computer system for recognizing printed Arabic text in multiple fonts. The system contains four components: acquisition, segmentation, character recognition and word recognition. The word recognition component includes two sub-systems: a morphological spelling checker/corrector, and a morphological word recognizer. With a 95.5% character recognition rate corresponding to an 81.58% word recognition rate, tests have shown that the proposed morphologically based method increases the rate of word recognition by 6.97%.
Ameliorating the Pregnant Man Problem: A Verification Tool for Personal Computer Based Expert Systems BIBA 789-805
  John M. Herod; A. Terry Bahill
With the increase in expert systems development over the last several years has come a need for verification tools that will allow the knowledge engineer to debug and refine the systems. Although some tools are forthcoming, they have traditionally been concerned with completeness and consistency checking of the knowledge base itself. Completeness and consistency checking are important aspects of knowledge base debugging, but there is another aspect that has been largely ignored: determining whether the expert system will ask the user foolish or useless questions. This aspect of knowledge base debugging is important because a system that asks the user foolish or useless questions will quickly lose credibility, and when the credibility of an expert system is in question, it will be abandoned by its users. To address this problem, we developed a procedure to help the knowledge engineer determine whether the system will ask the user foolish or useless questions. This procedure is an iterative process based on identifying conflicting question pairs. It has identified four constructions in rule-based expert systems that lead to inappropriate questioning. The procedure is effective, easy to implement, and has exhibited benefits that are not confined to identifying constructions in the knowledge base that lead to inappropriate questioning. As a result, it should become a useful tool that knowledge engineers can use to build better, more error-free expert systems. We developed and tested the procedure on personal-computer-based expert systems; we are not sure how it will scale to mainframe systems.
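The core of such a check can be made concrete in a few lines. In this hypothetical sketch (the rule format and the exclusion table are our own inventions; the paper's iterative, four-construction procedure is much richer), a rule is screened for premise pairs that can never both hold, so the second question would be foolish once the first is answered:

    # Sketch: flag "pregnant man" question pairs inside a single rule.
    EXCLUSIVE = {frozenset({"sex=male", "pregnancy=yes"})}   # domain constraints

    def foolish_question_pairs(rule):
        """Premise pairs in one rule that domain constraints make impossible."""
        premises = rule["if"]
        return [(a, b) for i, a in enumerate(premises)
                for b in premises[i + 1:]
                if frozenset({a, b}) in EXCLUSIVE]

    rule = {"if": ["sex=male", "pregnancy=yes"], "then": "ectopic_risk=high"}
    print(foolish_question_pairs(rule))      # -> [('sex=male', 'pregnancy=yes')]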
Stars, Polygons and Clusters: An Investigation of Polygon Displays BIBA 807-824
  Robert O. Turek; Joel S. Greenstein
One technique for displaying multi-variate quantitative data is to represent the values graphically in the form of a polygon or star. This allows the observer to view the complex data quickly, as a whole. Such displays have been used in various applications, for data exploration and presentation, and in status displays; they are also suited to categorization and identification tasks. For polygon displays to be reliably used, they should be capable of being interpreted consistently. An experimental investigation was undertaken to ascertain the effect of certain visual features of the display on the consistency with which untrained participants categorized data presented as polygons. The independent variables included background information, shading of figure and form. Two sets of data were used; the participants performed a categorization task on both sets of data. The results of the categorization task were analysed for consistency with standard clustering algorithms and for consistency across individuals. The results of the analysis that have implications for display design include the level of clustering consistency achieved by the participants, the interaction effects of the visual variables on consistency, and the effect of distinctive visual patterns on human judgments of similarity.
Actions Representation in a 4-D Space BIBA 825-841
  Giovanni Adorni; Agostino Poggi
In this paper a frame-based model is presented for the representation of actions in four dimensions (space + time). This model, called Action Net, allows the composition of different action descriptions and their representation at different levels of detail: an action is represented by a tree of frames which degenerates into a single frame at the lowest level, and a new higher-level description can be built by expanding some leaf frames. The model is based on a formal representation of time and concurrency taken from Petri Net theory.
Process Tracing of Decision Making: An Approach for Analysis of Human-Machine Interactions in Dynamic Environments BIBA 843-858
  Gunilla A. Sundstrom
The focus of the present paper is to develop principles for support of joint human-machine reasoning processes in dynamic environments. The central notion is to separate the representations of the technical system from the representations of the human operator during design of knowledge-based support systems. An approach is described which focuses on the analysis of information and knowledge acquisition of human operators in supervision and control systems. In the approach, information search of human operators is analysed using the notions of information processing goals, means and requirements. Examples illustrate how the present approach complements canonical goal-means representations of technical systems. Finally, the design implications of the proposed framework are discussed.
Dynamic Identity Verification via Keystroke Characteristics BIBA 859-870
  John Leggett; Glen Williams; Mark Usnick; Mike Longnecker
The implementation of safeguards for computer security is based on the ability to verify the identity of authorized computer systems users accurately. The most common form of identity verification in use today is the password, but passwords have many poor traits as an access control mechanism. To overcome the many disadvantages of simple password protection, we propose the use of the physiological characteristics of keyboard input as a method for verifying user identity. After an overview of the problem and a summary of previous efforts, a research study is described which was conducted to determine the possibility of using keystroke characteristics as a means of dynamic identity verification. Unlike the static identity verification systems in use today, a verifier based on dynamic keystroke characteristics allows continuous identity verification in real-time throughout the work session. Study results indicate significant promise for the temporal personnel identification problem.
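One minimal way to realize continuous verification of this kind (illustrative only; the thresholds, sample data and statistical test below are our own assumptions, simpler than the study's) is to compare a session's inter-keystroke latencies against a stored reference profile:

    # Sketch: verify identity from inter-keystroke latencies (milliseconds).
    from statistics import mean, stdev

    def build_profile(training_latencies):
        """Reference signature: mean and spread of a user's latencies."""
        return mean(training_latencies), stdev(training_latencies)

    def verify(profile, session_latencies, z_limit=2.0, max_outlier_ratio=0.3):
        """Accept the session unless too many latencies fall outside the
        user's typical range; can be re-run continuously during a session."""
        mu, sigma = profile
        outliers = [t for t in session_latencies
                    if abs(t - mu) > z_limit * sigma]
        return len(outliers) / len(session_latencies) <= max_outlier_ratio

    profile = build_profile([112, 130, 125, 118, 140, 122, 135, 128])
    print(verify(profile, [120, 133, 117, 126, 138]))   # -> True  (same typist)
    print(verify(profile, [260, 300, 90, 280, 310]))    # -> False (impostor)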
Optimizing Speed and Accuracy of Menu Selection: A Comparison of Walking and Pull-Down Menus BIBA 871-890
  Neff Walker; John B. Smelcer; Erik Nilsen
This paper reports three experiments that investigated factors which affect movement time and accuracy of menu selection with a mouse. The experiments primarily focused on the movements required to select from walking menus. The results suggest that the width of the path that the cursor must travel can be an important variable in explaining the speed and accuracy of motor movement in a walking menu. The studies also investigated the effects of impermeable borders and the size of menu items on movement time. The results show that borders and changes to the size of menu items can improve the speed and accuracy of selection. A final study found that when borders are used on a pull-down bar menu, the time required to access a second-level menu is less than that required by a walking menu, even though the walking menu pops up at the pointer location and the bar menu is located 15 cm away from the initial pointer position.
A Cost-Effective Evaluation Method for Use by Designers BIBA 891-912
  Peter C. Wright; Andrew F. Monk
A strong case has been made for iterative design, that is, progressing through several versions of a user interface design using feedback from users to improve each prototype. One obstacle to wider adoption of this approach is the perceived difficulty of obtaining useful data from users. This paper argues that quantitative experimental methods may not be practical at early stages of design, but a behavioural record used in conjunction with think-aloud protocols can provide a designer with the information needed to evaluate an early prototype in a cost-effective manner. Further, it is proposed that a method for obtaining these data can be specified which is straightforward enough to be used by people with little or no training in human factors.
   Two studies are reported in which trainee designers evaluated a user interface by observing a user working through some set tasks. These users were instructed to think aloud as they worked, in a procedure described as "cooperative evaluation". The instruction received by the designers took the form of a brief how-to-do-it manual. Study 1 examined the effectiveness of the trainee designers as evaluators of an existing bibliographic database. The problems detected by each team were compared with the complete set of problems detected by all the teams and with the problems detected by the authors in a previous and more extensive evaluation. Study 2 examined whether being the designer of a system makes one better or worse at evaluating it, and whether designers can predict the problems users will experience in advance of user testing.
Readers' Models of Text Structures: The Case of Academic Articles BIBA 913-925
  Andrew Dillon
Hypertext is often described as a liberating technology, freeing readers and authors from the constraints of "linear" paper document formats. However, there is little evidence to support such a claim, and theoretical work in the text analysis domain suggests that readers form a mental representation of a paper document's structure that facilitates non-serial reading. The present paper examines this concept empirically for academic articles, with a view to making recommendations for the design of a hypertext database. The results show that experienced journal readers do indeed possess such a generic representation and can use it to organize isolated pieces of text into a more meaningful whole. This representation holds for text presented on screens. Implications for hypertext document design are discussed.

European Association for Cognitive Ergonomics: News, Reviews and Reports

Editorial BIB 927-933
  T. R. G. Green

European Association for Cognitive Ergonomics: Book Reviews

"Studying the Novice Programmer," edited by E. Soloway and J. C. Spohrer BIB 927-933
  D. J. Gilmore
"Handbook of Human-Computer Interaction," edited by M. Helander BIB 927-933
  Jean-Michel Hoc
"Cognitive Science and its Applications for Human-Computer Interaction," edited by R. Guindon BIB 927-933
  Yvonne Wærn