
International Journal of Man-Machine Studies 30

Editors: B. R. Gaines; D. R. Hill
Dates: 1989
Volume: 30
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 34
Links: Table of Contents
  1. IJMMS 1989 Volume 30 Issue 1
  2. IJMMS 1989 Volume 30 Issue 2
  3. IJMMS 1989 Volume 30 Issue 3
  4. IJMMS 1989 Volume 30 Issue 4
  5. IJMMS 1989 Volume 30 Issue 5
  6. IJMMS 1989 Volume 30 Issue 6

IJMMS 1989 Volume 30 Issue 1

Learning Iteration and Recursion from Examples BIBA 1-22
  Susan Wiedenbeck
Recursion is basic to computer science, whether it is conceived of abstractly as a mathematical concept or concretely as a programming technique. Three experiments were carried out on learning iteration and recursion. The first involved learning to compute mathematical functions, such as the factorial, from worked-out examples. The results suggest that subjects are quite able to induce a computational procedure for both iterative and recursive functions. Furthermore, prior work with iterative examples does not seem to facilitate subsequent learning of recursive procedures, nor does prior work with recursive examples facilitate subsequent learning of iterative procedures. The second experiment studied the extent to which people trained only with recursive examples are able to transfer their knowledge to compute other, similar recursive mathematical functions stated in an abstracted form. It turned out that subjects who transferred to abstractly stated problems performed somewhat worse than they had previously when given examples. However, they did far better than a control group trained only with an abstract description of recursion. The third experiment involved comprehension of iterative and recursive Pascal programs. Comprehension of the iterative program was not affected by prior experience with the recursive version of the same program. Comprehension of the recursive version was only weakly affected by prior experience with the iterative version.
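As a concrete illustration of the kind of procedures subjects had to induce, the factorial can be written both ways; a minimal sketch in Python (the paper's own materials were worked numerical examples and Pascal programs):

    def factorial_iterative(n: int) -> int:
        """Iterative procedure: accumulate the product in a loop."""
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    def factorial_recursive(n: int) -> int:
        """Recursive procedure: define n! in terms of (n - 1)!."""
        if n <= 1:  # base case
            return 1
        return n * factorial_recursive(n - 1)  # recursive case

    assert factorial_iterative(5) == factorial_recursive(5) == 120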
Toward a Theory of Computer Program Bugs: An Empirical Test BIBA 23-46
  Iris Vessey
To develop a theory of computer program bugs and of debugging, we need to classify the nature of bugs on an abstract basis and to relate the nature of a bug to the difficulty of debugging it. Atwood and Ramsey (1978) report the only attempt of this nature, in a study based on the theory of propositional hierarchies (see Kintsch, 1974) from the text comprehension literature. Propositional hierarchies are a conceptualization of the way in which sentences are stored in memory for the purpose of recall. Atwood and Ramsey's studies did not distinguish between debugging difficulty as a function of the bug's location in the propositional hierarchy and as a function of its location in the program structure. The objective of the series of three studies reported here is to differentiate between bug difficulty based on location in the propositional hierarchy of the sentence structure of the programming language and bug difficulty based on location in the serial structure of the program. Little support was found for an effect of the location of the bug in the program structure on debugging difficulty. The effect of the location of the bug in the propositional hierarchy warrants further investigation. The results are interpreted in light of the need to formulate a mental model of correct program functioning and to determine the location of the program bug in terms of the functioning of that model.
Strategies in Controlling a Continuous Process with Long Response Latencies: Needs for Computer Support to Diagnosis BIBA 47-67
  Jean-Michel Hoc
An empirical analysis of blast furnace conductors' strategies in a simulation of the process is presented, and its implications for the design of intelligent computer support are discussed. The rationale for the choice of this situation is the need for cognitive analysis of process control situations that are far from the discrete state transformation situations for which information processing psychological models or artificial systems have been designed. The simulation method is justified by the results of the previous steps of the study (behavioural observations in the control room and interviews on tool use and knowledge representation). The strategies are described in terms of the representations used and the processing performed, their efficiency is evaluated, and correlations between strategic features and efficiency are examined. A number of hypotheses are put forward on the types of computer support best suited to satisfying the conditions of implementation of the most efficient strategic features. The computer is seen as an instrument, operating as a colleague, rather than as a prosthesis capable of replacing the human.
Combining Stochastic Uncertainty and Linguistic Inexactness: Theory and Experimental Evaluation of Four Fuzzy Probability Models BIBA 69-111
  Rami Zwick; Thomas S. Wallsten
Two major sources of imprecision in human knowledge, linguistic inexactness and stochastic uncertainty, are identified in this study. It is argued that since in most realistic situations these two types exist simultaneously, it is necessary to combine them in a formal framework to yield realistic solutions. This study presents such a framework by combining concepts from probability and fuzzy set theories. Within this framework, four models (Kwakernaak, 1978; Yager, 1979; 1984b; Zadeh, 1968; 1975) that attempt to account for the numeric or linguistic responses in various probability elicitation tasks were tested. The linguistic models were relatively effective in predicting subjects' responses compared to a random choice model. The numeric model (Zadeh, 1968) proved to be insufficient. These results and others suggest that subjects are unable to represent the full complexity of a problem. Instead they adopt a simplified view of the problem by representing vague linguistic concepts by multiple crisp representations (the α-level sets). All of the mental computation is done at these surrogate levels.
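The α-level sets referred to here are the standard α-cuts of fuzzy set theory; as a hedged illustration (the membership values below are invented, not taken from the study), a vague probability word can be replaced by a family of crisp sets, one per α level:

    # Hypothetical discretized membership function for a vague
    # probability phrase; the values are illustrative only.
    membership = {0.0: 0.0, 0.1: 0.6, 0.2: 1.0, 0.3: 0.7, 0.4: 0.2, 0.5: 0.0}

    def alpha_cut(mu, alpha):
        """Crisp set of points whose membership is at least alpha."""
        return {x for x, m in mu.items() if m >= alpha}

    # Multiple crisp surrogates of the one fuzzy concept.
    for alpha in (0.2, 0.6, 1.0):
        print(alpha, sorted(alpha_cut(membership, alpha)))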
Patterns of Inductive Reasoning in a Parallel Expert System BIBA 113-120
  William Siler; Douglas Tucker
The general characteristics of an expert system shell which fires rules in parallel rather than sequentially are briefly reviewed. Particular reference is made to the management of uncertainty, ambiguities, contradictions and truth maintenance through fuzzy systems theory. Within this context, the overall organization of parallel expert system programs is discussed. The basic element of such programs is the rule block, a collection of concurrently fireable rules which are fired effectively in parallel. The rules in such a block may be fired once, or fired repetitively until no more rule instances are fireable. Each rule block firing then constitutes a completely non-procedural step. However, the firing order of rule blocks tends to be procedurally controlled. In the simplest case, the rule blocks are fired sequentially in order; in more complex cases, a flow chart may be used to describe the flow of control among rule blocks, with conditional firing of certain rule blocks. In a blackboard system, programs written in procedural languages may be called within the non-procedural rule blocks to execute tasks for which an expert system is unsuitable, such as number crunching or searching large external files.
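A minimal sketch of the rule-block scheme described above, with names and data structures assumed rather than taken from the shell itself: the set of fireable rules is determined before any action is applied, approximating parallel firing, and a repetitive block loops until quiescence:

    def fire_block(rules, state, repeat=False):
        """Fire a block of rules effectively in parallel.

        Each rule is a (condition, action) pair of functions over state.
        The fireable set is selected before any action runs, so rule
        selection within one firing cannot depend on that firing's effects.
        """
        while True:
            pending = [act for cond, act in rules if cond(state)]
            if not pending:
                return state
            for act in pending:
                act(state)
            if not repeat:  # fire-once block: a single non-procedural step
                return state
            # repetitive block: loop until no rule instance is fireable;
            # rules are expected to eventually disable themselves.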

IJMMS 1989 Volume 30 Issue 2

Interactive Communication of Sentential Structure and Content: An Alternative Approach to Man-Machine Communication BIBA 121-148
  R. Chandrasekar; S. Ramani
Natural language communication interfaces have usually employed linear strings of words for man-machine communication. A great deal of 'intelligence' -- in the form of semantic, syntactic and other information -- is used to analyse these strings and to puzzle out their structures. However, the use of linear strings of words, while appropriate for communication between humans, seems inappropriate for communication with a machine using video displays, keyboards and a mouse. One need not demand too much of machines in this area of analysis of natural language input; one could bypass these problems by using alternative approaches to man-machine communication. One such approach is described in this paper, for the communication of the content and structure of natural language sentences. The basic idea is that the human user of the interface should use the two-dimensional screen, mouse and keyboard to create structures for input, guided by appropriate software. Another key idea is the use of a high degree of interaction to avoid some problems usually encountered in natural language understanding. Based on this approach, a system called ScreenTalk has been implemented in Common LISP on a VAX workstation. The man-machine interface is used to interactively input both the content and the structure of sentences. Users may then ask questions, which are answered using stored information. ScreenTalk currently operates on a database of brief news items. It has been designed to be fairly domain independent, and is expected to be used soon in other applications. The conceptual framework for such an approach, the design of the experimental interface used to test this framework and the authors' experience with this interface are presented.
DYNABOARD: User Animated Display of Deductive Proofs in Mathematics BIBA 149-170
  Marc Kaltenbach; Claude Frasson
The computer medium provides new ways of clarifying the presentation to students of complex mathematical proofs. We provide pedagogical motivations for using partially animated network representations to facilitate student understanding of richly structured, spatially organizable, deductive proofs. We stress the importance of giving students an active role in making the representations of proof steps evolve in accordance with students' understanding. Then we describe some functional features of Dynaboard, a prototype system we have built to study this form of information access. Finally we outline possible future developments of Dynaboard to give it greater autonomy of decisions and achieve a better symbiosis with human thought.
Cognitive Issues in the Process of Software Development: Review and Reappraisal BIBA 171-191
  Richard J. Koubek; Gavriel Salvendy; Hubert E. Dunsmore; William K. LeBold
The current information age has brought about radical changes in workforce requirements, just as the industrial revolutions of the 1800s did. With the presence of new technology, jobs are requiring less manual effort and becoming more cognitively oriented. With this shift, new techniques in job design and task analysis are required. One area which will greatly benefit from effective task analysis procedures is software development. This paper attempts to lay the groundwork for developing such procedures by discussing important methodological issues and examining current theories and research findings for their potential to identify the cognitive tasks of computer programming. Based on the review, this paper suggests guidelines for the development of a methodology suitable for knowledge elicitation of the programming process.
Study of Combination of Belief Intervals in Lattice-Structured Networks BIBA 193-211
  L. W. Chang; R. L. Kashyap
Depending on the topological connection of links in a lattice, four basic types of operation are involved, namely the parallel, serial, conjunctive and disjunctive types. We develop a formalism to implement desired combining rules for each of the four types of operation which possess properties such as commutativity, associativity, identity and annihilator. The decision strategy for the problem of belief propagation in a lattice involves the combination of these four different types of rules, and different combinations lead to different strategies. We numerically illustrate the use of the different types of rules and their combination for solving the problem of damage assessment of a civil engineering structure described in a lattice.
The Structure of Command Languages: An Experiment on Task-Action Grammar BIBA 213-234
  Stephen J. Payne; T. R. G. Green
Task-Action grammar (TAG), a formal model of the mental representation of task languages, makes predictions about the relative learnability of different command language structures. In particular TAG predicts that consistent structuring of task-action mappings across semantic domains of the task world will facilitate learning, but that consistent structuring within domains that are orthogonal to the semantic organization of the task world cannot be accommodated within users' mental representations, and so will not help learners. Other models of human-computer interaction either fail to address this distinction, or make quite different predictions. The prediction is tested by a syntax induction experiment in which subjects learn to operate a toy "lost property" computer system by a lexical command language. The results of the experiment are consistent with the predictions of task-action grammar.

IJMMS 1989 Volume 30 Issue 3

Underlying Dimensions of Human Problem Solving and Learning: Implications for Personnel Selection, Training, Task Design and Expert System BIBA 235-254
  Takao Enkawa; Gavriel Salvendy
This study presents experimental evidence on the dimensions underlying the interaction of the learning process with human cognitive representation of problem solving. An experiment was conducted in three sessions, each consisting of performing 12 problem solving tasks and obtaining their similarity ratings. The observed four-way data, made up of 12 tasks x 12 tasks x 6 subjects x 3 sessions, were analyzed using a multidimensional scaling model. The analysis resulted in three major dimensions across the learning process. Two of the dimensions are described as distinctions of bottom-up vs. top-down and conscious vs. subconscious reasoning. The third is identified as a dimension inherent to the task characteristics. It is shown that the former two dimensions are affected significantly by learning and hence become more dominant in the process of understanding. The implications of the findings for personnel selection, job rotation, task design, and expert system design are discussed.
Issues in the Verification of Knowledge in Rule-Based Systems BIBA 255-271
  Derek L. Nazareth
As expert system technology spreads, the need for verification of system knowledge assumes greater importance. This paper addresses the issues involved in demonstrating a rule-based system to be free from error. A holistic perspective is adopted, wherein sources, manifestations, and effects of errors are identified. A general taxonomy is created, and the implications for system performance and development outlined. Existing strategies for knowledge verification are surveyed, their applicability assessed, and some directions for systematic verification suggested.
Syntactic Decision Procedures in Information Systems BIBA 273-285
  Anita Wasilewska
There are a number of algebraic models of information systems, proposed by Codd (1972), Salton (1968), Scott (1970) and others. We deal here with a model which is the basis of rough set investigations (Orlowska, 1984; Pawlak, 1982; Pawlak, 1984). This model was proved by Marek (1985) to be equivalent to Codd's model of a relational database with one schema. We focus here on purely syntactical problems within this model; in particular, we point out problems which can be solved using automatic syntactic methods. We do this by first constructing, for a given system S, its description language L_S. Then we define a set of Gentzen-like (Gentzen, 1934) transformation rules for its terms and describe an easily programmable procedure which generates the answers to queries submitted to the system. We show how to extend this procedure to one for generating the equivalent normal form of a given term. This leads to a method of constructing not only definable sets within a given system, but also all its elementary components.
Heuristic Graph Displayer for G-BASE BIBA 287-302
  Hiroyuki Watanabe
A prototype graph display system for a database management system is described. The system uses a heuristic algorithm for drawing a graph. It was developed to draw schema graphs for G-BASE, a database management system based on a graph data model. For quick display, the algorithm uses little or no backtracking and attempts to minimize unnecessary arc intersections. The graph is embedded in an integer grid plane and displayed in a window. The graph drawer has several useful features for producing visually pleasant drawings, such as automatic font selection and automatic abbreviation of long node labels. It can draw both directed and non-directed graphs and can display arc labels. The system is implemented using the X window system running on Sun workstations and Sony NEWS workstations. It is written in C and is portable to any workstation which supports the X window system. Complex graphs, such as a graph with 149 nodes and 153 arcs, have been displayed in a visually pleasing manner.
An Interface Architecture to Provide Adaptive Task-Specific Context for the User BIBA 303-327
  Sherman W. Tyler; Siegfried Treu
An abstract architecture for the design of user-computer interfaces is described. It is intended to serve the user-oriented principles of learnability and usability. The primary interface features selected as responsive to these principles are task-specific context presented to the user, reinforced by system adaptability to users' needs. Both of these features are undergirded by interface system modularity. Context is defined to include not only high-level direction and step-by-step guidance toward task completion but also intelligent advice on the different user actions and commands. A prototype interface system was implemented, using a Xerox 1108 LISP workstation with a VAX 11/780 UNIX system as the target computer. It was organized around the definition of a multi-phase interaction event flowchart and depends heavily on object-oriented and rule-based paradigms. Limited test results indicate a favorable performance pattern.
Concept Learning from Examples and Counter Examples BIBA 329-354
  A. L. Ralescu; J. F. Baldwin
This paper describes a method of concept learning from examples and counter-examples. The technique makes use of conceptual graph theory and support logic programming, and is most appropriate for those concepts which cannot be defined in terms of necessary and sufficient conditions. Conceptual graph theory provides a powerful means of representing the knowledge described by each example and a mechanism for comparing these examples. Support logic programming provides a means for evidential reasoning under uncertainty and also gives an alternative for the knowledge representation. It is suggested that the learning technique developed might be adequate to model human learning, and it is compared with other existing models of learning.

IJMMS 1989 Volume 30 Issue 4

The Utility of Speech Input in User-Computer Interfaces BIBA 355-375
  Gale L. Martin
This paper focuses on two commonly-made claims about the utility of speech input: (1) It is faster than typed input; and (2) it also increases user productivity by providing an additional response channel. These claims are investigated, both through a review of research, and through an empirical evaluation of speech input. The research review supports both claims. Further, it suggests that speech input will be more beneficial when users are engaged in multiple tasks mapped onto multiple user-response modalities, and when speech is used in tasks characterized by short transactions of a highly interactive nature. The empirical study evaluated the utility of speech input in the context of a VLSI chip design package, and compared speech to typed, full-word input, single keypresses, and mouse clicks. Results supported the benefits of speech input over typed, full-word commands, and to a lesser extent, single keypresses. For the restricted set of commands that could be accomplished with mouse clicks, speech input and mouse clicks were equally efficient. These results are interpreted in terms of a general "ease vs expressiveness" guideline for assigning modalities to tasks in a user interface.
Building Routine Planning Systems and Explaining Their Behaviour BIBA 377-398
  B. Chandrasekaran; John Josephson; Anne Keuneke; David Herman
It has become increasingly clear to builders of knowledge-based systems that no single representational formalism or control construct is optimal for encoding the wide variety of types of problem solving that commonly arise and are of practical significance. In this paper we identify a class of problem solving activities which we have labeled routine planning. We consider the constructs necessary to represent the problem solving which appropriately characterizes this class, and describe DSPL, a high-level language designed specifically to encompass the required knowledge structures and control methodology for routine planning. Finally, we consider what type of structure is appropriate to represent an agent's understanding of how the plan itself works.
Automatically Generating Natural Language Reports BIBA 399-423
  Jugal Kalita
In this paper, we describe a system which generates natural language status reports for a set of inter-related processes at various stages of progress. The system has three modules -- a rule-based domain knowledge representation module, an elaborate text planning module, and a surface generation module. The knowledge representation module models a set of processes that are encountered in a typical office environment, using a body of production rules which are explicitly sequenced using an augmented Petri net mechanism. The system employs an interval-based temporal network for storing historical information. The text planning module traverses this network to search for events which need to be mentioned in a coherent report describing the current status of the system. The planner combines similar information for succinct presentation whenever applicable. It also employs discourse focus techniques and a simple notion of view transforms for the generation of good quality text. Finally, an available surface generation module, suitably augmented, is used to produce well-structured textual reports for our chosen domain.
Towards Software Metrics for Visual Programming BIBA 425-445
  Ephraim P. Glinert
A framework for formulating metrics for visual computing environments is established, based on the concept that, for any community of users, such environments must be viewed in terms of a multi-faceted collection of relevant attributes. These attributes ultimately allow us to define, for any environment, a pair of measures termed coefficients of attraction and repulsion, which together enable us to select from among several candidate environments the one best suited, in a certain sense, to the users in question. The exposition and theoretical development are followed by an example of how our tool might be applied to several of the better-known visual environments that had been implemented up to the early 1980s.
Usability of SQL and Menus for Database Query BIBA 447-455
  J. Steve Davis
Experiments which compared the usability of menus to that of SQL for database query showed that first-time users generally performed better with SQL and considered it easier to use than menus. Some specific problems with both SQL and menu systems were revealed. New users of SQL experienced difficulty in using proper syntax and in choosing the appropriate table. Users of the menu system, IBM's MAPICS, complained about crowded screens, and found it hard to interpret some of the menu choices. Operators of menu systems performed better when provided with a system directory.
Rough Sets and Dependency Analysis among Attributes in Computer Implementations of Expert's Inference Models BIBA 457-473
  A. Mrozek
It is proposed that, in cases where no proper mathematical model is obtainable, human experts' inference models be used in computer control algorithms. The notion of an inference model is introduced and it is demonstrated that the formal apparatus of rough set theory can be used to identify, analyse and evaluate this model. A method of computer implementation of such inference models is presented, based on the analysis of dependencies among decision, measurable and observable attributes. PROLOG is proposed as the language of the model implementation. The formal considerations, the proposed approach and the notions introduced are illustrated with a real-life example: a computer implementation of the inference model of a rotary clinker kiln stoker. The model was used to control the process, and an analysis of the control results is presented.
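The dependency analysis mentioned above follows Pawlak's standard rough-set definitions; a minimal sketch in Python (illustrative only -- the paper's implementation is in PROLOG, and the table below is invented, not the kiln data): attributes partition the objects into indiscernibility classes, and the degree of dependency is the fraction of objects whose decision is determined by the condition attributes.

    from collections import defaultdict

    def partition(objects, attrs):
        """Indiscernibility classes: group objects agreeing on attrs."""
        classes = defaultdict(set)
        for name, row in objects.items():
            classes[tuple(row[a] for a in attrs)].add(name)
        return list(classes.values())

    def dependency(objects, condition_attrs, decision_attr):
        """Pawlak's degree of dependency: |POS| / |U|."""
        positive = set()
        for cls in partition(objects, condition_attrs):
            decisions = {objects[o][decision_attr] for o in cls}
            if len(decisions) == 1:  # class determines the decision
                positive |= cls
        return len(positive) / len(objects)

    # Illustrative measurement table (invented values).
    table = {
        "s1": {"temp": "high", "draft": "low",  "action": "reduce"},
        "s2": {"temp": "high", "draft": "low",  "action": "reduce"},
        "s3": {"temp": "low",  "draft": "low",  "action": "hold"},
        "s4": {"temp": "low",  "draft": "high", "action": "hold"},
    }
    print(dependency(table, ["temp"], "action"))  # 1.0: full dependency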

IJMMS 1989 Volume 30 Issue 5

Visual Information Chunking in Spreadsheet Calculation BIBA 475-488
  Pertti Saariluoma; Jorma Sajaniemi
Spreadsheet calculation causes a heavy memory load, since it is necessary to remember complex cell and calculation systems. A series of experiments was carried out to study the role of visual information chunking in spreadsheet calculation. The experiments showed that the possibility of visual information chunking substantially decreases the memory load caused by spreadsheet calculation. If subjects are able to induce the structure of a formula or a network of connected formulas, they usually learn it fast. The surface structure of a formula may cause subjects considerable difficulty in chunking. Badly ordered formula networks, in which cell layers are embedded within each other and references cross each other, are difficult to learn and remember: subjects are not able to abstract the deep structure and encode such formula networks.
Are There Individual Concepts? Proper Names and Individual Concepts in SI-Nets BIBA 489-503
  M. Frixione; S. Gaglio; G. Spinelli
We discuss some aspects of the role played by descriptional knowledge in expressions denoting individual objects. In knowledge representation systems such as KL-ONE and KRYPTON, which use semantic networks to express descriptional information, the problem is to establish whether the use of individual concepts in a network is justified. In the light of theoretical and application considerations, in the proposed solution only definite descriptions are considered as being characterizable by means of definitions. Conversely, proper names in the strict sense are treated as having no definitional dimension, and as such appear only in the assertional knowledge base. A description is given of how this solution was realized in PROCNE, a knowledge representation tool in which logic representation and structured inheritance semantic nets (SI-Nets) are combined.
The Electronic Book Ebook3 BIBA 505-523
  Jacques Savoy
The electronic book is a project which examines the possibilities offered by computers for animating scientific texts. With the advent of the micro-computer, access to the text (the textual base) need no longer be sequential. The user of the electronic book can now 'jump' to the table of contents, the index, the bibliography or the links placed in the text, such as 'see Chapter 2'. In addition to these links, the user can ask for the execution of methods (computer programs) attached to the subject of the book. The system connects itself to a series of modules written by the author (the concepts of 'methods base' and 'model base'), or to methods available on the machine (MultiPlan, DBase, Lotus, Word, Lisp, a C compiler, etc.). The passage from the text to the methods takes place with or without recovery of the models. In this way, exercises, guided examples and simulations are accessible for the enrichment of the text. Ebook3 has three principal characteristics (the number incorporated in the name of the system refers to this point). First, movement through the text takes place either sequentially or by means of references, tables or indices. Secondly, static images are inserted in the text and dynamic graphics allow the visualization of animated sequences. Thirdly, the attachment of methods makes it possible to process data in order to compute the models provided by the author, to perform sensitivity analyses, or to create problems reflecting the reader's individual requirements.
A Computationally Efficient Approximation of Dempster-Shafer Theory BIBA 525-536
  Frans Voorbraak
An often mentioned obstacle to the use of Dempster-Shafer theory for the handling of uncertainty in expert systems is the computational complexity of the theory. One cause of this complexity is the fact that in Dempster-Shafer theory the evidence is represented by a belief function which is induced by a basic probability assignment, i.e. a probability measure on the powerset of possible answers to a question, rather than by a probability measure on the set of possible answers itself, as in a Bayesian approach. In this paper, we define a Bayesian approximation of a belief function and show that combining the Bayesian approximations of belief functions is computationally less demanding than combining the belief functions themselves, while in many practical applications replacing the belief functions by their Bayesian approximations will not essentially affect the result.
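A sketch of the construction as it is usually stated (the code and the example masses are our illustration, not the paper's): each singleton answer receives the total mass of the focal elements containing it, normalized so that the result is a probability measure on the answers themselves:

    def bayesian_approximation(m):
        """Bayesian approximation of a basic probability assignment m,
        given as {frozenset_of_answers: mass}.

        Each singleton x gets the total mass of the focal elements
        containing x, normalized by the sum over A of m(A) * |A|.
        """
        norm = sum(mass * len(A) for A, mass in m.items())
        support = {}
        for A, mass in m.items():
            for x in A:
                support[x] = support.get(x, 0.0) + mass
        return {x: s / norm for x, s in support.items()}

    # Illustrative bpa over three answers (masses invented).
    m = {frozenset({"a"}): 0.5,
         frozenset({"a", "b"}): 0.3,
         frozenset({"a", "b", "c"}): 0.2}
    print(bayesian_approximation(m))  # {'a': ~0.588, 'b': ~0.294, 'c': ~0.118}

Combining two such approximations touches only the individual answers, rather than the exponentially many subsets a general belief function may assign mass to.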
Absolute Dates and Relative Dates in an Inferential System on Temporal Dependencies between Events BIBA 537-549
  Elzbieta Hajnicz
In this paper a calculus of relations between intervals and points is recalled. The notions of absolute and relative dates are introduced and their application in an inferential system is described. The notions of the current moment (now) and event duration are discussed by means of the concept of relative dates.
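The interval-point calculus recalled here descends from Allen-style interval relations; as a sketch under the assumption of absolute numeric dates (relative dates, the paper's focus, are not modelled), the basic relation between two intervals can be read off by comparing endpoints:

    def interval_relation(a, b):
        """Allen-style relation between intervals a=(a1,a2), b=(b1,b2)
        with absolute numeric dates; a1 < a2 and b1 < b2 assumed.
        Returns one of the 13 basic relations."""
        a1, a2 = a
        b1, b2 = b
        if a2 < b1:
            return "before"
        if a2 == b1:
            return "meets"
        if a1 < b1 and b1 < a2 < b2:
            return "overlaps"
        if a1 == b1 and a2 < b2:
            return "starts"
        if b1 < a1 and a2 < b2:
            return "during"
        if a1 > b1 and a2 == b2:
            return "finishes"
        if a == b:
            return "equal"
        # otherwise b stands in one of the above relations to a
        return interval_relation(b, a) + "-inverse"

    print(interval_relation((1, 3), (2, 5)))  # overlaps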
Development and Validation of a Reader-Based Documentation Measure BIBA 551-574
  Ronald A. Guillemette
Despite the widely acknowledged importance of reader feedback in the assessment of application software documentation, few studies have sought to establish what subjective factors readers use when evaluating documentation and how to reliably measure those factors. This paper reports the development and validation of an instrument which measures seven important reader-oriented factors: credibility, demonstrative, fitness, personal affect, systematic arrangement, task relevance, and understandability.
Modelling Blind Users' Interactions with an Auditory Computer Interface BIBA 575-589
  Alistair D. N. Edwards
Modern window, icon, menu and pointer (WIMP) systems represent a significant new obstacle to access to computers for people with visual disabilities. A project was carried out which demonstrated the possibility of adapting such highly visual interfaces into an auditory form so that even totally blind people could use them. This paper describes the development of a model of users' interaction with such an auditory interface. It is based on the approach applied by Card, Moran & Newell (1980; 1983) to modelling visual interfaces. The model concerns the time taken to locate an object within a screen which is defined by sounds. It states that T_position = T_think + d * T_move, where T_think is a constant representing the time component during which the mouse is not moved, d is the distance to the target and T_move is the time to cross one object. Measurements taken yielded values of T_think = 3.99 s and T_move = 0.80 s. The model provides a good description of the behaviour of most of the test subjects. This work represents a first step towards expanding models of human-computer interaction to include auditory interactions. This should be of benefit not only to the development of interfaces for blind users, but also in the enhancement of interfaces for sighted users by the addition of an auditory component.
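The quoted constants make predicted search times a one-line computation; a minimal sketch using the values reported above:

    T_THINK = 3.99  # s, non-moving component (reported value)
    T_MOVE = 0.80   # s, time to cross one object (reported value)

    def t_position(d):
        """Predicted time to locate a target d objects away:
        T_position = T_think + d * T_move."""
        return T_THINK + d * T_MOVE

    print(t_position(5))  # 3.99 + 5 * 0.80 = 7.99 s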

IJMMS 1989 Volume 30 Issue 6

Making the Transition from Print to Electronic Encyclopaedias: Adaptation of Mental Models BIBA 591-618
  Gary Marchionini
A study of how high school students use an encyclopaedia in both print and electronic form was conducted from a mental models perspective. Over three sessions, prompted by a set of protocols administered by a participant observer, 16 subjects each conducted three searches: one a verbal simulation, and one each with the print and electronic versions of a general purpose encyclopaedia. Observer notes, audio tapes of all sessions, captured keystrokes of the electronic searches, and responses to a final interview were used to compare the print and electronic versions, analyse subjects' development of mental models for the electronic system, and evaluate human-computer interface effects. Encyclopaedias seemed to be default sources of information for these subjects. Subjects demonstrated satisfactory use of them by reading articles sequentially rather than scanning, and without using the index. Some subjects simply applied print models to their electronic searches, not taking advantage of full-text searching or hypertext capabilities. Most were able to use some of the electronic system's features, and a few took good advantage of these features and thus appeared to develop distinct mental models for the electronic encyclopaedia by adapting their existing mental models. Subjects took almost twice as much time, posed more queries, and examined more articles in the electronic searches. Designers and instructors are encouraged to guide adaptive transitions by focusing attention on the interactive features of electronic systems and their unique features for browsing, querying and filtering information. Recommendations about display effects, navigational aids, and query formulation aids are also made.
Theoretical Training and Problem Detection in a Computerized Database Retrieval Task BIBA 619-637
  Tom Dayton; Charles F. Gettys; J. Thaddeus Unrein
Subjects attempted to find problems in computerized database query commands. Procedural subjects were trained merely by acquainting them with correct command procedures. Theoretical subjects were given additional instruction in the theory underlying the command procedures. The problems to be detected were either slips or mistakes, and were embedded in database queries that were either orderly or scrambled. The procedural group's sensitivity to errors was lower for scrambled queries than for orderly ones. In contrast, the theoretical group's sensitivity was just as good for scrambled queries as for orderly ones. Surprisingly, when the commands were orderly, theoretical subjects were no more sensitive to errors than procedural subjects were. Sensitivity and bias data suggested that training affected problem detection strategies by modifying mental models. Bias changes implied the existence of multiple stages in problem detection.
Support for Browsing in an Intelligent Text Retrieval System BIBA 639-668
  R. H. Thompson; W. B. Croft
Browsing is potentially an extremely important technique for retrieving text documents from large knowledge bases. The advantages of this technique are that users get immediate feedback from the structure of the knowledge base and exert complete control over the outcome of the search. The primary disadvantages are that it is easy to get lost in a complex network of nodes representing documents and concepts, and there is no guarantee that a browsing search will be as effective as a more conventional search. In this paper, we show how a browsing capability can be integrated into an intelligent text retrieval system. The disadvantages mentioned above are avoided by providing facilities for controlling the browsing and for using the information derived during browsing in more formal search strategies. The architecture of the text retrieval system is described and the browsing techniques are illustrated using an example session.
A Formal Representation System for the Human-Computer Interaction Process BIBA 669-696
  Muneo Kitajima
This paper presents a formal representation system for the interpretive understanding of users interacting with systems. In order to fully characterize the interaction process, a local-interaction-based approach is taken. An interactive system is represented in the form of rules expressed in terms of cognitive units. A cognitive unit is a combination of a concept and an attribute: the concepts are distinct cognitive objects concerning the system and the attributes are different aspects of each concept. Thus, cognitive units can be regarded as the objects through which a user communicates with the system. The interaction process is represented as a sequence of applied system rules. A method is presented for inferring users' cognitive states in the interaction process, such as working memory and planning units. Through an investigation of hypothesized user actions carried out on an existing screen-oriented editor represented in the proposed framework, it is shown that statistics of working memory load indicate the cognitive complexity of particular tasks, and that quite understandable planning units are derived by the method.
Measuring Change in the Programming Process BIBA 697-711
  Richard T. Redmond; Jean B. Gasen
A type of data associated with the programming process, change data, is defined: the set of changes made to a program during program enhancement or development, including changes made and later discarded. A collection of measures derived from change data is developed. Special attention is given to programming language independence when isolating changes made to programs. A methodology for gathering change data is presented and insights into the meaning of these data are provided. The ease of gathering change data is shown through an example case. Potential application areas for the use of change data and the associated measures are suggested.