IJMMS Tables of Contents: 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39

International Journal of Man-Machine Studies 39

Editors: B. R. Gaines; D. R. Hill
Dates: 1993
Volume: 39
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 53
Links: Table of Contents
  1. IJMMS 1993 Volume 39 Issue 1
  2. IJMMS 1993 Volume 39 Issue 2
  3. IJMMS 1993 Volume 39 Issue 3
  4. IJMMS 1993 Volume 39 Issue 4
  5. IJMMS 1993 Volume 39 Issue 5
  6. IJMMS 1993 Volume 39 Issue 6

IJMMS 1993 Volume 39 Issue 1

The Phenotype of Erroneous Actions BIBA 1-32
  Erik Hollnagel
The study of human actions with unwanted consequences, in this paper referred to as human erroneous actions, generally suffers from inadequate operational taxonomies. The main reason for this is the lack of a clear distinction between manifestations and causes. The failure to make this distinction is due to the reliance on subjective evidence which unavoidably mixes manifestations and causes. The paper proposes a clear distinction between the phenotypes (manifestations) and the genotypes (causes) of erroneous actions. A logical set of phenotypes is developed and compared with the established "human error" taxonomies as well as with the operational categories which have been developed in the field of human reliability analysis. The principles for applying the set of phenotypes as practical classification criteria are developed and described. A further illustration is given by the report of an action monitoring system (RESQ) which has been implemented as part of a larger set of operator support systems and which shows the viability of the concepts. The paper concludes by discussing the principal issues of error detection, in particular the trade-off between precision and meaningfulness.
Performance Amplification and Process Restructuring in Computer-Based Writing BIBA 33-49
  Ronald T. Kellogg; Suzanne Mueller
This research compared composing on a word processor with writing in longhand to explore whether the computer-based tool amplifies performance and restructures attentional allocation to writing processes. Performance was assessed in terms of the quality of the resulting documents, based on subjective ratings and text analysis, and the fluency of language production. The allocation of attentional resources was monitored in terms of the degree of cognitive effort (secondary task reaction times) and processing time (directed retrospective reports) devoted to planning ideas, translating ideas into text, and reviewing ideas and text. In Experiment 1, word processing increased the attentional investment in, and altered the nature of, planning and reviewing, without improving either the quality or fluency of writing. In Experiment 2 these restructuring effects were again observed, both for writers who reported modest experience composing on a computer and, to an even greater degree, for those who reported extensive experience. Only participants with extensive word processing experience matched the quality and fluency of those who wrote in longhand.
Towards Ecological Validity in Menu Research BIBA 51-70
  Shannon L. Halgren; Nancy J. Cooke
The goal of this research was to test the effects of menu organization on user performance under situations representative of typical user-computer interactions. Effects of alphabetical, categorical, and random organizations on response time and accuracy were tested using a problem-solving task in which problems differed in degree of complexity. In addition, explicit or implicit targets were searched for in menus consisting of items from distinct or overlapping categories. Results indicated that alphabetical and categorical organizations were generally equivalent and superior to random organizations, replicating the results of others. Unexpectedly, decreasing category distinctiveness, although generally detrimental, did not seem to negatively affect performance with categorical menu organizations. On the other hand, problem complexity had large effects on performance and magnified the effects of other factors such as target type. These results extend some previous conclusions about menu organization to situations that are more typical of user interactions. The limits of generalizing from results of controlled experiments to truly ecologically valid settings are also discussed.
Correlates of Learning in a Virtual Classroom BIBA 71-98
  Starr Roxanne Hiltz
The Virtual Classroom consists of software enhancements to the basic capabilities of a computer-mediated communication system in order to support collaborative learning. Results of quasi-experimental field trials, which included matched sections of college courses delivered in the traditional and virtual classrooms, indicate that there is no consistent significant difference between the two modes in mastery of material by students, as measured by grades, although in a computer science course grades were better in the on-line section. Subjectively, most students report that the Virtual Classroom improves access to educational activities and is, overall, a "better" mode of learning. However, these favorable outcomes are contingent upon a number of variables, including student characteristics, adequate equipment access, and instructor-generated collaborative learning processes.
Design Issues in the Simulation of Beliefs in Corporate Intelligence Systems: REALPOLITIK II BIBA 99-112
  Norman D. Livergood
This paper explores preliminary design issues in simulating beliefs in corporate intelligent systems. The study integrates the two fields, belief system simulation and corporate intelligence. Belief system simulation has been carried out primarily within artificial intelligence. Corporate intelligence is a relatively new field in strategic planning within the business technology environment. This paper explores design issues in belief system simulation and suggests how such systems could be used as part of a corporate intelligence capability, increasing the scope and effectiveness of corporate strategic planning.
   The belief system of high-placed Japanese business and government leaders is the focus of this preliminary study. The specific issue chosen to illustrate the simulation model is the $25-billion American disk-drive market. The hypothetical user is a US corporate strategic planning group in a disk-drive company.
   The program illustrated in this study determines what changes occur if certain beliefs and belief factor values are input into a simulation system. For example, once having modeled the belief system of the competitor's executives and strategic planners, it would be possible to determine what effect specific beliefs and belief factors such as danger, confidence, importance and emotional charge will have on this belief system.
Improving Application Development Productivity by Using ITS BIBA 113-146
  John D. Gould; Jacob Ukelson; Stephen J. Boies
Perhaps the key problem in application development today is the need to increase the productivity of development organizations. This paper identifies the main factors affecting application development productivity, and then describes a new application development environment (called ITS, which stands for Interactive Transaction System) that is aimed at, among other things, addressing these factors. A unique feature of ITS is the support of multiple, rule-based user interface styles, which allows multiple applications to run in the same style and the same application to run in multiple styles. The results of four case studies of developers using ITS to implement serious applications are summarized, with emphasis upon the effects of ITS on development productivity. These results demonstrate that ITS (a) greatly enhances application development productivity, and (b) provides a mechanism for creating applications that can lead to improved productivity for end-users and their work organizations. These studies can also serve as a model for how to do human factors work within very advanced technological projects -- ones where, of necessity, the preoccupation first centers on establishing technical feasibility.
Comprehending Rule-Based Programs: A Graph-Oriented Approach BIBA 147-175
  Micheal B. O'Neal; William R. Edwards, Jr.
This paper describes the construction of a Restricted Flow Graph (RFG) which should be useful for aiding comprehension of programs written in forward-chaining, non-monotonic, rule-based languages such as OPS5. An RFG is composed of nodes and arcs and is derived from a synthetic execution of a program. The nodes of the RFG represent abstracted working memory states, while the arcs represent transformations between these states. These transformations correspond to one or more executions of a program rule. Five versions of the RFG are presented. Each successive version is more highly constrained, or restricted, in the arcs and states that it may contain.
   Three RFG-based measures of program complexity are proposed: the number of nodes in an RFG, the number of arcs in an RFG, and a measure, similar to McCabe's measure, which combines counts of both nodes and arcs. These measures were computed for each of the five versions of the RFGs of eight rule-based programs. The number of nodes was found to correlate well with the performance of a group of 14 programmers who examined the programs and were tested on their level of understanding using a series of objective questions. In addition, the correlation coefficient was found to improve as the RFG became more constrained. The authors conclude that measures based on RFGs may be good indicators of program complexity and that a tool for presenting graphical representations of RFGs could be useful in increasing programmer comprehension.
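   As an illustration only (the abstract does not give the paper's exact node-and-arc formula), a McCabe-style measure for a graph with $A$ arcs, $N$ nodes and $P$ connected components is the cyclomatic number $V(G) = A - N + 2P$; an RFG-based analogue would therefore grow with both the number of abstracted working-memory states and the number of rule-firing transitions between them.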

IJMMS 1993 Volume 39 Issue 2

The Blinking Cursor: A Two-Experiment Sequence Investigating Whether a Blinking Cursor Facilitates User Performance BIBA 177-185
  Joan H. Coll; William C. Callahan; Jude H. Flaherty; Richard Coll
In a two-experiment sequence, the authors investigated whether a blinking cursor facilitates performance for word processing and form-entry type applications. Previously reported work has not focused on this important aspect of blink, but rather on the blinking of complete groups of target elements to distinguish them from other, non-target elements. The results of both experiments reported here demonstrate that a blinking cursor does in fact produce significantly faster performance.
A General Approach to Criteria Aggregation using Fuzzy Measures BIBA 187-213
  Ronald R. Yager
The central focus of this work is to provide a general formulation for the aggregation of multiple criteria. This formulation is based upon the use of fuzzy subsets to model the criteria and the use of fuzzy measures to capture the interrelationship between criteria. A form of the fuzzy integral is used to connect these two to obtain the overall decision function. We are particularly interested here in the formulations obtained under different assumptions about the nature of the underlying fuzzy measure. We show how a number of the classic aggregation methods fall out as special cases of this very general formulation.
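   As an illustrative sketch only (the abstract does not specify which form of the fuzzy integral is used), one standard fuzzy-integral aggregation is the Choquet integral of criteria satisfactions $f(x_1), \dots, f(x_n)$ with respect to a fuzzy measure $g$: reorder the criteria so that $f(x_{(1)}) \le \dots \le f(x_{(n)})$, let $A_{(i)} = \{x_{(i)}, \dots, x_{(n)}\}$ and $f(x_{(0)}) = 0$; then $C_g(f) = \sum_{i=1}^{n} \left[ f(x_{(i)}) - f(x_{(i-1)}) \right] g(A_{(i)})$. When $g$ is additive this reduces to an ordinary weighted average, which illustrates how classic aggregation methods can fall out as special cases.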
APT: A Description of User Interface Inconsistency BIBA 215-236
  Phyllis Reisner
One of the basic tenets of interface design is that an interface should be "consistent". However, the meaning of the term remains elusive. Several attempts have been made to represent consistency (and inconsistency) formally. Although each formalism has built on its predecessors to increase our understanding, a crucial assumption is still missing. APT (Agent Partitioning Theory) is a formal description of inconsistency that includes the missing assumption. In addition to being a formal description, APT embodies assumptions about human cognitive behavior. The psychological correlates of APT involve notions of generalization and of inference which are frequently used in describing inconsistency. These psychological correlates are used to explain (1) which user errors will occur as a result of inconsistency, and why, and (2) why users are sometimes correct, sometimes not, in an inconsistent system.
   Although some of the early formalisms state or imply that these formal descriptions can be used to identify inconsistency, they cannot do so. APT, and its predecessors, are not discovery procedures. They do not mechanically identify inconsistencies. They are tools to help an analyst do so.
Models and Theories of Programming Strategy BIBA 237-267
  Simon P. Davies
Much of the literature concerned with understanding the nature of programming skill has focused explicitly upon the declarative aspects of programmers' knowledge. This literature has sought to describe the nature of stereotypical programming knowledge structures and their organization. However, one major limitation of many of these knowledge-based theories is that they often fail to consider the way in which knowledge is used or applied. Another strand of literature is less well represented. This literature deals with the strategic elements of programming skill and is directed towards an analysis of the strategies commonly employed by programmers in the generation and the comprehension of programs. In this paper an attempt is made to unify various analyses of programming strategy. This paper presents a review of the literature in this area, highlighting common themes and concerns, and proposes a model of strategy development which attempts to encompass the central findings of previous research in this area. It is suggested that many studies of programming strategy are descriptive and fail to explain why strategies take the form they do or to explain the typical strategy shifts which are observed during the transitions between different levels of skill. This paper suggests that what is needed is an explanation of programming skill that integrates ideas about knowledge representation with a strategic model, enabling one to make predictions about how changes in knowledge representation might give rise to particular strategies and to the strategy changes associated with developing expertise. This paper concludes by making a number of brief suggestions about the possible nature of this model and its implications for theories of programming expertise.
Throwing, Pitching and Catching Sound: Audio Windowing Models and Modes BIBA 269-304
  Michael Cohen
After surveying the concepts of audio windowing, this paper elaborates taxonomies of three sets of its dimensions -- spatial audio ("throwing sound"), timbre ("pitching sound"), and gain ("catching sound") -- establishing matrices of variability for each, drawing similes, and citing applications. Two audio windowing systems are examined across these three operations: repositioning, distortion/blending, and gain control (i.e. state transitions in virtual space, timbre space, and volume space). Handy Sound is a purely auditory system with gestural control, while MAW exploits egocentric graphical control. These two systems motivated the development of special user interface features. (Sonic) piggyback-channels are introduced as filtear manifestations of changing cursors, used to track control state. A variable control/response ratio can be used to map a near-field work envelope into perceptual space. Clusters can be used to hierarchically collapse groups of spatial sound objects. WIMP idioms are reinterpreted for audio windowing functions. Reflexive operations are cast as an instance of general manipulation when all the modified entities, including an iconification of the user, are projected into an egalitarian control/response system. Other taxonomies include a spectrum of directness of manipulation, and sensitivity to current position crossed with dependency on some target position.
Fuzzy Model of a Human Control Operator in a Compensatory Tracking Loop BIBA 305-332
  I. S. Shaw
The subject of this work is the characterization of a fuzzy control operator model that can successfully emulate the functioning of a human control operator in a compensatory tracking loop on a real-time basis. After a learning period spent in the proximity of the real human operator being modelled, the model is capable of functioning when physically removed from the site. A systematic fuzzy modelling technique capable of emulating a prototype dynamic system on the basis of its input-output behaviour only is concisely presented and the application of these techniques to a human operator/controller in a tracking loop is illustrated. Experimental results are shown with different driving functions, plant dynamics, and model orders. The fuzzy model can emulate human operators with widely varying tracking abilities.
A Probabilistic Logic for the Development of Safety-Critical, Interactive Systems BIBA 333-351
  C. W. Johnson
This paper starts from the premise that the human contribution to risk must be assessed during the development of safety-critical systems. In contrast to previous approaches, discrete numerical values are rejected as a means of quantifying the probability of operator "error" for many different users of many different systems. Numerical probabilities are used to rank the importance that designers attach to particular system failures. Adequate development resources must be allocated so that operators will resolve and not exacerbate high-priority failures. In order to do this, human factors and systems engineers must be provided with notations that can represent risk assessments. Many techniques that are in widespread use, such as fault-tree analysis, provide inadequate support for the development of interactive systems. They do not capture the temporal properties that can determine the quality of interaction between operators and stochastic application processes. It is argued that probabilistic temporal logics avoid this limitation. Notations which are built around linear models of time cannot easily capture the semantics of risk assessments. We have developed Probabilistic Computation Tree Logic (PCTL) to avoid this problem. PCTL is built around a branching model of time. Finally, it is argued that PCTL specifications and Monte Carlo techniques can be used to provide faithful simulations of stochastic interactive systems. The implementation of the Risklog prototyping tool is briefly described. Partial simulations can be shown to system operators in order to determine whether they are likely to intervene and resolve system failures.

IJMMS 1993 Volume 39 Issue 3

Supporting Command Reuse: Empirical Foundations and Principles BIBA 353-390
  Saul Greenberg; Ian H. Witten
Current user interfaces fail to support some work habits that people naturally adopt when interacting with general-purpose computer environments. In particular, users frequently and persistently repeat their activities (e.g. command line entries, menu selections, navigating paths), but computers do little to help them to review and re-execute earlier ones. At most, systems provide ad hoc history mechanisms founded on the premise that the last few inputs form a reasonable selection of candidates for reuse.
   This paper provides theoretical and empirical foundations for the design of a reuse facility that helps people to recall, modify and re-submit their previous activities to computers. It abstracts several striking characteristics of repetitious behaviour by studying traces of user activities. It presents a general model of interaction called "recurrent systems". Particular attention is paid to the distribution of command line repetitions given a sequential history list of previous entries; this distribution can be conditioned in several ways to enhance predictive power. Reformulated as empirically-based general principles, the model provides design guidelines for history systems specifically and modern user interfaces generally.
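   As an illustration only (hypothetical function names and data, not the authors' analysis code), the following sketch estimates two quantities a recurrent-systems analysis of a command trace would care about: how often the next command line repeats an earlier one, and how often that repeat falls within the last k entries of a sequential history list.

      # Hypothetical sketch: recurrence rate of command lines in a history trace.
      def recurrence_stats(history, k=10):
          repeats = within_last_k = 0
          for i, cmd in enumerate(history[1:], start=1):
              previous = history[:i]
              if cmd in previous:
                  repeats += 1
                  if cmd in previous[-k:]:
                      within_last_k += 1
          n = len(history) - 1
          return repeats / n, within_last_k / n

      commands = ["ls", "cd src", "ls", "vi main.c", "make", "ls", "make", "ls"]
      print(recurrence_stats(commands, k=3))  # (fraction repeated, fraction found in last 3)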
Supporting Command Reuse: Mechanisms for Reuse BIBA 391-425
  Saul Greenberg; Ian H. Witten
Reuse facilities help people to recall and modify their earlier activities and re-submit them to the computer. This paper examines such mechanisms for reuse. First, guidelines for building reuse facilities are summarized. Second, existing reuse facilities are surveyed under four main headings: history mechanisms, adaptive systems, programming by example, and explicit customization. The first kind relies on temporally ordered lists of interactions, the second builds statistical dynamic models of past activities and uses them to expedite future interactions, the third collects and generalizes more extensive sequences of activities for future reuse, while in the fourth the user explicitly collects items of interest. Third, the paper presents WORKBENCH, a reuse facility that uses an empirically-derived history system as a way of capturing and organizing one's situated activities. An appendix reports a study of a widely-available history system, the UNIX csh, and explains why it is poorly used in practice.
Expectations and Feedback in User-System Communication BIBA 427-452
  Frits L. Engel; Reinder Haakma
In terms of speed and accuracy of intention transfer, normal human conversation proves to be very efficient: exchanged messages carry only the information that is new relative to contextual knowledge assumed to be present at the receiver's end. Furthermore, by receiving layered feedback from the recipient, the speaker is able to verify at an early stage of communication whether his intentions are being accurately perceived. Finally, when messages diverge from those expected, the listener may ask for clarification at an early stage of message interpretation.
   For user-system communication to become similarly more efficient, machine interfaces should display both early layered feedback (I-feedback) about partial message interpretations and layered expectations (E-feedback) about the message components still to be received. Examples of interfaces are given which already possess these desirable characteristics in part.
   The "layered-protocol model", proposed by Taylor (1988, Layered protocols for computer-human dialogue. I. Principles, International Journal of Man-Machine Studies, 28, 175-218) as a framework for user-system interface design, details the use of layered I-feedback and related repair messages in user-system communication. In this paper we suggest that the model can be improved by providing it with layered E-feedback, as derived from assumed intentions and layered knowledge of the interaction history.
From Icons to Interface Models: Designing Hypermedia from the Bottom Up BIBA 453-472
  John A. Waterworth; Mark H. Chignell; Shu Min Zhai
We describe a method to derive design models for hypermedia interfaces from the bottom up. Firstly, we compile a list of hypermedia interface features which we classify according to the category of functions they fulfill. We then describe an experiment in which candidate designs for low-level interface features were designed and tested for recognizability. In the experiment, icons for each of 61 hypermedia concepts were generated and then judged. Finally, we outline and illustrate a model induction phase in which low-level features are combined into an overall interface model, via "micro-models" that take account of the types of icons that worked best for each class of interface feature. We suggest that, at least for hypermedia systems, a bottom-up approach to interface design based on the functions of low-level features is preferable to the dominant, top-down approach based around one or more metaphors.
An Approach to Assessment of Plant Man-Machine Systems by Computer Simulation of an Operator's Cognitive Behavior BIBA 473-493
  Kazuo Furuta; Shunsuke Kondo
Computer simulation of an operator's cognitive behavior is a promising approach for human factors study and man-machine systems assessment. In this paper an architecture for the simulation is proposed, based on current AI technologies. The simulation system has been constructed as a knowledge-based system built on the blackboard model which, by use of a Truth Maintenance System, can represent well the revisable nature of human thought. Assessment of the cognitive workload imposed on an operator was attempted by scoring the information-processing activities of this system. The scoring was done separately for each type of activity. A test simulation was performed on some cases of nuclear power plant operation under abnormal plant conditions. In the case study, investigation of the mechanism of mistakes and assessment of the cognitive workload were performed.
Modelling Error Recovery and Repair in Automatic Speech Recognition BIBA 495-515
  C. Baber; K. S. Hone
While automatic speech recognition (ASR) has achieved some level of success, it often fails to live up to its hype. One of the principal reasons for this apparent failure is the prevalence of "recognition errors". This makes error correction a topic of increasing importance to ASR system development, with a growing awareness that, by designing for error, a number of problems can be overcome. Currently, there is a wide range of possible techniques which could be used for correcting recognition errors, and it is often difficult to compare the techniques objectively because their performance is closely related to their implementation. Furthermore, different techniques may be more suited to different applications and domains. It would be useful to have some means of defining the requirements of an error correction dialogue, based on characteristics of the dialogue and ASR system in which it is to be used, in order to develop design specifications for appropriate error correction. This paper reports an approach, based on task-network modelling, which could be used to this end.
Technical Note: Theoretical and Simulation Approaches to Error Correction Strategies in Automatic Speech Recognition BIBA 517-520
  W. A. Ainsworth
Baber and Hone (Error recovery and repair in automatic speech recognition systems, International Journal of Man-Machine Studies, 39, 495-515, 1993) describe four strategies for error correction in automatic speech recognition. In simulation studies using task-network modelling they show that the nth choice strategy is the best in terms of transaction time for low recognition rates. At high rates there is little to choose between the strategies. This note shows that for three of the strategies the formulae developed by Ainsworth and Pratt (Feedback strategies for error correction in speech recognition systems, International Journal of Man-Machine Studies, 36, 833-842, 1992) can be used to estimate the results to within 6%.
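   As an illustration only (not the Ainsworth and Pratt formulae themselves), the sensitivity of transaction time to recognition rate can be seen from a simple repetition strategy: if each correction attempt is recognized independently with probability $p$, the expected number of attempts needed to repair an error is $E[\text{attempts}] = \sum_{k \ge 1} k\,p(1-p)^{k-1} = 1/p$, so the correction overhead grows steeply as the recognition rate falls, which is why strategy choice matters most at low recognition rates.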

European Association for Cognitive Ergonomics: Book Reviews

"The Reactive Keyboard," by J. Darragh and I. Witten BIB 521-528
  Alice Dijkstra; Carla Huis
"Person-Centred Ergonomics: A Brantonian View of Human Factors," edited by D. J. Oborne, R. Branton, F. Leal, P. Shipley, and T. Stewart BIB 521-528
  Jacques Leplat
"Critiquing Human Error: A Knowledge-Based Human-Computer Collaboration Approach," by B. G. Silverman BIB 521-528
  Erik Hollnagel
"Methods and Tools in User-Centred Design in Information Technology," edited by M. Galer, S. Harker, and J. Ziegler BIB 521-528
  Tom Carey
"What Computers Still Can't Do: A Critique of Artificial Reason," by H. L. Dreyfus BIB 521-528
  Antoni Diller
"Intelligent Help: Communicating with Knowledge-Based Systems," by Rachel M. Pilkington BIB 521-528
  Deborah K. Stone

IJMMS 1993 Volume 39 Issue 4

Plan Recognition Strategies for Language Understanding BIBA 529-577
  Sandra Carberry; W. Alan Pope
In recent years the emphasis in natural language understanding research has shifted from studying mechanisms for understanding isolated utterances to developing strategies for interpreting sentences within the context of a discourse or an extended dialogue. A very fruitful approach to this problem has derived from a view of human behavior as goal-directed and understanding as explanation-based. According to this view, people perform actions and communicate to advance their goals, and language understanding therefore involves recognizing and reasoning about the goals and plans of others. This paper explores plan inference in natural language understanding. It presents a core set of ideas on which most models of plan recognition are based and illustrates these by critically analyzing three systems in detail. It then discusses issues that have been addressed by various research efforts, explores the major problems that limit the capability of current plan recognition systems, and describes current research directed toward solving some of these problems.
Strategy Choice and Change in Programming BIBA 579-598
  Quanfeng Wu; John R. Anderson
This research studied the choice of, and change between, looping strategies, especially the "while-do" and "repeat-until" looping constructs in the PASCAL programming language. The empirical results from the first experiment, in which subjects were free to choose between the two looping alternatives, indicated that most PASCAL programmers are quite sensitive to the nature of the problems being solved and adaptable in choosing appropriate looping strategies. Two further experiments were performed in which subjects were either forced or induced to use one of the two looping strategies. These experiments indicated that subjects are quite tenacious in using the appropriate strategy and that their performance deteriorates when they are forced to use a different strategy.
Visualization Ability as a Predictor of User Learning Success BIBA 599-620
  Maung K. Sein; Lorne Olfman; Robert P. Bostrom; Sidney A. Davis
A novice user's cognitive abilities can influence how difficult he/she finds learning to use a software package. To ensure effective use, it is important to identify specific abilities that can influence learning and use, and then develop training methods or design interfaces to accommodate individuals who are lower in those abilities. This paper reports the integrated findings of five studies that examined a specific cognitive variable, visualization ability, for different systems (electronic mail, modeling software and operating systems), applying different training methods (analogical or abstract conceptual models) and computer interfaces (command-based or direct manipulation). Consistent with past results in other domains, we found that visualization ability is a strong predictor of user learning success. More importantly, we also found that subjects with lower visualization ability can be helped to narrow, and in some cases equal or surpass, the performance gap between themselves and subjects with higher visualization ability through appropriate training methods and direct manipulation interfaces. Based on our findings, we discuss implications for practitioners and designers and suggest possible avenues for future research.
Coping with Complex Environments: The Effects of Providing Overviews and a Transparent Interface on Learning with a Computer Simulation BIBA 621-639
  Ton de Jong; Robert de Hoog; Frits de Vries
Computers are used in increasingly complex environments for increasingly complex tasks. One example is the use of computer simulations in instruction. Simulation offers an environment in which learners have to extract information from the system and must construct their knowledge themselves. This requires a high level of control for the learner over the (complex) environment. The present study investigates the influence of two representation aspects of simulation environments on the way learners interact with a simulation and on the resulting test performance. The first aspect is giving learners additional navigation support by providing them with separate overviews of input and output. The second aspect concerns the type of interface: a conversational interface vs. a direct manipulation interface. Subjects had to learn about a theory of decision support with the use of one of four versions of basically the same simulation. In a control condition subjects were directly confronted with the simulation model in the form of a formula. Results showed that navigation support did not raise the subjects' scores. On the contrary, subjects receiving navigation support tended to have lower test performance. Subjects who received navigation support made fewer iterations during the simulation than the other subjects, and the number of iterations was related to test performance. An explanation for their low scores might be that the navigation support distracted the subjects from their main task: learning about the model by manipulating the simulation. The direct manipulation interface was successful in increasing the number of changes to model variables. This, however, neither increased nor lowered the subjects' test performance. As expected, the direct manipulation interface resulted in far more efficient learning compared with the conversational interface.
Types of Expertise: An Invariant of Problem Solving BIBA 641-665
  Paul E. Johnson; Imran A. Zualkernan; David Tukey
One invariant of problem solving is based on properties (e.g. memory capacity) of the symbol system used to process information and events. This invariant generalizes across agents and domains but usually lacks the power to explain success on specific problem-solving tasks. A second invariant is based on properties of the knowledge required to perform a given task. This invariant, often termed the knowledge principle, attempts to account for success in specific tasks but typically does not generalize from one domain to the next or from one agent to the next. In this paper a third invariant is proposed, one that is based on the relationship between a problem-solving agent and its environment. This invariant captures the requirements of a problem-solving task as well as the role of domain knowledge at a level that is independent of a particular agent, representation or implementation. We call this invariant "expertise". Five types of expertise are proposed. Features of each type are described using the concept of argument. For each type of expertise there is a corresponding type of argument. Examples of types of expertise are given from chess, business and social policy, and medicine. Evidence is provided for the presence of types of expertise from the analysis of the behavior of two individuals (Ph.D.-level statisticians) solving problems as consultants in the domain of industrial experimental design. Types of expertise represented in several first-generation expert systems are also identified and discussed.
Speech versus Mouse Commands for Word Processing: An Empirical Evaluation BIBA 667-687
  Lewis R. Karl; Michael Pettey; Ben Shneiderman
Despite advances in speech technology, human factors research since the late 1970s has provided only weak evidence that automatic speech recognition devices are superior to conventional input devices such as keyboards and mice. However, recent studies indicate that there may be advantages to providing an additional input channel based on speech input to supplement the more common input modes. Recently the authors conducted an experiment to demonstrate the advantages of using speech-activated commands over mouse-activated commands for word processing applications when, in both cases, the keyboard is used for text entry and the mouse for direct manipulation. Sixteen experimental subjects, all professionals and all but one of them novice users of speech input, performed four simple word processing tasks using both input groups in this counterbalanced experiment. Performance times for all tasks were significantly faster when using speech to activate commands as opposed to using the mouse. On average, the reduction in task time due to using speech was 18.7%. The error rates due to subject mistakes were roughly the same for both input groups, and recognition errors, averaged over all the tasks, occurred for 6.3% of the speech-activated commands. Subjects made significantly more memorization errors when using speech as compared with the mouse for command activation. Overall, the subjects reacted positively to using speech input and preferred it over the mouse for command activation; however, they also voiced concerns about recognition accuracy, the interference of background noise, inadequate feedback and slow response time. The authors believe that the results of the experiment provide evidence for the utility of speech input for command activation in application programs.
Toward a Taxonomy of Multi-Agent Systems BIBA 689-704
  Shawn D. Bird
As intelligent systems become more pervasive and capture more expert and organizational knowledge, the expectation that they be integrated into larger problem-solving systems is heightened. To capitalize on these investments and more fully exploit their potential as knowledge repositories, general principles for their integration must be developed. Although simulated and prototype systems described in the literature provide solutions to some practical problems, most are empirical (or often simply intuitive) in design, emerging from implementation strategy instead of general principles. As a step toward the development of such principles, this paper presents a taxonomy for multi-agent systems that defines alternative architectures based on fundamental distributed, intelligent system characteristics.

IJMMS 1993 Volume 39 Issue 5

Iconic Reference: Evolving Perspectives and an Organizing Framework BIBA 705-728
  M. Elliott Familant; Mark C. Detweiler
Icons are now routinely used in human-computer interactions. Despite their widespread use, however, we argue that icons are far more diverse and complex than normally realized. This article examines some of the history behind the evolution of icons from simple pictures to much richer and more complex representational devices. Then we develop and present a new framework that distinguishes: (1) different kinds of sign relations; (2) different kinds of referent relations; and (3) differences between sign and referent relations. In addition, we highlight a fundamental symmetry between icons and symbols, and use this framework to raise a number of basic questions about the kinds of representational issues and challenges designers will need to consider as they create the next generation of icons for user interfaces.
The Minimal Manual: Is Less Really More? BIBA 729-752
  Ard W. Lazonder; Hans van der Meij
Carroll, Smith-Kerker, Ford and Mazur-Rimetz (The minimal manual, Human-Computer Interaction, 3, 123-153, 1987) have introduced the minimal manual as an alternative to standard self-instruction manuals. While their research indicates strong gains, only a few attempts have been made to validate their findings. This study attempts to replicate and extend the original study of Carroll et al. Sixty-four first-year Dutch university students were randomly assigned to a minimal manual or a standard self-instruction manual for introducing the use of a word processor. During training, all students read the manual and worked training tasks on the computer. Learning outcomes were assessed with a performance test and a motivation questionnaire. The results closely resembled those of the original study: minimalist users learned faster and better. The students' computer experience affected performance as well. Experienced subjects performed better on retention and transfer items than subjects with little or no computer experience. Manual type did not interact with prior computer experience. The minimal manual is therefore considered an effective and efficient means for teaching people with divergent computer experience the basics of word processing. Expansions of the minimalist approach are proposed.
The Semiotic Engineering of User Interface Languages BIBA 753-773
  Clarisse Sieckenius de Souza
Semiotic approaches to design have recently shown that systems are messages sent from designers to users. In this paper we examine the nature of such messages and show that systems are messages that can send and receive other messages -- they are metacommunication artefacts that should be engineered according to explicit semiotic principles. User interface languages are the primary expressive resource for such complex communication environments. Existing cognitively-based research has provided results which set the target interface designers should hit, but little is said about how to make successful decisions during the process of design itself. In an attempt to give theoretical support to the elaboration of user interface languages, we explore Eco's Theory of Sign Production (U. Eco, A Theory of Semiotics, Bloomington, IN: Indiana University Press, 1976) and build a semiotic framework within which many design issues can be explained and predicted.
Impact of Screen Density on Clinical Nurses' Computer Task Performance and Subjective Screen Satisfaction BIBA 775-792
  Nancy Staggers
This study examined the effect of displaying a fixed amount of information in three different ways: on one high-density screen, two moderate-density screens, or three low-density screens. Screens displaying retrieved laboratory results were used to test clinical nurses' task times, accuracy, and subjective screen satisfaction. The study sample was 110 randomly selected clinical nurses from a university medical center. Repeated measures with post-hoc analyses indicated that, for all repetitions and for practiced tasks, nurses found information targets significantly more quickly on the high-density screen than on either the moderate- or low-density screens, and significantly more quickly on moderate- than on low-density screens. Nurses' mean accuracy and screen satisfaction scores were essentially the same for the three screens. These results suggest increases in screen information density, within the study restrictions here, can result in faster performance times without sacrificing nurses' accuracy or screen satisfaction. The implication for system designers and clinicians is that low-density laboratory results retrieval screens in federal computer systems may be redesigned into one high-density screen without loss of user accuracy or screen satisfaction.
Characteristics of the Mental Representations of Novice and Expert Programmers: An Empirical Study BIBA 793-812
  Susan Wiedenbeck; Vikki Fix; Jean Scholtz
This paper presents five abstract characteristics of the mental representation of computer programs: hierarchical structure, explicit mapping of code to goals, foundation on recognition of recurring patterns, connection of knowledge, and grounding in the program text. An experiment is reported in which expert and novice programmers studied a Pascal program for comprehension and then answered a series of questions about it, designed to show these characteristics if they existed in the mental representations formed. Evidence for all of the abstract characteristics was found in the mental representations of expert programmers. Novices' representations generally lacked the characteristics, but there was evidence that they had the beginnings, although poorly developed, of such characteristics.
Shared Workspaces: How Do They Work and When Are They Useful? BIBA 813-842
  Steve Whittaker; Erik Geelhoed; Elizabeth Robinson
We investigated the effect on synchronous communication of adding a Shared Workspace to audio, for three tasks possessing key representative features of workplace activity. We examined the content and effectiveness of remote audio communication between pairs of participants, who worked with and without the addition of the Workspace. For an undemanding task requiring the joint production of brief textual summaries, we found no benefits associated with adding the Workspace. For a more demanding text editing task, the Workspace initially hampered performance but, with task practice, participants performed more efficiently than with audio alone. When the task was graphical design, the Workspace was associated with greater communication efficiency and also changed the nature of communication. The Workspace permits the straightforward expression of spatial relations and locations, gesturing, and the monitoring and coordination of activity by direct visual inspection. The results suggest that, for demanding text-based tasks, or for complex graphical tasks, there are overall benefits in adding a visual channel in the form of a Workspace. These benefits occur despite the costs involved in attempting to coordinate activity with this unfamiliar form of communication. Our findings provide evidence for early claims about putative Workspace benefits. We also interpret these results in the context of a theory of mediated communication.
On the Use of the Dempster Shafer Model in Information Indexing and Retrieval Applications BIBA 843-879
  Shimon Schocken; Robert A. Hummel
The Dempster Shafer theory of evidence concerns the elicitation and manipulation of degrees of belief rendered by multiple sources of evidence to a common set of propositions. Information indexing and retrieval applications use a variety of quantitative means -- both probabilistic and quasi-probabilistic -- to represent and manipulate relevance numbers and index vectors. Recently, several proposals were made to use the Dempster Shafer model as a relevance calculus in such applications. This paper provides a critical review of these proposals, pointing at several theoretical caveats and suggesting ways to resolve them. The methodology is based on expounding a canonical indexing model whose relevance measures and combination mechanisms are shown to be isomorphic to Shafer's belief functions and to Dempster's rule, respectively. Hence, the paper has two objectives: (i) to describe and resolve some caveats in the way the Dempster Shafer theory is applied to information indexing and retrieval, and (ii) to provide an intuitive interpretation of the Dempster Shafer theory, as it unfolds in the simple context of a canonical indexing model.
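   For reference (a standard statement of the rule, not the paper's indexing model itself), Dempster's rule combines two basic belief assignments $m_1$ and $m_2$ over a common frame into $m(A) = \dfrac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)}$ for $A \ne \emptyset$, where the denominator renormalizes away the conflicting mass; the paper shows the canonical indexing model's relevance measures and combination mechanisms to be isomorphic to belief functions combined in exactly this way.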

IJMMS 1993 Volume 39 Issue 6

Editorial BIB 881-883
  Brian Gaines
Document Annotation: To Write, Type or Speak? BIBA 885-900
  Philip Tucker; Dylan M. Jones
Although the visual display unit (VDU) is becoming an increasingly popular means of displaying documents, users often show a strong preference for the "hard-copy" medium of document presentation when it comes to reading activities such as those that involve proof-reading or refereeing the document. This is partly attributed to the difficulties of annotating documents presented in the electronic medium. Voice recording may be a more acceptable medium for annotating documents that are presented on VDUs, as it overcomes many of the problems associated with the typed annotation of electronic documents. Experiment 1 compared two computer-based annotation media (typed and spoken input) with the method of writing annotations on the document. Findings suggested that writing was a superior method of annotation to the other media in terms of number of annotations elicited, speed of recording and user preference. Experiment 2 differed from the first experiment in the way in which written annotations were recorded and in the amount of pre-trial practice given to subjects. In the second experiment voice resulted in shorter task completion times than either writing or typing. This is taken as limited support for a theory that a small amount of pre-trial practice is of greater benefit to the utility of a voice annotation facility than it is to a facility for typing annotations. The majority of differences between writing and the other conditions observed in Experiment 1 were not found in Experiment 2. The contrast between the two sets of results is discussed in terms of the subjects' familiarity with the methods of annotation involved and the advantages of a facility for annotating on the document. The discussion concludes with a set of guide-lines for the implementation of a voice annotation facility.
An Advice-Giving Interface Based on Plan-Recognition and User-Knowledge Assessment BIBA 901-924
  Michel C. Desmarais; Luc Giroux; Serge Larochelle
Users of powerful but complex software packages do not take full advantage of the functionality of their tools. Advisory systems, or consultants, offer a solution to this problem by providing continuous and on-the-job help and training advice. However, consultants have not yet had any practical implementation outside an experimental setting. We propose an architecture for a consultant that is feasible and scalable in a practical context.
   The architecture is implemented in a system called EdCoach. It addresses two important issues for advisory systems: (1) the task analysis problem and (2) the user knowledge assessment problem. The system's task analysis module infers the user's goals (task) from the analysis of actions and identifies the method chosen to complete the task. It is based on the parsing of user actions with an attribute grammar. The second component is an "overlay model" of the user's knowledge state (KS). The knowledge of the user is represented by a subset of known and unknown nodes in a set of knowledge units (KUs), representing the whole knowledge domain. The knowledge assessment module uses a probabilistic model combined with an implication network to infer user knowledge from the result of the task analysis.
   A third component of the system is the didactic module, which consists essentially of the application of a straightforward principle: if the user adopts an inefficient method to complete a goal, the system first checks whether the efficient method is unknown to the user and, if so, advises the user about that method.
   The system's performance was empirically tested with a text-editing application. A simulation of all three modules integrated in EdCoach shows that after about two weeks, 75% of the potential recommendations were progressively and correctly administered, or withheld, according to whether the efficient methods were unknown or known. The advantages and limits of the general approach adopted in EdCoach are discussed.
Planning for the Support of Computer Users BIBA 925-964
  G. Kelleher
This paper describes and discusses the design of an automatic planning system (LEAPS: the Leeds Educational Automated Planning System) for the support of educational help in information processing systems. Domain level planning is an important aspect of providing help to users of information processing systems (IPSs) as it is the mechanism that underpins answers to requests for help from the user of the form "How do I...?". Unfortunately planning technology has been inadequate for the task of supplying the answers to such problems. LEAPS provides solutions to some of the major problems of domain level planning in IPS help systems, and thus represents a technology which may be exploited to extend the flexibility and areas of application of current help systems. The difficulties of using current artificial intelligence planning technology in help systems are reviewed and the approach taken by LEAPS is described. Particular difficulties of the area, such as the absence of a complete world model and the problems of efficient but reliable plan creation, are analyzed. The means by which LEAPS deals with and eases these difficulties are reviewed. LEAPS is a nearly domain-independent planner (it provides plans within a range of IPSs), which uses a constraint-based least commitment approach to the generation of its plans. The planner provides reliable plans quickly by decomposing the planning problem into its component parts and applying well-understood and efficient algorithms to each component individually. The decomposed planning algorithm is reconstructed by the use of a reason maintenance system. LEAPS is able to deal with incomplete world models by applying an assumption-making plan creation algorithm, the assumptions being restricted by domain constraints representing the possible configurations of the planner's world.
An Agent-Theoretic Approach to Computer Participation in Dialogue BIBA 965-998
  A. E. Blandford
There is a range of situations -- for example, in the context of advice-giving or tutoring -- in which a computer system might be required to take an active role in the interaction (rather than simply responding unquestioningly to the user's input). In such situations, the system must be able to decide how to respond to the user -- sometimes taking the initiative and sometimes responding to the user's initiative. At any time, selecting the most appropriate response will depend on the context, and on what both system and user are aiming to achieve through the interaction. This paper presents the design and implementation of a computer-based agent that can engage a user in a mixed-initiative dialogue. In this work, the generation of language is viewed as opportunistic rational action. The computer-based agent constructs utterances in the context of the preceding dialogue, deciding what to say in the light of its own beliefs, goals and values. The prototype system has been tested with users. From the small-scale evaluation study that was conducted, it was concluded that the system is capable of engaging in extended dialogue that remains largely coherent and reasonable (at a semantic level), and that it provides a reasonable base for further work in this direction.
User Identification via Keystroke Characteristics of Typed Names using Neural Networks BIBA 999-1014
  Marcus Brown; Samuel Joe Rogers
A method for identifying computer users by analyzing keystroke patterns with neural networks and a simple geometric distance is presented. A model of each user's normal typing style was created and compared with later typing samples. Preliminary results demonstrate complete exclusion of impostors and a reasonably low false alarm rate when the sample text was limited to the user's name.
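   As an illustration only (hypothetical names, data and threshold; the paper's own models are a neural network and a geometric distance over typing samples), the geometric-distance half of such a scheme can be sketched as follows:

      # Hypothetical sketch: compare a typing sample against a stored per-user
      # profile of inter-key latencies using Euclidean distance.
      import numpy as np

      def keystroke_distance(sample_latencies, profile_mean):
          sample = np.asarray(sample_latencies, dtype=float)
          profile = np.asarray(profile_mean, dtype=float)
          return float(np.linalg.norm(sample - profile))

      def accept(sample_latencies, profile_mean, threshold):
          # Accept the claimed identity only if the sample is close to the profile.
          return keystroke_distance(sample_latencies, profile_mean) <= threshold

      profile = [110, 95, 130, 80, 120]   # mean latencies (ms) from enrolment samples
      genuine = [105, 98, 126, 84, 118]
      impostor = [160, 70, 190, 60, 95]
      print(accept(genuine, profile, threshold=25.0))   # expected: True
      print(accept(impostor, profile, threshold=25.0))  # expected: False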
Monitoring Behavior in Manual and Automated Scheduling Systems BIBA 1015-1029
  Yili Liu; Robert Fuld; Christopher D. Wickens
Human monitoring behavior in manual and automated scheduling systems is examined through an experiment that required the subjects to perform scheduling and monitoring tasks. The task required the assignment of a series of incoming customers to the shortest of three parallel service lines. The subject was either in charge of the customer assignment (Manual Mode) or was monitoring an automated system performing the same task (Automatic Mode). In both cases, the subjects were required to detect the nonoptimal assignments that they or the computer had made. The results showed better error detection performance and lower subjective workload in the automatic mode. The subjects in the manual mode were both biased against declaring their own assignment errors and less sensitive to their misassignments. Results are compared with previous findings of monitoring behavior in manual control systems, and are discussed in terms of human decision making, reliability, workload and system design.
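   As an illustration only (hypothetical names and data, not the experimental software), the scheduling rule and the monitoring task can be sketched as follows: each customer should go to the currently shortest line, and the monitor's job is to flag assignments that violate that rule.

      # Hypothetical sketch: flag non-optimal assignments to three parallel lines.
      # Service completions are ignored for simplicity.
      def optimal_line(queue_lengths):
          return min(range(len(queue_lengths)), key=lambda i: queue_lengths[i])

      def monitor(assignments, initial_queues=(0, 0, 0)):
          queues = list(initial_queues)
          errors = []
          for customer, line in enumerate(assignments):
              if line != optimal_line(queues):
                  errors.append(customer)  # assigned to a line that was not shortest
              queues[line] += 1
          return errors

      print(monitor([0, 1, 2, 0, 2, 1]))  # expected: [4]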
Structure from Associative Learning BIBA 1031-1050
  John H. Andreae; Shaun W. Ryan; Mark L. Tomlinson; Peter M. Andreae
It is frequently pointed out that a tabula rasa learning system needs constraints in order to extract structural information from its input-output sequence. We have been experimenting with a learning system (PP) that incorporates a simple associative form of learning in a production system architecture. It is demonstrated that PP, implemented in a simulated robot, can learn the structure of a multi-level task with the help of speech and one or more auxiliary actions. Following a suggestion that structure could be acquired by a stress/nonstress distinction in the teacher's verbal presentation, we report briefly on an experiment that shows that stress can replace the auxiliary action.

European Association for Cognitive Ergonomics: Book Reviews

"Developing User Interfaces: Ensuring Usability Through Product and Process," by D. Hix and H. R. Hartson BIB 1051-1057
  Alistair Sutcliffe
"Knowledge Negotiation," edited by R. Moyse and M. T. Elsom-Cook BIB 1051-1057
  Ann Blandford
"Watch What I Do: Programming by Demonstration," edited by Allen Cypher BIB 1051-1057
  Ruven Brooks
"Human Error," by James Reason BIB 1051-1057
  Wayne D. Gray; Haresh Sabnani; Susan S. Kirschenbaum