
International Journal of Human-Computer Studies 40

Editors: B. R. Gaines
Dates: 1994
Volume: 40
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 55
Links: Table of Contents
  1. IJHCS 1994 Volume 40 Issue 1
  2. IJHCS 1994 Volume 40 Issue 2
  3. IJHCS 1994 Volume 40 Issue 3
  4. IJHCS 1994 Volume 40 Issue 4
  5. IJHCS 1994 Volume 40 Issue 5
  6. IJHCS 1994 Volume 40 Issue 6

IJHCS 1994 Volume 40 Issue 1

Editorial BIB 1-3
  Brian Gaines
The Development of Cognitive Models of Planning for Use in the Design of Project Management Systems BIBA 5-30
  Christine M. Pietras; Bruce G. Coury
The research presented in this paper is concerned with the planning component of project management and describes the use of interviewing techniques to develop cognitive models of planning for project management systems. Interviews were conducted with six project managers from six different problem domains. Protocol analysis was used to develop two types of cognitive models: process models that provided detailed descriptions of planning actions; and a higher level model of the planning process based on the Hayes-Roth theoretical model of planning. A visual representation of the planning process, called DMAP, was created to identify the type of planning that occurs at each stage of the planning process in project management. The discussion focuses on the use of cognitive models in the design of knowledge-based systems for project management.
A Shell for Developing Non-Monotonic User Modeling Systems BIBA 31-62
  Giorgio Brajnik; Carlo Tasso
This paper first presents a general structured framework for user modeling, which includes a set of basic user modeling purposes exploited by a user modeling system when providing a set of services to other components of an application. At a higher level of abstraction such an application may perform a generic user modeling task, which results from an appropriate combination of some basic user modeling purposes. The central aim of the paper is to present, within the proposed framework, a flexible general-purpose shell, called UMT (User Modeling Tool), which supports the development of user modeling applications. UMT features a non-monotonic approach for performing the modeling activity: more specifically, it utilizes a modeling approach called assumption-based user modeling, which exploits a truth maintenance mechanism for maintaining the consistency of the user model. The modeling task is divided into two separate activities, one devoted to user classification and user model management, and the other devoted to consistency maintenance of the model. The modeling knowledge exploited by UMT is represented by means of stereotypes and production rules. UMT is capable of identifying, at any given moment during an interaction, all the possible alternative models which adequately describe the user and are internally consistent. The choice of the most plausible one among them is then performed using an explicit programmable preference criterion. UMT is also characterized by a very well defined and simple interface with the hosting application, and by a specialized development interface which supports the developer during the construction of specific applications. This paper includes an example application in the field of information-providing systems. UMT has been developed in Common LISP.
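   A minimal sketch of assumption-based user modeling in the style the abstract describes, in Python rather than UMT's Common LISP; the stereotypes, "nogood" integrity constraints and preference criterion below are all invented for illustration:

    from itertools import combinations

    # Invented stereotypes: each carries default assumptions about the user.
    STEREOTYPES = {
        "novice": {"wants_verbose_help", "unfamiliar_with_jargon"},
        "expert": {"wants_terse_output"},
    }

    # Invented integrity constraints: assumption sets that cannot hold together.
    NOGOODS = [{"wants_verbose_help", "wants_terse_output"}]

    def consistent(model):
        # A candidate model survives only if it contains no "nogood" set.
        return not any(ng <= model for ng in NOGOODS)

    def candidate_models(observed):
        # Enumerate every internally consistent union of activated stereotypes
        # with the assumptions directly observed during the interaction.
        names = list(STEREOTYPES)
        models = []
        for r in range(len(names) + 1):
            for combo in combinations(names, r):
                model = set(observed).union(*(STEREOTYPES[s] for s in combo))
                if consistent(model):
                    models.append(model)
        return models

    def most_plausible(models):
        # Programmable preference criterion; here simply the most specific model.
        return max(models, key=len)

    print(most_plausible(candidate_models({"unfamiliar_with_jargon"})))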
Productivity Gains via an Adaptive User Interface: An Empirical Analysis BIBA 63-81
  James E. Trumbly; Kirk P. Arnett; Peter C. Johnson
This study examines the impact of a user's level of computer knowledge and of an adaptive software interface on performance. Performance is examined with a simulation game decision-making task and with the user's expertise on the type of computer interface used to accomplish the task. The MIS literature recommends interface characteristics for novice users that differ from those recommended for experienced users. The premise of this research is that the characteristics of a computer user change with exposure to and experience with the application at hand. When such change transpires, the characteristics of the interface can be software-adjusted to maximize the productivity of the user. This empirical study investigates the impact of using an adaptive user interface that evolves to correspond to the skill level of the user. Results indicate that the level of computer knowledge, when using an adaptive interface, has no significant effect on either task performance or interface learning. More importantly, however, the type of interface coupled to a particular user -- novice, adaptive, or experienced -- produces significant performance differences.
Making the Abstraction Hierarchy Concrete BIBA 83-117
  Ann M. Bisantz; Kim J. Vicente
The abstraction hierarchy (AH) is a multileveled representation framework, consisting of physical and functional system models, which has been proposed as a useful framework for developing representations of complex work environments. Despite the fact that the AH is well known and widely cited in the cognitive engineering community, there are surprisingly few examples of its application. Accordingly, the intent of this paper is to provide a concrete example of how the AH can be applied as a knowledge representation framework. A formal instantiation of the AH as the basis for a computer program is presented in the context of a thermal-hydraulic process. This model of the system is complemented by a relatively simple reasoning mechanism which is independent of the information contained in the knowledge representation. This reasoning mechanism uses the AH model, along with qualitative user input about system states, to generate reasoning trajectories for different types of events and problems. Simulation outputs showing how the AH model can provide an effective basis for reasoning under different classes of situations, including challenging faults of various types, are presented. These detailed examples illustrate the various benefits of adopting the AH as a knowledge representation framework, namely: providing sufficient representations to allow reasoning about unanticipated fault and control situations, allowing the use of reasoning mechanisms that are independent of domain information, and having psychological relevance.
Knowledge Assessment: Tapping Human Expertise by the QUERY Routine BIBA 119-151
  Maria Kambouri; Mathieu Koppen; Michael Villano; Jean-Claude Falmagne
The QUERY procedure is designed to systematically question an expert, and construct the unique knowledge space consistent with the expert's responses. Such a knowledge space can then serve as the core of a knowledge assessment system. The essentials of the theory of knowledge spaces are given here, together with the theoretical underpinnings of the QUERY procedure. A full-scale application of the procedure is then described, which consists of constructing the knowledge spaces of five expert-teachers, pertaining to 50 mathematics items of the standard high school curriculum. The results show that the technique is applicable in a realistic setting. However, the analysis of the data indicates that, despite a good agreement across experts concerning item difficulty and other coarse measures, the constructed knowledge spaces obtained for the different experts are not as consistent as one might expect or hope. Some experts appear to be considerably more skillful than others at generating a usable knowledge space, at least by this technique.
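   The eliminative core of such a procedure fits in a few lines of Python (item labels invented; a sketch of the underlying theory, not the authors' implementation). A query pairs a set A of failed items with a further item q; a positive expert response ("a student failing every item in A will also fail q") discards every candidate knowledge state that contains q yet shares no item with A:

    from itertools import chain, combinations

    ITEMS = {"a", "b", "c"}  # invented curriculum items

    def all_states(items):
        # Begin with every subset of the item set as a candidate knowledge state.
        s = list(items)
        return [frozenset(c) for c in chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1))]

    def apply_positive_response(states, A, q):
        # Discard states containing q but disjoint from the failed set A.
        return [K for K in states if not (q in K and not (K & A))]

    states = all_states(ITEMS)
    states = apply_positive_response(states, A=frozenset({"a"}), q="b")
    print(sorted(map(set, states), key=len))  # "b" now presupposes "a"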
Trust, Self-Confidence, and Operators' Adaptation to Automation BIBA 153-184
  John D. Lee; Neville Moray
The increasing use of automation to supplant human intervention in controlling complex systems changes the operators' role from active controllers (directly involved with the system) to supervisory controllers (managing the use of different degrees of automatic and manual control). This paper examines the relationship between trust in automatic controllers, self-confidence in manual control abilities, and the use of automatic controllers in operating a simulated semi-automatic pasteurization plant. Trust, combined with self-confidence, predicted the operators' allocation strategy. A multitrait-multimethod matrix and logit functions showed how trust and self-confidence relate to the use of automation. An ARMAV time series model of the dynamic interaction of trust and self-confidence, combined with individual biases, accounted for 60.9-86.5% of the variance in the use of the three automatic controllers. In general, automation is used when trust exceeds self-confidence, and manual control when the opposite is true. Since trust and self-confidence are two factors that guide operators' interactions with automation, the design of supervisory control systems should include provisions to ensure that operators' trust reflects the capabilities of the automation and operators' self-confidence reflects their abilities to control the system manually.
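   As a toy rendering of the reported logit relationship (the coefficients below are invented, not those estimated in the paper), the probability of choosing automatic control can be written as a logistic function of the difference between trust and self-confidence plus an individual bias:

    import math

    def p_use_automation(trust, self_confidence, bias=0.0, gain=4.0):
        # Logistic choice model: automation is favoured when trust exceeds
        # self-confidence; "bias" stands in for a stable individual preference.
        return 1.0 / (1.0 + math.exp(-gain * (trust - self_confidence) - bias))

    print(p_use_automation(trust=0.8, self_confidence=0.5))  # ~0.77: automation
    print(p_use_automation(trust=0.4, self_confidence=0.7))  # ~0.23: manual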

IJHCS 1994 Volume 40 Issue 2

Special Issue: Sisyphus: Models of Problem Solving

Editorial BIB 185-186
  Brian Gaines
Problem Statement for Sisyphus: Models of Problem Solving BIBA 187-192
  Brian Gaines
We devised the Sisyphus problem to compare different approaches to the modeling of problem-solving processes in knowledge-based systems and the influence of these models on knowledge acquisition activities. To this end, we give a short description of a sample problem concerned with office assignment in a research environment.
   After a brief description of the settings in which the problem occurs, we describe the organizational structure of the research group and that group's facilities. We then present a sample annotated protocol of the local expert Siggi D. solving the office assignment problem. A slightly different second problem allows authors to illustrate the flexibility of their approach and to show how their model reacts to borderline cases.
   The papers that describe solutions to the office assignment problem address the following points: (1) the description of the tools and the generic structures used to provide a solution; (2) the authors' interpretations of the protocol, and assumptions and decisions taken about unclear points; (3) the description of the solution proposed, the instantiated generic structures, and traces for the solution of the problem; (4) the knowledge acquisition approach, that is, authors describe the links between the knowledge structures of the tool and actions or utterances of the protocol, the design decisions and their justifications and the questions arising from the modeling process that they would address to Siggi D.
   To refer to the protocol elements in the papers, authors use the numbers on the left of the action descriptions (e.g. Action 1), and those on the left of the comments (e.g. Comment 1a).
Making a Method of Problem Solving Explicit with MACAO BIBA 193-219
  Nathalie Aussenac-Gilles; Nada Matta
This paper summarizes the two-year experiment carried out with the MACAO knowledge acquisition method within the Sisyphus project. It describes our contribution to the project's aim of clarifying the definition and use of problem-solving methods when acquiring knowledge and designing a knowledge base. The lessons we derive from this work concern the definition, structure, building and use of the problem-solving model in MACAO, as well as the steps in the methodology.
   Firstly, we present the MACAO methodology, the associated software and its modelling structures. Next, we explain our interpretation of the expert's protocol given as an example of problem solving, and we report decisions taken about implicit knowledge. Then, we detail how the expertise is modelled with MACAO and we describe the model. This model is used to simulate the solving of the two problems given as possible inputs to the system to be designed. These cases help explain the acquisition process. They also demonstrate the importance of an explicit problem-solving method in the model for guiding knowledge acquisition and system design from the model. Comparing the problem solutions helps to evaluate how well the model withstands changes and how it behaves when dealing with conflicting situations.
   These results highlight limitations of MACAO concerning its knowledge representation structures and the tool supporting the methodology. We finally discuss evolutions aimed at easier identification and representation of the problem-solving method, and at its use for knowledge elicitation and system design. We also consider possible extensions of the representation language in order to model problem solving at different levels of abstraction.
Solving Sisyphus by Design BIBA 221-241
  Alan Balkany; William P. Birmingham; Jay Runkel
This paper demonstrates how the Domain-Independent Design System (DIDS) was used to solve the Sisyphus room-assignment problem by viewing it as a configuration-design task. We have developed a general problem-solving method for configuration design, based on constraint-satisfaction techniques. This method efficiently solves the Sisyphus problem, and provides strong guidance for knowledge acquisition. This paper presents both the problem solver and knowledge-acquisition support created by DIDS to solve the Sisyphus problem.
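   A minimal backtracking constraint-satisfaction sketch of this formulation of room assignment (the people, rooms and constraint are invented placeholders; DIDS assembles such a method from reusable mechanisms rather than hand-coding it):

    PEOPLE = ["head", "secretary", "researcher"]      # invented
    ROOMS = {"C5-117": 1, "C5-119": 1, "C5-120": 2}   # invented room -> capacity

    def violates(assignment):
        # Invented constraint: the head must occupy a single-person room.
        room = assignment.get("head")
        return room is not None and ROOMS[room] != 1

    def assign(people, assignment=None):
        # Depth-first backtracking over person -> room choices.
        assignment = assignment if assignment is not None else {}
        if not people:
            return assignment
        person, rest = people[0], people[1:]
        for room, capacity in ROOMS.items():
            occupied = sum(1 for r in assignment.values() if r == room)
            if occupied < capacity:
                assignment[person] = room
                if not violates(assignment):
                    solution = assign(rest, assignment)
                    if solution is not None:
                        return solution
                del assignment[person]
        return None

    print(assign(PEOPLE))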
A Situated Classification Solution of a Resource Allocation Task Represented in a Visual Language BIBA 243-271
  Brian R. Gaines
The Sisyphus room allocation problem-solving example has been solved using a situated classification approach. A solution was developed from the protocol provided in terms of three heuristic classification systems, one classifying people, another rooms, and another tasks on an agenda of recommended room allocations. The domain ontology, problem data, problem-solving method, and domain-specific classification rules, have each been represented in a visual language. These knowledge structures compile to statements in a term-subsumption knowledge representation language, and are loaded and run in a knowledge representation server to solve the problem. The user interface has been designed to provide support for human intervention in under-determined and over-determined situations, allowing advantage to be taken of the additional choices available in the first case, and a compromise solution to be developed in the second.
Solving the Office Allocation Task in Reflective ASSIGN BIBA 273-291
  Werner L. Karbach; Angi W. Voss; Uwe Drouven
We show how we solved the Sisyphus task with an existing assignment problem solver, ASSIGN. The system is written in MODEL-K, a language for mechanizing KADS models of expertise. MODEL-K was extended to build reflective systems that reason about other problem solvers. Our assignment system has reflective components to simplify the problem, to solve it incrementally and to relax inconsistent constraints in the problem definition.
Exploiting Problem Descriptions to Provide Assistance with the Sisyphus Task BIBA 293-314
  Georg Klinker; Marc Linster; David Marques; John McDermott; Gregg Yost
We describe the Spark, Burn, FireFighter solution to the office allocation problem as defined for the Sisyphus task. The Spark, Burn, FireFighter framework assists a development team with building application programs. The framework pays particular attention to the problems that arise from the nature of real-world tasks: they are messy, filled with details, and change continuously. To overcome these problems, the Spark, Burn, FireFighter approach exploits a problem description of a real-world workplace. This workplace description is at the heart of our approach and integrates all of the development team's activities. It is exploited to assist a development team with extending the description of a workplace, managing the activities in the workplace, describing programming constructs, finding programming constructs that automate some activities in the workplace, customizing those programming constructs, and doing the work.
Grounding GDMs: A Structured Case Study BIBA 315-347
  Enrico Motta; Kieron O'Hara; Nigel Shadbolt
In this paper a solution to the Sisyphus room allocation problem is discussed which uses the generalized directive model (GDM) methodology developed in the ACKnowledge project, together with the knowledge engineering methodology developed in the VITAL project. After briefly introducing these methodologies, the paper presents a "walkthrough" of the Sisyphus solution from analysis to implementation in detail, so that all acquisition, modelling and design decisions can be seen in context. The selection of a reusable off-the-shelf model from the GDM library is presented, together with a discussion of the ways in which this selection process can drive the knowledge acquisition process. Next, there is an account of the instantiation of the GDM and the imposition of a control regime over the dataflow structure; we show how this process uncovers hidden constraints and inconsistencies in Siggi's account of his own problem solving. The output of this KA phase consists of a conceptual model of the problem which is discussed in detail and formalized in terms of the VITAL conceptual modelling language. From this analysis of the problem, we move on to discussion of the issues concerning the design and implementation of a system, and we show how our implementation satisfies the specification of the Sisyphus problem.
Applying KADS to the Office Assignment Domain BIBA 349-377
  A. Th. Schreiber
In this article, the KADS approach is used to model and implement the office assignment problem. We discuss both the final products (the model of expertise and the design) and the process that led to these products. Emphasis is put on modelling the problem in such a way that it closely corresponds to the behaviour of the expert in the sample protocol. The last section of the paper addresses the evaluation points raised by the initiators of Sisyphus.
Sisyphus Project: EMA Approach BIBA 379-401
  Susan Spirgi; Dieter Wenger
This paper outlines the Sisyphus project as carried out with the EMA methodology. Two problem-solving methods are used for this application: one based on the solution described by Gaines (The Sisyphus problem solving example through a visual language with KL-ONE-like knowledge representation, Proceedings of EKAW'91, 1991) and the other on a genetic algorithm. The application allows intervention by the user, and the solution is represented graphically.

IJHCS 1994 Volume 40 Issue 3

Performance Comparisons of Classification Techniques for Multi-Font Character Recognition BIBA 403-423
  Antonette M. Logar; Edward M. Corwin; William J. B. Oldham
This paper reports the performance of several neural network models on the problem of multi-font character recognition. The networks are trained on machine-generated, upper-case English letters in selected fonts. The task is to recognize the same letters in different fonts. The results presented here were produced by back-propagation networks, radial basis networks and a new hybrid algorithm which is a combination of the two. These results are compared to those of the Hogg-Huberman model as well as to those of nearest neighbor and maximum likelihood classifiers. The effects of varying the number of nodes in the hidden layer, the initial conditions, and the number of iterations in a back-propagation network were studied. The experimental results indicate that the number of nodes is an important factor in the recognition rate and that over-training is a significant problem. Different initial conditions also had a measurable effect on performance. The radial basis experiments used different numbers of centers and differing techniques for selecting the means and standard deviations. The best results were obtained with one center per training vector in which the standard deviation for each center was set to the same small number. Finally, a new hybrid technique is discussed in which a radial basis network is used to determine a starting point for a back-propagation network. The back-propagation network refines the radial basis means and standard deviations which are replaced in the radial basis network and used for another iteration. All three networks outperformed the Hogg-Huberman network as well as the maximum likelihood classifiers.
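   The best-performing configuration reported -- one Gaussian centre per training vector with a single shared standard deviation, plus least-squares output weights -- reduces to a short NumPy sketch (the toy data and sigma are placeholders):

    import numpy as np

    def rbf_design(X, centers, sigma):
        # Gaussian activation of every input against every centre.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def train(X, Y, sigma=0.1):
        # One centre per training vector; solve for output weights.
        H = rbf_design(X, X, sigma)
        W, *_ = np.linalg.lstsq(H, Y, rcond=None)
        return X, W

    def classify(x, centers, W, sigma=0.1):
        return int((rbf_design(x[None, :], centers, sigma) @ W).argmax())

    X = np.array([[0.0, 0.0], [1.0, 1.0]])  # two toy "letter" feature vectors
    Y = np.eye(2)                           # one-hot class labels
    centers, W = train(X, Y)
    print(classify(np.array([0.9, 1.1]), centers, W))  # -> 1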
A Method for Checking and Restoring the Consistency of Knowledge Bases BIBA 425-442
  Stephane Loiseau
The arrival of knowledge-based systems in industrial sectors requires specific software engineering, sometimes known as knowledge engineering. A good criterion to validate a knowledge-based system is to check its consistency. In this paper, a distinction between sure and heuristic knowledge is proposed. These two kinds of knowledge can be expressed in a single formalism. A powerful criterion of consistency for knowledge bases, based on this differentiation, is presented and formally expressed. The reasons why a knowledge base can be inconsistent are studied, and formally expressed. We present the COCO system, which checks knowledge bases. Then, we present the X system, which helps restore the consistency of an inconsistent knowledge base. The COCO-X system, which enables the expert to refine (check and restore) knowledge bases, is described.
A Multi-Modal Mouse with Tactile and Force Feedback BIBA 443-453
  Motoyuki Akamatsu; Sigeru Sato
We have developed a mouse with tactile and force feedback. Tactile information is provided to the operator by a small pin which projects slightly through the mouse button when pulsed. Force information is provided by an electromagnet inside the mouse in conjunction with an iron mouse pad. Tactile and force feedback are controlled by software linked to the visual information of targets on the visual display. In an empirical evaluation using a target selection task, the addition of tactile and force feedback shortened the response time and widened the effective area of targets. Design issues for interactive systems are discussed.
Adaptively Supported Adaptability BIBA 455-472
  Reinhard Oppermann
This paper presents an adaptive and adaptable system and its evaluation. The system is based on a commercial spreadsheet application and provides adaptation opportunities for defining a user- and task-specific user interface (new menu entries and key shortcuts for subroutine names and parameters, changing default parameters). The development following a design-evaluation-redesign approach has shown that adaptations are accepted if the user has the opportunity to control their timing and content. This does not necessarily mean that the adaptation is initiated and performed by the user alone (adaptability). On the contrary, strictly user-controlled adaptation is too demanding for the user. The paper shows how the user's own adaptations can be supported by the system through initial adaptive suggestions showing the rationale of adaptations and the way to perform them.
EDWARD: Full Integration of Language and Action in a Multimodal User Interface BIBA 473-495
  Edwin Bos; Carla Huls; Wim Claassen
This paper presents EDWARD, a multimodal user interface fully integrating several interface styles, viz. natural language (Dutch), manipulation of graphical representations, menus, and command language. The focus is on the following two issues: (1) the new design principle of making all interface styles available at all times; and (2) the new generic approach in processing referring expressions, which is applied both in interpretation and generation, for all sorts of expressions, including multimodal deictic expressions such as "put that there" with simultaneous pointing gestures. EDWARD is a generic interface, currently applied to the file system domain.
Attribute-Mastery Patterns from Rule Space as the Basis for Student Models in Algebra BIBA 497-508
  Menucha Birenbaum; Anthony E. Kelly; Kikumi K. Tatsuoka; Yaffa Gutvirtz
Student models for procedural tasks in mathematics have relied heavily on analyses of bugs to guide their remediation. This paper reports on an analysis of data that first confirms the results of recent studies by finding a relatively large number of bugs to be unstable, with stable bugs tending to be infrequent. The paper then illustrates a method for classifying students according to higher-level (and presumably more stable) knowledge deficits using a psychometric classification technique, known as rule space. A rule space analysis is performed on the same test items. The resulting diagnoses (describing attribute-mastery patterns) are shown to demonstrate within-test stability. These patterns are then discussed in the light of their potential contribution to the design of machine-delivered remediation.
Conversations with Graphics: Implications for the Design of Natural Language/Graphics Interfaces BIBA 509-541
  Irene Neilson; John Lee
The design of interfaces which support a user's natural cognitive processes and structures depends on an understanding of communicational codes as well as task structures etc. Research in human-computer interaction has, however, tended to neglect the former in favour of the latter. This paper seeks to redress this imbalance by reporting in detail the results of an empirical enquiry into how people use two communicational codes -- natural language and drawing -- to achieve a shared understanding of a problem (the redesign of a kitchen) and its solution. This enquiry clearly indicates the complex interdependency of these forms of communication when used in combination. While a graphical depiction may provide a context for linguistic interpretation, especially in respect of the disambiguation of spatial expressions, graphical expressions (pictures and drawings) themselves require a context-dependent interpretation which, itself, can derive from an accompanying natural language expression. Often, however, neither form of expression can be independently interpreted. Rather the meaning of the situation is dependent on the synergistic combination of both forms of expression and is heavily dependent on the common background knowledge of participants in the interaction. While natural language expressions may be explicitly linked to graphical depictions through pointing actions, such actions are not mandatory for effective communication. The implications of these observations for the design of natural language/graphics interfaces are discussed. Among the questions raised by the paper are: how to characterize the difference between representation or modelling and communication in graphics; how to apply current object-oriented theories of knowledge representation to the highly fluid yet knowledge-rich use of pictures that was observed in our study; and finally what differences might emerge between dialogues of this type in different domains.
Machines, Social Attributions, and Ethopoeia: Performance Assessments of Computers Subsequent to "Self-" or "Other-" Evaluations BIBA 543-559
  Clifford Nass; Jonathan Steuer; Lisa Henriksen; D. Christopher Dryer
We show that individuals use inappropriate social rules in assessing machine behavior. Explanations of ignorance and individuals' views of machines as proxies for humans are shown to be inadequate; instead, individuals' responses to technology are shown to be inconsistent with their espoused beliefs. In two laboratory studies, computer-literate college students used computers for tutoring and testing. The first study (n = 22) demonstrates that subjects using a computer that praised itself believed that it was more helpful, contributed more to the subject's test score, and was more responsive than did subjects using a computer that criticized itself, although the tutoring and testing sessions were identical. In the second study (n = 44), the praise or criticism came from either the computer that did the tutoring or a different computer. Subjects responded as if they attributed a "self" and self-focused attributions (termed "ethopoeia") to the computers. Specifically, subjects' responses followed the rules "other-praise is more valid and friendlier than self-praise", "self-criticism is friendlier than other-criticism", and "criticizers are smarter than praisers" in evaluating the computers, although the subjects claimed to believe that these rules should not be applied to computers.
Bulletin BIB 561-566
 

Book Reviews

"Gendered by Design? Information Technology and Office Systems," by E. Green, J. Owen and D. Pain BIB 567-569
  Elizabeth Churchill
"Bluff Your Way in Science," by Brian Malpas BIB 569-570
  T. R. G. Green
"Novice Programming Environments: Explorations in HCI and AI," edited by M. Eisenstadt, M. T. Keane and T. Rajan BIB 570-572
  Jim Spohrer
"Things That Make Us Smart," by D. A. Norman BIB 572-573
  T. R. G. Green
"Pictorial Communication in Virtual and Real Environments," edited by S. R. Ellis, M. Kaiser and J. Grunwald BIB 573-575
  Carol-Ina Trudel

IJHCS 1994 Volume 40 Issue 4

The Effects of Paradigm on Cognitive Activities in Design BIBA 577-601
  Adrienne Lee; Nancy Pennington
This research examines differences in cognitive activities and final designs among expert designers using object-oriented and procedural design methodologies, and among expert and novice object-oriented designers, when novices have extensive procedural experience. We observed, as predicted by others, a closer alliance of domain and solution spaces in object-oriented design compared to procedural design. Procedural programmers spent a large proportion of their time analysing the problem domain. In contrast, object-oriented designers defined objects and methods much more quickly and spent more time evaluating their designs through simulation processes. Novices resembled object-oriented experts in some ways and procedural experts in others. Their designs had the general shape of the object-oriented experts' designs, but retained some procedural features. Novices were very inefficient at defining objects, going through an extensive situation analysis first, in a manner similar to the procedural experts. Some suggestions for instruction are made on the basis of novice object-oriented designers' difficulties.
Argumentation-Based Design Rationale: What Use at What Cost? BIBA 603-652
  Simon Buckingham Shum; Nick Hammond
A design rationale (DR) is a representation of the reasoning behind the design of an artifact. In recent years, the use of semiformal notations for structuring arguments about design decisions has attracted much interest within the human-computer interaction and software engineering communities, leading to a number of DR notations and support environments. This paper examines two foundational claims made by argumentation-based DR approaches: that expressing DR as argumentation is useful, and that designers can use such notations. The conceptual and empirical basis for these claims is examined, firstly by surveying relevant literature on the use of argumentation in non-design contexts (from which current DR efforts draw much inspiration), and secondly, by surveying DR work. Evidence is classified according to the research contribution it makes, the kind of data on which claims are based (anecdotal or experimental), the extent to which the claims made are substantiated, and whether or not the users of the approach were also the researchers.
   In the survey, a trend towards tightly integrating DR with other design representations is noted, but it is argued that taken too far, this may result in the loss of the original vision of argumentative design. In examining the evidence for each claim, it is demonstrated firstly, that research into semiformal argumentation outside the design context has failed to substantiate convincingly either of the two claims implicitly attributed to it in current DR research, and secondly, that there are also significant gaps in the DR literature. There are emerging indications, however, that argumentation-based DR can assist certain kinds of design reasoning by turning the representational effort to the designer's advantage, and that such DRs can be useful at a later date. This analysis of argumentation research sets an agenda for future work driven by a concern to support the designer in the whole process of externalizing and structuring DR, from initially ill-formed ideas to more rigorous, coherent argumentation. The paper concludes by clarifying implications for the design of DR training, notations, and tools.
Improving Conceptual Database Design through Feedback BIBA 653-676
  Dinesh Batra; Maung K. Sein
Design aids can improve the quality of systems developed by end-users and non-expert designers. This paper reports a study undertaken to establish the concept validation of a design aid that is based on feedback to improve the quality of conceptual and logical relational databases. We describe the design of SERFER (Simulated ER based FEedback system for Relational databases) and test its effectiveness in a laboratory experiment using the "hidden operator" method. The results show that feedback can help users detect and correct certain types of database design errors in modeling ternary relationships. However, no improvement seems possible in the case of unary relationships. The experiment could not determine whether errors can be corrected in modeling binary relationships, since the subjects were reasonably adept and rarely committed serious errors in this case.
The Design of Joint Cognitive Systems: The Effect of Cognitive Coupling on Performance BIBA 677-702
  Nikunj P. Dalal; George M. Kasper
In recent years, there has been a growing interdisciplinary interest in designing intelligent human-computer systems for problem-solving. Although progress has been made, we are far from building intelligent human-computer systems that fully exploit the natural synergies of the combination of human and intelligent machine. One of the significant paradigms of intelligent decision support is the cognitive systems engineering approach. This approach considers the human and the intelligent machine as components of a joint cognitive system and focuses on the need to maximize the overall performance of the joint system. Factors influencing the performance of the joint cognitive system include the cognitive characteristics of the human, the computer system, and the task. An important relationship between the cognitive characteristics of the human and those of the system is cognitive coupling, which has a number of dimensions.
   The study described in this paper explores the style dimension of cognitive coupling by presenting a laboratory experiment that examines the interactions among human cognitive style (analytic vs. heuristic), problem type (analysis-inducing vs. heuristic-inducing), and nature of decision aid (analytic vs. heuristic). The study demonstrates that, depending on the characteristics of the human, the computerized aid, and the problem to be solved, the joint human-computer system performance can be better or worse than the performance of the individual human or computer system working alone. Furthermore, the results suggest that the impact of cognitive style on decision-making performance may depend upon the characteristics of the problem, the nature of the decision-aid, and the measures used to evaluate performance. Inadequate recognition of these factors and their interactions may have led to conflicting results in prior decision-making studies using cognitive style.
Knowledge Restructuring and the Acquisition of Programming Expertise BIBA 703-726
  Simon P. Davies
This paper explores the relationship between knowledge structure and organization and the development of expertise in a complex problem-solving task. An empirical study of skill acquisition in computer programming is reported, providing support for a model of knowledge organization that stresses the importance of knowledge restructuring processes in the development of expertise. This is contrasted with existing models which have tended to place emphasis upon schemata acquisition and generalization as the fundamental modes of learning associated with skill development. The work reported in this paper suggests that a fine-grained restructuring of individual schemata takes place during the later stages of skill development. It is argued that those mechanisms currently thought to be associated with the development of expertise may not fully account for the strategic changes and the types of error typically found in the transition between intermediate and expert problem solvers. This work has a number of implications. Firstly, it suggests important limitations of existing theories of skill acquisition. This is particularly evident in terms of the ability of such theories to account for subtle changes in the various manifestations of skilled performance that are associated with increasing expertise. Secondly, the work reported in this paper attempts to show how specific forms of training can give rise to this knowledge restructuring process. It is argued that the effects of particular forms of training are of primary importance, but these effects are often given little attention in theoretical accounts of skill acquisition. Finally, the work presented here has practical relevance in a number of applied areas including the design of intelligent tutoring systems and programming environments.
A Comparison of Algorithms for Hypertext Notes Network Linearization BIBA 727-752
  Mike Sharples; James Goodlet; Andrew Clutterbuck
New computer-based writing environments are being developed which combine a hypertext "ideas organizer" with a text editor. We compare two algorithms which could be used in such environments for turning networks of notes indicating ideas into linear draft documents. The algorithms are designed to produce a linear ordering of the notes which is acceptable to the writer as a first draft of the document. We report on experiments to test their effectiveness. Subjects were asked to create notes networks which were then linearized by the two algorithms. The resulting linearizations, plus a random ordering of nodes and a linearization created by hand, were assessed for textual organization. The experiments indicate that both algorithms produce linearizations which are acceptable as draft texts, that the "best first" algorithm is marginally superior to the "hillclimbing" one, and that providing information to the algorithms about link types had little effect on their effectiveness. The paper concludes by describing an implementation of the best first algorithm as part of the Writer's Assistant writing environment.
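   A greedy reading of the "best first" strategy can be sketched in Python (an illustration, not the authors' exact algorithm): starting from a seed note, always emit next the note most strongly linked to the notes already emitted:

    def linearize(links, start):
        # links: dict mapping each note to the set of notes it is linked to.
        # Repeatedly emit the unvisited note with the most links into the
        # text produced so far (ties broken alphabetically).
        order = [start]
        remaining = set(links) - {start}
        while remaining:
            best = max(sorted(remaining),
                       key=lambda n: sum(1 for m in order if m in links[n]))
            order.append(best)
            remaining.discard(best)
        return order

    notes = {  # an invented notes network
        "intro": {"method", "background"},
        "background": {"intro", "method"},
        "method": {"intro", "background", "results"},
        "results": {"method", "conclusion"},
        "conclusion": {"results"},
    }
    print(linearize(notes, "intro"))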
Bulletin BIB 753-754
 

IJHCS 1994 Volume 40 Issue 5

The Effects of Naming Style and Expertise on Program Comprehension BIBA 757-770
  Barbee E. Teasley
The question of whether the use of good naming style in programs improves program comprehension has important implications for both programming practice and theories of program comprehension. Two experiments were done based on Pennington's (Stimulus structures and mental representations in expert comprehension of computer programs. Cognitive Psychology, 19, 295-341, 1987) model of programmer comprehension. According to her model, different levels of knowledge, ranging from operational to functional, are extracted during comprehension in a bottom-up fashion. It was hypothesized that poor naming style would affect comprehension of function, but would not affect the other sorts of knowledge. An expertise effect was found, as well as evidence that knowledge of program function is independent of other sorts of knowledge. However, neither novices nor experts exhibited strong evidence of bottom-up comprehension. The results are discussed in terms of emerging theories of program comprehension which include knowledge representation, comprehension strategies, and the effects of ecological factors such as task demands and the role-expressiveness of the language.
Memory for Task-Action Mappings: Mnemonics, Regularity and Consistency BIBA 771-794
  Adrienne Y. Lee; Peter W. Foltz; Peter G. Polson
Much of the knowledge required to use modern computing systems takes the form of mappings or associations. These associations occur between user goals and the functions that accomplish those goals, between functions and the user actions that activate a desired function, and between a menu item or a button label and the function associated with that item or label. The question we explore in this paper is: when is it worthwhile, if ever, to make a user pay the price of learning a new set of task-action mappings? In other words, how much interference is there when the new set is inconsistent with the original set of task-action mappings of the previously known system? We consider three factors that determine the ease of learning and retention of task-action mappings: mnemonics, regularity within a set of mappings, and consistency of mapping across different system contexts. In two experiments, we found that Irregular-Non-Mnemonic mappings take much longer to master than Regular-Mnemonic mappings and that Irregular-Non-Mnemonic mappings are more rapidly forgotten and subject to interference effects due to inconsistency. Regular-Non-Mnemonic mappings fall between the two groups. They are easier to learn and retain than Irregular-Non-Mnemonic but harder than Regular-Mnemonic mappings. We conclude that transferring from a well-learned set of old task-action mappings is simple when the new set is regular (completely consistent) and mnemonic.
Mental Models and Computer Programming BIBA 795-811
  José Juan Cañas; María Teresa Bajo; Pilar Gonzalvo
Programming is a cognitive activity that requires the learning of new reasoning skills and the understanding of new technical information. Since novices lack domain-specific knowledge, many instructional techniques attempt to provide them with a framework or mental model that can be used for incorporating new information. A major research question concerns how to encourage the acquisition of good mental models and how these models influence the learning process. One possible technique for providing an effective mental model is to use dynamic cues that make transparent to the user all the changes in the variable values, source codes, output, etc., as the program runs. Two groups of novice programmers were used in the experiment. All subjects learned some programming notions in the C language (MIXC). The MIXC version of the programming language provides a debugging facility (C trace) designed to show through a system window all the program components. Subjects were either allowed to use this facility or not allowed to do so. Performance measures of programming and debugging were taken as well as measures directed to assess subjects' mental models. Results showed differences in the way in which the two groups represented and organized programming concepts, although the performance tasks did not show parallel effects.
A Neural Network Tool for Identifying Text-Editing Goals BIBA 813-833
  Leticia Villegas; Ray E. Eberts
When performing a computer task, a user will decompose the task into cognitive goals and subgoals. These goals are accomplished through the use of external operators (e.g. keystrokes, mouse button presses) or internal mental operators (e.g. reading parts of the display, deciding on the goal). Users may utilize different goals and sequence the goals differently to accomplish the same overall task. Determining the goals and the sequencing of the goals could be useful for several reasons, such as providing a means for on-line assistance with the task. Determining these goals in the past, however, has been a time-consuming process. A neural network tool for automatically identifying cognitive text-editing goals from operators is investigated. The first of three memos edited by subjects was used to train the neural network successfully to map the operators (keystrokes) to cognitive goals. In a test of the trained network's ability to generalize to new input -- the second and third memos edited by the subjects -- the net could identify the cognitive goals with an overall performance accuracy of 96%. Two methods were used to investigate the validity of the goals which were identified by the tool. The characteristics of the goals were consistent with that which could be expected based upon previous research. This research illustrates that a neural network tool can identify the cognitive goals of a task.
The Measurement of Computer Literacy: A Comparison of Self-Appraisal and Objective Tests BIBA 835-857
  Paul J. A. van Vliet; Marilyn G. Kletke; Goutam Chakraborty
Whenever decisions are made based upon a person's level of computer literacy, it is important that such expertise is accurately assessed. This paper takes a thorough methodological approach to the measurement of computer literacy using both objective and self-appraisal tests. While objective tests have been used on many occasions to measure computer literacy, they suffer from generalizability problems. Self-appraisal tests, on the other hand, are subject to leniency bias by the respondents. Taken together, though, the potential exists for the establishment of a computer literacy assessment instrument with high levels of generalizability and accuracy. For this research, an objective test for computer literacy was developed and an existing self-appraisal test was extended for use in a computer literacy assessment experiment. It was found that the self-appraisal test is a more lenient performance indicator than the objective test. Both male and female subjects exhibited substantial self-leniency in their self-appraisals, but both self-leniency and gender-based differences in self-appraisal decreased as the subjects' level of computer expertise increased. Finally, the low level of convergence between the self-appraisal test and the objective test found in this study casts doubt on the ability of any self-appraisal test to accurately assess computer literacy by itself. A combination of different measures may be more appropriate when it is important to determine computer literacy levels accurately.
An Extended Fisheye View Browser for Collaborative Writing BIBA 859-878
  C. Chen; R. Rada; A. Zeb
This study investigated information-seeking tasks and associated cognitive issues in the context of interacting with an evolving collaborative hypertext. Fisheye view browsers were used to facilitate exploring in a large information space. The fisheye view browser was extended to incorporate word frequencies. The effects of the fisheye view browser and the changing document were tested with a 2 x 2 factorial experiment. Multivariate tests found a significant interaction between the two factors and a significant main effect of the fisheye view browser. The users who had access to the word frequency information performed their tasks more effectively than the users without access to word frequencies. This work implies that several aspects of an evolving hypertext might also be usefully incorporated in an associated fisheye view browser.
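   Fisheye views are conventionally driven by Furnas' degree-of-interest function, DOI(x) = API(x) - D(x, focus); a word-frequency term of the kind this study adds might be folded in as one extra weighted component (the weighting below is an invented illustration, not the authors' formula):

    def degree_of_interest(api, distance, word_freq=0.0, alpha=0.5):
        # Furnas-style DOI: a priori importance minus distance from the
        # current focus, plus an (invented) bonus for frequent index terms.
        return api - distance + alpha * word_freq

    def visible(nodes, focus, threshold=0.0):
        # Show only nodes whose DOI at the current focus clears a threshold.
        return [n for n in nodes
                if degree_of_interest(n["api"], n["dist"][focus],
                                      n.get("freq", 0.0)) >= threshold]

    nodes = [  # invented hypertext sections
        {"name": "abstract", "api": 3.0, "dist": {"s2": 2}, "freq": 1.0},
        {"name": "s2",       "api": 1.0, "dist": {"s2": 0}, "freq": 4.0},
        {"name": "appendix", "api": 0.5, "dist": {"s2": 3}},
    ]
    print([n["name"] for n in visible(nodes, "s2")])  # -> ['abstract', 's2']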
Consistency versus Compatibility: A Question of Levels? BIBA 879-894
  Alan Tero; Pamela Briggs
Consistency can be expressed in terms of minimal components of an interaction language. However, what is taken as a unit in describing stimulus events is crucial. A particular command set may be generated by very few rules (internal consistency) but should also map on to the users' expectations (higher level consistency, or compatibility). Sixty subjects took part in a simple computer game in order to explore the relationship between internal consistency, compatibility, and mode of learning. Internal consistency was found to be related to the subjects' ability to create an explicit model of the task, and compatibility was related to enhanced performance on the task. There was evidence that properties of a consistent underlying rule structure were made more salient when the mappings were consistent with users' expectations -- and only under these circumstances were performance benefits observed.
A Laboratory Evaluation of a Human Operator Support System BIBA 895-931
  J. M. Annemarie Sassen; Eric F. T. Buiel; Jan H. Hoegee
A possible way of supporting a human operator with the supervision of complex industrial processes is to provide him or her with a knowledge-based system that can detect faults in the process and infer their cause(s) and consequences. The effects of the introduction of such an aid cannot be predicted. On the one hand, it may improve the performance of the operator since it provides additional information about causes and consequences of a malfunction. On the other hand, it may worsen the operator's performance since he or she is either mentally underloaded (when all that is left for the operator to do is to follow the advice of the knowledge-based system) or mentally overloaded (because there is one more system which the operator must understand and check). In order to improve our understanding of this point, we constructed a monitoring and diagnosis system for a nuclear power plant simulation and carried out a laboratory evaluation of this system. During the evaluation, we compared the performance of a group of operators using the support system with a control group. The latter had to perform the same task in a model of a normally equipped control room. Results show that the aided operators performed better when they had to diagnose malfunctions caused by multiple failures, and when they had to diagnose malfunctions which they did not practise during their training.

IJHCS 1994 Volume 40 Issue 6

A Probabilistic Theory of Model-Based Diagnosis BIBA 933-963
  Jiah-Shing Chen; Sargur N. Srihari
Diagnosis of a malfunctioning physical system is the task of identifying those component parts whose failures are responsible for discrepancies between observed and correct system behavior. The result of diagnosis is to enable system repair by replacement of failed parts.
   The model-based approach to diagnosis has emerged as a strong alternative to both symptom-based and fault-model-based approaches. Hypothesis generation and hypothesis discrimination (action selection) are two major subtasks of model-based diagnosis. Hypothesis generation has been partially resolved by symbolic reasoning using a subjective notion of parsimony such as non-redundancy. Action selection has only been studied for special cases, e.g. probes with equal cost. Little formal work has been done on repair selection and verification.
   This paper presents a probabilistic theory for model-based diagnosis. An objective measure is used to rank hypotheses, viz., posterior probabilities, instead of subjective parsimony. Fault hypotheses are generated in decreasing probability order. The theory provides an estimate of the expected diagnosis cost of an action. The result of the minimal cost action is used to adjust hypothesis probabilities and to select further actions.
   The major contributions of this paper are the incorporation of probabilistic reasoning into model-based diagnosis and the integration of repair as part of diagnosis. The integration of diagnosis and repair makes it possible to troubleshoot failures effectively in complex systems.
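   The hypothesis-ranking step amounts to elementary Bayesian scoring: enumerate candidate fault sets, weight each by prior times likelihood of the observed behaviour, and examine hypotheses in decreasing posterior order. A sketch (component names, priors and the sensor model are invented):

    from itertools import chain, combinations

    COMPONENTS = {"adder": 0.05, "multiplier": 0.02}  # invented failure priors

    def likelihood(faults, observation):
        # Invented sensor model: the observed discrepancy is probable only
        # when the adder is among the components assumed faulty.
        return 0.9 if "adder" in faults else 0.01

    def ranked_hypotheses(observation):
        names = list(COMPONENTS)
        hyps = chain.from_iterable(combinations(names, r)
                                   for r in range(len(names) + 1))
        scored = []
        for h in hyps:
            prior = 1.0
            for c in names:  # components assumed to fail independently
                prior *= COMPONENTS[c] if c in h else 1.0 - COMPONENTS[c]
            scored.append((prior * likelihood(h, observation), h))
        z = sum(s for s, _ in scored)  # normalize scores into posteriors
        return sorted(((s / z, h) for s, h in scored), key=lambda t: -t[0])

    for p, h in ranked_hypotheses("output discrepancy"):
        print(round(p, 3), set(h) or "no fault")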
Menus and Memory Load: Navigation Strategies in Interactive Search Tasks BIBA 965-1008
  Patricia Wright; Ann Lickorish
When tasks offer alternative methods for attaining subgoals, several factors may determine which method is selected. People might choose procedures that are, or appear, cognitively less demanding. These demands can operate over several different dimensions (e.g. learnability, solution speed, number of motor actions). A series of studies of method selection is reported for a task that involved locating and comparing information within an electronic document. A variety of computer-based memory aids were also available to readers. Experiment 1 showed that subjects' navigation choices were predicted more successfully by a GOMS analysis than by the number of discrete actions (mouse clicks) required for the alternative procedures. However, the GOMS model failed to predict subjects' choices in experiment 2, where the previously chosen navigation method was modified to increase its procedural length and reduce its perceptual affordances. Subjects still frequently chose this navigation method but they also significantly increased their use of memory aids. Experiment 3 examined whether subjects' navigation choices arose from the memory demands of the alternative methods. The results showed that the method rejected by subjects in experiment 2 gave faster performance and reduced the use of some of the memory aids. It is suggested that the number of motor actions in a procedure determines the use of memory aids but not the selection of navigation method. The perceptual characteristics of the alternative methods, which may relate to factors such as subjective risk, need to be incorporated into models predicting the procedures that people will select in complex tasks.
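   A keystroke-level GOMS prediction of the kind used to compare methods can be computed directly from the canonical operator times of Card, Moran and Newell (the two operator sequences below are invented examples, not the methods analysed in the paper):

    # Canonical keystroke-level model operator times, in seconds.
    KLM = {"K": 0.20,   # keystroke or button press
           "P": 1.10,   # point with the mouse
           "H": 0.40,   # home hands between devices
           "M": 1.35}   # mental preparation

    def predict(method):
        # Total predicted execution time for a sequence of KLM operators.
        return sum(KLM[op] for op in method)

    scroll = "MPKPKPK"  # invented operator string for a scrolling method
    jump = "MPKMK"      # invented operator string for a go-to-page method
    print(predict(scroll), predict(jump))  # the cheaper method is the predicted choice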
A Comprehensive Comparison between Generalized Incidence Calculus and the Dempster-Shafer Theory of Evidence BIBA 1009-1032
  Weiru Liu; Alan Bundy
Dealing with uncertainty problems in intelligent systems has attracted a lot of attention in the AI community. Quite a few techniques have been proposed. Among them, the Dempster-Shafer theory of evidence (DS theory) has been widely appreciated. In DS theory, Dempster's combination rule plays a major role. However, it has been pointed out that the application domains of the rule are rather limited and the application of the theory sometimes gives unexpected results. We have previously explored the problem with Dempster's combination rule and proposed an alternative combination mechanism in generalized incidence calculus. In this paper we give a comprehensive comparison between generalized incidence calculus and the Dempster-Shafer theory of evidence. We first prove that these two theories have the same ability in representing evidence and combining DS-independent evidence. We then show that the new approach can deal with some dependent situations while Dempster's combination rule cannot. Various examples in the paper show the ways of using generalized incidence calculus in expert systems.
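   For reference, Dempster's combination rule -- the focus of the comparison -- pools two basic probability assignments by intersecting their focal elements and renormalizing away the mass that falls on empty intersections (a direct transcription of the standard formula, over an invented two-hypothesis frame):

    def dempster_combine(m1, m2):
        # m(A) = sum over B ∩ C = A of m1(B)m2(C), divided by 1 - K, where K
        # is the total mass assigned to empty intersections (the conflict).
        combined, conflict = {}, 0.0
        for B, p in m1.items():
            for C, q in m2.items():
                A = B & C
                if A:
                    combined[A] = combined.get(A, 0.0) + p * q
                else:
                    conflict += p * q
        if conflict >= 1.0:
            raise ValueError("total conflict: sources cannot be combined")
        return {A: v / (1.0 - conflict) for A, v in combined.items()}

    m1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.4}
    m2 = {frozenset({"cold"}): 0.5, frozenset({"flu", "cold"}): 0.5}
    print(dempster_combine(m1, m2))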
Creating, Comprehending and Explaining Spreadsheets: A Cognitive Interpretation of What Discretionary Users Think of the Spreadsheet Model BIBA 1033-1065
  D. G. Hendry; T. R. G. Green
Ten discretionary users were asked to recount their experiences with spreadsheets and to explain how one of their own sheets worked. The transcripts of the interviews are summarized to reveal the key strengths and weaknesses of the spreadsheet model. There are significant discrepancies between these findings and the opinions of experts expressed in the HCI literature, which have tended to emphasize the strengths of spreadsheets and to overlook the weaknesses. In general, the strengths are such as allow quick gratification of immediate needs, while the weaknesses are such as make subsequent debugging and interpretation difficult, suggesting a situated view of spreadsheet usage in which present needs outweigh future needs. We conclude with an attempt to characterize three extreme positions in the design space of information systems: the incremental addition system, the explanation system and the transcription system. The spreadsheet partakes of the first two. We discuss how to improve its explanation facilities.

Book Reviews

"Communication at a Distance: The Influence of Print on Sociocultural Organization and Change," by David S. Kaufer and Kathleen M. Carley BIB 1067-1068
  Davida Charney
"Human Reasoning: The Psychology of Deduction," by J. St. B. T. Evans, S. E. Newstead and R. M. J. Byrne BIB 1068-1069
  T. C. Ormerod
"A Small Matter of Programming," by B. A. Nardi BIB 1069-1071
  Frank Wales
"Intelligence as Adaptive Behaviour; An Experiment in Computational Neuroethology," by R. D. Beer BIB 1071-1073
  D. Benyon
Bulletin BIB 1075-1081