
International Journal of Human-Computer Studies 52

Editors: B. R. Gaines
Dates: 2000
Volume: 52
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 43
Links: Table of Contents
  1. IJHCS 2000 Volume 52 Issue 1
  2. IJHCS 2000 Volume 52 Issue 2
  3. IJHCS 2000 Volume 52 Issue 3
  4. IJHCS 2000 Volume 52 Issue 4
  5. IJHCS 2000 Volume 52 Issue 5
  6. IJHCS 2000 Volume 52 Issue 6

IJHCS 2000 Volume 52 Issue 1

The Impact of Animated Interface Agents: A Review of Empirical Research BIBA 1-22
  Doris M. Dehn; Susanne Van Mulken
Over recent years, the animation of interface agents has been the target of increasing interest. Largely, this increase in attention is fuelled by speculated effects on human motivation and cognition. However, empirical investigations of the effects of animated agents are still small in number and differ with regard to the measured effects. Our aim is two-fold. First, we provide a comprehensive and systematic overview of the empirical studies conducted so far in order to investigate the effects of animated agents on the user's experience, behaviour and performance. Second, by discussing both implications and limitations of the existing studies, we identify some general requirements and suggestions for future studies.
Evaluating Focus Theories for Dialogue Management BIBA 23-76
  Renaud Lecoeuche; Dave Robertson; Catherine Barry; Chris Mellish
Interactive reasoning tools are usually driven by an agenda of tasks to perform, rather than by conventions of human dialogue. On the other hand, theories of dialogue in natural language tend to ignore the constraints imposed by reasoning tools. This paper presents a system composed of a reasoning module and a dialogue manager which cooperate to produce dialogues that are suitable for reasoning and follow human dialogue conventions. The dialogue manager is driven by focus rules. Various competing focus theories exist but there have been few comparative studies of their use in non-trivial tasks. We make a comparative study of the use of focus theories, which requires us to be precise about our interpretation of our chosen focus theories, and to develop an innovative means of empirical testing for them. We evaluate the theories on an example of combined dialogue and reasoning from the domain of requirements elicitation.
On the Use of Shared Task Models in Knowledge Acquisition, Strategic User Interaction and Clarification Agents BIBA 77-110
  Frances M. T. Brazier; Catholijn M. Jonker; Jan Treur; Niek J. E. Wijngaards
In this paper, three different roles of a shared task model as an intermediate representation of a task are presented and illustrated by applications developed in cooperation with industry. First the role of a shared task model in knowledge acquisition is discussed. In one of the two applications, decision support in the domain of soil sanitation, one of the existing generic task models for diagnostic reasoning provided a means to structure knowledge acquisition. In the second application, diagnosis of chemical processes, the acquisition process resulted in a shared task model for diagnostic reasoning on Nylon-6 production. Secondly, the role of a shared task model in designing user interaction is addressed. Three levels of interaction are considered of importance: interaction at the object level, at the level of strategic preferences, and at the level of task modification. In an application in the domain of environmental decision making, this led to the design of a user interface based on the acquired shared task model, within which all three levels of interaction were available to users. Finally, the role of shared task models within a multi-agent system including a clarification agent is addressed. Two software agents were designed that each share a task model with the user: one for a diagnosis task, and one for a clarification task. The shared model of the clarification task reflects the shared task model of diagnosis; clarification includes clarification of the overall diagnostic reasoning process. The multi-agent architecture presented has been developed to support a user both at the level of the diagnostic task he or she is performing and at the level of clarification. The architecture has been applied to the diagnosis of chemical processes.
Navigation Strategies with Ecological Displays BIBA 111-129
  Catherine M. Burns
Ecological interface design (EID) has shown success as an approach for interface design in the case of a process control microworld. However, in applying the EID approach to larger systems, questions arise as to how to support the navigation and integration of abstract information. In this study, three ecological displays were developed for a simulated power plant from the same abstraction hierarchy. The displays differed in the integration of abstract information, demonstrating high-space low-time, low-space high-time, and high-space high-time integration. While using the displays, the screen actions of subjects were recorded and their navigation movements studied through maps of navigation trajectories. Distinct differences were apparent between the temporally integrated and the temporally separated displays. In the temporally separated displays, clear scanning patterns emerged and these scanning patterns were correlated with improved performance on the display. This suggests that scanning patterns are an adaptation to needed but separated information. It also suggests that functional integration is an important characteristic to support when designing large ecological displays.
A Framework for Understanding Human Factors in Web-Based Electronic Commerce BIBA 131-163
  Gareth E. Miles; Andrew Howes; Anthony Davies
The World Wide Web and email are used increasingly for purchasing and selling products. The use of the internet for these functions represents a significant departure from the standard range of information retrieval and communication tasks for which it has most often been used. Electronic commerce should not be assumed to be information retrieval; it is a separate task domain, and the software systems that support it should be designed from the perspective of its goals and constraints. At present there are many different approaches to the problem of how to support seller and buyer goals using the internet. They range from standard, hierarchically arranged, hyperlink pages to "electronic sales assistants", and from text-based pages to 3D virtual environments. In this paper, we briefly introduce the electronic commerce task from the perspective of the buyer, and then review and analyse the technologies. A framework is then proposed to describe the design dimensions of electronic commerce. We illustrate how this framework may be used to generate additional, hypothetical technologies that may be worth further exploration.
Video Data and Video Links in Mediated Communication: What Do Users Value? BIBA 165-187
  Anne H. Anderson; Lucy Smallwood; Rory Macdonald; Jim Mullin; Annemarie Fleming; Claire O'Malley
Most studies of video-mediated, computer-supported cooperative work have investigated the impact of video conference communication links between users. Fewer studies have explored the use of multimedia systems which provide video data. In our study, the perceived benefits of these two sorts of video provision have been directly compared. We explored how users rate the value and usefulness of video links and video data in the same collaborative task, where the video links and data were delivered at different frame rates. Our comparisons of the perceived relative values of teledata and telepresence are based on the responses of 117 users each of whom took part in a session lasting around 45 min in one of the two simulations. Both studies manipulated the quality of multimedia delivery for telepresence and teledata in the same way. The simulations were: (i) the Travel Service Simulation where participants plan a holiday itinerary and (ii) the Financial Service Simulation where participants choose a property and arrange an appropriate mortgage. Participants produced very similar ratings for the perceived quality of the telepresence and the teledata. Subjects across both studies were also in broad agreement on the relative usefulness of the various kinds of multimedia data, teledata being regarded as generally more useful than telepresence. Subjects in both studies tended to rank teledata high in terms of (a) what was most useful, (b) what was the most important feature to preserve and (c) what was the most important to improve. For these multimedia customer services, teledata is more highly valued by users than telepresence. Within such complex multimedia applications, the indication for service delivery then is that, if bandwidth is limited, it would be better assigned to teledata services than to telepresence.

IJHCS 2000 Volume 52 Issue 2

Dialogues on Function Allocation BIBA 191-201
  John C. McCarthy; Enda Fallon; Liam Bannon
Irish poet Seamus Heaney, reflecting on the co-existence of industry and agriculture, the acorn and the rusted bolt, the engine shunting and the trotting horse in Derry when he was growing up, asks:
   Is it any wonder when I thought I would have second thoughts?
   His dialogical sensibility to "both-and", Derry as both industrial and agricultural, modern and traditional, left Heaney "suffering the limits of each claim" (Heaney, 1998, p. 295). This discomfort with limiting "either-or" claims on descriptions of a personal history reminds us of the dialogicality of people's meaning making (McCarthy & O'Connor, 1999). Given that dialogicality, is it any wonder that thoughts steal second thoughts?
Function Allocation: Algorithm, Alchemy or Apostasy? BIBA 203-216
  T. B. Sheridan
Fitts gave us our list, the function allocation counterpart of Moses' 10 commandments, or Luther's 95 theses. Based on the qualitative axioms of Fitts, we have sought to evolve function allocation into a science. But can usable algorithms or procedures be attained? Thus far the logic has eluded us, in spite of valiant efforts. Some have declared in disgust that function allocation can never be more than black art. Must we admit that function allocation is mostly art, judgement based on experience, with little prospect for rigorous science? Or in the end must we become apostates, and abandon our hopes for rational function allocation according to those scientific principles we have held so dear in practicing our profession?
   The paper discusses the following problems of function allocation: (1) computers, automation and robotics offer ever greater capability, but at the cost of greater system complexity and designer bewilderment, making the stakes for function allocation ever higher than before; (2) proper function allocation differs by process stage; (3) automation appears most promising at intermediate complexity, but the bounds of "intermediate" are undefined; (4) "human-centered design", while an appealing slogan, is fraught with inconsistencies in definition and generalizability; (5) "naturalistic decision-making" and "ecological" design are incompatible with normative decision theory; (6) function allocation is design, and therefore extends beyond science; and (7) living with the technological imperative, letting our evolving machines show us what they can do, acceding or resisting as the evidence becomes clear, appears inevitable.
   In spite of our best efforts to cope with these and other problems of function allocation, error and dispute over allocation criteria are human nature. Perhaps that is part of the Darwinian reality, the requisite variety, the progenitor of progress. At least we have it in our power to say no to new technology, or do we?
The Fiction of Function Allocation, Revisited BIBA 217-233
  Robert B. Fuld
In the human factors engineering literature, the function allocation concept has been a source of debate for decades, particularly in terms of its practical utility for general design. The present article revisits some fundamental criticisms of the hypothesized function allocation process, reviews related experience in the US nuclear power industry and draws parallels to the histories of modern philosophy and science.
The "Charge of the Byte Brigade" and a Socio-Technical Response BIBA 235-251
  Chris W. Clegg; Melanie Older Gray; Patrick E. Waterson
We discuss the "Charge of the Byte Brigade", manifest in the over-emphasis on technological solutions to system design and a set of criticisms levied against humans. Function allocation is central to these issues. We identify a number of different perspectives on the meaning and status of function allocation and outline our own position. This is critical of the field generally, and we argue for a number of actions, not least for a change in name in an attempt to capture a more integrative perspective on socio-technical allocations. Much of what we recommend is concerned with re-branding and re-positioning the content and underlying mindsets of this field of endeavour, and with extending the roles of people working in it. In particular, we argue that socio-technical allocations are central to system design, that we need a more integrated approach to the design and use of systems and that this process should be owned by system managers and users. We argue the need for more research and development, for the development of improved tools to support design, for raising public awareness and for developing new partnerships with relevant stakeholders. In the final part of the paper, we describe our own approach to the development and use of a set of interrelated tools to support socio-technical allocations. Finally, we comment on the existing marginalization of these issues.
Principles for Modelling Function Allocation BIBA 253-265
  Erik Hollnagel; Andreas Bye
Automation is the key element in the safety and reliability of industrial processes. Selecting the right type and level of automation requires careful consideration of how to allocate tasks between operators and automation. This is important so that the joint system, human and machine seen together, performs in the intended manner. The Halden Reactor Project is currently engaged in a project to study this topic, with an emphasis on maximizing the operator's ability to maintain control and handle unexpected events. Functional models can be used to study this in a process control environment, because they explicitly describe the functions that must be provided by the process or the operator. This paper describes how functional modelling of the joint system can be used to provide a basis for how functions should be allocated.
KOMPASS: A Method for Complementary Function Allocation in Automated Work Systems BIBA 267-287
  Gudela Grote; Cornelia Ryser; Toni Wäfler; Anna Windischer; Steffen Weik
KOMPASS, a method supporting complementary function allocation in automated work systems, is presented. KOMPASS supports interdisciplinary design teams in deciding about function allocation in automated systems. It takes into account the need for an integral consideration of people-related, technological and organizational factors in the design of work systems, in order to satisfy the demands for effectiveness and safety of the overall work system as well as for motivating jobs for the human operators. A set of empirically tested criteria for the evaluation of the complementarity of system design forms the basis of guidelines for the analysis of work systems, individual tasks and human-machine systems as well as for a heuristic for system design. The method is described, including a practical example of an automation project to which it was applied.
Allocation of Function: Scenarios, Context and the Economics of Effort BIBA 289-318
  Andy Dearden; Michael Harrison; Peter Wright
In this paper, we describe an approach to allocation of function that makes use of scenarios as its basic unit of analysis. Our use of scenarios is driven by a desire to ensure that allocation decisions are sensitive to the context in which the system will be used and by insights from economic utility theory. We use the scenarios to focus the attention of decision makers on the relative costs and benefits of developing automated support for the activities of the scenario, the relative impact of functions on the performance of the operator's primary role and on the relative demands placed on an operator within the scenario. By focussing on relative demands and relative costs, our method seeks to allocate the operator's limited resources to the most important and most productive tasks within the work system, and to direct the effort of the design organization to the development of automated support for those functions that deliver the greatest benefit for the effective operation of the integrated human-machine system.
Exploring the Implications of Allocation of Function for Human Resource Management in the Royal Navy BIBA 319-334
  John Strain; Ken Eason
Automation changes the allocation of function between machines and people and there can be many concerns about the effects on individual human performance. However, these changes also have wider consequences because the number of people in the system may be reduced and the skills they require may be different with consequential impact upon manning, recruitment and training policies. These wider implications are rarely considered in a systematic manner when a new technical system is being developed. This paper presents a method for the assessment of these wider implications during the system development process. This method has been developed and demonstrated in a Royal Navy context to explore the impact of automation in a new class of warships on the manning of the warship and on human resource planning in the Navy. The paper describes the method and the results of applying it in the naval context. The method utilizes the approach of organisational requirements definition for information technology systems (ORDIT) to determine the responsibilities within the planned socio-technical system and a scenario-based workshop approach for establishing the implications and options at each stage of the analysis. The results demonstrate that it is possible to trace the implications of a technical change of this kind for a major organization but that it is a multi-stage and multi-layered process. There are within the process many options with different implications which reveals where the organization has leverage to plan for the future.
Function Allocation: A Perspective from Studies of Work Practice BIBA 335-355
  Peter Wright; Andy Dearden; Bob Fields
Function allocation is a central component of systems engineering and its main aim is to provide a rational means of determining which system-level functions should be carried out by humans and which by machines. Such allocation, it is assumed, can take place early in the design life cycle. Such a rational approach to work design sits uneasily with studies of work practice reported in the HCI and CSCW literature. In this paper we present two case studies of work in practice. The first highlights the difference between the functional abstractions used for function allocation decision making and what is required to make those functions work in practice. The second highlights how practice and technology can co-evolve in ways that change the meanings of functions allocated early in design. The case studies raise a number of implications for function allocation. One implication is that there is a need for richer representations of the work context in function allocation methods. Although some progress has been made in function allocation methodologies, it is suggested that the method of Contextual Design might offer useful insights. A second implication is that there is a need for better theories of work to inform function allocation decision making. Activity Theory is considered as a possible candidate since it incorporates a cultural-historical view of work evolution. Both Contextual Design and Activity Theory challenge assumptions that are deeply embedded in the human factors and systems engineering communities, in particular the assumption that functions and tasks are an appropriate unit of analysis for function allocation.
Cooperation, Reliability of Socio-Technical Systems and Allocation of Function BIBA 357-379
  Laurence Rognin; Pascal Salembier; Moustapha Zouinar
When (re)designing a work environment, tasks or functions are allocated more or less explicitly among humans and between humans and machines. After a brief review and discussion of issues related to task allocation, we argue that an important aspect to be addressed when (re)designing socio-technical systems is the systematic evaluation of the impact of allocation decisions on the overall reliability of such systems. It is contended that the cooperative dimension of such systems is one of the main elements that contribute to this reliability. This claim leads us to present a conceptual framework for modelling the human contribution to the overall reliability of complex cooperative work systems. The framework is characterized here as a set of notions, mainly regulation and shared context, used to discuss and reason about this role of humans in the error tolerance properties of such systems. These notions are demonstrated with different examples derived from empirical studies of work practices in two complex cooperative work settings (air traffic and nuclear reactor control). We then show how this conceptual framework can be used for the evaluation of allocation decisions and more generally to inform design.

IJHCS 2000 Volume 52 Issue 3

The Role of Knowledge Modelling Techniques in Software Development: A General Approach Based on a Knowledge Management Tool BIBA 385-421
  J. Cuena; M. Molina
The aim of the paper is to discuss the use of knowledge models to formulate general applications. First, the paper presents the recent evolution of the software field, where increasing attention is paid to conceptual modelling. Then, the current state of knowledge modelling techniques is described, where increased reliability is achieved through modern knowledge-acquisition techniques and supporting tools. The knowledge structure manager (KSM) tool is described next. First, the concept of knowledge area is introduced as a building block where methods to perform a collection of tasks are included together with the bodies of knowledge providing the basic methods to perform the basic tasks. Then, the CONCEL language to define vocabularies of domains and the LINK language for methods formulation are introduced. Finally, the object-oriented implementation of a knowledge area is described and a general methodology for application design and maintenance supported by KSM is proposed. To illustrate the concepts and methods, an example of a system for intelligent traffic management in a road network is described. This example is followed by a proposal for generalizing the resulting architecture for reuse. Finally, some concluding comments are made regarding the feasibility of using the knowledge modelling tools and methods for general application design.
Understanding and Facilitating the Browsing of Electronic Text BIBA 423-452
  Elaine G. Toms
Browsing tends to be used in two distinctive ways, alternatively associated with the goal of the activity and with the method by which the goal is achieved. In this study, the definition of browsing combines aspects of both concepts to define browsing as an activity in which one gathers information while scanning an information space without an explicit purpose. The objective of this research was to examine how browsers interact with their browsing environment while manipulating two types of interface tools constructed from the content.
  • Menus: these were considered a stable device facilitating navigation, orientation and route finding. One version was presented in traditional hierarchical form while the other displayed all levels of the hierarchy simultaneously.
  • "Items-to-browse" tools: these were meant to encourage meandering and diversion and to prime the browsing activity. One version automatically displayed a set of Suggestions while the second was a typical Search Tool.
   Forty-seven adults (24 males) performed the two types of tasks (one with no purpose and the second, a control, purposeful) in four sessions over a period of four weeks. Participants scanned and/or searched the textual content of the current issue plus three months of back issues of the Halifax Chronicle Herald/Mail Star using a system designed specifically for this research. At any one time only one of each type of tool was available.
   Those with no assigned goal examined significantly more articles and explored more menu options. They made quick decisions about which articles to examine, spending twice as much time reading the content. They tended not to explore the newspaper to a great extent, examining only 24% of the articles in a single issue. About three-quarters of what they examined was new information on topics that they had not known about before being exposed to the paper. The type of menu had no impact on performance, but differences were discovered between the two items-to-browse tools. Those with no goal selected more articles from the Suggestions and found more interesting articles when the Suggestions were available.
     Assessing Word-Processing Skills by Event Stream Analysis BIBA 453-469
      R. D. Dowsing
    This paper derives the algorithms required to process the stream of textual events collected from a candidate's interaction with a word processor to produce an assessment of their word-processing skills. The information that can be extracted from the textual event stream is compared to that which can be deduced from a comparison of the candidate's submitted answer with the model answer(s) generated by the examiner. For many examinations, document comparison is simpler and more efficient than event stream analysis but it is not always possible to fully analyse errors from document comparison; hence a mixture of document comparison and event stream analysis is desirable for computer-based word-processing assessment.
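     For orientation, the sketch below illustrates the two assessment routes the abstract contrasts: replaying an event stream to reconstruct the candidate's document, and falling back on document comparison against a model answer. It is a minimal illustration only; the event format and field names are assumed, not taken from the paper.

        # Minimal sketch (hypothetical event format, not the paper's): replay a
        # word-processing event stream to reconstruct the submitted document,
        # then compare it with a model answer (the "document comparison" route).
        import difflib

        def replay(events):
            """Apply ('insert', pos, text) and ('delete', pos, length) events in order."""
            doc = ""
            for kind, pos, arg in events:
                if kind == "insert":
                    doc = doc[:pos] + arg + doc[pos:]
                elif kind == "delete":
                    doc = doc[:pos] + doc[pos + arg:]
            return doc

        events = [("insert", 0, "The cat sat on the mat."),
                  ("delete", 4, 4),            # candidate removes "cat "
                  ("insert", 4, "dog ")]       # ... and types the correction
        candidate = replay(events)
        model = "The dog sat on the mat."

        # Document comparison shows only *what* differs from the model answer;
        # the event stream additionally shows *how* the candidate got there.
        print(candidate == model)
        print(list(difflib.ndiff([model], [candidate])))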
    Measurement of Presence and its Consequences in Virtual Environments BIBA 471-491
      Sarah Nichols; Clovissa Haldane; John R. Wilson
    A sense of presence is one of the critical components required by any effective virtual environment (VE). In contrast, side effects such as sickness may be produced in some virtual environments, detracting from the enjoyment or usefulness of the VE and from subsequent performance of the participant. Both presence and sickness in virtual environments are multifactorial phenomena not easily amenable to understanding or measurement. The first experiment reported here compares use of direct performance measures and rating scales to assess presence, whilst varying the VE display medium (head mounted and desktop displays) and whether or not sound was used in the VE. The second experiment addresses associations between presence, sickness and enjoyment of virtual environment participation. There was enough comparability between a reflex response within the VE and the rating scales to justify future exploration of the former measure of presence. A number of explanations are given for the partial association found between presence and sickness.
    Incremental Acquisition of Search Knowledge BIBA 493-530
      Ghassan Beydoun; Achim Hoffmann
     The development of highly effective heuristics for search problems is a difficult and time-consuming task. We present a knowledge acquisition approach to incrementally model expert search processes. Though experts do not normally have complete introspective access to that knowledge, their explanations of actual search considerations seem very valuable in constructing a knowledge-level model of their search processes.
        Furthermore, as the basis of our knowledge acquisition approach, we substantially extend the work done on ripple-down rules, which allows knowledge acquisition and maintenance without analysis or a knowledge engineer. This extension allows the expert to enter his domain terms during the KA process; thus the expert provides a knowledge-level model of his search process. We call this framework nested ripple-down rules.
       Our approach targets the implicit representation of the less clearly definable quality criteria by allowing the expert to limit his input to the system to explanations of the steps in the expert search process. These explanations are expressed in our search knowledge interactive language. These explanations are used to construct a knowledge base representing search control knowledge. We are acquiring the knowledge in the context of its use, which substantially supports the knowledge acquisition process. Thus, in this paper, we will show that it is possible to build effective search heuristics efficiently at the knowledge level. We will discuss how our system SmS1.3 (SmS for Smart Searcher) operates at the knowledge level as originally described by Newell. We complement our discussion by employing SmS for the acquisition of expert chess knowledge for performing a highly pruned tree search. These experimental results in the chess domain are evidence for the practicality of our approach.
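     For readers unfamiliar with the underlying representation, the following is a minimal sketch of a conventional ripple-down rule tree and its evaluation. It illustrates standard RDR only, with a toy chess-flavoured rule set invented for the example; it does not reproduce the paper's nested extension (NRDR) or the SmS system.

        # Generic ripple-down rules sketch (standard RDR, not the paper's nested
        # extension): each rule has a condition over a case, a conclusion, an
        # "except" child tried when the rule fires, and an "if-not" child tried
        # when it does not; the last satisfied rule on the path gives the answer.
        class Rule:
            def __init__(self, condition, conclusion, except_rule=None, if_not_rule=None):
                self.condition = condition        # predicate over a case (a dict)
                self.conclusion = conclusion
                self.except_rule = except_rule    # refines this conclusion
                self.if_not_rule = if_not_rule    # tried when condition is false

        def evaluate(rule, case, default=None):
            conclusion = default
            while rule is not None:
                if rule.condition(case):
                    conclusion = rule.conclusion
                    rule = rule.except_rule
                else:
                    rule = rule.if_not_rule
            return conclusion

        # Toy, purely illustrative search-control rules; knowledge is maintained
        # by adding exception rules in the context of a misclassified case, so
        # existing conclusions are never edited directly.
        root = Rule(lambda c: c["piece"] == "pawn", "consider capture",
                    except_rule=Rule(lambda c: c["pinned"], "do not move"),
                    if_not_rule=Rule(lambda c: c["piece"] == "king", "consider castling"))

        print(evaluate(root, {"piece": "pawn", "pinned": True}))    # -> do not move
        print(evaluate(root, {"piece": "king", "pinned": False}))   # -> consider castling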
    Toward the Optimal Link Structure of the Cyber Shopping Mall BIBA 531-551
      Jinwoo Kim; Byunggon Yoo
     This study aims at identifying the optimal link structure, which is an essential requirement for convenient and pleasant cyber shopping. To achieve this goal, the paper presents a research framework in which different types of links are hypothesized to cause different patterns of customer navigation, which in turn is expected to influence the cognitive convenience and emotional pleasure of cyber shopping. Based on two dimensions of links, link direction and link target, various links are classified into five types: NBR (Neighbourhood), PAR (Parent), TOP, NEP (Nephew) and IND (Index). Two consecutive experiments were conducted in order to evaluate the cognitive and emotional usability of various combinations of the five link types. Experimental results clearly indicated that different combinations of link types influenced customers' navigation patterns, which in turn had effects on the convenience and pleasure of shopping. It was found that the combination of NBR, TOP and IND generated the optimal link structure, whereas PAR and NEP tended to decrease the degree of shopping pleasure and convenience. The paper concludes with its limitations and implications for the construction of effective cyber shopping malls.
    The Reuse of Knowledge: A User-Centred Approach BIBA 553-579
      Debbie Richards
    The motivation for the work reported in this paper is the belief that not only is it beneficial to reuse knowledge but it is essential if we wish to build knowledge-based systems (KBS) that meet the needs of users. The focus of most KBS research is on complex modelling at the knowledge level which requires a knowledge engineer to act as the intermediary between the expert and the system. The type of reuse primarily considered is the reuse of ontologies or problem-solving methods so that improvements can be made in system quality and development time. However, there is little focus on the needs of users to access the knowledge in a variety of ways according to the individual's decision style or situation. The system described in this paper seeks to support the user in a number of different activities including knowledge acquisition, inferencing, maintenance, tutoring, critiquing, "what-if" analysis, explanation and modelling. The ability to ask different types of questions and to explore the knowledge in alternative ways is a different type of knowledge reuse. The knowledge acquisition and representation technique used as the foundation is known as ripple-down rules (RDR). To support the exploration activities, RDR have been combined with formal concept analysis which automatically generates an abstraction hierarchy from the low-level RDR assertions. The paper suggests that rapid and incremental KA together with retrospective modelling can be used to provide the user with a system that they can own, build and explore without the difficulties associated with capturing and validating the conceptual models of experts via the mediation of a knowledge engineer.

    IJHCS 2000 Volume 52 Issue 4

    A Plan-Based Agent Architecture for Interpreting Natural Language Dialogue BIBAK 583-635
      Liliana Ardissono; Guido Boella; Leonardo Lesmo
     This paper describes a plan-based agent architecture for modelling NL cooperative dialogue; in particular, the paper focuses on the interpretation of dialogue and on the explanation of its coherence by means of the recognition of the speakers' underlying intentions. The approach we propose makes it possible to analyze and explain in a uniform way several apparently unrelated linguistic phenomena, which have often been studied separately and treated via ad hoc methods in the models of dialogue presented in the literature. Our model of linguistic interaction is based on the idea that dialogue can be seen as any other interaction among agents: therefore, domain-level and linguistic actions are treated in a similar way.
       Our agent architecture is based on a two-level representation of the knowledge about acting: at the metalevel, the agent modelling (AM) plans describe the recipes for plan formation and execution (they are a declarative representation of a reactive planner); at the object level, the domain and communicative actions are defined. The AM plans are used to identify the goals underlying the actions performed by an observed agent; the recognized plans constitute the dialogue context, where the intentions of all participants are stored in a structured way, in order to be used in the interpretation of the subsequent dialogue turns.
    Keywords: dialogue processing; plan and goal recognition; agent modelling; natural language interpretation.
    Speech Recognition for Command Entry in Multimodal Interaction BIBAK 637-667
      David A. Tyfa; Mark Howes
    Two experiments investigated the cognitive efficiency of using speech recognition in combination with the mouse and keyboard for a range of word processing tasks. The first experiment examined the potential of this multimodal combination to increase performance by engaging concurrent multiple resources. Speech and mouse responses were compared when using menu and direct (toolbar icon) commands, making for a fairer comparison than in previous research which has been biased against the mouse. Only a limited basis for concurrent resource use was found, with speech leading to poorer task performance with both command types. Task completion times were faster with direct commands for both speech and mouse responses, and direct commands were preferred. In the second experiment, participants were free to choose command type, and nearly always chose to use direct commands with both response modes. Speech performance was again worse than mouse, except for tasks which involved a large amount of hand and eye movement, or where direct speech was used but mouse commands were made via menus. In both experiments recognition errors were low, and although they had some detrimental effect on speech use, problems in combining speech and manual modes were highlighted. Potential verbal interference effects when using speech are discussed.
    Keywords: speech recognition; multiple resources; multimodal interaction; command entry; hands-busy; eyes-busy; verbal interference.
    The Collaborative Production of Computer Commands in Command and Control BIBAK 669-699
      Paul Luff; Christian Heath
     The division of labour, in its turn, implies interaction; for it consists not in the sheer difference of one man's kind of work from that of another, but in the fact that the different tasks and accomplishments are parts of a whole to whose product all, in some degree, contribute. And wholes, in the human social realm as in the rest of the biological and in the physical realm, have their essence in interaction. Work as social interaction is the central theme of sociological and social psychological study of work.
       Hughes (1958)
       In an interaction with a computer the user receives information that is output by the computer, and responds by providing input to the computer - the user's output becomes the computer's input and vice versa.
       Dix, Finlay, Abowd and Beale (1993, p. 11)
       In this paper, we examine the details of the use of a computer system in situ. Drawing from recent developments in the social sciences, we adopt an analytic orientation that is distinctive from much current work in human-computer interaction and cognitive engineering. Rather than focusing on a circumscribed activity of an individual at a computer system, we explore how the production of computer-based activities is sensitive to the ongoing work and interaction of the participants in the setting. The study utilizes materials including fieldwork and audio-visual recordings to explore how one particular technology is used, a system for automatically controlling trains on an urban transportation system. We focus on the "uses" of this system, a fairly conventional command-and-control system, in the Control Room, and examine how the technology is immersed within the action and interaction of the participants. In particular, we explore how the entry of commands into the system by one controller is coordinated with the conduct of colleagues, and how their conduct is inextricably embedded in their colleague's use of the system. It also reveals how the activities of controllers are managed from moment to moment, so that a division of labour emerges through the course of their interaction. Although in drawing upon naturalistic materials, this study contributes to the growing corpus of "workplace studies" within the field of computer-supported cooperative work, by examining the details of computer-based activities it continues the tradition within human-computer interaction of being concerned with the detailed use of technologies. Indeed, the emerging distinction between the two fields, one considered as focusing on matters associated with the individual "user", and the other on the "group", may be false.
    Keywords: command and control; CSCW; workplace studies; interaction analysis; ethnography.
    Accountability and Automation Bias BIBAK 701-717
      Linda J. Skitka; Kathleen Mosier; Mark D. Burdick
    Although generally introduced to guard against human error, automated devices can fundamentally change how people approach their work, which in turn can lead to new and different kinds of error. The present study explored the extent to which errors of omission (failures to respond to system irregularities or events because automated devices fail to detect or indicate them) and commission (when people follow an automated directive despite contradictory information from other more reliable sources of information because they either fail to check or discount that information) can be reduced under conditions of social accountability. Results indicated that making participants accountable for either their overall performance or their decision accuracy led to lower rates of "automation bias". Errors of omission proved to be the result of cognitive vigilance decrements, whereas errors of commission proved to be the result of a combination of a failure to take into account information and a belief in the superior judgement of automated aids.
    Keywords: automation; accountability; vigilance; decision making; bias.
    A Theoretical Model of Differential Social Attributions Toward Computing Technology: When the Metaphor becomes the Model BIBAK 719-750
      George M. Marakas; Richard D. Johnson; Jonathan W. Palmer
     This paper explores the use of metaphorical personification (anthropomorphism) as an aid to describing and understanding the complexities of computing technologies. This common and seemingly intuitive practice (it "reads", "writes", "thinks", "is friendly", "catches and transmits viruses", etc.) has become the standard by which we formulate our daily communications, and often our formal training mechanisms, with regard to the technology. Both anecdotal and empirical sources have reported numerous scenarios in which computers have played a noticeably social role, thus being positioned more as a social actor than as a machine or "neutral tool". In these accounts, human behavior has ranged from making social reference to the device ("It's really much smarter than me"), to more overt social interactions including conversational interplay and display of common human emotions in response to an interaction. Drawing from behavioral psychology and attribution theory, a theoretical model of the phenomenon is offered from which several propositions are advanced regarding the nature of the behavior, positive and negative implications associated with extended use of this metaphor, and recommendations for research into this ubiquitous social phenomenon.
       ... I have encountered these situations before, and in every case they were the result of human error.
       - HAL 9000 from Arthur C. Clarke's 2001: A Space Odyssey
     Keywords: anthropomorphism; symbolic computing; social acts; locus of control; computer self-efficacy.
    Variables Affecting Information Technology End-User Satisfaction: A Meta-Analysis of the Empirical Literature BIBAK 751-771
      Mo Adam Mahmood; Janice M. Burn; Leopoldo A. Gemoets; Carmen Jacquez
    The level of end-user satisfaction with information technology (IT) has widely been accepted as an indicator of IT success. The present research synthesizes and validates the construct of IT end-user satisfaction using a meta-analysis. It accomplishes this by analysing the empirical results of 45 end-user satisfaction studies published between 1986 and 1998 and by focusing on relationships between end-user satisfaction and nine variables: perceived usefulness, ease of use, user expectations, user experience, user skills, user involvement in system development, organizational support, perceived attitude of top management toward the project and user attitude toward information systems (IS) in widely divergent settings. The present analysis found positive support for the influence of all nine variables on end-user IT satisfaction but to varying degrees. The most significant relationships were found to be user involvement in systems development, perceived usefulness, user experience, organizational support and user attitude toward the IS. This has implications for IS analysis and design as well as user training and the development of training support packages.
    Keywords: end-user satisfaction; information technology; meta-analysis.

    IJHCS 2000 Volume 52 Issue 5

    Self-Instructive Spreadsheets: An Environment for Automatic Knowledge Acquisition and Tutor Generation BIBA 775-803
      M. Lentini; D. Nardi; A. Simonetta
    Typically, spreadsheet applications are difficult to use for casual users (different from developers), mainly because of lack of support. In fact, building a tutoring facility for such applications is a time-consuming task. Our aim is the realization of a tool for the automatic generation of Intelligent Tutors for conventional spreadsheet applications. We have developed a system that works in two steps. In the first one, it extracts an explicit representation of the problem-solving pattern coded in a programmed spreadsheet. In the second step, it generates a hypertextual guide and an interactive tutor that can effectively support, in the native environment, the casual user of the spreadsheet with the specific application it is designed for. We have successfully tested our system on a class of students using an application for budget analysis.
    Consumer Web Search Behaviour: Diagrammatic Illustration of Wayfinding on the Web BIBAK 805-830
      Chris Hodkinson; Geoffrey Kiel; Janet R. McColl-Kennedy
    External information search behaviour has long been of interest to consumer researchers. Experimental and post hoc survey research methodologies have typically used a large number of variables to record search activity. However, as these are usually considered in aggregate, there is little opportunity for the researcher to overview the search style of a consumer. To date, the diagrammatic illustration of search behaviour has been limited to experimental environments in which the available information was strictly bounded, for example, within databases or when information display boards have been used. This paper, which focuses largely on inter-site world wide web (WWW) search behaviour, discusses web search paradigms and the variables used to capture WWW search. It also provides a conceptual framework for the representation of external information search behaviour in diagrammatic form. The technique offers researchers an opportunity to holistically interpret information search data and search styles. The benefits include the identification of particular search styles, more precise interpretation of web search activity numeric data and the potential application for the training of web users to improve their search effectiveness.
     Keywords: consumer behaviour; information search; WWW; wayfinding.
    The Impact of Data Models and Task Complexity on End-User Performance: An Experimental Investigation BIBAK 831-845
      Chechen Liao; Prashant C. Palvia
    The purpose of this study was to investigate similarities and differences in the quality of data representations produced by end-users using the relational model (RM), the extended entity-relationship model (EERM), and the object-oriented model (OOM). By performing laboratory experiments using MIS major students, quality was evaluated on five constructs of a data model (i.e. entity/object, descriptor, identifier, relationship and generalization hierarchy) and six facets of a relationship (i.e. unary one-to-one, unary one-to-many, binary one-to-one, binary one-to-many, binary many-to-many and ternary many-to-many-to-many).
       The research focused on two major issues: data model design and data model conversion. The first issue investigated the differences in user performance between the RM, the EERM and the OOM. The second investigated the differences in user performance between the RM and the relational conversions of the EERM and the OOM models. For the first issue, EERM and OOM scored much higher than the RM in correctness scores of binary one-to-many and binary many-to-many relationships, but only the EERM led to significance. The RM and OOM scored much higher than EERM for unary one-to-one relationships, however, only the RM resulted in significance. The OOM required significantly less time for task completion than EERM. For the second issue, RM and the relational conversion of OOM scored significantly higher than the relational conversion of EERM for unary one-to-one relationships.
    Keywords: data models; end user performance; data representation; conceptual modeling.
    Evaluating Environments for Functional Programming BIBAK 847-878
      Jon Whittle; Andrew Cumming
     Functional programming presents new challenges in the design of programming environments. In a strongly typed functional language, such as ML, much conventional debugging of runtime errors is replaced by dealing with compile-time error reports. On the other hand, the cleanness of functional programming opens up new possibilities for incorporating sophisticated correctness-checking techniques into such environments. CYNTHIA is a novel editor for ML that both addresses the challenges and explores the possibilities. It uses an underlying proof system as a framework for automatically checking for semantic errors such as non-termination. In addition, CYNTHIA embodies the idea of programming by analogy, whereby users write programs by applying abstract transformations to existing programs. This paper investigates CYNTHIA's potential as a novice ML programming environment. We report on two studies in which it was found that students using CYNTHIA commit fewer errors and correct errors more quickly than when using a compiler/text editor approach.
    Keywords: functional programming; transformational programming; structure editors.
    An Efficient Camera Calibration Method for Vision-Based Head Tracking BIBAK 879-898
       K. S. Park; C. J. Lim
     The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface. A vision-based head tracking system is proposed to accommodate the user's head movements in the design of the eye-controlled human/computer interface. We propose an efficient camera calibration method to track the three-dimensional position and orientation of the user's head accurately. We also evaluate the performance of the proposed method and the influence of the configuration of calibration points on that performance. The experimental error analysis showed that the proposed method can provide a more accurate and stable pose (i.e. position and orientation) of the camera than the direct linear transformation method which has been used in camera calibration. The results of this study can be applied to the tracking of head movements related to the eye-controlled human/computer interface and to virtual reality technology.
    Keywords: vision-based head tracking; eye-controlled human/computer interface; camera calibration.
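     The abstract compares the proposed calibration against the direct linear transformation (DLT); as background only, the sketch below shows the textbook DLT estimate of a 3x4 projection matrix from known 3-D calibration points and their image projections. It illustrates that baseline in general terms, not the authors' improved method, and assumes the point correspondences are already available.

        # Textbook DLT sketch (background, not the paper's proposed method):
        # estimate the 3x4 projection matrix P from n >= 6 correspondences
        # between 3-D points and 2-D image points by solving A p = 0 with an SVD.
        import numpy as np

        def dlt(points_3d, points_2d):
            rows = []
            for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
                rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
                rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
            A = np.asarray(rows, dtype=float)
            _, _, vt = np.linalg.svd(A)
            P = vt[-1].reshape(3, 4)     # right singular vector of the smallest singular value
            return P / P[-1, -1]         # remove the arbitrary overall scale

        def project(P, point_3d):
            x = P @ np.append(point_3d, 1.0)
            return x[:2] / x[2]          # back to pixel coordinates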
    Some Human Dimensions of Computer Virus Creation and Infection BIBAK 899-913
      Andy Bissett; Geraldine Shipton
     Infection of computer systems by destructive computer viruses is a commonplace occurrence. Consequently, an extensive literature exists concerning the technical means of virus prevention, detection and disinfection. By contrast, in this paper we consider the human dimensions and implications behind the invention and release of computer viruses. We examine and discuss some possible conscious motivations: these include political, commercial and malicious. However, the paper is also concerned with unconscious motivations and goes on to look at possible meanings for these disruptive activities from within a psychodynamic framework based on the work of Melanie Klein. The paper draws upon previously published information about viruses and their makers in order to furnish material for these discussions. Of equal import is understanding the effect that virus infection has upon computer users. A personal anecdote illustrates the disruption to peace of mind brought about simply by the fear of virus infection. We conclude that virus creation means different things for different perpetrators, but that generally it is a destructive act aimed at dismantling what is apparently 'whole' and satisfactory. This reflects the reality that human life involves a constant struggle with processes of destructiveness as well as creativity. Paradoxically, the orderly, constructed world may become stronger through the process of learning and defending against each new virus, but this strengthening of defences may itself inflame the problem. We conclude by considering some concrete consequences for computer users, and areas for future investigation.
    Keywords: virus-maker; psychoanalysis; motivation.
    Keyword Comparison: A User-Centered Feature for Improving Web Search Tools BIBAK 915-931
      Xiaowen Fang; Gavriel Salvendy
     Current web search tools are examined. Human cognitive limitations in working memory, text processing and comprehension, problem solving, and decision-making in a search process are analysed. Based on the literature review, a user-centered feature, keyword comparison, was proposed to help users review the search results and extract useful information. Experimental search engines and browsers were developed using Visual Basic, Java and Common Gateway Interface (CGI) programming languages. An experiment was conducted to test the effectiveness of this feature. The dependent variables were the number of relevant web sites identified during the 1-hour period and satisfaction. The independent variable was the interface type of the search tools. A between-subject t-test experimental design was utilized with 20 subjects. Testing of the hypothesis, by contrasting the user-centered feature against the current search engine, indicated that the keyword comparison feature improved users' search performance by 77% and satisfaction in using the feature by 35%.
    Keywords: keyword comparison; search engines; www.
    Peripheral Participation in Video-Mediated Communication BIBA 933-958
      Andrew Monk; Leon Watts
    The importance of overhearing, and other ways of monitoring communicative behaviour not explicitly directed at oneself, has been illustrated in numerous ethnographic studies of computer-supported cooperative work. This paper is concerned with a particular form of monitoring. A "peripheral participant" is defined as someone who has a legitimate interest in monitoring a joint task (being carried out by some "primary participants") but who is not actively involved in carrying out the task themselves. The concept is illustrated through field studies of telemedical consultation and related to other analyses of overhearing. Two experiments are reported where participatory status was manipulated using a role-play task. Ratings of interpersonal awareness, measures of gaze direction and recall of the conversation all indicate that the task successfully operationalized the distinction between primary and peripheral participation. In addition, the experiment manipulated the visibility of the peripheral participant to a remote primary participant. This was shown to have an effect on the remote primary participant's interpersonal awareness of the peripheral participant. Potential mechanisms for this effect are considered. It is concluded that peripheral participation is a potentially important form of involvement that needs to be considered when designing and configuring equipment for video-mediated cooperative work.

    IJHCS 2000 Volume 52 Issue 6

    Evaluating a Domain-Specialist-Oriented Knowledge Management System BIBAK 961-990
      Timothy C. Lethbridge
    We discuss the evaluation of software tools whose purpose is to assist humans to perform complex creative tasks. We call these creative task assistants (CTAs) and use as a case study CODE4, a CTA designed to allow domain specialists to manage their own knowledge base. We present an integrated process involving evaluation of usability, attractiveness and feature contribution, the latter two being the focus. To illustrate attractiveness evaluation, we assess whether CODE4 has met its objective of having users not trained in logical formalisms choose the tool to represent and manipulate knowledge in a computer. By studying use of the tool by its intended users, we conclude that it has met this objective. To illustrate feature contribution evaluation, we assess what aspects of CODE4 have in fact led to its success. To do this, we study what tasks are performed by users, and what features of both knowledge representation and user interface are exercised. We find that features for manipulating the inheritance hierarchy and naming concepts are considered the most valuable. Our overall conclusion is that those developing or researching CTAs would benefit from using the three types of evaluation in order to make effective decisions about the evolution of their products.
    Keywords: software evaluation; knowledge management; creative task assistant.
    Creating an Effective Training Environment for Enhancing Telework BIBAK 991-1005
      Viswanath Venkatesh; Cheri Speier
     There is a growing need for research examining the effective implementation and management of teleworking as it is increasingly being used as an organizational work structure. The enhanced functionality of many information technologies facilitates the completion of work across geographically dispersed teleworkers while simultaneously providing a vehicle to overcome the social isolation that has been viewed as an inhibitor of teleworker effectiveness. This research assesses two training methods that can be used to help teleworkers develop skill sets for using these technologies. The results suggest that using a game-based training method facilitates the training process by increasing users' intrinsic motivation, resulting in increased intention to use the technology. This can be particularly important in enhancing the effective completion of team and individual telework while at the same time providing a mechanism to minimize teleworkers' social isolation.
    Keywords: game-based training; intrinsic motivation; telework.
    Using Intentional Models for the Interface Design of Multi-Level Systems BIBAK 1007-1029
      Alan W. Colman; Ying K. Leung
    In this paper, it is argued that the design of computer interfaces for complex, multi-layered systems needs to take into account the differing intentional models that are held by different types of users of such systems, and that there is a strong correlation between the job roles of individuals and the level of abstraction of the mental models held by such users. An approach to the analysis and design of complex multi-layered systems based on the analysis of job roles to elicit such models is suggested and linked with other techniques of task analysis and object-oriented analysis and design. The methodology is illustrated with the interface analysis for an automatic environmental chemical analyser.
    Keywords: intentional models; user interface design; multi-layered systems; object-oriented methodology; human-computer interaction.
    Calculators are Needlessly Bad BIBAK 1031-1069
      Harold Thimbleby
     In the two decades that hand-held calculators have been readily available, there has been ample time to develop a usable design and to educate the consumer public into choosing quality devices. This article reviews a representative calculator that is "state of the art" and shows it has an execrable design. The design is shown to be confusing and essentially non-mathematical. Substantial evidence is presented that illustrates the inadequate documentation, bad implementation, feature interaction, and feature incoherence. These problems are shown to be typical of calculators generally. Despite the domain (arithmetic) being well defined, the design problems are profound, widespread, confusing, and needless. Worrying questions are begged: about design quality control, about consumer behaviour, and about the role of education, both at school level (training children to acquiesce to bad design) and at university level (training professionals to design unusable products). The article concludes with recommendations.
       "The problem of efficient and uniform notations is perhaps the most serious one facing the mathematical public." Florian Cajori (1993)
       "[. . .] contrivances adapted to peculiar purposes [. . .] and what is worse than all, a profusion of notations (when we regard the whole science) which threaten, if not duly corrected, to multiply our difficulties instead of promoting our progress." Charles Babbage, quoted in Cajori (1993).
    Keywords: consumer product user interfaces; feature interaction; feature incoherence; calculator and calculator user interfaces.
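     One generic example of the "non-mathematical" behaviour the abstract criticizes (an illustration of a common calculator design, not a finding about the specific model reviewed) is immediate left-to-right execution of key presses, which ignores operator precedence:

        # Illustration only: the key sequence 2 + 3 * 4 = evaluated the way many
        # simple calculators do (each operator applied as soon as the next number
        # is entered) versus ordinary algebraic precedence.
        import operator

        OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

        def immediate_execution(keys):
            tokens = keys.split()
            total = float(tokens[0])
            for op, num in zip(tokens[1::2], tokens[2::2]):
                total = OPS[op](total, float(num))   # apply immediately, left to right
            return total

        print(immediate_execution("2 + 3 * 4"))   # 20.0, i.e. (2 + 3) * 4
        print(2 + 3 * 4)                          # 14, standard mathematical precedence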
    Ontology-Driven Document Enrichment: Principles, Tools and Applications BIBAK 1071-1109
      Enrico Motta; Simon Buckingham Shum; John Domingue
    In this paper, we present an approach to document enrichment, which consists of developing and integrating formal knowledge models with archives of documents, to provide intelligent knowledge retrieval and (possibly) additional knowledge-intensive services, beyond what is currently available using "standard" information retrieval and search facilities. Our approach is ontology-driven, in the sense that the construction of the knowledge model is carried out in a top-down fashion, by populating a given ontology, rather than in a bottom-up fashion, by annotating a particular document. In this paper, we give an overview of the approach and we examine the various types of issues (e.g. modelling, organizational and user interface issues) which need to be tackled to effectively deploy our approach in the workplace. In addition, we also discuss a number of technologies we have developed to support ontology-driven document enrichment and we illustrate our ideas in the domains of electronic news publishing, scholarly discourse and medical guidelines.
    Keywords: semantic web; ontologies; knowledge modelling; digital documents; document retrieval; intelligent news servers; scholarly discourse; medical informatics
     WonderTools? A Comparative Study of Ontological Engineering Tools BIBAK 1111-1133
      A. J. Duineveld; R. Stoter; M. R. Weiden; B. Kenepa; V. R. Benjamins
     Ontologies are becoming increasingly important in a variety of different fields, such as intelligent searching on the web, knowledge sharing and reuse, knowledge management, etc. Therefore, we expect that the need for tools to support the construction of ontologies will increase significantly in the coming years. In this paper, we investigate several of these tools. We evaluate the tools using two different ontologies: a simple one about university employees, and a second, more complex one, about the structure of a university study. The evaluation was conducted using a framework that incorporates aspects of ontology building and testing, as well as cooperation with other users. Our conclusions are that the usefulness of the tools depends on the level of the users and the stage of development of the ontology.
    Keywords: ontology tools.