
International Journal of Human-Computer Studies 61

Editors: Enrico Motta; Susan Wiedenbeck
Dates: 2004
Volume: 61
Publisher: Elsevier Science Publishers
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 39
Links: Table of Contents
  1. IJHCS 2004 Volume 61 Issue 1
  2. IJHCS 2004 Volume 61 Issue 2
  3. IJHCS 2004 Volume 61 Issue 3
  4. IJHCS 2004 Volume 61 Issue 4
  5. IJHCS 2004 Volume 61 Issue 5
  6. IJHCS 2004 Volume 61 Issue 6

IJHCS 2004 Volume 61 Issue 1

EDITORIAL

A message from the new editorial team BIBFull-Text 1-2
  Wendy Mackay; Enrico Motta; Susan Wiedenbeck

ARTICLE

Acquiring domain knowledge for negotiating agents: a case of study BIBAFull-Text 3-31
  Jose J. Castro-Schez; Nicholas R. Jennings; Xudong Luo; Nigel R. Shadbolt
In this paper, we employ the fuzzy repertory table technique to acquire the domain knowledge that software agents need in order to act as sellers and buyers using a bilateral, multi-issue negotiation model that can achieve optimal results in semi-competitive environments. In this context, the seller's domain knowledge to be acquired comprises the rewards associated with the products and the restrictions attached to their purchase. The buyer's domain knowledge comprises their requirements and preferences for the desired products. The knowledge acquisition methods we develop involve constructing three fuzzy repertory tables and their associated distinction matrices. The first two are employed to acquire the seller agent's domain knowledge; the third is used, together with an inductive machine learning algorithm, to acquire the domain knowledge for the buyer agent.
AutoBrief: an experimental system for the automatic generation of briefings in integrated text and information graphics BIBAFull-Text 32-70
  Nancy L. Green; Giuseppe Carenini; Stephan Kerpedjiev; Joe Mattis; Johanna D. Moore; Steven F. Roth
This paper describes AutoBrief, an experimental intelligent multimedia presentation system that generates presentations in text and information graphics in the domain of transportation scheduling. Acting as an intelligent assistant, AutoBrief creates a presentation to communicate its analysis of alternative schedules. In addition, the multimedia presentation facilitates data exploration through its complex information visualizations and support for direct manipulation of presentation elements. AutoBrief's research contributions include (1) a design enabling a new human-computer interaction style in which intelligent multimedia presentation objects (textual or graphic) can be used by the audience in direct manipulation operations for data exploration, (2) an application-independent approach to multimedia generation based on the representation of communicative goals suitable for both generation of text and of complex information graphics, and (3) an application-independent approach to intelligent graphic design based upon communicative goals. This retrospective overview paper, aimed at a multidisciplinary audience from the fields of human-computer interaction and natural language generation, presents AutoBrief's design and design rationale.
Mixing personal computer and handheld interfaces and devices: effects on perceptions and attitudes BIBAFull-Text 71-83
  Ing-Marie Jonsson; Clifford Nass; Kwan Min Lee
Interfaces designed only for personal computers or only for handhelds can now be displayed on both kinds of device. In this experimental study (N=39), participants used interfaces designed for a personal computer or a handheld on a personal computer, a handheld with a keyboard, and a handheld with virtual keyboard/pen input. The context was an interactive natural language query system used for financial and entertainment inquiries. When the interface matched the device, the application was perceived as easier to use. Applications on the personal computer were perceived as easier to use and less impersonal, and made users feel more in control. The handheld interface was perceived as better on all dimensions. Implications for cross-platform interface design are discussed.
On-line question-posing and peer-assessment as means for web-based knowledge sharing in learning BIBAFull-Text 84-103
  Miri Barak; Sheizaf Rafaeli
This study examines a novel way of merging assessment and knowledge sharing in the context of a hybrid on-line learning system used in a postgraduate MBA course. MBA students carried out an on-line Question-Posing Assignment (QPA) that consisted of two components: Knowledge Development and Knowledge Contribution. The students also performed self- and peer-assessment and took an on-line examination, all administered by QSIA -- an on-line system for assessment and knowledge sharing. Our objective was to explore students' learning and knowledge sharing while engaged in these activities. Findings indicated that, even controlling for the students' prior knowledge or abilities, those who were highly engaged in on-line question-posing and peer-assessment activity received higher scores on their final examination than their peers. The results provide evidence that web-based activities can serve as both learning and assessment enhancers in higher education by promoting active learning, constructive criticism and knowledge sharing. We propose the on-line QPA as a methodology, and the QSIA system as the technology, for merging assessment and knowledge sharing in higher education.
Automatic justification and line-breaking of music sheets BIBAFull-Text 104-137
  P. Bellini; P. Nesi
Automated music formatting helps composers and copyists speed up music score editing by performing the complex evaluations needed to produce music sheets in terms of symbol positioning, justification, etc. Music justification is a complex task to automate: it involves evaluating a large number of parameters and requires context evaluation. In this paper, the approach adopted in the justification engine of a European research project is presented. The approach solves many of the problems of music justification: alignment of simultaneous symbols in polyphonic music, spacing dependent on the duration of the figures, compactness and readability of the resulting measure, and justification of both main scores and parts. Several justification algorithms are described and compared. Stretching and shrinking of measures is also possible, while preserving justification, through a tuning parameter. The resulting algorithm can also automatically handle many music notation exceptions: for example, time inconsistency of the justified measure and the presence of non-durational figures, grace notes, changes of clef/key signature, etc. The proposed solution includes a module for music line-breaking, integrated into the justification engine as an option for visualizing and printing right-margined music sheets. Several examples are reported to highlight both the problems and the solutions adopted.
Automatic discourse structure detection using shallow textual continuity BIBAFull-Text 138-164
  Samuel W. K. Chan
A shallow natural language processing approach to discourse structure detection based on the analysis of textual continuity is described. What distinguishes it from previous research is that it does not work toward the discovery of formal subtopic structures. Instead, attention is focused on uncovering the main factors in textual continuity and simulating a dynamic mechanism for detecting cohesive sentence-based fragments. A connectionist filtering algorithm is used to capture textual continuity as one of the structural backbones of text. As a result, content conveyed by text with a discontinuous topic sequence is, on average, highly unlikely to be included in the resultant discourse structure. A prototype and its evaluation with various statistics are included.

IJHCS 2004 Volume 61 Issue 2

EDITORIAL

Empirical studies of software engineering BIBFull-Text 165-167
  Marian Petre; Susan Wiedenbeck

ARTICLE

Program comprehension and authentic measurement: a scheme for analysing descriptions of programs BIBAFull-Text 169-185
  Judith Good; Paul Brna
This paper describes an analysis scheme which was developed to probe the comprehension of computer programming languages by students learning to program. The scheme operates on free-form program summaries, i.e. textual descriptions of a program which are produced in response to minimal instructions by the researcher/experimenter. The scheme has been applied to descriptions of programs written in various languages, and it is felt that the scheme has the potential to be applied to languages of markedly different types (e.g. procedural, object-oriented, event-driven). The paper first discusses the basis for the scheme, before describing the scheme in detail. It then presents examples of the scheme's application, and concludes with a discussion of some open issues.
Comprehension of diagram syntax: an empirical study of entity relationship notations BIBAFull-Text 187-203
  Helen C. Purchase; Ray Welland; Matthew McGill; Linda Colpoys
Well-defined symbolic notations are essential for communication between teams of people working on any application. For large software implementations, UML is commonly used; for databases, entity relationship (ER) diagrams are useful. However, the form of notation used in texts, papers, documentation and learning materials is often different, and tends to reflect the personal preference of the author or publisher. The choice between semantically equivalent notations does not appear to be based on any consideration of the ease with which human readers can understand the notation. This paper addresses this notation comprehension issue by proposing an experimental methodology for determining which of two complete notations is easier to comprehend. The methodology also allows individual notational variants to be targeted. We applied the methodology to two types of ER notation: our experiment required subjects to indicate whether a supplied textual specification of objects and relationships matched each of a set of Chen (Chen, ACM Trans. Database Systems 1 (1976) 9) and SSADM (Weaver, Practical SSADM Version 4 -- A Complete Tutorial Guide, Pitman, London, 1993) ER diagrams. The results reveal both better performance with, and higher preference for, the more concise overall notation, with partial results with respect to individual variants within the notations.
Team coordination through externalized mental imagery BIBAFull-Text 205-218
  Marian Petre
Fundamental to the effective operation of a design team is the communication and coordination of design models: ensuring that the members of the team are all contributing to the same solution. Other work has shown that breakdowns in the accurate sharing of goals are a significant contributor to bugs, delays and design flaws. This paper discusses one mechanism by which teams unify their vision of a solution. It describes how the mental imagery used by a key team member in constructing an abstract solution to a design problem can be externalized and adopted by the rest of the team as a focal image. Examples drawn from in situ observations of the actual design practice of a number of computer system design teams are offered. The examples illustrate how the images were introduced, how they were used to coordinate subsequent design discussions, how they evolved, and how short-hand references to them were incorporated into the team's 'jargon'.
Tensions around the adoption and evolution of software quality management systems: a discourse analytic approach BIBAFull-Text 219-236
  Helen Sharp; Mark Woodman; Fiona Hovenden
This paper reports some results from a project to uncover the non-technical factors that affect the adoption and evolution of software quality management systems (SQMS). The data which the paper discusses comes from interviews with people involved in the quality effort in four different companies. Our approach to data collection was to use semi-structured interviews and to encourage interviewees to talk about their experiences of quality management and software development in their own organizations. We analysed this data using discourse analysis, informed by ethnographic observation, and identified a number of themes, one of which was the tensions that exist around the adoption and evolution of SQMS. In this paper, we present and discuss our approach to discourse analysis and some results that illustrate the tensions we found. We hope, thereby, to demonstrate how software engineers may use a technique from the social sciences to better understand their own practices.

IJHCS 2004 Volume 61 Issue 3

ARTICLE

Experience as a moderator of the media equation: the impact of flattery and praise BIBAFull-Text 237-258
  Daniel Johnson; John Gardner; Janet Wiles
This study extends previous media equation research, which showed that flattery from a computer can produce the same general effects as flattery from humans. Specifically, the study explored the potential moderating effect of experience on the impact of flattery from a computer. One hundred and fifty-eight students from the University of Queensland voluntarily participated in the study. Participants interacted with a computer and were exposed to one of three kinds of feedback: praise (sincere praise), flattery (insincere praise), or control (generic feedback). Questionnaire measures assessing participants' affective state, attitudes and opinions were taken. Participants of high experience, but not low experience, displayed a media equation pattern of results, reacting to flattery from a computer in a manner congruent with people's reactions to flattery from other humans. High-experience participants tended to believe that the computer spoke the truth, experienced more positive affect as a result of flattery, and judged the computer's performance more favourably. These findings are interpreted in light of previous research, and the implications for software design in fields such as entertainment and education are considered.
Towards a novel interface design framework: function-state paradigm BIBAFull-Text 259-297
  Y. Lin; W. J. Zhang
In designing a human-computer interface (interface for short) for a complex work domain, the first question to be answered is what information should be presented on the interface display. The simplest answer may be: it depends on the tasks to be performed by the human operator. In the past two decades, several studies working towards a satisfactory answer to this question have been reported in the literature, among which the ecological interface design framework is the most sound. Motivated by a discussion with a nuclear power plant builder (in Canada) five years ago, we conducted a study of this interface framework and obtained very interesting results. One salient finding is that the current implementation of the notion of the abstract function in the ecological interface design framework is worthy of further exploration. More fundamentally, the architecture of one of its basic methods, the five-level abstraction hierarchy used for work domain analysis, also merits further examination. Our findings are based on a critical analysis of published articles on the ecological interface design framework. We further postulate an alternative framework called function-behavior-state (FBS). We have conducted an experiment to compare these two frameworks, which positively supported our findings. The present article reports the critical analysis of the ecological interface design framework and describes the FBS framework. The experimental study has been reported separately in this journal.
Presence versus availability: the design and evaluation of a context-aware communication client BIBAFull-Text 299-317
  James Fogarty; Jennifer Lai; Jim Christensen
Although electronic communication plays an important role in the modern workplace, the interruptions created by poorly-timed attempts to communicate are disruptive. Prior work suggests that sharing an indication that a person is currently busy might help to prevent such interruptions, because people could wait for a person to become available before attempting to initiate communication. We present a context-aware communication client that uses the built-in microphones of laptop computers to sense nearby speech. Combining this speech detection sensor data with location, computer, and calendar information, our system models availability for communication, a concept that is distinct from the notion of presence found in widely-used systems. In a 4-week study of the system with 26 people, we examined the use of this additional context. To our knowledge, this is the first field study to quantitatively examine how people use automatically sensed context and availability information to make decisions about when and how to communicate with colleagues. Participants appear to have used the provided context as an indication of presence, rather than considering availability. Our results raise the interesting question of whether sharing an indication that a person is currently unavailable will actually reduce inappropriate interruptions.
Toward a more civilized design: studying the effects of computers that apologize BIBAFull-Text 319-345
  Jeng-Yi Tzeng
While it is difficult to create completely error-free interactions in software design, the issue of how to make users feel better when they encounter errors is critical to the concept of user-centered design. Nielsen argued for offering a slightly apologetic statement before an error message provided by web servers, but the notion of a computer apologizing to its users inevitably triggers a debate about the appropriateness of providing humanized messages to users. To understand how users react to computers' apologies (presented in textual or visual formats), a computer-guessing game was designed to test users' reactions. The game features three treatments (difficulty levels, feedback types, and emotional icons), each having two levels (difficult vs. easy, apologetic feedback vs. non-apologetic feedback, with emotional icons vs. without emotional icons). Two hundred and sixty-nine high school students participated in this study and were randomly assigned to one of eight groups. The results show that while the computer's actual performance still dominated the users' assessments of the program, the computer's apologies helped to create more desirable psychological experiences for the users, and emotional icons helped to improve the aesthetic quality of the program.
Flow experiences in information technology use BIBAFull-Text 347-357
  E. M. Pilke
An interview study was used to determine whether flow experiences as defined by Csikszentmihalyi occur in information technology use. Results indicate that flow experience is quite frequent while performing a variety of tasks ranging from word processing to programming to visual design and information search on a desktop computer. Flow experiences also seem to occur while using a range of software matching the variety of tasks mentioned above. Participants named factors they thought were causing flow experiences while using information technology; these consist almost exclusively of items that are generally accepted as good usability. This leads to the concluding hypothesis that designing interfaces that induce flow experiences amounts to designing for good usability, and vice versa. More research is needed to confirm this.
Evaluating spatial memory in two and three dimensions BIBAFull-Text 359-373
  A. Cockburn; B. McKenzie
Prior research has shown that the efficient use of graphical user interfaces is strongly dependent on human capabilities for spatial cognition. One facet of spatial cognition is the ability to quickly and accurately recall and access the location of objects in a spatial arrangement. This paper describes a series of experiments aimed at determining whether three-dimensional user interfaces better support spatial memory than their more traditional two-dimensional counterparts. The experiments are conducted using both computer-supported systems and physical models that vary the depth and perspective cues in spatial arrangements of interface items. The physical models were used to escape some of the dimensional ambiguities that are hard to control using computer displays. Results strongly suggest that adding a third dimension to computer displays does not aid users' spatial memory. Although there were no significant differences between the effectiveness of spatial memory when using two- and three-dimensional computer interfaces, participants' memory for the location of cards representing web-pages was reliably better when using a two-dimensional physical model than when using an equivalent three-dimensional physical model.
Navigation and orientation in 3D user interfaces: the impact of navigation aids and landmarks BIBAFull-Text 375-395
  Avi Parush; Dafna Berman
This study examined how users acquire spatial cognition in 3D user interfaces depicting an on-screen virtual environment. The study was divided into two main phases: learning and a test of learning transfer. In the learning phase, participants directly navigated (searched for objects) in the on-screen virtual environment using one of two navigation aids: a visual map or a route list. In addition, there were two virtual environments, one with landmarks and the other without. Learning transfer was examined by testing both navigation and orientation tasks (relative-direction pointing) in the environment without the use of the navigation aids. Findings show that while initial navigation with a map appeared to be harder, with longer navigation times and more navigation steps than with a route list, this difference became insignificant by the end of the learning phase. Moreover, performance degradation upon removal of the navigation aids was smaller for those who navigated with a map than for those who used a route list. A similar pattern was found for the impact of landmarks. Initial navigation with landmarks appeared to be harder than without landmarks, but this difference became insignificant by the end of the learning phase. Moreover, performance degradation upon removal of the navigation aid was smaller for those who navigated with landmarks than for those without. Finally, the combined impact of the navigation aid used in learning and the presence of landmarks was primarily evident in the orientation task. Relative-direction pointing was better for those who learnt with a map without landmarks, or with a route list with landmarks. The findings are discussed in terms of the impact of navigation aids and landmarks on the acquisition of route and survey knowledge in spatial cognition. In addition, some gender differences are discussed in terms of different strategies in spatial cognition acquisition.

IJHCS 2004 Volume 61 Issue 4

ARTICLE

Comparing a rule-based approach with a pattern-based approach at different levels of complexity of conceptual data modelling tasks BIBAFull-Text 397-419
  Dinesh Batra; Nicole A. Wishart
It is well known that conceptual database design is an unusually difficult and error-prone task for novice designers. To address the problem, at least two training approaches -- rule-based and pattern-based -- have been suggested. A rule-based approach prescribes a sequence for modelling the conceptual constructs, and the action to be taken at each stage. A pattern-based approach presents data modelling structures that occur frequently in practice, and prescribes guidelines on how to recognize these structures. This paper describes the conceptual framework, experimental design, and results of a laboratory study that employed novice designers to compare the effectiveness of the two training approaches (between-subjects) at three levels of task complexity (within-subjects). Results indicate an interaction effect between treatment and task complexity. The rule-based approach was significantly better in the low-complexity and high-complexity cases; there was no statistical difference in the medium-complexity case. Designer performance fell significantly as complexity increased. Overall, although the rule-based approach was not significantly superior to the pattern-based approach, the study still recommends it for novice designers given its significantly better performance at two of three complexity levels.
Situation awareness in emergency medical dispatch BIBAFull-Text 421-452
  Ann Blandford; B. L. William Wong
Situation awareness, and how systems can be designed to support it appropriately, have been a focus of study in dynamic, safety critical contexts such as aviation. The work reported here extends the study of situation awareness into the domain of emergency medical dispatch (EMD). The study was conducted in one of the largest ambulance services in the world. In this study, we encountered development and exploitation of situation awareness, particularly among the more senior EMD operators called allocators. In this paper we describe the notion of a 'mental picture' as an outcome of situation awareness, how an awareness of the situation is developed and maintained, the cues allocators attend to, and the difficulties they face in doing so. One of the key characteristics of ambulance control is that relatively routine behaviour is periodically interspersed with incidents that demand much higher levels of attention, but that the routine work must still be completed; operators exhibit contrasting levels of situation awareness for the different kinds of incidents. Our findings on situation awareness are related to those of others, particularly Endsley and Wickens. The observations and interviews enable us to propose high-level requirements for systems to support appropriate situation awareness, to enable EMD staff to complete their work effectively.
Embodiment and copresence in collaborative interfaces BIBAFull-Text 453-480
  Michael Gerhard; David Moore; Dave Hobbs
As collaborative computer systems evolve, the use of spatial, three-dimensional interfaces for multiplayer games, groupware systems, and multi-user chat systems, for example, is increasing rapidly. This paper provides a theoretical underpinning for understanding the relevance of user embodiments and copresence within such three-dimensional collaborative computer interfaces. Firstly, the issue of embodiment is traced back through its origins in the philosophy and psychology literature, and theories potentially helpful in understanding key issues concerning user embodiments in collaborative virtual environments are identified. A hybrid avatar/agent model to achieve permanent user embodiments in such environments is discussed. Since the copresence of other users within such environments has been shown to be an important factor in the experience of presence, a prototype embodied conversational agent was designed to simulate copresence. A series of controlled experiments involving the prototype agent is discussed, highlighting the effects of simulated copresence on users' experience of presence. Results suggest that, despite its shortcomings, the prototype agent does seem to have increased participants' experience of presence. Evidence was found that even the limited copresence provided by the current prototype agent is sufficient to help users feel presence in the environment. The results seem to confirm that copresence simulated by agents can complement avatar technology, and therefore that a hybrid avatar/agent model can potentially achieve permanent virtual presence of all participants.
Designing product listing pages on e-commerce websites: an examination of presentation mode and information format BIBAFull-Text 481-503
  Weiyin Hong; James Y. L. Thong; Kar Yan Tam
Web interface design is of enduring interest to researchers as online shopping on the Internet continues to grow. Prior research has shown that the design of product listing pages, where information on multiple products is displayed together to allow further exploration of any of them, has a great influence on the traffic and sales volume of a website. In this paper, we focus on two design features, presentation mode and information format, and examine their impact on users' interaction with websites. An experiment was conducted to compare text-only versus image-text presentation modes, based on the dual coding theory (DCT), and list versus array information formats, based on the proximity compatibility principle (PCP). In general, the findings support the application of the DCT and the PCP to the e-commerce domain. Specifically, the image-text presentation mode and the list information format were found to outperform the text-only presentation mode and the array information format, respectively, in terms of shorter information search time, better recall of brand names and product images, and more positive attitudes towards the screen design and using the website. Given the same information content, the spatial arrangement of products and the hierarchical placement of images can make a difference to users' online shopping performance and attitudes.
Guided programming and automated error analysis in an intelligent Prolog tutor BIBAFull-Text 505-534
  Jun Hong
We present a Prolog programming technique-based approach to guided programming and automated error analysis in Prolog tutoring. The concept of a Prolog programming technique is used to characterize and classify programs. Each class of programs uses the same programming technique and shares a common pattern of code. A set of programming technique grammar rules is defined for each class of programs. These rules are used for programming technique recognition, program construction, and program parsing. A programming technique frame represents the programming technique-related knowledge for each class of programs. A program frame represents the coding-related knowledge for the reference program of each of the most specialized programming techniques. The representation of the programming technique grammar rules, programming technique-related knowledge, and coding-related knowledge provides the basis for guided programming and automated error analysis in tutoring. Our approach to error analysis, however, does not rely on representing buggy versions of the program. Instead, automated error analysis is performed by comparing the parses of the student program and the reference program. Our approach has been implemented in a Prolog tutoring system called the Prolog Tutor, which has been tested on a collection of 125 programs for list reversal. The Prolog Tutor performs well on these tests in terms of programming technique recognition, error detection, and error correction.
Inspectable Bayesian student modelling servers in multi-agent tutoring systems BIBAFull-Text 535-563
  Juan-Diego Zapata-Rivera; Jim Greer
User modelling shells and learner modelling servers have been proposed in order to provide reusable user/student model information over different domains, common inference mechanisms, and mechanisms to handle consistency of beliefs from different sources. Open and inspectable student models have been investigated by several authors as a means to promote student reflection, knowledge awareness, collaborative assessment, self-assessment, interactive diagnosis, to arrange groups of students, and to support the use of students' models by the teacher. This paper presents SModel, a Bayesian student modelling server used in distributed multi-agent environments. SModel server includes a student model database and a Bayesian student modelling component. SModel provides several services to a group of agents in a CORBA platform. Users can use ViSMod, a Bayesian student modelling visualization tool, and SMV, a student modelling database viewer, to visualize and inspect distributed Bayesian student models maintained by SModel server. SModel has been tested in a multi-agent tutoring system for teaching basic Java programming. In addition, SModel server has been used to maintain and share student models in a study focussed on exploring the existence of student reflection and analysing student model accuracy using inspectable Bayesian student models.

IJHCS 2004 Volume 61 Issue 5

ARTICLE

Organizational building blocks for design of distributed intelligent systems BIBAFull-Text 567-599
  Chris J. van Aart; Bob Wielinga; Guus Schreiber
In this work we present a framework for multi-agent system design which is based both on human organizational notions and on principles for distributed intelligent systems design. The framework elaborates on the idea that notions from the field of organizational design can be used as the basis for the design of distributed intelligent systems. Concepts such as task, control, job, operation, management, coordination and organization are framed into an agent organizational framework. A collection of organizational design activities is presented that assist in a task-oriented decomposition of the overall task of a system into jobs and the reintegration of jobs using job allocation, coordination mechanisms and organizational structuring. A number of coordination mechanisms have been defined in the organizational design literature. For the scope of this work we concentrate on: Direct Supervision, where one individual takes all decisions about the work of others; Mutual Adjustment, which achieves coordination by a process of informal communication between agents; and Standardization of Work, Output and Skills. Three organizational structures that coordinate agents and their work are discussed: Machine Bureaucracy, Professional Bureaucracy and Adhocracy. The Machine Bureaucracy is task-driven, seeing the organization as a single-purpose structure, which only uses one strategy to execute the overall task. The Professional Bureaucracy is competence-driven: a part of the organization will first examine a case, match it to predetermined situations and then allocate specialized agents to it. In the Adhocracy the organization is capable of reorganizing its own structure, including dynamically changing the work flow, shifting responsibilities and adapting to changing environments. A case study on distributed supply chain management shows the process from task decomposition via organizational design to three multi-agent architectures based on Mintzberg's organizational structures.
Classification of user image descriptions BIBAFull-Text 601-626
  L. Hollink; A. Th. Schreiber; B. J. Wielinga; M. Worring
In order to resolve the mismatch between user needs and current image retrieval techniques, we conducted a study to get more information about what users look for in images. First, we developed a framework for the classification of image descriptions by users, based on various classification methods from the literature. The classification framework distinguishes three related viewpoints on images, namely nonvisual metadata, perceptual descriptions and conceptual descriptions. For every viewpoint a set of descriptive classes and relations is specified. We used the framework in an empirical study, in which image descriptions were formulated by 30 participants. The resulting descriptions were split into fragments and categorized in the framework. The results suggest that users prefer general descriptions as opposed to specific or abstract descriptions. Frequently used categories were objects, events and relations between objects in the image.
Addressing a standards creation process: a focus on ebXML BIBAKFull-Text 627-648
  Beomjin Choi; T. S. Raghu; Ajay Vinze
Current trends in e-business are creating opportunities for automation of business processes across business boundaries. However, a lack of standards has made it difficult for industry players to exploit resources and coordinate activities in the context of e-business. ebXML, an emerging e-business standard framework intended to unite competing factions under a banner of international trade, has been developed within an industry consortium using an open, collaborative process with no barriers to entry, an approach very different from the traditional approach to creating standards. Drawing on a socio-technological perspective, this paper attempts to gain a deeper understanding of this phenomenon using a case study methodology. The paper uses data drawn mostly from email discussions and minutes of teleconferences and face-to-face meetings. Our exploration of the ebXML standardization process generates specific propositions. In summary, our analysis found that the 'openness' of the standardization process helps to create a more comprehensive standard than proprietary processes do, effectively leading to convergence of technologies, and that the unfolding dynamics of the standardization process vary depending on the characteristics of the standards to be developed. We also discuss user participation as an important factor that influences the dynamics of such an open, collaborative standardization process. Surprisingly, user participation seems to be more effective in creating technical-infrastructure-oriented standards than business-process-oriented standards.
Keywords: e-Business standards; Standardization process; Standards body; Industry consortium
The quality of human-automation cooperation in human-system interface for nuclear power plants BIBAFull-Text 649-677
  Ann Britt Miberg Skjerve; Gyrd Skraaning, Jr.
The use of automation within high-risk industrial production systems has increased markedly during the last 50 years. Automatic systems have gained in autonomy and authority, whereby the activity of the systems has become less dependent on operator interventions. This has brought forward the suggestion that human-automation transactions should be conceptualized within the framework of cooperation, and consequently that automatic systems should be designed to be cooperative. The question is then how design can promote human-automation cooperation, and how the quality of cooperation can be assessed. The OECD Halden Reactor Project performed two closely related experiments, which allowed assessments of whether the quality of human-automation cooperation would be promoted by a human-machine interface designed to increase the observability of the automatic system's activity using graphical and verbal feedback, as compared to a conventional human-machine interface. The experiments were performed in a full-scale nuclear power plant simulator, using licensed operators as subjects, and applied a 2x2 within-subject design. The quality of human-automation cooperation was assessed from subjective operator judgements. The experiments demonstrated a clear improvement in human-automation cooperation quality when the observability of the automatic system's activity was increased. The relationship between human-automation cooperation quality and the effectiveness of the joint human-machine system's performance was furthermore explored, but no clear results were found. As the trend in automation design seems to imply an increase in system autonomy and authority, the issue of human-automation cooperation can be expected to gain further in importance in future settings.
Comparison of head-up display (HUD) vs. head-down display (HDD): driving performance of commercial vehicle operators in Taiwan BIBAFull-Text 679-697
  Yung-Ching Liu; Ming-Hui Wen
This study investigates the effects of two display modes, head-up display (HUD) vs. head-down display (HDD), on the driving performance and psychological workload ratings of drivers operating commercial vehicles in Taiwan. Twelve commercial lorry drivers participated in a 2 (high/low driving load road) x 2 (head-up/head-down display) x 2 (display sequence) mixed-factor driving simulation experiment. Participants were divided into two groups according to the level of driving load; within each driving load group, the participants were further divided into two subgroups based on the two display sequences used. For each driving load condition, there were two 20-min driving simulation sessions, with the display presented head-up first and then head-down, or vice versa. The subjects were asked to perform four tasks: "commercial goods delivery", "navigation", "speed detection and maintenance" and "response to an urgent event". Results indicated that for the first task, commercial goods delivery, the two display types showed no significant performance difference in terms of average accuracy rate. However, response time to an urgent event was faster with the HUD (with a low driving load, head-up vs. head-down: 1.0073 vs. 1.8684 s; with a high driving load, head-up vs. head-down: 1.3235 vs. 2.3274 s) and speed control was more consistent (lower speed variation) than with the HDD. In addition, using the HUD caused less mental stress for the drivers than the HDD and was easier for first-time users to become familiar with; with a high driving load, however, the difference between the two displays was not significant.
Efficient cooperative searching on the Web: system design and evaluation BIBAFull-Text 699-724
  Efstratios T. Diamadis; George C. Polyzos
The World Wide Web provides a convenient and inexpensive infrastructure for Computer-Supported Cooperative Work. Groupware systems allow distant users to work together in a shared virtual workspace. Awareness of group members' actions is a basic feature and key functionality for groupware. Many times a group of people work together researching information on the Web about a topic. This type of collaboration can be decomposed into two tasks. First, team members have to access, process and filter by importance the Web pages gathered. Second, they have to synthesize and present them either as a whole in the form of a report, or in an organized way in the form of Web directories. A key issue that strongly affects this particular type of cooperative work is the revisiting of pages and, consequently, the time spent on accessing and processing the same information sources, which may be relevant or not to the topic. We propose group member URL traversal awareness (GMUTA) as significant functionality for Web-based collaboration tools in order to avoid conflicting or repetitive actions by group members. We then present a prototype system we developed, the Web Collaborative Searching Assistant (WCSA), which exploits GMUTA and helps distributed group members to work more efficiently. Experimental evaluation of the WCSA indicated that the functionality provided overcomes the above-mentioned problem, improves searching efficiency and adds substantial value to the collaboration.
Socio-economic background and computer use: the role of computer anxiety and computer experience in their relationship BIBAKFull-Text 725-746
  Nikos Bozionelos
The proposition that socio-economic background is related to the amount of computer use, both directly and indirectly, via its relationship with computer experience and computer anxiety, was tested with questionnaire data from a sample of 267 university students. The results supported the proposition, as they indicated a causal path model that contained a positive indirect relationship of socio-economic background with the amount of current computer use, via computer experience and computer anxiety. Socio-economic background had a direct positive relationship with computer experience and an indirect negative relationship with computer anxiety. The pattern of relationships held over and above the variance accounted for by the set of control variables, which included, among others, computer access and sex. The findings are supportive of the digital divide and imply that information technology may in fact be increasing inequalities among social strata in their access to employment opportunities. The limitations of the study along with potential directions for future research are discussed.
Keywords: Computer anxiety; Digital divide; Socio-economic background; Computer experience; Computer use; Computer access; Causal path model

IJHCS 2004 Volume 61 Issue 6

EDITORIAL

Fitts' law 50 years later: applications and contributions from human-computer interaction BIBFull-Text 747-750
  Yves Guiard; Michel Beaudouin-Lafon

ARTICLE

Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts' law research in HCI BIBAFull-Text 751-789
  R. William Soukoreff; I. Scott MacKenzie
This paper makes seven recommendations to HCI researchers wishing to construct Fitts' law models for either movement time prediction, or for the comparison of conditions in an experiment. These seven recommendations support (and in some cases supplement) the methods described in the recent ISO 9241-9 standard on the evaluation of pointing devices. In addition to improving the robustness of Fitts' law models, these recommendations (if widely employed) will improve the comparability and consistency of forthcoming publications. Arguments to support these recommendations are presented, as are concise reviews of 24 published Fitts' law models of the mouse, and 9 studies that used the new ISO standard.
Characterizing computer input with Fitts' law parameters -- the information and non-information aspects of pointing BIBAKFull-Text 791-809
  Shumin Zhai
Throughput (TP), also known as index of performance or bandwidth in Fitts' law tasks, has been a fundamental metric in quantifying input system performance. The operational definition of TP varies in the literature. In part thanks to the common interpretations of International Standard ISO 9241-9, the "Ergonomic requirements for office work with visual display terminals -- Part 9: Requirements for non-keyboard input devices", measurements of throughput have increasingly converged on the average ratio of index of difficulty (ID) to trial completion time (MT), i.e. TP=ID/MT. In lieu of the complete Fitts' law regression results, which can only be represented by both slope (b) and intercept (a) (as in MT=a+b ID), TP has been used as the sole performance characteristic of input devices, which is problematic. We show that TP defined as ID/MT is an ill-defined concept that may change its value with the set of ID values used for the same input device and cannot be generalized beyond specific experimental target distances and sizes. The greater the absolute value of a, the more variable TP (=ID/MT) is; ID/MT only equals the constant 1/b when a=0. We suggest that future studies use the complete Fitts' law regression, characterized by the (a, b) parameters, to characterize an input system: a reflects the non-informational aspect and b the informational aspect of input performance. For convenience, 1/b can be termed throughput, which, unlike ID/MT, is conceptually a true constant.
Keywords: Computer input; Fitts' law; Motor control performance; Throughput; Bandwidth; Index of performance (IP); ISO 9241-9
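The instability of TP=ID/MT that the abstract above describes can be illustrated numerically. In this sketch the regression parameters a and b are hypothetical values chosen for illustration, not figures from the paper:

```python
# With a nonzero intercept a, TP = ID / MT changes with the ID values
# tested, whereas 1/b does not. Parameters a and b are hypothetical.

a = 0.2   # intercept (s), non-informational component
b = 0.15  # slope (s/bit), informational component

def movement_time(ID):
    """Fitts' law regression: MT = a + b * ID."""
    return a + b * ID

for ID in (2.0, 4.0, 8.0):
    MT = movement_time(ID)
    tp = ID / MT  # "throughput" as commonly computed under ISO 9241-9
    print(f"ID={ID} bits  MT={MT:.2f} s  TP=ID/MT={tp:.2f} bits/s")

print(f"1/b = {1 / b:.2f} bits/s (constant)")
```

Running this shows TP=ID/MT drifting upward (4.00, 5.00, 5.71 bits/s) as ID grows, while 1/b stays fixed at 6.67 bits/s, which is the paper's core point.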
Behind Fitts' law: kinematic patterns in goal-directed movements BIBAKFull-Text 811-821
  R. J. Bootsma; L. Fernandez; D. Mottet
Half a century ago, Paul Fitts first discovered that the time necessary to complete a pointing movement (MT) linearly increases with the amount of information (ID) necessary to specify the target width (W) relative to the distance (D). The so-called Fitts' law states that MT=a+b ID, with ID being a logarithmic function of the D/W ratio. With the rising importance of pointing in human-computer interaction, Fitts' law is nowadays an important tool for the quantitative evaluation of user interface design. We show that changes in ID give rise to systematic changes in the kinematic patterns that determine MT, and provide evidence that the observed patterns result from the interplay between basic oscillatory motion and visual control processes. We also emphasize the generality and abstract nature of Fitts' robust model of human psychomotor behavior, and suggest that some adaptations in the design of the (computer-mediated) coupling of perception and production of movement might improve the efficiency of the interaction.
Keywords: Fitts' law; Kinematics; Model
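For readers unfamiliar with the quantities named in these abstracts, a minimal sketch of the index of difficulty under the Shannon formulation (one common form of Fitts' law; the a and b values below are hypothetical illustrations):

```python
import math

def index_of_difficulty(D, W):
    """Shannon formulation: ID = log2(D/W + 1), in bits."""
    return math.log2(D / W + 1)

# Halving target width W (or doubling distance D) raises ID, and hence
# predicted MT, by roughly one bit.
a, b = 0.1, 0.2  # hypothetical regression constants (s, s/bit)
for D, W in ((256, 64), (256, 32), (512, 32)):
    ID = index_of_difficulty(D, W)
    print(f"D={D} W={W}: ID={ID:.2f} bits, predicted MT={a + b * ID:.2f} s")
```

The loop prints IDs of about 2.32, 3.17 and 4.09 bits, showing the logarithmic dependence on the D/W ratio described above.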
Speed-accuracy tradeoff in Fitts' law tasks -- on the equivalency of actual and nominal pointing precision BIBAKFull-Text 823-856
  Shumin Zhai; Jing Kong; Xiangshi Ren
Pointing tasks in human-computer interaction obey certain speed-accuracy tradeoff rules. In general, the more accurate the task to be accomplished, the longer it takes and vice versa. Fitts' law models the speed-accuracy tradeoff effect in pointing as imposed by the task parameters, through Fitts' index of difficulty (Id) based on the ratio of the nominal movement distance and the size of the target. Operating with different speed or accuracy biases, performers may utilize more or less area than the target specifies, introducing another subjective layer of speed-accuracy tradeoff relative to the task specification. A conventional approach to overcome the impact of the subjective layer of speed-accuracy tradeoff is to use the a posteriori "effective" pointing precision We in lieu of the nominal target width W. Such an approach has lacked a theoretical or empirical foundation. This study investigates the nature and the relationship of the two layers of speed-accuracy tradeoff by systematically controlling both Id and the index of target utilization Iu in a set of four experiments. Their results show that the impacts of the two layers of speed-accuracy tradeoff are not fundamentally equivalent. The use of We could indeed compensate for the difference in target utilization, but not completely. More logical Fitts' law parameter estimates can be obtained by the We adjustment, although its use also lowers the correlation between pointing time and the index of difficulty. The study also shows the complex interaction effect between Id and Iu, suggesting that a simple and complete model accommodating both layers of speed-accuracy tradeoff may not exist.
Keywords: Pointing; Input; Speed-accuracy tradeoff; Fitts' law; Modeling; Human performance
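The "effective width" adjustment examined above is conventionally computed from the spread of selection endpoints (We = 4.133 x SD, the standard adjustment associated with ISO 9241-9). A sketch of that computation; the endpoint data below are hypothetical:

```python
import math
import statistics

def effective_width(endpoints):
    """We = 4.133 * sample SD of selection endpoints, the standard
    adjustment derived from a 4% error rate under a normal
    endpoint distribution."""
    return 4.133 * statistics.stdev(endpoints)

def effective_id(D, We):
    """Effective index of difficulty, using We in place of nominal W."""
    return math.log2(D / We + 1)

# Hypothetical 1-D selection endpoints (pixels) scattered around a target
endpoints = [248, 250, 254, 255, 256, 257, 258, 259, 260, 262]
We = effective_width(endpoints)
print(f"We = {We:.1f} px, effective ID over D = 256 px: "
      f"{effective_id(256, We):.2f} bits")
```

As the study's results caution, substituting We for W compensates for differences in target utilization only partially, so this adjustment should be reported alongside, not instead of, the nominal-width analysis.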
"Beating" Fitts' law: virtual enhancements for pointing facilitation BIBAFull-Text 857-874
  Ravin Balakrishnan
We survey recent research into new techniques for artificially facilitating pointing at targets in graphical user interfaces. While pointing in the physical world is governed by Fitts' law and constrained by physical laws, pointing in the virtual world does not necessarily have to abide by the same constraints, opening the possibility for "beating" Fitts' law with the aid of the computer by artificially reducing the target distance, increasing the target width, or both. The survey suggests that while the techniques developed to date are promising, particularly when applied to the selection of single isolated targets, many of them do not scale well to the common situation in graphical user interfaces where multiple targets are located in close proximity.
Target acquisition in multiscale electronic worlds BIBAKFull-Text 875-905
  Yves Guiard; Michel Beaudouin-Lafon
Since the advent of graphical user interfaces, electronic information has grown exponentially, whereas the size of screen displays has stayed almost the same. Multiscale interfaces were designed to address this mismatch, allowing users to adjust the scale at which they interact with information objects. The technology has progressed quickly and the theory has lagged behind. Multiscale interfaces pose a stimulating theoretical challenge: reformulating the classic target-acquisition problem from the physical world into an infinitely rescalable electronic world. We address this challenge by extending Fitts' original pointing paradigm: we introduce the scale variable, thus defining a multiscale pointing paradigm. This article reports on our theoretical and empirical results. We show that target-acquisition performance in a zooming interface must obey Fitts' law and, more specifically, that target-acquisition time must be proportional to the index of difficulty. Moreover, we complement Fitts' law by accounting for the effect of view size on pointing performance, showing that performance bandwidth is proportional to view size, up to a ceiling effect. Our first empirical study shows that Fitts' law does apply to a zoomable interface for indices of difficulty up to and beyond 30 bits, whereas classical Fitts' law studies have been confined in the 2-10 bit range. Our second study demonstrates a strong interaction between view size and task difficulty for multiscale pointing, and shows a surprisingly low ceiling. We conclude with implications of these findings for the design of multiscale user interfaces.
Keywords: Target acquisition; Movement; Fitts' law; Multiscale interfaces; Input and interaction technologies
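To convey the scale gap this last abstract describes, a quick arithmetic sketch assuming the Shannon formulation ID = log2(D/W + 1) (the paper's exact formulation may differ):

```python
# How extreme is an index of difficulty of 30 bits compared with the
# classical 2-10 bit range of Fitts' law studies?
ratio_30 = 2**30 - 1   # D/W ratio implied by ID = 30 bits
ratio_10 = 2**10 - 1   # D/W ratio implied by ID = 10 bits
print(f"ID = 30 bits implies D/W = {ratio_30:,}")  # about a billion
print(f"ID = 10 bits implies D/W = {ratio_10:,}")
```

A 30-bit target is roughly a billion times smaller than the distance to it, which is only reachable through the zooming that multiscale interfaces provide.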