
Human-Computer Interaction 19

Editors: Thomas P. Moran
Dates: 2004
Volume: 19
Publisher: Lawrence Erlbaum Associates
Standard No: ISSN 0737-0024
Papers: 20
Links: Table of Contents
  1. HCI 2004 Volume 19 Issue 1/2
  2. HCI 2004 Volume 19 Issue 3
  3. HCI 2004 Volume 19 Issue 4

HCI 2004 Volume 19 Issue 1/2

Introduction to This Special Issue on Human-Robot Interaction BIBFull-Text 1-8
  Sara Kiesler; Pamela Hinds
Toward a Framework for Human-Robot Interaction BIBAFull-Text 9-24
  Sebastian Thrun
The goal of this article is to introduce the reader to the rich and vibrant field of robotics. Robotics is a field in change; the meaning of the term robot today differs substantially from its meaning just 1 decade ago. The primary purpose of this article is to provide a comprehensive description of past- and present-day robotics. It identifies the major epochs of robotic technology and systems, from industrial to service robotics, and characterizes the different styles of human-robot interaction paradigmatic for each epoch. To help set the agenda for research on human-robot interaction, the article articulates some of the most pressing open questions pertaining to modern-day human-robot interaction.
Assistive Robotics and an Ecology of Elders Living Independently in Their Homes BIBAFull-Text 25-59
  Jodi Forlizzi; Carl DiSalvo; Francine Gemperle
For elders who remain independent in their homes, the home becomes more than just a place to eat and sleep. The home becomes a place where people care for each other, and it gradually subsumes all activities. This article reports on an ethnographic study of aging adults who live independently in their homes. Seventeen elders aged 60 through 90 were interviewed and observed in their homes in 2 Midwestern cities. The goal is to understand how robotic products might assist these people, helping them to stay independent and active longer. The experience of aging is described as an ecology of aging made up of people, products, and activities taking place in a local environment of the home and the surrounding community. In this environment, product successes and failures often have a dramatic impact on the ecology, throwing off a delicate balance. When a breakdown occurs, family members and other caregivers have to intervene, threatening elders' independence and identity. The article then considers how the elder ecology can be supported by new robotic products conceived as part of this interdependent system. It is recommended that the design of these products fit the ecology as part of the system, support elders' values, and adapt to all of the members of the ecology who will interact with them.
Interactive Robots as Social Partners and Peer Tutors for Children: A Field Trial BIBAFull-Text 61-84
  Takayuki Kanda; Takayuki Hirano; Daniel Eaton; Hiroshi Ishiguro
Robots increasingly have the potential to interact with people in daily life. It is believed that, based on this ability, they will play an essential role in human society in the not-so-distant future. This article examined the proposition that robots could form relationships with children and that children might learn from robots as they learn from other children. The idea is studied in an 18-day field trial held at a Japanese elementary school. Two English-speaking "Robovie" robots interacted with first- and sixth-grade pupils at the perimeter of their respective classrooms. Using wireless identification tags and sensors, these robots identified and interacted with children who came near them. The robots gestured and spoke English with the children, using a vocabulary of about 300 sentences for speaking and 50 words for recognition. The children were given a brief picture-word matching English test at the start of the trial, after 1 week, and after 2 weeks. Interactions were counted using the tags, and video and audio were recorded. In the majority of cases, a child's friends were present during the interactions. Interaction with the robot was frequent in the 1st week, and then it fell off sharply by the 2nd week. Nonetheless, some children continued to interact with the robot. Interaction time during the 2nd week predicted improvements in English skill at the posttest, controlling for pretest scores. Further analyses indicate that the robots may have been more successful in establishing common ground and influence when the children already had some initial proficiency or interest in English. These results suggest that interactive robots should be designed to have something in common with their users, providing a social as well as technical challenge.
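The key analysis in this abstract, a predictor's effect on posttest scores "controlling for pretest scores", is a standard covariate-adjusted regression. A minimal sketch with invented data (the numbers and variable names below are hypothetical, not the study's), using the Frisch-Waugh partialling-out identity:

```python
# Hypothetical illustration (invented data) of regressing posttest score
# on 2nd-week interaction time while controlling for pretest score.
import random
from statistics import mean

random.seed(1)
n = 200
pretest = [random.gauss(50, 10) for _ in range(n)]           # pretest English scores
interaction = [random.expovariate(1 / 5) for _ in range(n)]  # 2nd-week minutes
posttest = [p + 0.8 * t + random.gauss(0, 3)                 # true adjusted effect: 0.8
            for p, t in zip(pretest, interaction)]

def residualize(y, x):
    """Residuals of y after a simple regression on x (removes x's influence)."""
    mx, my = mean(x), mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return [b - my - slope * (a - mx) for a, b in zip(x, y)]

# Partial pretest out of both outcome and predictor, then regress the residuals.
ry = residualize(posttest, pretest)
rt = residualize(interaction, pretest)
adjusted = sum(a * b for a, b in zip(rt, ry)) / sum(a * a for a in rt)
print(f"effect of interaction time, controlling for pretest: {adjusted:.2f}")
```

The residual-on-residual slope equals the multiple-regression coefficient for interaction time, which is what "controlling for pretest" amounts to here.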
Moonlight in Miami: Field Study of Human-Robot Interaction in the Context of an Urban Search and Rescue Disaster Response Training Exercise BIBAFull-Text 85-116
  Jennifer L. Burke; Robin R. Murphy; Michael D. Coovert; Dawn L. Riddle
This article explores human-robot interaction during a 16-hr, high-fidelity urban search and rescue disaster response drill with teleoperated robots. This article examines operator situation awareness and technical search team interaction using communication analysis. It analyzes situation awareness, team communication, and the interaction of these constructs using a systematic coding scheme designed for this research. The findings indicate that operators spent significantly more time gathering information about the state of the robot and the state of the environment than they did navigating the robot. Operators had difficulty integrating the robot's view into their understanding of the search and rescue site. They compensated for this lack of situation awareness by communicating with team members at the site, attempting to gather information that would provide a more complete mental model of the site. They also worked with team members to develop search strategies. The article concludes with suggestions for design and future research.
Beyond Usability Evaluation: Analysis of Human-Robot Interaction at a Major Robotics Competition BIBAFull-Text 117-149
  Holly A. Yanco; Jill L. Drury; Jean Scholtz
Human-robot interaction (HRI) is a relatively new field of study. To date, most of the effort in robotics has been spent in developing hardware and software that expands the range of robot functionality and autonomy. In contrast, little effort has been spent so far to ensure that the robotic displays and interaction controls are intuitive for humans. This study applied robotics, human-computer interaction (HCI), and computer-supported cooperative work (CSCW) expertise to gain experience with HCI/CSCW evaluation techniques in the robotics domain. As a case study for this article, we analyzed four different robot systems that competed in the 2002 American Association for Artificial Intelligence Robot Rescue Competition. These systems completed urban search and rescue tasks in a controlled environment with predetermined scoring rules that provided objective measures of success. This study analyzed pre-evaluation questionnaires; videotapes of the robots, interfaces, and operators; maps of the robots' paths through the competition arena; post-evaluation debriefings; and critical incidents (e.g., when the robots damaged the test arena). As a result, this study developed guidelines for designing interfaces for HRI.
Whose Job Is It Anyway? A Study of Human-Robot Interaction in a Collaborative Task BIBAFull-Text 151-181
  Pamela J. Hinds; Teresa L. Roberts; Hank Jones
The use of autonomous, mobile professional service robots in diverse workplaces is expected to grow substantially over the next decade. These robots often will work side by side with people, collaborating with employees on tasks. Some roboticists have argued that, in these cases, people will collaborate more naturally and easily with humanoid robots as compared with machine-like robots. It is also speculated that people will rely on and share responsibility more readily with robots that are in a position of authority. This study sought to clarify the effects of robot appearance and relative status on human-robot collaboration by investigating the extent to which people relied on and ceded responsibility to a robot coworker. In this study, a 3 x 3 experiment was conducted with human likeness (human, human-like robot, and machine-like robot) and status (subordinate, peer, and supervisor) as dimensions. As far as we know, this study is one of the first experiments examining how people respond to robotic coworkers. It uses a robust and transferable sorting and assembly task that capitalizes on the types of tasks robots are expected to do and is embedded in a realistic scenario in which the participant and confederate are interdependent. The results show that participants retained more responsibility for the successful completion of the task when working with a machine-like as compared with a humanoid robot, especially when the machine-like robot was subordinate. These findings suggest that humanoid robots may be appropriate for settings in which people have to delegate responsibility to these robots or when the task is too demanding for people to do, and when complacency is not a major concern. Machine-like robots, however, may be more appropriate when robots are expected to be unreliable, are less well-equipped for the task than people are, or in other situations in which personal responsibility should be emphasized.

HCI 2004 Volume 19 Issue 3

Cognitive Strategies for the Visual Search of Hierarchical Computer Displays BIBAFull-Text 183-223
  Anthony J. Hornof
This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined: unlabeled layouts that contain multiple groups of items but no group headings, labeled layouts in which items are grouped and each group has a useful heading, and a target-only layout that contains just one item. A number of plausible strategies were proposed for each layout. Each strategy was programmed into the EPIC cognitive architecture, producing models that simulate the human visual-perceptual, oculomotor, and cognitive processing required for the task. The models generate search time predictions. For unlabeled layouts, the mean layout search times are predicted by a purely random search strategy, and the more detailed positional search times are predicted by a noisy systematic strategy. The labeled layout search times are predicted by a hierarchical strategy in which first the group labels are systematically searched, and then the contents of the target group. The target-only layout search times are predicted by a strategy in which the eyes move directly to the sudden appearance of the target. The models demonstrate that (a) human visual search performance can be explained largely in terms of the cognitive strategy used to coordinate the relevant perceptual and motor processes, (b) a clear and useful visual hierarchy triggers a fundamentally different visual search strategy and effectively gives the user greater control over visual navigation, and (c) cognitive strategies will be an important component of a predictive visual search tool. The models provide insights pertaining to the visual-perceptual and oculomotor processes involved in visual search and contribute to the science base needed for predictive interface analysis.
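The contrast between the two strategy classes in this abstract can be made concrete with a toy simulation (a sketch of the idea only, not the EPIC models themselves): a purely random search refixates items with replacement, whereas a systematic search visits each item once, so the former needs about n fixations on average and the latter about (n + 1) / 2.

```python
# Toy contrast (not the article's EPIC models): predicted fixation counts
# for random-with-replacement versus systematic top-to-bottom search.
import random

def random_search(n_items, target):
    """Fixations until a random-with-replacement search lands on the target."""
    fixations = 0
    while True:
        fixations += 1
        if random.randrange(n_items) == target:
            return fixations

def systematic_search(n_items, target):
    """Fixations for a top-to-bottom scan that stops at the target."""
    return target + 1  # items visited up to and including the target

n, trials = 10, 20000
rand_mean = sum(random_search(n, random.randrange(n)) for _ in range(trials)) / trials
sys_mean = sum(systematic_search(n, random.randrange(n)) for _ in range(trials)) / trials
print(rand_mean, sys_mean)  # ~n fixations vs ~(n + 1) / 2 fixations
```

This is why distinguishing strategies matters for prediction: with 10 items the two strategies predict mean search effort differing by nearly a factor of two.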
Modeling Information Navigation: Implications for Information Architecture BIBAFull-Text 225-271
  Craig S. Miller; Roger W. Remington
Previous studies of menu and Web search tasks have offered differing advice on the optimal number of selections per page. In this article, we examine this discrepancy through the use of a computational model of information navigation that simulates users navigating through a Web site. By varying the quality of the link labels in our simulations, we find that the optimal structure depends on the quality of the labels, which allows us to account for the results of the previous studies. We present additional empirical results to further validate the model and corroborate our findings. Finally, we discuss our findings' implications for the information architecture of Web sites.
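The breadth/depth/label-quality tradeoff the abstract describes can be sketched with a toy cost model (an illustration of the modeling approach only, not the authors' actual simulation; the cost assumptions below are invented): the user scans labels on each page, recognizes the correct link with probability q (label quality), and a failed recognition wastes a scan of the wrong page reached before backtracking.

```python
# Toy expected-cost model (not the authors' simulation) for navigating a
# site with `breadth` links per page and `depth` levels; q is the
# probability the correct link is recognized on a given page.
def expected_scans(breadth, depth, q):
    attempts_per_level = 1 / q              # geometric retries until success
    scan_per_attempt = (breadth + 1) / 2    # self-terminating scan to the link
    wasted_per_failure = breadth            # full scan of a wrong page
    per_level = (attempts_per_level * scan_per_attempt
                 + (1 - q) / q * wasted_per_failure)
    return depth * per_level

# 64 items arranged three ways, under good (0.95) and poor (0.60) labels.
for breadth, depth in [(64, 1), (8, 2), (4, 3)]:
    good = expected_scans(breadth, depth, 0.95)
    poor = expected_scans(breadth, depth, 0.60)
    print(f"{breadth} links x {depth} levels: good {good:6.1f}  poor {poor:6.1f}")
```

Even this crude model makes navigation cost depend jointly on breadth, depth, and label quality; the authors' richer simulation shows that which structure is optimal shifts as label quality varies.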
Gestures Over Video Streams to Support Remote Collaboration on Physical Tasks BIBAFull-Text 273-309
  Susan R. Fussell; Leslie D. Setlock; Jie Yang; Jiazhi Ou; Elizabeth Mauer; Adam D. I. Kramer
This article considers tools to support remote gesture in video systems being used to complete collaborative physical tasks: tasks in which two or more individuals work together manipulating three-dimensional objects in the real world. We first discuss the process of conversational grounding during collaborative physical tasks, particularly the role of two types of gestures in the grounding process: pointing gestures, which are used to refer to task objects and locations, and representational gestures, which are used to represent the form of task objects and the nature of actions to be used with those objects. We then consider ways in which both pointing and representational gestures can be instantiated in systems for remote collaboration on physical tasks. We present the results of two studies that use a "surrogate" approach to remote gesture, in which images are intended to express the meaning of gestures through visible embodiments, rather than direct views of the hands. In Study 1, we compare performance using a cursor-based pointing device, which allows remote partners to point to objects in a video feed of the work area, with performance working side by side or with the video system alone. In Study 2, we compare performance using two variations of a pen-based drawing tool, which allows for both pointing and representational gestures, with performance using video alone. The results suggest that simple surrogate gesture tools can be used to convey gestures from remote sites, but that the tools need to be able to convey representational as well as pointing gestures to be effective. The results further suggest that an automatic erasure function, in which drawings disappear a few seconds after they are created, is more beneficial for collaboration than tools requiring manual erasure. We conclude with a discussion of the theoretical and practical implications of the results, as well as several areas for future research.
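The automatic-erasure behavior the abstract found beneficial reduces to a simple mechanism: annotation strokes drawn over the video are timestamped and dropped once they exceed a lifetime. A minimal sketch (an assumption for illustration, not the authors' implementation; the class and parameter names are hypothetical):

```python
# Sketch of automatic erasure for gesture annotations over a video feed:
# each stroke is timestamped and expires after `lifetime_s` seconds.
import time

class AnnotationOverlay:
    def __init__(self, lifetime_s=3.0):
        self.lifetime_s = lifetime_s
        self.strokes = []  # list of (timestamp, stroke point list)

    def add_stroke(self, points, now=None):
        self.strokes.append((now if now is not None else time.time(), points))

    def visible_strokes(self, now=None):
        """Strokes younger than the lifetime; older ones auto-erase."""
        now = now if now is not None else time.time()
        self.strokes = [(t, s) for t, s in self.strokes
                        if now - t < self.lifetime_s]
        return [s for _, s in self.strokes]

overlay = AnnotationOverlay(lifetime_s=3.0)
overlay.add_stroke([(10, 10), (20, 20)], now=0.0)
overlay.add_stroke([(30, 30), (40, 40)], now=2.0)
print(overlay.visible_strokes(now=4.0))  # the first stroke has expired
```

The design point is that erasure needs no user action: stale gestures vanish on their own, so the drawing surface never has to be cleaned up manually mid-task.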

HCI 2004 Volume 19 Issue 4

Introduction to This Special Section on Beauty, Goodness, and Usability BIBFull-Text 311-318
  Donald A. Norman
The Interplay of Beauty, Goodness, and Usability in Interactive Products BIBAFull-Text 319-349
  Marc Hassenzahl
Two studies considered the interplay between user-perceived usability (i.e., pragmatic attributes), hedonic attributes (e.g., stimulation, identification), goodness (i.e., satisfaction), and beauty of 4 different MP3-player skins. Insofar as beauty and goodness both reflect a subjective valuation of a product, the two were related to each other. However, the nature of goodness and beauty was found to differ. Goodness depended on both perceived usability and hedonic attributes. Especially after using the skins, perceived usability became a strong determinant of goodness. In contrast, beauty largely depended on identification, a hedonic attribute group that captures the product's ability to communicate important personal values to relevant others. Perceived usability as well as goodness was affected by experience (i.e., actual usability, usability problems), whereas hedonic attributes and beauty remained stable over time. All in all, the nature of beauty is more self-oriented than goal-oriented, whereas goodness relates to both.
A Few Notes on the Study of Beauty in HCI BIBFull-Text 351-357
  Noam Tractinsky
Beauty as a Design Prize BIBFull-Text 359-366
  David M. Frohlich
Beauty in Use BIBFull-Text 367-369
  Kees Overbeeke; Stephan Wensveen
The Product as a Fixed-Effect Fallacy BIBFull-Text 371-375
  Andrew Monk
Beautiful Objects as an Extension of the Self: A Reply BIBFull-Text 377-386
  Marc Hassenzahl
Introduction to this Special Section on Change Blindness BIBFull-Text 387-388
  Richard W. Pew
Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design BIBAFull-Text 389-422
  D. Alexander Varakin; Daniel T. Levin; Roger Fidler
Because computers often rely on visual displays as a way to convey information to a user, recent research suggesting that people have detailed awareness of only a small subset of the visual environment has important implications for human-computer interface design. Equally important to these basic limits of awareness is the fact that people often over-predict what they will see and become aware of. Together, basic failures of awareness and people's failure to intuitively understand them may account for situations where computer users fail to obtain critical information from a display even when the designer intended to make the information highly visible and easy to apprehend. To minimize the deleterious effects of failures of awareness, it is important for users and especially designers to be mindful of the circumscribed nature of visual awareness. In this article, we review basic and applied research documenting failures of visual awareness and the related metacognitive failure, and then discuss misplaced beliefs that could accentuate both in the context of the human-computer interface.
Change Blindness and Its Implications for Complex Monitoring and Control Systems Design and Operator Training BIBAFull-Text 423-451
  Paula J. Durlach
Recent research on change detection suggests that people often fail to notice changes in visual displays when they occur at the same time as various forms of visual transients, including eye blinks, screen flashes, and scene relocation. Distractions that draw the observer's attention away from the location of the change especially lead to detection failure. As process monitoring and control systems rely on humans interacting with complex visual displays, there is a possibility that important changes in visually presented information will be missed if the changes occur coincident with a visual transient or distraction. The purpose of this article is to review research on so called "change blindness" and discuss its implications for the design of visual interfaces for complex monitoring and control systems. The major implication is that systems should provide users with dedicated change-detection tools, instead of leaving change detection to the vagaries of human memorial and attentional processes. Possible training solutions for reducing vulnerability to change-detection failure are also discussed.