
Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction

Fullname: HRI'06 Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction
Editors: Michael A. Goodrich; Alan C. Schultz; David J. Bruemmer
Location: Salt Lake City, Utah, USA
Dates: 2006-Mar-02 to 2006-Mar-03
Publisher: ACM
Standard No: ISBN 1-59593-294-1
Papers: 64
Pages: 364
  1. Metrics and work study practices
  2. Programming and OS issues in HRI
  3. Situational awareness
  4. Learning, adaptation and imitation in HRI
  5. Assistive robotics
  6. User studies I
  7. User studies II
  8. Cognitive science in HRI
  9. Interface design and analysis
  10. Dialog, mixed-initiative and multimodal interfaces
  11. Short papers
The law of stretched systems in action: exploiting robots, p. 1
  David D. Woods
Robotic systems represent new capabilities that justifiably excite technologists and problem holders in many areas. But what affordances do the new capabilities represent, and how will problem holders and practitioners exploit these capabilities as they struggle to meet performance demands and resource pressures? Discussions of the impact of new robotic technology typically mistake new capabilities for affordances in use. The dominant note is that robots as autonomous agents will revolutionize human activity. This is a fundamental oversimplification (see Feltovich et al., 2001), as past research has shown that advances in autonomy (an intrinsic capability) have turned out to demand advances in support for coordinated activity (extrinsic affordances). The Law of Stretched Systems captures the co-adaptive dynamic that human leaders under pressure for higher and more efficient levels of performance will exploit new capabilities to demand more complex forms of work (Woods and Dekker, 2000; Woods and Hollnagel, 2006). This law provides a guide for using past findings on the reverberations of technology change to project how effective leaders and operators will exploit the capabilities of future robotic systems. When one applies the Law of Stretched Systems to new robotic capabilities for demanding work settings, one begins to see new stories about how problem holders work with and through robotic systems to accomplish goals. These are not stories about machine autonomy and the substitution myth. Rather, the new capabilities trigger the exploration of new story lines about future operations that concern:
  • how to coordinate activities over wider ranges,
  • how to expand our perception and action over larger spans through remote devices, and
  • how to project our intent into distant situations to achieve our goals.
Every body is somebody: The psychology and design of embodiment, p. 2
  Clifford Nass
    There is a long tradition in psychology asking the question, "how does a body affect how people think and respond?" There is a much smaller literature addressing the question, "how does having a body affect how people think about us and respond to us?" In this talk, I will discuss a series of experimental studies that are guided by the idea that an understanding of people's responses to other people can guide research on human-robot interaction. Questions to be addressed include: When should a robot say "I"? Should robots have body parts that do not operate like human body parts? When should robots use synthetic speech as compared to recorded speech? How should teams of robots interact with teams of people? How should robots respond to human error and their own errors? For each study, I will describe theory, methods, results, and application to design.

    Metrics and work study practices

    Daily HRI evaluation at a classroom environment: reports from dance interaction experiments, pp. 3-9
      Fumihide Tanaka; Javier R. Movellan; Bret Fortenberry; Kazuki Aisaka
    The design and development of social robots that interact with and assist people in daily life requires moving into unconstrained daily-life environments. This presents unexplored methodological challenges to robotics researchers. Is it possible, for example, to perform useful experiments in the uncontrolled conditions of everyday life environments? How long do these studies need to be to provide reliable results? What evaluation methods can be used?
       In this paper we present preliminary results from a study designed to evaluate an algorithm for social robots in relatively uncontrolled, daily life conditions. The study was conducted as part of the RUBI project, whose goal is to design and develop social robots by immersion in the environment in which the robots are supposed to operate. First, we found that in spite of the relatively chaotic conditions and lack of control existing in the daily activities of a child-care center, it is possible to perform experiments in a relatively short period of time and with reliable results. We found that continuous audience response methods borrowed from marketing research provided good inter-observer reliabilities, on the order of 70%, and temporal resolution (the cut-off frequency is on the order of 1 cycle per minute) at low cost (evaluation is performed continuously in real time). We also experimented with objective behavioral descriptions, like tracking children's movement across a room. These approaches complemented each other and provided a useful picture of the temporal dynamics of the child-robot interaction, allowing us to gather baseline data for evaluating future systems. Finally, we also touch on an ongoing study of behavior analysis through a 3-month long-term child-robot interaction.
    Keywords: QRIO, child development, child education, child robot interaction, children, daily HRI evaluation, engaging interaction, human robot interaction, long-term interaction, social interaction
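    The abstract above reports inter-observer reliabilities on the order of 70% for continuous audience-response ratings, without spelling out the computation. One common way to obtain such a figure, sketched below with invented data and function names, is to resample each coder's continuous trace onto a shared time grid and correlate the traces:

```python
# Illustrative sketch (not the RUBI project's actual pipeline): estimating
# inter-observer reliability for two continuous audience-response traces.
# Each trace is assumed to be a list of (timestamp_seconds, rating) samples.
import numpy as np

def reliability(trace_a, trace_b, period=1.0):
    """Resample two rating traces onto a common 1-second grid and
    return their Pearson correlation as a reliability estimate."""
    ta, ra = np.array(trace_a).T
    tb, rb = np.array(trace_b).T
    t0 = max(ta.min(), tb.min())
    t1 = min(ta.max(), tb.max())
    grid = np.arange(t0, t1, period)
    a = np.interp(grid, ta, ra)   # interpolate coder A onto the grid
    b = np.interp(grid, tb, rb)   # same for coder B
    return np.corrcoef(a, b)[0, 1]

# Example: two coders rating engagement on a 0-1 dial
coder_a = [(0, 0.2), (30, 0.8), (60, 0.6), (90, 0.3)]
coder_b = [(0, 0.3), (30, 0.7), (60, 0.7), (90, 0.2)]
print(f"inter-observer correlation: {reliability(coder_a, coder_b):.2f}")
```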
    Development of a test bed for evaluating human-robot performance for explosive ordnance disposal robots, pp. 10-17
      Jean Scholtz; Mary Theofanos; Brian Antonishek
    This paper discusses the development of a test bed to evaluate the combined performance of the human operator and an explosive ordnance disposal robot. We have other means of evaluating the capabilities of the robots, but for the robots to be truly useful it is necessary to understand how effectively and efficiently operators will be able to use them in critical situations. In this paper we discuss the tasks developed for the test bed and how we are developing metrics for assessing human-robot performance and, more specifically, the human-robot user interface.
    Keywords: evaluation, explosive ordnance disposal robots, human-robot interaction, metrics, test bed
    Searching for a quantitative proxy for rover science effectiveness, pp. 18-25
      Erin Pudenz; Geb Thomas; Justin Glasgow; Peter Coppin; David Wettergreen; Nathalie Cabrol
    During two weeks of study in September and October of 2004, a science team directed a rover exploring the arid Atacama Desert in Chile. The objective of the mission was to search for life. Over the course of the mission the team gained experience with the rover, and the rover became more reliable and autonomous. As a result, the rover/operator system became more effective. Several factors likely contributed to the improvement in science effectiveness, including increased experience, more effective search strategies, different science team composition, different science site locations, changes in rover operational capabilities, and changes in the operation interface. However, it is difficult to quantify this effectiveness because science is a largely creative and unstructured task. This study considers techniques that quantify science team performance, leading to an understanding of which features of the human-rover system are most effective and which features need further development. Continuous observation of the scientists throughout the mission led to coded transcripts enumerating each scientific statement. This study considers whether six variables correlate with scientific effectiveness. Several of these variables are metrics and ratios related to the daily rover plan, the time spent programming the rover, the number of scientific statements made, and the data returned. The results indicate that the scientists created more complex rover plans without increasing the time to create the plans. The total number of scientific statements was approximately equal (2187 versus 2415) for each week. There was a 50% reduction in bytes of returned data between the two weeks, resulting in an increase in the ratio of scientific statements per byte of returned data. Of the original six, the most successful proxies for science effectiveness were the time to program each rover task and the number of scientific statements related to data delivered by the rover. Although both these measures have face validity and were consistent with the results of this experiment, their ultimate empirical utility must be tested further.
    Keywords: human robot interaction (HRI), mobile robots, remote rover exploration, supervisory control, teleoperation interface
    Human control of multiple unmanned vehicles: effects of interface type on execution and task switching times, pp. 26-32
      Peter Squire; Greg Trafton; Raja Parasuraman
    The number and type of unmanned vehicles sought in military operations continue to grow. A critical consideration in designing these systems is identifying interface types or interaction schemes that enhance an operator's ability to supervise multiple unmanned vehicles. Past research has explored how interface types impact overall performance measures (e.g., mission execution time), but has not extensively examined other human performance factors that might influence human-robot interaction. Within a dynamic military environment, it is particularly important to assess how interfaces impact an operator's ability to quickly adapt and alter the unmanned vehicle's tasking. To assess an operator's ability to confront this changing environment, we explored the impact of interface type on task switching. Research has shown performance costs (i.e., increased response times) when individuals switch between different tasks. Results from this study suggest that this task switching effect is also seen when participants controlling multiple unmanned vehicles switch between different strategies. Results also indicate that when utilizing a flexible delegation interface, participants did not incur as large a switch cost effect as they did when using an interface that allowed only the use of fixed automated control of the unmanned vehicles.
    Keywords: automation, delegation, human-robot interaction, interruption, playbook, task switching, unmanned vehicles
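    The switch-cost measure referenced above is standard in the task-switching literature: the time penalty on trials where the operator changes strategy, relative to trials that repeat the previous strategy. A minimal sketch, with invented trial data and field names:

```python
# Hypothetical sketch of a switch-cost computation: mean execution time on
# trials where the strategy differs from the previous trial, minus mean
# time on strategy-repeat trials. Data below are invented.
def switch_cost(trials):
    """trials: list of (strategy_label, completion_time_seconds)."""
    switch, repeat = [], []
    for (prev, _), (curr, t) in zip(trials, trials[1:]):
        (switch if curr != prev else repeat).append(t)
    return sum(switch) / len(switch) - sum(repeat) / len(repeat)

trials = [("playbook", 41.0), ("playbook", 39.5),
          ("manual", 52.0),   # strategy switch -> expected slowdown
          ("manual", 44.0), ("playbook", 50.5)]
print(f"switch cost: {switch_cost(trials):.1f} s")
```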
    Common metrics for human-robot interaction, pp. 33-40
      Aaron Steinfeld; Terrence Fong; David Kaber; Michael Lewis; Jean Scholtz; Alan Schultz; Michael Goodrich
    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.
    Keywords: human-robot interaction, metrics, unmanned ground vehicles

    Programming and OS issues in HRI

    The human-robot interaction operating system, pp. 41-48
      Terrence Fong; Clayton Kunz; Laura M. Hiatt; Magda Bugajska
    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.
    Keywords: human-robot interaction, interaction infrastructure, multi-agent system, robot architecture
    Developer oriented visualisation of a robot program, pp. 49-56
      T. H. J. Collett; B. A. MacDonald
    Robot programmers are faced with the challenging problem of understanding the robot's view of its world, both when creating and when debugging robot software. As a result, tools are created as needed in different laboratories for different robots and different applications. We discuss the requirements for effective interaction under these conditions, and propose an augmented reality approach to visualising robot input, output and state information, including geometric data such as laser range scans, temporal data such as the past robot path, conditional data such as possible future robot paths, and statistical data such as localisation distributions. The visualisation techniques must scale appropriately as robot data and complexity increase. Our current progress in developing a robot visualisation toolkit is presented.
    Usability evaluation of an automated mission repair mechanism for mobile robot mission specification, pp. 57-63
      Lilia Moshkina; Yoichiro Endo; Ronald C. Arkin
    This paper describes a usability study designed to assess ease of use, user satisfaction, and performance of a mobile robot mission specification system. The software under consideration, MissionLab, allows users to specify a robot mission as well as compile it, execute it, and control the robot in real-time. In this work, a new automated mission repair mechanism that aids users in correcting faulty missions was added to the system. This mechanism was compared to an older version in order to better inform the development process, and set a direction for future improvements in usability.
    Keywords: human-robot interaction, mission specification, usability study
    Interaction debugging: an integral approach to analyze human-robot interaction, pp. 64-71
      Tijn Kooijmans; Takayuki Kanda; Christoph Bartneck; Hiroshi Ishiguro; Norihiro Hagita
    Along with the development of interactive robots, controlled experiments and field trials are regularly conducted to stage human-robot interaction. Experience in this field has shown that analyzing human-robot interaction for evaluation purposes fosters the development of improved systems and the generation of new knowledge. In this paper, we present the interaction debugging approach, which is based on the collection and analysis of data from robots and their environment. Given the multimodality of robotic technology, audio and video alone are often insufficient for detailed analysis of human-robot interaction. Therefore, in our analysis we integrate multimodal information using audio, video, sensory data, and intermediate variables. An important aspect of the interaction debugging approach is using a tool called Interaction Debugger to analyze data. By supporting user-friendly data presentation, annotation and navigation, Interaction Debugger enables fine-grained inspection of human-robot interaction. The main goal of this paper is to address how an integral approach to the analysis of human-robot interaction can be adopted. This is demonstrated by three case studies.
    Keywords: analysis tool, integral approach, interaction, multimodal data

    Situational awareness

    Changing shape: improving situation awareness for a polymorphic robot, pp. 72-79
      Jill L. Drury; Holly A. Yanco; Whitney Howell; Brian Minten; Jennifer Casper
    Polymorphic, or shape-shifting, robots can normally tackle more types of tasks than non-polymorphic robots due to their flexible morphology. Their versatility adds to the challenge of designing a human interface, however. To investigate the utility of providing awareness information about the robot's physical configuration (or "pose"), we performed a within-subjects experiment with presence or absence of pose information as the independent variable. We found that participants were more likely to tip the robot or have it ride up on obstacles when they used the display that lacked pose information, and also more likely to move the robot to the highest position to become oriented. There was no significant difference in the number of times that participants bumped into obstacles, however, indicating that having more awareness of the robot's state does not affect awareness of the robot's immediate surroundings. Participants thought the display with pose information was easier to use, helped their performance, and was more enjoyable than having no pose information. Future research directions point toward providing recommendations to robot operators about which pose to adopt given the terrain to be traversed.
    Keywords: evaluation, human-robot interaction, interaction design, polymorphic robots, shape-shifting robots, situation awareness
    Attaining situational awareness for sliding autonomy, pp. 80-87
      Brennan P. Sellner; Laura M. Hiatt; Reid Simmons; Sanjiv Singh
    We are interested in the problems of a human operator who is responsible for rapidly and accurately responding to requests for help from an autonomous robotic construction team. A difficult aspect of this problem is gaining an awareness of the requesting robot's situation quickly enough to avoid slowing the whole team down. One approach to speeding the initial acquisition of situational awareness is to maintain a buffer of data, and play it back for the human when their help is needed. We report here on an experiment to determine how the composition and length of this buffer affect the human's speed and accuracy in our multi-robot construction domain. The experiments show that, for our scenario, 5-10 seconds of one raw video feed led to the fastest operator attainment of situational awareness, while accuracy was maximized by viewing 10 seconds of three video feeds. These results are necessarily specific to our scenario, but we feel that they indicate general trends which may be of use in other situations. We discuss the interacting effects of buffer composition and length on operator speed and accuracy, and draw several conclusions from this experiment which may generalize to other scenarios.
    Keywords: case study, situational awareness, sliding autonomy, user study
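    The buffering scheme the abstract describes amounts to keeping a rolling window of recent frames per feed and replaying the last few seconds when a robot requests help. A toy sketch of that mechanism follows; the frame contents, rates, and API are assumptions, not the authors' implementation:

```python
# Toy sketch of a rolling situational-awareness buffer: keep roughly the
# last `seconds` of frames per feed, and replay a recent slice when the
# robot asks for help. Frame format and frame rate are invented.
import time
from collections import deque

class SABuffer:
    def __init__(self, seconds=10.0, fps=15):
        self.frames = deque(maxlen=int(seconds * fps))

    def push(self, frame):
        self.frames.append((time.time(), frame))

    def playback(self, seconds):
        """Return frames from the last `seconds`, oldest first."""
        cutoff = time.time() - seconds
        return [f for (t, f) in self.frames if t >= cutoff]

video_feed = SABuffer(seconds=10.0)
for i in range(30):
    video_feed.push(f"frame-{i}")
# On a help request, show the operator the last 5 s of one feed first:
recent = video_feed.playback(5.0)
```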
    A decomposition of UAV-related situation awareness, pp. 88-94
      Jill L. Drury; Laurel Riek; Nathan Rackliffe
    This paper presents a fine-grained decomposition of situation awareness (SA) as it pertains to the use of unmanned aerial vehicles (UAVs), and uses this decomposition to understand the types of SA attained by operators of the Desert Hawk UAV. Since UAVs are airborne robots, we adapt a definition previously developed for human-robot awareness after learning about the SA needs of operators through observations and interviews. We describe the applicability of UAV-related SA for people in three roles: UAV operators, air traffic controllers, and pilots of manned aircraft in the vicinity of UAVs. Using our decomposition, UAV interaction designers can specify SA needs and analysts can evaluate a UAV interface's SA support with greater precision and specificity than can be attained using other SA definitions.
    Keywords: evaluation, interaction design, situation awareness, unmanned aerial vehicles (UAVs), user interaction requirements
    Comparing the usefulness of video and map information in navigation tasks, pp. 95-101
      Curtis W. Nielsen; Michael A. Goodrich
    One of the fundamental aspects of robot teleoperation is the ability to successfully navigate a robot through an environment. We define successful navigation to mean that the robot minimizes collisions and arrives at the destination in a timely manner. Often video and map information are presented to a robot operator to aid in navigation tasks. This paper addresses the usefulness of map and video information in a navigation task by comparing a side-by-side (2D) representation and an integrated (3D) representation in both a simulated and a real world study. The results suggest that sometimes video is more helpful than a map and other times a map is more helpful than video. From a design perspective, an integrated representation seems to help navigation more than placing map and video side-by-side.
    Keywords: HRI, human robot interaction, information presentation, integrated display, user studies

    Learning, adaptation and imitation in HRI

    FOCUS: a generalized method for object discovery for robots that observe and interact with humans, pp. 102-109
      Manuela M. Veloso; Paul E. Rybski; Felix von Hundelshausen
    The essence of the signal-to-symbol problem consists of associating a symbolic description of an object (e.g., a chair) to a signal (e.g., an image) that captures the real object. Robots that interact with humans in natural environments must be able to solve this problem correctly and robustly. However, the problem of providing complete object models a priori to a robot so that it can understand its environment from any viewpoint is extremely difficult to solve. Additionally, many objects have different uses, which in turn can cause ambiguities when a robot attempts to reason about the activities of a human and their interactions with those objects. In this paper, we build upon the fact that robots that co-exist with humans should have the ability to observe humans using the different objects and to learn the corresponding object definitions. We contribute an object recognition algorithm, FOCUS, that is robust to the variations of signals, combines structure and function of an object, and generalizes to multiple similar objects. FOCUS, which stands for Finding Object Classification through Use and Structure, combines an activity recognizer capable of capturing how an object is used with a traditional visual structure processor. FOCUS learns structural properties (visual features) of objects by first knowing the object's affordance properties and observing humans interacting with that object through known activities. The strength of the method relies on the fact that we can define multiple aspects of an object model, i.e., structure and use, that are individually robust but insufficient to define the object, yet suffice to do so when combined.
    Keywords: functional object recognition, learning by demonstration
    Using context and sensory data to learn first and second person pronouns, pp. 110-117
      Kevin Gold; Brian Scassellati
    We present a method of grounded word learning that can learn the meanings of first and second person pronouns. The model selectively associates new words with agents in the environment by using already understood words to establish context. The method uses chi-square tests to find significant associations between the new words and attributes of the relevant agents. We show that this model can learn from a transcript of a parent-child interaction that "I" refers to the person who is speaking. With the additional information that questions about wants refer to the person being asked about them, the system learns that "you" refers to the person being addressed. We show that an incorrect assumption about the subject of "want" questions can lead to pronoun reversal, a linguistic error most commonly found in autistic and congenitally blind children. Finally, we present results from a physical implementation on a robot that runs in real time.
    Keywords: autism, deixis, humanoid robot, natural language, pronoun reversal, pronouns, real-time, word learning
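    The chi-square machinery mentioned above can be illustrated with a small contingency table relating occurrences of a word to an attribute of the agent in view; the counts below are invented and the feature set is far simpler than the paper's:

```python
# Illustrative chi-square test of association between hearing the word "I"
# and the "is-the-speaker" attribute of the agent in view. The 2x2 counts
# are invented; the paper's corpus and features differ.
from scipy.stats import chi2_contingency

#                 agent is speaker   agent is not speaker
table = [[40,  5],    # utterance contains "I"
         [12, 43]]    # utterance does not contain "I"

chi2, p, dof, expected = chi2_contingency(table)
if p < 0.05:
    print(f"'I' is significantly associated with the speaker (chi2={chi2:.1f})")
```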
    Teaching robots by moulding behavior and scaffolding the environment, pp. 118-125
      Joe Saunders; Chrystopher L. Nehaniv; Kerstin Dautenhahn
    Programming robots to carry out useful tasks is both a complex and non-trivial exercise. A simple and intuitive method to allow humans to train and shape robot behaviour is clearly a key goal in making this task easier. This paper describes an approach to this problem based on studies of social animals, where two teaching strategies are applied to allow a human teacher to train a robot by moulding its actions within a carefully scaffolded environment. Within these environments, sets of competences can be built from state/action memory maps of the robot's interaction within that environment. These memory maps are then polled using a k-nearest neighbour based algorithm to provide a generalised competence. We take a novel approach in building the memory models by allowing the human teacher to construct them in a hierarchical manner. This mechanism allows a human trainer to build and extend an action-selection mechanism into which new skills can be added to the robot's repertoire of existing competencies. These techniques are implemented on physical Khepera miniature robots and validated on a variety of tasks.
    Keywords: imitation, memory-based learning, scaffolding, social robotics, teaching, zone of proximal development
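    The state/action memory maps polled by k-nearest neighbours can be sketched compactly: store demonstrated (state, action) pairs and generalize by majority vote over the k closest stored states. The state encoding and API below are invented for illustration:

```python
# Minimal sketch of polling a state/action memory map with k-nearest
# neighbours. Real state vectors would be robot sensor readings; these
# toy ones stand in for two IR-like values.
import numpy as np
from collections import Counter

class MemoryMap:
    def __init__(self):
        self.states, self.actions = [], []

    def mould(self, state, action):          # teacher demonstrates
        self.states.append(np.asarray(state, float))
        self.actions.append(action)

    def poll(self, state, k=3):              # generalised competence
        dists = [np.linalg.norm(s - state) for s in self.states]
        nearest = np.argsort(dists)[:k]
        return Counter(self.actions[i] for i in nearest).most_common(1)[0][0]

m = MemoryMap()
m.mould([0.9, 0.1], "turn_left")
m.mould([0.8, 0.2], "turn_left")
m.mould([0.1, 0.9], "turn_right")
m.mould([0.2, 0.8], "turn_right")
print(m.poll([0.7, 0.3]))   # majority of the 3 nearest -> "turn_left"
```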
    Effects of adaptive robot dialogue on information exchange and social relations, pp. 126-133
      Cristen Torrey; Aaron Powers; Matthew Marge; Susan R. Fussell; Sara Kiesler
    Human-robot interaction could be improved by designing robots that engage in adaptive dialogue with users. An adaptive robot could estimate the information needs of individuals and change its dialogue to suit these needs. We test the value of adaptive robot dialogue by experimentally comparing the effects of adaptation versus no adaptation on information exchange and social relations. In Experiment 1, a robot chef adapted to novices by providing detailed explanations of cooking tools; doing so improved information exchange for novice participants but did not influence experts. Experiment 2 added incentives for speed and accuracy and replicated the results from Experiment 1 with respect to information exchange. When the robot's dialogue was adapted for expert knowledge (names of tools rather than explanations), expert participants found the robot to be more effective, more authoritative, and less patronizing. This work suggests adaptation in human-robot interaction has consequences for both task performance and social cohesion. It also suggests that people may be more sensitive to social relations with robots when under task or time pressure.
    Keywords: adaptive dialogue, collaboration, common ground, human-robot communication, human-robot interaction, perspective taking, social robots
    Evaluation of robot imitation attempts: comparison of the system's and the human's perspectives, pp. 134-141
      Aris Alissandrakis; Chrystopher L. Nehaniv; Kerstin Dautenhahn; Joe Saunders
    Imitation is a powerful learning tool when humans and robots interact in a social context. A series of experimental runs and a small pilot user study were conducted to evaluate the performance of a system designed for robot imitation. Performance assessments of similarity of imitative behaviours were carried out by machines and by humans: the system was evaluated quantitatively (from a machine-centric perspective) and qualitatively (from a human perspective) in order to study the reconciliation of these views. The experimental results presented here illustrate how the number of exceptions can be used as a performance measure by a robotic or software imitator of an object manipulation behaviour. (In this context, exceptions are events when the optimal displacement and/or rotation that minimize the dissimilarity metrics used to generate a corresponding imitative behaviour cannot be directly achieved in the particular context.) Results of the user study giving similarity judgments on imitative behaviours were used to examine how the quantitative measure of the number of exceptions (from a robot's perspective) corresponds to the qualitative evaluation of similarity (from a human's perspective) for the imitative behaviours generated by the jabberwocky system. Results suggest that there is a good alignment between this quantitative, system-centered assessment and the more qualitative, human-centered assessment of imitative performance.
    Keywords: human-robot interaction, imitation and social learning, programming by demonstration

    Assistive robotics

    Ergonomics-for-one in a robotic shopping cart for the blind, pp. 142-149
      Vladimir A. Kulyukin; Chaitanya Gharpure
    Assessment and design frameworks for human-robot teams attempt to maximize generality by covering a broad range of potential applications. In this paper, we argue that, in assistive robotics, the other side of generality is limited applicability: it is oftentimes more feasible to custom-design and evolve an application that alleviates a specific disability than to spend resources on figuring out how to customize an existing generic framework. We present a case study that shows how we used a pure bottom-up learn-through-deployment approach inspired by the principles of ergonomics-for-one to design, deploy and iteratively re-design a proof-of-concept robotic shopping cart for the blind.
    Keywords: assistive robotics, assistive technology, ergonomics-for-one, navigation and wayfinding for the blind
    Encouraging physical therapy compliance with a hands-off mobile robot, pp. 150-155
      Rachel Gockley; Maja J. Matarić
    This paper presents results toward our ongoing research program into hands-off assistive human-robot interaction [6]. Our work has focused on applications of socially assistive robotics in health care and education, where human supervision can be significantly augmented and complemented by intelligent machines. In this paper, we focus on the role of embodiment, empirically addressing the question: "In what ways can the robot's physical embodiment be used effectively to positively influence human task-related behavior?" We hypothesized that users' personalities would correlate with their preferences of robot behavior expression. To test this hypothesis, we implemented an autonomous mobile robot aimed at the role of a monitoring and encouragement system for stroke patient rehabilitation. We performed a pilot study that indicates that the presence and behavior of the robot can influence how well people comply with their physical therapy.
    Keywords: embodiment, human-robot interaction, physical therapy, psychology, social robots, stroke recovery
    Spatial routines for a simulated speech-controlled vehicle, pp. 156-163
      Stefanie Tellex; Deb Roy
    We have defined a lexicon of words in terms of spatial routines, and used that lexicon to build a speech controlled vehicle in a simulator. A spatial routine is a script composed from a set of primitive operations on occupancy grids, analogous to Ullman's visual routines. The vehicle understands the meaning of context-dependent natural language commands such as "Go across the room." When the system receives a command, it combines definitions from the lexicon according to the parse structure of the command, creating a script that selects a goal for the vehicle. Spatial routines may provide the basis for interpreting spatial language in a broad range of physically situated language understanding systems.
    Keywords: language grounding, situated language processing, spatial language, spatial routines, visual routines, wheelchair
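    As a rough illustration of the spatial-routine idea, the toy below composes primitive operations on an occupancy grid into a goal selector for a command like "go across the room". These primitives are invented stand-ins, not the paper's actual operation set:

```python
# Toy spatial routine: primitive occupancy-grid operations composed into
# a goal selector. Grid layout, primitives, and the composition rule for
# "across" are all invented for illustration.
import numpy as np

def free_space(grid):
    """Primitive: cells not occupied (grid holds 1 = obstacle)."""
    return grid == 0

def beyond(mask, row):
    """Primitive: restrict a mask to cells past a given row."""
    out = np.zeros_like(mask)
    out[row + 1:, :] = mask[row + 1:, :]
    return out

def farthest(mask, start):
    """Primitive: farthest admissible cell from `start` (Manhattan)."""
    cells = np.argwhere(mask)
    return tuple(cells[np.argmax(np.abs(cells - start).sum(axis=1))])

# "Go across the room" ~ farthest free cell beyond the robot's row.
grid = np.zeros((10, 10), int)
grid[4, 2:8] = 1                      # a wall with gaps at both ends
robot = np.array([1, 5])
goal = farthest(beyond(free_space(grid), robot[0]), robot)
print("goal cell:", goal)
```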
    On natural language dialogue with assistive robots, pp. 164-171
      Vladimir A. Kulyukin
    This paper examines the appropriateness of natural language dialogue (NLD) with assistive robots. Assistive robots are defined in terms of an existing human-robot interaction taxonomy. A decision support procedure is outlined for assistive technology researchers and practitioners to evaluate the appropriateness of NLD in assistive robots. Several conjectures are made on when NLD may be appropriate as a human-robot interaction mode.
    Keywords: assistive robotics, assistive technology, natural language dialogue

    User studies I

    How may I serve you?: a robot companion approaching a seated person in a helping context, pp. 172-179
      K. Dautenhahn; M. Walters; S. Woods; K. L. Koay; C. L. Nehaniv; A. Sisbot; R. Alami; T. Siméon
    This paper presents the combined results of two studies that investigated how a robot should best approach and place itself relative to a seated human subject. Two live Human Robot Interaction (HRI) trials were performed involving a robot fetching an object that the human had requested, using different approach directions. Results of the trials indicated that most subjects disliked a frontal approach, except for a small minority of females, and most subjects preferred to be approached from either the left or right side, with a small overall preference for a right approach by the robot. Handedness and occupation were not related to these preferences. We discuss the results of the user studies in the context of developing a path planning system for a mobile robot.
    Keywords: human-robot interaction, live interactions, personal spaces, social robot, social spaces, user trials
    Effects of head movement on perceptions of humanoid robot behavior, pp. 180-185
      Emily Wang; Constantine Lignos; Ashish Vatsal; Brian Scassellati
    This paper examines human perceptions of humanoid robot behavior, specifically how perception is affected by variations in head tracking behavior under constant gestural behavior. Subjects were invited to the lab to "play with Nico," an upper-torso humanoid robot. The follow-up survey asked subjects to rate and write about the experience. A coding scheme originally created to gauge human intentionality was applied to written responses to measure the level of intentionality that subjects perceived in the robot. Subjects were presented with one of four variations of head movement: a motionless head, a smooth tracking head, a tracking head without smoothed movements, and an avoidance behavior, while a pre-scripted wave and beckon sequence was carried out in all cases. Surprisingly, subjects rated the interaction as most enjoyable and Nico as possessing more intentionality when avoidance and unsmooth tracking were used. These data suggest that naïve users of robots may prefer caricatured and exaggerated behaviors to more natural ones. Also, correlations between ratings across modes suggest that simple features of robot behavior reliably evoke notable changes in many perception scales.
    Keywords: coding scheme, head tracking behavior, intentionality
    Interactions with a moody robot, pp. 186-193
      Rachel Gockley; Jodi Forlizzi; Reid Simmons
    This paper reports on the results of a long-term experiment in which a social robot's facial expressions were changed to reflect different moods. While the facial changes in each condition were not extremely different, they still altered how people interacted with the robot. On days when many visitors were present, average interactions with the robot were longer when the robot displayed either a "happy" or a "sad" expression instead of a neutral face, but the opposite was true for low-visitor days. The implications of these findings for human-robot social interaction are discussed.
    Keywords: affective modeling, emotions, human-robot interaction, moods, psychology, social robots

    User studies II

    Empirical results from using a comfort level device in human-robot interaction studies, pp. 194-201
      K. L. Koay; K. Dautenhahn; S. N. Woods; M. L. Walters
    This paper describes an extensive analysis of the comfort level data of 7 subjects with respect to 12 robot behaviours as part of a human-robot interaction trial. These behaviours include robot action, proximity, and motion relative to the subjects. Two researchers coded the video material, identifying visible states of discomfort displayed by subjects in relation to the robot's behaviour. Agreement between the coders varied from moderate to high, except for more ambiguous situations involving robot approach directions. The detected visible states of discomfort were correlated with the situations where the comfort level device (CLD) indicated states of discomfort. Results show that the uncomfortable states identified by both coders, and by either of the coders, corresponded with 31% and 64%, respectively, of the uncomfortable states identified by the subjects' CLD data (N=58). Conversely, there was 72% agreement between subjects' CLD data and the uncomfortable states identified by both coders (N=25). Results show that the majority of the subjects expressed discomfort when the robot blocked their path or was on a collision course towards them, especially when the robot was within 3 meters. Other observations include that the majority of subjects experienced discomfort when the robot was closer than 3m, within the social zone reserved for human-human face-to-face conversation, while they were performing a task. The advantages and disadvantages of the CLD in comparison to other techniques for assessing subjects' internal states are discussed and future work concludes the paper.
    Keywords: comfort level device, human-robot interaction, social interaction, social robot
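    The agreement percentages above are of the kind obtained by checking which device-indicated discomfort episodes overlap a coder-flagged episode. A rough sketch with invented episode times:

```python
# Rough sketch of an episode-overlap agreement figure: the share of
# device-indicated discomfort episodes that overlap an episode flagged
# by a video coder. Episode times are invented.
def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def agreement(device_episodes, coder_episodes):
    hits = sum(any(overlaps(d, c) for c in coder_episodes)
               for d in device_episodes)
    return hits / len(device_episodes)

cld   = [(12, 15), (40, 44), (70, 75), (90, 92)]   # (start_s, end_s)
coder = [(11, 14), (41, 43), (88, 93)]
print(f"{agreement(cld, coder):.0%} of CLD episodes matched a coder")  # 75%
```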
    An investigation of real world control of robotic assets under communication latency, pp. 202-209
      Jason P. Luck; Patricia L. McDermott; Laurel Allender; Deborah C. Russell
    Robots are already being used in a variety of applications, including the military battlefield. As robotic technology continues to advance, those applications will increase, as will the demands on the associated network communication links. Two experiments investigated the effects of communication latency on the control of a robot across four Levels Of Automation (LOAs): (1) full teleoperation, (2) guarded teleoperation, (3) autonomous obstacle avoidance, and (4) full autonomy. Latency parameters studied included latency duration, latency variability, and the "direction" in which the latency occurs, that is, from user to robot or from robot to user. The results indicate that the higher the LOA, the better the performance in terms of both time and number of errors made, and the more resistant performance was to the degrading effects of latency. Subjective reports confirmed these findings. Implications of constant vs. variable latency, user-to-robot vs. robot-to-user latency, and latency duration are also discussed.
    Keywords: communication, control, level of automation, delay, latency, robotics, teleoperation
    Effective team-driven multi-model motion tracking, pp. 210-217
      Yang Gu; Manuela Veloso
    Autonomous robots use sensors to perceive and track objects in the world. Tracking algorithms use object motion models to estimate the position of a moving object. Tracking efficiency depends completely on the accuracy of the motion model and of the sensory information. Interestingly, when the robots can actuate the object being tracked, the motion can become highly discontinuous and nonlinear. We have previously developed a successful tracking approach that effectively switches among object motion models as a function of the robot's actions. If the object to be tracked is actuated by a team, the set of motion models becomes considerably more complex. In this paper, we report on a tracking approach that can use a dynamic multiple motion model based on a team coordination plan. We present the multi-model probabilistic tracking algorithms in detail and give empirical results both in simulation and in real robot tests. Our physical team is composed of a robot and a human in a real Segway soccer game scenario. We show how the coordinated plan allows the robot to better track a mobile object through effective interaction with its human teammate.
    Keywords: motion modelling, multi-model, team-driven, tracking
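    The core idea of switching motion models as a function of the team's actions can be sketched as follows; the one-dimensional models and the plan-to-model mapping below are invented for illustration:

```python
# Simplified sketch of plan-conditioned multi-model prediction: the
# tracker's process model is switched according to the team's current
# action on the ball. Models and constants are invented.
import numpy as np

def free_rolling(x, dt):          # ball decelerating on carpet
    pos, vel = x
    return np.array([pos + vel * dt, vel * 0.95])

def kicked(x, dt):                # teammate's kick injects velocity
    pos, _ = x
    return np.array([pos, 3.0])

def grabbed(x, dt):               # robot holds the ball: no motion
    return x

MODELS = {"none": free_rolling, "kick": kicked, "hold": grabbed}

def predict(x, team_action, dt=0.1):
    """Propagate the tracked state with the model the plan implies."""
    return MODELS[team_action](x, dt)

x = np.array([0.0, 1.0])          # [position, velocity], 1-D for brevity
for action in ["none", "none", "kick", "none"]:
    x = predict(x, action)
print("predicted state:", x)
```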

    Cognitive science in HRI

    The advisor robot: tracing people's mental model from a robot's physical attributes, pp. 218-225
      Aaron Powers; Sara Kiesler
    Humanoid robots offer many physical design choices such as voice frequency and head dimensions. We used hierarchical statistical mediation analysis to trace differences in people's mental model of robots from these choices. In an experiment, a humanoid robot gave participants online advice about their health. We used mediation analysis to identify the causal path from the robot's voice and head dimensions to the participants' mental model, and to their willingness to follow the robot's advice. The male robot voice predicted impressions of a knowledgeable robot, whose advice participants said they would follow. Increasing the voice's fundamental frequency reduced this effect. The robot's short chin length (but not its forehead dimensions) predicted impressions of a sociable robot, which also predicted intentions to take the robot's advice. We discuss the use of this approach for designing robots for different roles, when people's mental model of the robot matters.
    Keywords: dialogue, gender, human-robot interaction, humanoids, knowledge estimation, mental model, perception, social robots
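    Mediation analysis of the kind described above is often introduced through the Baron and Kenny regression steps: regress the mediator on the treatment, then the outcome on both, and multiply the two path coefficients. The paper's hierarchical mediation analysis is more elaborate; the sketch below uses synthetic data:

```python
# Schematic mediation analysis in the Baron & Kenny style (the paper's
# hierarchical variant is more involved). Synthetic data: does perceived
# knowledgeability mediate the effect of voice pitch on advice-taking?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
pitch = rng.normal(size=200)                       # treatment X
knowledge = -0.6 * pitch + rng.normal(size=200)    # mediator M
advice = 0.8 * knowledge + rng.normal(size=200)    # outcome Y

a = sm.OLS(knowledge, sm.add_constant(pitch)).fit().params[1]   # X -> M
b = sm.OLS(advice,
           sm.add_constant(np.column_stack([knowledge, pitch]))
           ).fit().params[1]                                    # M -> Y | X
print(f"indirect effect a*b = {a * b:.2f}")   # nonzero -> mediation
```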
    The utility of affect expression in natural language interactions in joint human-robot tasks, pp. 226-233
      Matthias Scheutz; Paul Schermerhorn; James Kramer
    Recognizing and responding to human affect is important in collaborative tasks in joint human-robot teams. In this paper we present an integrated affect and cognition architecture for HRI and report results from an experiment with this architecture that shows that expressing affect and responding to human affect with affect expressions can significantly improve team performance in a joint human-robot task.
    Keywords: affect, distributed affect architecture, human robot teams, joint human-robot tasks
    Analysis of human behavior to a communication robot in an open field, pp. 234-241
      Shogo Nabe; Takayuki Kanda; Kazuo Hiraki; Hiroshi Ishiguro; Kiyoshi Kogure; Norihiro Hagita
    This paper investigates human behavior around an interactive robot at a science museum. To develop a communication robot that works in daily environments, it is important to investigate what information about people's behavior is available to a robot. Such information will enable the robot to predict people's behavior so that it can optimize its interactive behavior. We analyzed visitor behavior toward a simple interactive robot exhibited at a science museum in relation to information from sound level and range sensors. We discovered factors that influence the way people approach, maintain distance, and interact both physically and verbally with the robot. This enabled us to extract meaningful information from the sensory information and apply it to communication robots.
    Keywords: analysis of human behavior, communication robot, field trial, psychology
    Children and robots learning to play hide and seek, pp. 242-249
      J. Gregory Trafton; Alan C. Schultz; Dennis Perzanowski; Magdalena D. Bugajska; William Adams; Nicholas L. Cassimatis; Derek P. Brock
    How do children learn how to play hide and seek? At age 3-4, children do not typically have perspective taking ability, so their hiding ability should be extremely limited. We show through a case study that a 3 1/2 year old child can, in fact, play a credible game of hide and seek, even though she does not seem to have perspective taking ability. We propose that children are able to learn how to play hide and seek by learning the features and relations of objects (e.g., containment, under) and use that information to play a credible game of hide and seek. We model this hypothesis within the ACT-R cognitive architecture and put the model on a robot, which is able to mimic the child's hiding behavior. We also take the "hiding" model and use it as the basis for a "seeking" model. We suggest that using the same representations and procedures that a person uses allows better interaction between the human and robotic system.
    Keywords: cognitive modeling, hide and seek, human-robot interaction

    Interface design and analysis

    Effective user interface design for rescue robotics, pp. 250-257
      M. Waleed Kadous; Raymond Ka-Man Sheh; Claude Sammut
    Until robots are able to autonomously navigate, carry out a mission and report back to base, effective human-robot interfaces will be an integral part of any practical mobile robot system. This is especially the case for robot-assisted Urban Search and Rescue (USAR). Unfamiliar and unstructured environments, unreliable communications and many sensors combine to make the job of a human operator, and hence the interface designer, challenging.
       This paper presents the design, implementation and deployment of a human-robot interface for the teleoperated USAR research robot, CASTER. Proven HCI-based user interface design principles were adopted in order to produce an interface that was intuitive and minimised learning time while maximising effectiveness.
       The human-robot interface was deployed by Team CASualty in the 2005 RoboCup Rescue Robot League competition. This competition allows a wide variety of approaches to USAR research to be evaluated in a realistic environment. Despite the operator having less than one month of experience, Team CASualty came 3rd, beating teams that had far longer to train their operators. In particular, the ease with which the robot could be driven and the high quality of information gathered played a crucial part in Team CASualty's success. Subsequent empirical evaluations of the system with a group of twelve users, as well as members of the public, further reinforce our belief that this interface is quick to learn, easy to use and effective.
    Keywords: human robot interface, rescue robot design, user interface design
    Service robots in the domestic environment: a study of the roomba vacuum in the home, pp. 258-265
      Jodi Forlizzi; Carl DiSalvo
    Domestic service robots have long been a staple of science fiction and commercial visions of the future. Until recently, we have only been able to speculate about what the experience of using such a device might be. Current domestic service robots, introduced as consumer products, allow us to make this vision a reality.
       This paper presents ethnographic research on the actual use of these products, to provide a grounded understanding of how design can influence human-robot interaction in the home. We used an ecological approach to broadly explore the use of this technology in this context, and to determine how an autonomous, mobile robot might "fit" into such a space. We offer initial implications for the design of these products: first, the way the technology is introduced is critical; second, the use of the technology becomes social; and third, ideally, homes and domestic service robots must adapt to each other.
    Keywords: design research, domestic robots, ethnography, human-robot interaction design
    A video game-based framework for analyzing human-robot interaction: characterizing interface design in real-time interactive multimedia applications, pp. 266-273
      Justin Richer; Jill L. Drury
    There is growing interest in mining the world of video games to find inspiration for human-robot interaction (HRI) design. This paper segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive multimedia applications in general and HRI in particular. We provide examples of using the components in both the video game and the Unmanned Aerial Vehicle (UAV) domains (treating UAVs as airborne robots). Beyond characterization, the framework can be used to inspire new HRI designs and compare different designs; we provide an example comparison of two UAV ground station applications.
    Keywords: HRI, UAVs, evaluation, human-robot interaction, interaction design, unmanned aerial vehicles
    User, robot and automation evaluations in high-throughput biological screening processes, pp. 274-281
      Noa Segall; Rebecca S. Green; David B. Kaber
    This paper introduces high-throughput screening of biological samples in life sciences, as a domain for analysis of human-robot interaction (HRI) and development of usable human interface design principles. High-throughput screening (HTS) processes involve use of robotics and highly automated analytical measurement devices to transport and chemically evaluate biological compounds for potential use as drug derivatives. Humans act as supervisory controllers in HTS processes by performing test planning and device programming prior to experiments, systems monitoring, and real-time process intervention and error correction to maintain experiment safety and output. Process errors are infrequent but can be costly. Two forms of cognitive task analysis were applied to a highly automated HTS process to address different classes of errors, including goal-directed task analysis to describe critical operator decisions and information requirements and abstraction hierarchy modeling to represent HTS process devices and automation integrated in screening lines. The outcomes of the analyses were used as bases for generating supervisory control interface design recommendations to improve existing system usefulness and usability.
    Keywords: abstraction hierarchy modeling, cognitive task analysis, goal-directed task analysis, high-throughput screening, human error

    Dialog, mixed-initiative and multimodal interfaces

    Clarification dialogues in human-augmented mapping, pp. 282-289
      Geert-Jan M. Kruijff; Hendrik Zender; Patric Jensfelt; Henrik I. Christensen
    An approach to dialogue-based interaction for resolution of ambiguities encountered as part of Human-Augmented Mapping (HAM) is presented. The paper focuses on issues related to spatial organisation and localisation. The dialogue pattern naturally arises as robots are introduced to novel environments. The paper discusses an approach based on the notion of Questions under Discussion (QUD). The presented approach has been implemented on a mobile platform that has dialogue capabilities and methods for metric SLAM. Experimental results from a pilot study clearly demonstrate that the system can resolve problematic situations.
    Keywords: clarification, human-augmented mapping, mixed initiative, natural language dialogue
    The effect of head-nod recognition in human-robot conversation, pp. 290-296
      Candace L. Sidner; Christopher Lee; Louis-Philippe Morency; Clifton Forlines
    This paper reports on a study of human participants with a robot designed to participate in a collaborative conversation with a human. The purpose of the study was to investigate a particular kind of gestural feedback from human to the robot in these conversations: head nods. During these conversations, the robot recognized head nods from the human participant. The conversations between human and robot concern demonstrations of inventions created in a lab. We briefly discuss the robot hardware and architecture and then focus the paper on a study of the effects of understanding head nods in three different conditions. We conclude that conversation itself triggers head nods by people in human-robot conversations and that telling participants that the robot recognizes their nods as well as having the robot provide gestural feedback of its nod recognition is effective in producing more nods.
    Keywords: collaborative conversation, conversational feedback, human-robot interaction, nod recognition, nodding
    Working with robots and objects: revisiting deictic reference for achieving spatial common ground, pp. 297-304
      Andrew G. Brooks; Cynthia Breazeal
    Robust joint visual attention is necessary for achieving a common frame of reference between humans and robots interacting multimodally in order to work together on real-world spatial tasks involving objects. We make a comprehensive examination of one component of this process that is often otherwise implemented in an ad hoc fashion: the ability to correctly determine the object referent from deictic reference including pointing gestures and speech. From this we describe the development of a modular spatial reasoning framework based around decomposition and resynthesis of speech and gesture into a language of pointing and object labeling. This framework supports multimodal and unimodal access in both real-world and mixed-reality workspaces, accounts for the need to discriminate and sequence identical and proximate objects, assists in overcoming inherent precision limitations in deictic gesture, and assists in the extraction of those gestures. We further discuss an implementation of the framework that has been deployed on two humanoid robot platforms to date.
    Keywords: human-robot interaction, multimodal interfaces, natural gesture understanding, spatial behavior
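    One simple way to realize the deictic-reference component discussed above is to score candidate objects jointly by angular proximity to the pointing ray and by match to the spoken label; the scoring rule and scene below are invented, not the authors' framework:

```python
# Toy resolver for combined deictic reference: score each candidate object
# by closeness to the pointing ray and by spoken-label match. Weights,
# object data, and the scoring rule are invented for illustration.
import numpy as np

def resolve(objects, ray_origin, ray_dir, spoken_label, w=0.5):
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    best, best_score = None, -np.inf
    for name, pos, label in objects:
        to_obj = np.asarray(pos) - ray_origin
        angular = (to_obj @ ray_dir) / np.linalg.norm(to_obj)  # cos(angle)
        score = angular + (w if label == spoken_label else 0.0)
        if score > best_score:
            best, best_score = name, score
    return best

objects = [("cup-1", [1.0, 0.2, 0.0], "cup"),
           ("cup-2", [1.0, 0.9, 0.0], "cup"),
           ("ball",  [0.9, 0.1, 0.0], "ball")]
print(resolve(objects, np.zeros(3), np.array([1.0, 0.1, 0.0]), "cup"))
```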
    Interactive humanoid robots for a science museum, pp. 305-312
      Masahiro Shiomi; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
    This paper reports on a field trial with interactive humanoid robots at a science museum where visitors are supposed to study and develop an interest in science. In the trial, each visitor wore an RFID tag while looking around the museum's exhibits. Information obtained from the RFID tags was used to direct the robots' interaction with the visitors. The robots autonomously interacted with visitors via gestures and utterances resembling the free play of children [1]. In addition, they performed exhibit-guiding by moving around several exhibits and explaining the exhibits based on sensor information. The robots were highly evaluated by visitors during the two-month trial. Moreover, we conducted an experiment in the field trial to compare the detailed effects of exhibit-guiding and free-play interaction under three operating conditions. This revealed that the combination of the free-play interaction and exhibit-guiding positively affected visitors' experiences at the science museum.
    Keywords: communication robot, field trial, human-robot interaction, science museum robot
    How contingent should a communication robot be?, pp. 313-320
      Fumitaka Yamaoka; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
    The purpose of our research is to develop lifelike behavior in a communication robot, which is expected to potentially make human-robot interaction more natural. Our earlier research demonstrated the importance of a robot's contingency for lifelikeness [1]. On the other hand, perfect contingency seems to give us a non-lifelike impression. In order to explore the appropriate contingency for communication robots, we developed a robot system that allows us to adjust its contingency to an interacting person in a simple mimic interaction. As a result of an experiment, we identified the relationships between the degree of contingency and the subjective impressions of lifelikeness, autonomy, and preference. However, the experimental result also seems to suggest the importance of the complexity of interaction for investigating the appropriate contingency of communication robots.
    Keywords: communication robot, contingency, human-robot interaction, lifelike behavior

    Short papers

    The first segway soccer experience: towards peer-to-peer human-robot teams, pp. 321-322
      Brenna Argall; Yang Gu; Brett Browning; Manuela Veloso
    In this paper, we focus on human-robot interaction in a team task where we identify the need for peer-to-peer (P2P) teamwork, with no fixed hierarchy for decision making between robots and humans. Instead, all team members are equal participants and decision making is truly distributed. We have fully developed a P2P team within Segway Soccer, a research domain built upon RoboCup robot soccer that we have introduced to explore the challenge of P2P coordination in human-robot teams with dynamic, adversarial tasks. We recently participated in the first Segway Soccer games between two competing teams at the 2005 RoboCup US Open. We believe these games are the first ever between two human-robot P2P teams. Based on the competition, we identified two different approaches to P2P teams. We present our robot-centric approach to P2P team coordination and contrast it with the human-centric approach of the opponent team.
    Keywords: human-robot teams, segway soccer
    Gesture-based control of highly articulated biomechatronic systems BIBAKFull-Text 323-324
      Zhiqiang Luo; I-Ming Chen; Shusong Xing; Henry Been-Lirn Duh
    A robotic puppet was developed for studying motion generation and control of highly articulated, biomimetic mechatronic systems using anatomical human motion data in real time. The system is controlled by a pair of data gloves that track the actions of the wearer's fingers. With primitives designed in a multilayered motion-synthesis structure, the puppet can realize complex human-like actions. Continuous full-body movements are produced on the robotic puppet by combining and sequencing actions of different body parts, using the temporal and spatial information provided by the data gloves. The human is involved in the interactive design of the coordination and timing of the puppet's body movements in a natural and intuitive manner. The motion-generation methods exhibited on the robotic puppet may be applied to interactive media, entertainment, and biomedical engineering.
    Keywords: biomechatronic system, human-robot interface, multilayered motion synthesis, robotic puppet
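    A layered gesture-to-primitive lookup of the kind the abstract describes might look like the following sketch (gesture labels, body parts, and primitives are invented for illustration):

        PRIMITIVES = {                 # glove gesture -> (body part, primitive)
            "index_curl":  ("head",     "nod"),
            "thumb_press": ("left_arm", "wave"),
            "fist":        ("torso",    "bow"),
        }

        def sequence_motion(gesture_stream):
            """Turn timed glove gestures into per-part motion commands."""
            timeline = []
            for t, gesture in gesture_stream:        # (timestamp, label) pairs
                if gesture in PRIMITIVES:
                    part, primitive = PRIMITIVES[gesture]
                    timeline.append((t, part, primitive))
            return timeline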
    Human telesupervision of a fleet of autonomous robots for safe and efficient space exploration BIBAKFull-Text 325-326
      Gregg Podnar; John Dolan; Alberto Elfes; Marcel Bergerman; H. Benjamin Brown; Alan D. Guisewite
    In January 2004, NASA began a bold enterprise to return to the Moon and, with the technologies and expertise gained, press on to Mars. The underlying Vision for Space Exploration calls for a sustained and affordable human and robotic program to explore the solar system and beyond, conducting human expeditions to Mars after successfully demonstrating sustained human exploration missions on the Moon. The approach is to "send human and robotic explorers as partners, leveraging the capabilities of each where most useful." Human-robot interfacing technologies for this approach are required at readiness levels above any available today. In this paper, we describe the HRI aspects of a robot supervision architecture we are developing under NASA's auspices, based on the authors' extensive experience with field deployment of autonomous and semi-autonomous ground, underwater, lighter-than-air, and inspection robotic vehicles and systems.
    Keywords: autonomous navigation, field deployment, robot supervision architecture, technology readiness, teleoperation, telepresence
    Affective expression in appearance constrained robots BIBKFull-Text 327-328
      Cindy L. Bethel; Robin R. Murphy
    Keywords: affective computing, human-robot interaction, proxemics
    3-D modeling of spatial referencing language for human-robot interaction BIBAKFull-Text 329-330
      Samuel Blisard; Marjorie Skubic; Robert H. Luke III; James M. Keller
    One of the key components for natural interaction between humans and robots is the ability to understand the spatial relationships that exist in the natural world. Previous research has shown that modeling the 2D spatial relationships of FRONT, BEHIND, LEFT, RIGHT, and BETWEEN can be accomplished with results consistent with those of a human being. Upcoming research will involve a human subject study to investigate the use of spatial relationships in 3D space. This will be the first step in extending previous research on 2D spatial relations into a 3D representation through the use of 3D object point clouds generated by the SIFT algorithm and stereo vision. This will allow us to enrich our human-robot dialog to include phrases such as "Bring me the coffee cup on top of the desk and to the right of the computer."
    Keywords: human-robot interaction, spatial language, spatial reasoning, stereo vision
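    For intuition, a crisp (non-fuzzy) angular classifier for the 2D relations is sketched below; the actual work uses richer fuzzy models, so this is only an approximation of the idea:

        import math

        def spatial_relation(ref_xy, ref_heading_rad, obj_xy):
            """Classify obj relative to a reference pose (x right, y up, CCW-positive)."""
            dx, dy = obj_xy[0] - ref_xy[0], obj_xy[1] - ref_xy[1]
            angle = math.degrees(math.atan2(dy, dx) - ref_heading_rad)
            angle = (angle + 180) % 360 - 180        # normalize to [-180, 180)
            if -45 <= angle <= 45:
                return "FRONT"
            if 45 < angle <= 135:
                return "LEFT"
            if -135 <= angle < -45:
                return "RIGHT"
            return "BEHIND"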
    The art of designing robot faces: dimensions for human-robot interaction BIBAKFull-Text 331-332
      Mike Blow; Kerstin Dautenhahn; Andrew Appleby; Chrystopher L. Nehaniv; David Lee
    As robots enter everyday life and start to interact with ordinary people [5], the question of their appearance becomes increasingly important. A user's perception of a robot can be strongly influenced by its facial appearance [6]. We illustrate the dimensions and issues of face design through the design rationale, construction details, and intended uses of KASPAR, a new minimally expressive robot.
    Keywords: human-robot interaction, robot face design
    Dynamic leadership for human-robot teams BIBAFull-Text 333-334
      Douglas A. Few; David J. Bruemmer; Miles C. Walton
    This paper evaluates collaborative tasking tools that facilitate dynamic sharing of responsibilities between robot and operator throughout a search and detection task. Participants who utilize Collaborative Tasking Mode (CTM) do not experience a significant performance penalty, yet benefit from reduced workload and fewer instances of confusion. In addition, CTM participants report a higher overall feeling of control as compared to those using Standard Shared Mode.
    Affective feedback in closed loop human-robot interaction BIBKFull-Text 335-336
      Pramila Rani; Changchun Liu; Nilanjan Sarkar
    Keywords: affective computing, anxiety, human-robot interaction
    Shaping human behavior by observing mobility gestures BIBFull-Text 337-338
      David Feil-Seifer; Maja J. Matarić
    Commonality of control paradigms for unmanned systems BIBAKFull-Text 339-340
      Marc Gacy; David Dahn
    One of the technical thrusts within the Robotics Collaborative Technology Alliance (CTA) from the Army Research Laboratory has been to design, build, and experiment with new concept control systems that will allow a single human to simultaneously control multiple unmanned ground and air vehicles. We have developed both vehicle mounted and dismounted controllers that all provide a similar look and feel, with relatively equivalent control capabilities. The similarity in capabilities includes not only support functions such as map management and reporting, but the actual planning, tasking and control of the unmanned systems including small unmanned ground vehicles (SUGV), larger unmanned ground vehicles (UGV) and unmanned air vehicles (UAV).
    Keywords: HRI applications, HRI for heterogeneous teams, interface and autonomy design, mixed initiative interaction
    A model for imitating human reaching movements BIBAFull-Text 341-342
      Micha Hersch; Aude G. Billard
    We present a model of human-like reaching movements. This model is then used to give a humanoid robot the ability to imitate human reaching motions. It illustrates that giving a robot a controller similar to human motor control can greatly ease human-robot interaction.
    Structural descriptions in human-assisted robot visual learning BIBAKFull-Text 343-344
      Geert-Jan M. Kruijff; John D. Kelleher; Gregor Berginc; Ales Leonardis
    The paper presents an approach to using structural descriptions, obtained through a human-robot tutoring dialogue, as labels for the visual object models a robot learns. The paper shows how structural descriptions enable relating models for different aspects of one and the same object, and how the ability to relate descriptions of visual models and discourse referents enables incremental updating of model descriptions through dialogue (either robot- or human-initiated). The approach has been implemented in an integrated architecture for human-assisted robot visual learning.
    Keywords: cognitive vision and learning, natural language dialogue
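    One way to picture the label structure: a discourse referent indexes a structural description whose parts point at visual models, so a new dialogue utterance can extend the description incrementally (a data-structure sketch under our own naming, not the authors' implementation):

        # referent -> structural description: named parts mapped to visual model ids
        MODELS = {"cup-handle": "vm_017", "cup-body": "vm_018", "cup-lid": "vm_019"}
        DESCRIPTIONS = {"the cup": {"handle": "cup-handle", "body": "cup-body"}}

        def update_description(referent, part, model_key):
            """Incrementally extend a referent's description from dialogue."""
            DESCRIPTIONS.setdefault(referent, {})[part] = model_key

        # e.g. the tutor says "the cup also has a lid":
        update_description("the cup", "lid", "cup-lid")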
    Auditory perspective taking BIBAKFull-Text 345-346
      Eric Martinson; Derek Brock
    Auditory perspective taking is the process of imagining the auditory scene from another's place and inferring what that person can (and cannot) hear, as well as how this affects his or her auditory comprehension. With this inferred knowledge, a conversational partner can then adapt his or her vocal presentation to overcome or cope with competing sounds and other auditory challenges to ensure that what is being said can be understood. In this poster, we explore several aspects of auditory perspective taking in the context of a robot speech and listening interface.
    Keywords: auditory interface, auditory scene, human-robot interaction
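    The adaptation loop the poster describes can be reduced to a small rule; a sketch under assumed thresholds (all numbers and names are ours, not the authors'):

        def adapt_speech(noise_db_at_listener, base_level_db=60.0, target_snr_db=15.0):
            """Raise the speech level, or pause, so the listener's estimated
            signal-to-noise ratio stays intelligible (illustrative only)."""
            required_db = noise_db_at_listener + target_snr_db
            if required_db > 85.0:            # too loud to talk over: wait it out
                return ("pause", base_level_db)
            return ("speak", max(base_level_db, required_db))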
    Multimodal person tracking and attention classification BIBAKFull-Text 347-348
      Marek P. Michalowski; Reid Simmons
    The problems of human detection, tracking, and attention recognition can be solved more effectively by integrating multiple sensory modalities, such as vision and range data. We present a system that uses a laser range scanner and a single camera to detect and track people, and to classify their attention relative to a socially interactive robot.
    Keywords: human-robot interaction, social robotics
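    A naive version of the laser-camera association step might read as follows (a sketch only; it gates on raw bearing difference and ignores angle wrap-around, which a real tracker would handle):

        import math

        def fuse_detections(laser_tracks, face_bearings, gate_rad=0.3):
            """Match camera face bearings (radians) to laser leg tracks (x, y)
            by bearing; a matched track is labeled as attending to the robot."""
            fused = []
            for x, y in laser_tracks:
                bearing = math.atan2(y, x)
                attending = any(abs(bearing - b) < gate_rad for b in face_bearings)
                fused.append({"pos": (x, y), "attending": attending})
            return fused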
    Socially distributed perception BIBAKFull-Text 349-350
      Marek P. Michalowski; Carl DiSalvo; Didac Busquets; Laura M. Hiatt; Nik A. Melchior; Reid Simmons; Selma Sabanovic
    This paper presents a robot search task (social tag) that uses social interaction, in the form of asking for help, as an integral component of task completion. We define socially distributed perception as a robot's ability to augment its limited sensory capacities through social interaction.
    Keywords: human-robot interaction, mixed initiative, social robotics
    Perceptions of ASIMO: an exploration on co-operation and competition with humans and humanoid robots BIBAKFull-Text 351-352
      Bilge Mutlu; Steven Osman; Jodi Forlizzi; Jessica Hodgins; Sara Kiesler
    Recent developments in humanoid robotics have made possible a vision of robots in everyday use in the home and workplace. However, little is known about how we should design social interactions with humanoid robots. We explored how co-operation versus competition in a game shaped people's perceptions of ASIMO. We found that in the co-operative interaction people perceived the robot as more sociable and more intellectual than in the competitive interaction, while people felt more positive and were more involved in the task in the competitive condition than in the co-operative condition. Our poster presents these findings along with the supporting theoretical background.
    Keywords: ASIMO, co-operation vs. competition, human-robot interaction, humanoid robots, social perception, social robots
    On the effect of the user's background on communicating grasping commands BIBAKFull-Text 353-354
      Maria Ralph; Medhat A. Moussa
    In this paper, we investigate the impact of users' backgrounds on their ability to communicate grasping commands to a robot. We conducted a study in which a group of 15 non-technical users used natural language to instruct a robotic arm to grasp five small everyday objects. We found that users with less technical backgrounds chose simple, more predictable commands over complex, unpredictable movements. These users also required more time and more commands to complete a grasping task compared to users with more technical backgrounds. Other results, however, suggest that a user's background is not the most critical factor: individual preferences and learning approaches also appear to play a role in command choices.
    Keywords: grasping, human-robot interaction, natural language instruction, skill transfer, user-adaptive robotics
    Sociality of robots: do robots construct or collapse human relations? BIBAKFull-Text 355-356
      Daisuke Sakamoto; Tetsuo Ono
    With developments in robotics, robots "living" with people will become a part of daily life in the near future. However, there are many open problems with social robots. In particular, the behavior of robots can influence human relations, and this influence has not yet been clarified. In this paper, we report on an experiment we conducted to verify the influence of robot behavior on human relations using "balance theory." The results show that robots can have both good and bad influences on human relations: one person's impression of another can change because of a robot. In other words, robots can construct or collapse human relations.
    Keywords: robotic social psychology
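    The balance-theory prediction being tested has a compact form: a triad of signed relations among person P, other O, and robot X is balanced exactly when the product of the three signs is positive. A one-function illustration (our formulation, not the paper's code):

        def is_balanced(p_o, p_x, o_x):
            """Each argument is +1 (liking) or -1 (disliking); Heider balance."""
            return p_o * p_x * o_x > 0

        # e.g. P likes the robot (+1) and the robot praises O (+1); balance
        # pressure then favors P liking O: is_balanced(+1, +1, +1) -> True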
    Challenges to grounding in human-robot interaction BIBAKFull-Text 357-358
      Kristen Stubbs; Pamela Hinds; David Wettergreen
    We report a study of a human-robot system composed of a science team (located in Pittsburgh), an engineering team (located in Chile), and a robot (located in Chile). We performed ethnographic observations simultaneously at both sites over two weeks as scientists collected data using the robot. Our data reveal problems in establishing and maintaining common ground between the science team and the robot due to missing contextual information about the robot. Our results have implications for the design of systems to support human-robot interaction.
    Keywords: common ground, ethnography, exploration robotics, human-robot interaction, mutual knowledge
    Experiments in socially guided machine learning: understanding how humans teach BIBAKFull-Text 359-360
      Andrea L. Thomaz; Guy Hoffman; Cynthia Breazeal
    In Socially Guided Machine Learning we explore the ways in which machine learning can more fully take advantage of natural human interaction. In this work we study the role real-time human interaction plays in training assistive robots to perform new tasks. We describe an experimental platform, Sophie's World, and present a descriptive analysis of human teaching behavior found in a user study. We report three important observations of how people administer reward and punishment to teach a simulated robot a new task through reinforcement learning: people adjust their behavior as they develop a model of the learner; they use the reward channel for guidance as well as feedback; and they may also use it as a motivational channel.
    Keywords: human-robot interaction, machine learning, socially guided agents
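    The teaching setup amounts to reinforcement learning in which the scalar reward comes from the human rather than the environment; a minimal Q-learning sketch in that spirit (state/action names and constants are placeholders, not the Sophie's World code):

        import random
        from collections import defaultdict

        Q = defaultdict(float)                 # (state, action) -> value
        ACTIONS = ["pick_up", "put_down", "move_left", "move_right"]

        def choose(state, epsilon=0.1):
            """Epsilon-greedy action selection over the learned values."""
            if random.random() < epsilon:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(state, a)])

        def human_update(state, action, next_state, human_reward,
                         alpha=0.5, gamma=0.9):
            """One Q-learning step; the reward is administered by the teacher."""
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (human_reward
                                           + gamma * best_next
                                           - Q[(state, action)])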
    Acquiring a shared environment representation BIBAKFull-Text 361-362
      Elin Anna Topp; Henrik I. Christensen; Kerstin Severinson Eklundh
    Interacting with a domestic service robot implies the existence of a joint environment model for user and robot. We present a pilot study that investigates how humans present a familiar environment to a mobile robot. Results from this study are used to evaluate a generic environment model for a service robot that can be personalised through interaction.
    Keywords: cognitive modelling, environment representation, user study