
Proceedings of the 2015 ACM/IEEE International Conference on Human-Robot Interaction

Fullname: Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction
Editors: Julie A. Adams; William Smart; Bilge Mutlu; Leila Takayama
Location: Portland, Oregon
Dates: 2015-Mar-02 to 2015-Mar-05
Volume: 1
Publisher: ACM
Standard No: ISBN: 978-1-4503-2883-8; ACM DL: Table of Contents; hcibib: HRI15-1
Papers: 46
Pages: 350
Links: Conference Website
  1. HRI 2015-03-02 Volume 1
    1. Keynote Address
    2. Session A: Designing Robots for Everyday Interaction
    3. Session B: Robot Motion
    4. Session C: Robots & Children
    5. Keynote Address
    6. Session D: Perceptions of Robots
    7. Session E: Robots as Social Agents
    8. Session F: Human-Robot Teams
    9. Keynote Address
    10. Session G: Multi-modal Capabilities
    11. Session H: Human Behaviors, Activities, and Environments, Part 1
    12. Session I: Human Behaviors, Activities, and Environments, Part 2

HRI 2015-03-02 Volume 1

Keynote Address

Design Everything by Yourself (p. 1)
  Takeo Igarashi
I will introduce our research project (the design interface project), which aims at the development of various design tools for end-users. We live in a mass-production society today, and everyone buys and uses the same things all over the world. This is cheap, but not necessarily ideal for individuals. We envision that computer tools that help people design things by themselves can enrich their lives. To that end, we develop innovative interaction techniques for end users to (1) create rich graphics, such as three-dimensional models and animations, by simple sketching; (2) design their own real-world, everyday objects, such as clothing and furniture, with real-time physical simulation integrated in a simple geometry editor; and (3) design the behavior of their personal robots and give instructions to them to satisfy their particular needs.

Session A: Designing Robots for Everyday Interaction

Design and Evaluation of a Peripheral Robotic Conversation Companion (pp. 3-10)
  Guy Hoffman; Oren Zuckerman; Gilad Hirschberger; Michal Luria; Tal Shani Sherman
We present the design, implementation, and evaluation of a peripheral, empathy-evoking robotic conversation companion, Kip1. The robot's function is to increase people's awareness of the effect of their behavior on others, potentially leading to behavior change. Specifically, Kip1 is designed to promote non-aggressive conversation between people. It monitors the conversation's nonverbal aspects and maintains an emotional model of its reaction to the conversation. If the conversation seems calm, Kip1 responds with a gesture designed to communicate curious interest. If the conversation seems aggressive, Kip1 responds with a gesture designed to communicate fear. We describe the design process of Kip1, guided by the principles of peripheral and evocative design. We detail its hardware and software systems, and a study evaluating the effects of the robot's autonomous behavior on couples' conversations. We find support for our design goals. A conversation companion reacting to the conversation led to more gaze attention, but not more verbal distraction, compared to a robot that moves but does not react to the conversation. This suggests that robotic devices could be designed as companions to human-human interaction without compromising the natural communication flow between people. Participants also rated the reacting robot as having significantly more social human character traits and as being significantly more similar to them. This points to the robot's potential to elicit people's empathy.
Mechanical Ottoman: How Robotic Furniture Offers and Withdraws Support (pp. 11-18)
  David Sirkin; Brian Mok; Stephen Yang; Wendy Ju
This paper describes our approach to designing, developing behaviors for, and exploring the use of a robotic footstool, which we named the mechanical ottoman. By approaching unsuspecting participants and attempting to get them to place their feet on the footstool, and then later attempting to break the engagement and get people to take their feet down, we sought to understand whether and how motion can be used by non-anthropomorphic robots to engage people in joint action. In several embodied design improvisation sessions, we observed a tension between people perceiving the ottoman as a living being, such as a pet, and simultaneously as a functional object, which requests that they place their feet on it, something they would not ordinarily do with a pet. In a follow-up lab study (N=20), we found that most participants did make use of the footstool, although several chose not to place their feet on it for this reason. We also found that participants who rested their feet understood a brief lift-and-drop movement as a request to withdraw, and formed detailed notions about the footstool's agenda, ascribing intentions based on its movement alone.
Communicating Directionality in Flying Robots (pp. 19-26)
  Daniel Szafir; Bilge Mutlu; Terry Fong
Small flying robots represent a rapidly emerging family of robotic technologies with aerial capabilities that enable unique forms of assistance in a variety of collaborative tasks. Such tasks will necessitate interaction with humans in close proximity, requiring that designers consider human perceptions regarding robots flying and acting within human environments. We explore the design space regarding explicit robot communication of flight intentions to nearby viewers. We apply design constraints to robot flight behaviors, using biological and airplane flight as inspiration, and develop a set of signaling mechanisms for visually communicating directionality while operating under such constraints. We implement our designs on two commercial flyers, requiring little modification to the base platforms, and evaluate each signaling mechanism, as well as a no-signaling baseline, in a user study in which participants were asked to predict robot intent. We found that three of our designs significantly improved viewer response time and accuracy over the baseline and that the form of the signal offered tradeoffs in precision, generalizability, and perceived robot usability.
The Privacy-Utility Tradeoff for Remotely Teleoperated Robots (pp. 27-34)
  Daniel J. Butler; Justin Huang; Franziska Roesner; Maya Cakmak
Though teleoperated robots have become common for more extreme tasks such as bomb disposal, search-and-rescue, and space exploration, they are not commonly used in human-populated environments for more ordinary tasks such as house cleaning or cooking. This presents near-term opportunities for teleoperated robots in the home. However, a teleoperator's remote presence in a consumer's home presents serious security and privacy risks, and the concerns of end-users about these risks may hinder the adoption of such in-home robots. In this paper, we define and explore the privacy-utility tradeoff for remotely teleoperated robots: as we reduce the quantity or fidelity of visual information received by the teleoperator to preserve the end-user's privacy, we must balance this against the teleoperator's need for sufficient information to successfully carry out tasks. We explore this tradeoff with two surveys that provide a framework for understanding the privacy attitudes of end-users, and with a user study that empirically examines the effect of different filters of visual information on the ability of a teleoperator to carry out a task. Our findings include that respondents do desire privacy-protective measures from teleoperators, that respondents prefer certain visual filters from a privacy perspective, and that, for the studied task, we can identify a filter that balances privacy with utility. We make recommendations for in-home teleoperation based on these findings.
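The kind of visual filter the abstract describes can be illustrated with a simple pixelation pass: it degrades fine detail (faces, documents) while preserving the coarse scene layout a teleoperator needs. This is a hedged sketch, not the authors' implementation; the block size and the synthetic frame are assumptions for illustration.

```python
import numpy as np

def pixelate(image: np.ndarray, block: int = 16) -> np.ndarray:
    """Reduce visual fidelity by averaging over block x block regions."""
    h, w = image.shape[:2]
    out = image.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = image[y:y+block, x:x+block]
            # Replace the whole block with its mean intensity
            out[y:y+block, x:x+block] = region.mean(axis=(0, 1))
    return out

# Example: a synthetic 64x64 grayscale "camera frame"
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)
filtered = pixelate(frame, block=16)
print(filtered.shape)  # (64, 64)
```

Larger block sizes move the filter toward the privacy end of the tradeoff; a block covering the whole frame removes all visual information, while block=1 is the identity.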

Session B: Robot Motion

May I help you?: Design of Human-like Polite Approaching Behavior (pp. 35-42)
  Yusuke Kato; Takayuki Kanda; Hiroshi Ishiguro
When should service staff initiate interaction with a visitor? Neither a simply proactive strategy (e.g., talking to everyone in sight) nor a passive one (e.g., waiting to be talked to) is desirable. This paper reports our modeling of polite approaching behavior. In a shopping mall, there are service staff members who politely approach visitors who need help. Our analysis revealed that staff members are sensitive to the "intentions" of nearby visitors. That is, when a visitor intends to talk to a staff member and starts to approach, the staff member also walks a few steps toward the visitor in advance of being talked to. Further, even when not being approached, staff members exhibit "availability" behavior when a visitor's intention seems uncertain. We modeled these behaviors, which adapt to pedestrians' intentions and occur prior to the initiation of conversation. The model was implemented in a robot and tested in a real shopping mall. The experiment confirmed that the proposed method is less intrusive to pedestrians and that our robot successfully initiated interaction with them.
Robot Form and Motion Influences Social Attention (pp. 43-50)
  Alvin X. Li; Maria Florendo; Luke E. Miller; Hiroshi Ishiguro; Ayse P. Saygin
For social robots to be successful, they need to be accepted by humans. Human-robot interaction (HRI) researchers are aware of the need to develop the right kinds of robots with appropriate, natural ways for them to interact with humans. However, much of human perception and cognition occurs outside of conscious awareness, and how robotic agents engage these processes is currently unknown. Here, we explored automatic, reflexive social attention, which operates outside of conscious control within a fraction of a second, to discover whether and how these processes generalize to agents with varying humanlikeness in their form and motion. Using a social variant of a well-established spatial attention paradigm, we tested whether robotic or human appearance and/or motion influenced an agent's ability to capture or direct implicit social attention. In each trial, either images or videos of agents looking to one side of space (a head turn) were presented to human observers. We measured reaction time to a peripheral target as an index of attentional capture and direction. We found that all agents, regardless of humanlike form or motion, were able to direct spatial attention in the cued direction. However, differences in the form of the agent affected attentional capture, i.e., how quickly the observers could disengage attention from the agent and respond to the target. This effect further interacted with whether the spatial cue (head turn) was presented through static images or videos. Overall, whereas reflexive social attention operated in the same manner for human and robot social agents in spatial attentional cueing, robotic appearance, as well as whether the agent was static or moving, significantly influenced unconscious attentional capture processes. The studies thus reveal how unconscious social attentional processes operate when the agent is a human vs. a robot, add novel manipulations to the literature, such as the role of visual motion, and provide a link between attention studies in HRI and decades of research on unconscious social attention in experimental psychology and vision science.
Effects of Robot Motion on Human-Robot Collaboration (pp. 51-58)
  Anca D. Dragan; Shira Bauman; Jodi Forlizzi; Siddhartha S. Srinivasa
Most motion in robotics is purely functional, planned to achieve the goal and avoid collisions. Such motion works well in isolation, but collaboration involves a human who watches the motion and makes inferences about it while trying to coordinate with the robot to achieve the task. This paper analyzes the benefit of planning motion that explicitly enables the collaborator's inferences on the success of physical collaboration, as measured by both objective and subjective metrics. Results suggest that legible motion, planned to clearly express the robot's intent, leads to more fluent collaborations than predictable motion, planned to match the collaborator's expectations. Furthermore, purely functional motion can harm coordination, which negatively affects both task efficiency and the participants' perception of the collaboration.
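The distinction between predictable and legible motion rests on how an observer infers the robot's goal from a partial trajectory. As a toy illustration (not the paper's planner; the goal positions, the straight-line cost model, and the rationality parameter are all assumptions), a Boltzmann-style inference shows how a path that exaggerates motion toward its goal disambiguates that goal sooner:

```python
import math

# Two candidate goals; the observer infers which one the robot is
# reaching for from a point on its partial trajectory. Straight-line
# distance stands in for trajectory cost (an illustrative assumption).
GOALS = {"A": (1.0, 0.0), "B": (-1.0, 0.0)}
START = (0.0, 2.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def goal_score(point, goal, beta=3.0):
    """Boltzmann-style score: paths that look efficient for a goal
    make that goal more probable to the observer."""
    # Extra cost of passing through `point` vs. going straight to goal
    detour = dist(START, point) + dist(point, goal) - dist(START, goal)
    return math.exp(-beta * detour)

def infer(point):
    scores = {g: goal_score(point, xy) for g, xy in GOALS.items()}
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

# A predictable (straight) path toward A leaves the goal ambiguous early,
# while a legible path that exaggerates toward A disambiguates sooner.
print(infer((0.3, 1.4)))   # on the straight line to A: modest belief in A
print(infer((0.9, 1.4)))   # exaggerated toward A: much stronger belief in A
```

Under this model, the legible trajectory deliberately pays extra path cost early to drive the observer's belief toward the correct goal, which is exactly the tradeoff the abstract describes between matching expectations and expressing intent.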

Session C: Robots & Children

Escaping from Children's Abuse of Social Robots (pp. 59-66)
  Drazen Brscic; Hiroyuki Kidokoro; Yoshitaka Suehiro; Takayuki Kanda
Social robots working in public spaces often stimulate children's curiosity. However, children sometimes also behave abusively toward robots. In our case studies, we frequently observed children persistently obstructing the robot's activity. Some actually abused the robot, saying bad things to it and at times even kicking or punching it. We developed a statistical model of the occurrence of children's abuse. Using this model together with a simulator of pedestrian behavior, we enabled the robot to predict the possibility of an abuse situation and escape before it happens. We demonstrated that with the model the robot successfully lowered the occurrence of abuse in a real shopping mall.
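The predict-then-escape pipeline can be sketched with a toy logistic model of abuse risk. The features and coefficients below are invented for illustration; the paper fits its statistical model to field observations and couples it with a pedestrian simulator.

```python
import math

def abuse_probability(n_children: int, has_adult_nearby: bool) -> float:
    """Toy logistic model: risk grows with the number of children and
    drops when an adult accompanies them. Coefficients are made up
    for illustration, not taken from the paper."""
    x = -3.0 + 1.2 * n_children - 2.0 * (1 if has_adult_nearby else 0)
    return 1.0 / (1.0 + math.exp(-x))

def should_escape(n_children: int, has_adult_nearby: bool,
                  threshold: float = 0.5) -> bool:
    """Escape preemptively once predicted risk crosses a threshold."""
    return abuse_probability(n_children, has_adult_nearby) > threshold

print(should_escape(1, True))    # lone child with a parent: stay
print(should_escape(4, False))   # group of unaccompanied children: escape
```

In the paper's setting the analogous decision is made ahead of time, by forward-simulating pedestrian behavior so the robot can move away before the risky situation forms.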
The Robot Who Tried Too Hard: Social Behaviour of a Robot Tutor Can Negatively Affect Child Learning (pp. 67-74)
  James Kennedy; Paul Baxter; Tony Belpaeme
Social robots are finding increasing application in the domain of education, particularly for children, to support and augment learning opportunities. With an implicit assumption that social and adaptive behaviour is desirable, it is therefore of interest to determine precisely how these aspects of behaviour may be exploited in robots to support children in their learning. In this paper, we explore this issue by evaluating the effect of a social robot tutoring strategy with children learning about prime numbers. It is shown that the tutoring strategy itself leads to improvement, but that the presence of a robot employing this strategy amplifies this effect, resulting in significant learning. However, it was also found that children interacting with a robot using social and adaptive behaviours in addition to the teaching strategy did not learn a significant amount. These results indicate that while the presence of a physical robot leads to improved learning, caution is required when applying social behaviour to a robot in a tutoring context.
Emotional Storytelling in the Classroom: Individual versus Group Interaction between Children and Robots (pp. 75-82)
  Iolanda Leite; Marissa McCoy; Monika Lohani; Daniel Ullman; Nicole Salomons; Charlene Stokes; Susan Rivers; Brian Scassellati
Robot assistive technology is becoming increasingly prevalent. Despite the growing body of research in this area, the role of the type of interaction (i.e., small groups versus individual interactions) in the effectiveness of interventions is still unclear. In this paper, we explore a new direction for socially assistive robotics, where multiple robotic characters interact with children in an interactive storytelling scenario. We conducted a between-subjects repeated interaction study in which a single child or a group of three children interacted with the robots in an interactive narrative scenario. Results show that although the individual condition increased participants' story recall abilities compared to the group condition, the emotional interpretation of the story content seemed to depend more on the difficulty level than on the study condition. Our findings suggest that, regardless of the type of interaction, interactive narratives with multiple robots are a promising approach to fostering children's development of social skills.
When Children Teach a Robot to Write: An Autonomous Teachable Humanoid Which Uses Simulated Handwriting (pp. 83-90)
  Deanna Hood; Séverin Lemaignan; Pierre Dillenbourg
This article presents a novel robotic partner to which children can teach handwriting. The system relies on the learning-by-teaching paradigm to build an interaction that stimulates meta-cognition, empathy, and increased self-esteem in the child user. We hypothesise that the use of a humanoid robot in such a system could not only engage an unmotivated student, but could also give children the opportunity to experience the physically induced benefits encountered during human-led handwriting interventions, such as motor mimicry. By leveraging simulated handwriting on a synchronised tablet display, a NAO humanoid robot with limited fine motor capabilities has been configured as a suitably embodied handwriting partner. Statistical shape models, derived from principal component analysis of a dataset of adult-written letter trajectories, allow the robot to draw purposefully deformed letters. By incorporating feedback from user demonstrations, the system is then able to learn the optimal parameters for the appropriate shape models. Preliminary in situ studies have been conducted with primary school classes to obtain insight into children's use of the novel system. Children aged 6-8 successfully engaged with the robot and improved its writing to a level with which they were satisfied. The validation of the interaction represents a significant step towards an innovative use for robotics that addresses a widespread and socially meaningful challenge in education.
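The shape-model machinery described above (principal component analysis over letter trajectories, purposely deformed letters, parameters refined from demonstrations) can be sketched in a few lines of NumPy. The synthetic "letter," the trajectory length, and the number of retained components are assumptions for illustration, not the authors' dataset or code.

```python
import numpy as np

# Synthetic stand-in for a dataset of adult-written letter trajectories:
# each sample is a flattened sequence of 20 (x, y) points (40 numbers).
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 20)
template = np.column_stack([np.cos(t), np.sin(t)]).ravel()  # the "letter"
samples = template + 0.05 * rng.standard_normal((100, 40))

# Statistical shape model: mean shape + principal components.
mean = samples.mean(axis=0)
centered = samples - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:3]                     # keep 3 modes of variation

def draw_letter(params):
    """Generate a (possibly deformed) letter from shape parameters."""
    return mean + params @ components

def params_from_demo(demo):
    """Project a demonstration into the model's parameter space; moving
    the current parameters toward this target is how the robot's
    writing 'improves' from the child's teaching."""
    return (demo - mean) @ components.T

deformed = draw_letter(np.array([2.0, -1.0, 0.5]))  # purposely poor letter
target = params_from_demo(template)                 # demo near the ideal
print(target.shape)  # (3,)
```

Large parameter values push the drawn letter far from the mean shape, which is how the robot can start out writing badly on purpose and then converge as demonstrations accumulate.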
Can Children Catch Curiosity from a Social Robot? (pp. 91-98)
  Goren Gordon; Cynthia Breazeal; Susan Engel
Curiosity is key to learning, yet school children show wide variability in their eagerness to acquire information. Recent research suggests that other people have a strong influence on children's exploratory behavior. Would a curious robot elicit children's exploration and desire to find out new things? To answer this question, we designed a novel experimental paradigm in which a child plays an educational tablet app with an autonomous social robot, which is portrayed as a younger peer. We manipulated the robot's behavior to be either curiosity-driven or not and measured the child's curiosity after the interaction. We show that some of the child's curiosity measures are significantly higher after interacting with a curious robot, compared to a non-curious one, while others are not. These results suggest that interacting with an autonomous, curious social robot can selectively guide and promote children's curiosity.
Comparing Models of Disengagement in Individual and Group Interactions (pp. 99-105)
  Iolanda Leite; Marissa McCoy; Daniel Ullman; Nicole Salomons; Brian Scassellati
Changes in the type of interaction (e.g., individual vs. group interactions) can potentially impact data-driven models developed for social robots. In this paper, we provide a first investigation into the effects of changing group size on data-driven models for HRI by analyzing how a model trained on data collected from participants interacting individually performs on test data collected from group interactions, and vice versa. Another model combining data from both individual and group interactions is also investigated. We perform these experiments in the context of predicting disengagement behaviors in children interacting with two social robots. Our results show that a model trained with group data generalizes better to individual participants than the other way around. The mixed model seems to be a good compromise, but it does not achieve the performance levels of the models trained for a specific type of interaction.
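The evaluation design, training a disengagement model on one interaction type and testing it on the other, can be mimicked with synthetic features and a nearest-centroid classifier. Everything below (the two features, the distribution shift between conditions, and the classifier) is an illustrative assumption rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(7)

def make_data(n, shift):
    """Synthetic stand-in for annotated interaction features (e.g. gaze
    aversion, posture). `shift` mimics how group settings change the
    feature distribution for disengaged children."""
    engaged = rng.normal([0.2, 0.2], 0.15, size=(n, 2))
    disengaged = rng.normal([0.8 + shift, 0.8], 0.15, size=(n, 2))
    X = np.vstack([engaged, disengaged])
    y = np.array([0] * n + [1] * n)
    return X, y

def train_centroids(X, y):
    # One centroid per class: engaged (0) and disengaged (1)
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

X_ind, y_ind = make_data(200, shift=0.0)   # individual sessions
X_grp, y_grp = make_data(200, shift=0.3)   # group sessions (shifted)

ind_model = train_centroids(X_ind, y_ind)
grp_model = train_centroids(X_grp, y_grp)

# Cross-condition evaluation, as in the paper's design
print("ind->grp:", accuracy(ind_model, X_grp, y_grp))
print("grp->ind:", accuracy(grp_model, X_ind, y_ind))
```

The asymmetry the paper reports (group-trained models transferring better than individual-trained ones) would show up here as a gap between the two cross-condition accuracies when the shift is large enough.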

Keynote Address

Of Robots, Humans, Bodies and Intelligence: Body Languages for Human Robot Interaction (p. 107)
  Antonio Bicchi
Modern approaches to the design of robots with increasing amounts of embodied intelligence affect human-robot interaction paradigms. The physical structure of robots is evolving from traditional rigid, heavy industrial machines into soft bodies exhibiting new levels of versatility, adaptability, safety, elasticity, dynamism, and energy efficiency. New challenges and opportunities arise for the control of soft robots: for instance, carefully planning for collision avoidance may no longer be a dominating concern; on the contrary, physical interaction with the environment is not only allowed but even desirable for solving complex tasks. To address these challenges, it is often useful to look at how humans use their own bodies in similar tasks, and in some cases even to establish a direct dialogue between the natural and artificial counterparts.

Session D: Perceptions of Robots

Rabble of Robots Effects: Number and Form of Robots Modulates Attitudes, Emotions, and Stereotypes (pp. 109-116)
  Marlena R. Fraune; Steven Sherrin; Selma Sabanovic; Eliot R. Smith
Robots are expected to become present in society in increasing numbers, yet few studies in human-robot interaction (HRI) go beyond one-to-one interaction to examine how emotions, attitudes, and stereotypes expressed toward groups of robots differ from those expressed toward individuals. Research from social psychology indicates that people interact differently with individuals than with groups. We therefore hypothesize that group effects might similarly occur when people face multiple robots. Further, group effects might vary for robots of different types. In this exploratory study, we used videos to expose participants in a between-subjects experiment to robots varying in Number (Single or Group) and Type (anthropomorphic, zoomorphic, or mechanomorphic). We then measured participants' general attitudes, emotions, and stereotypes toward robots with a combination of measures from HRI (e.g., Godspeed Questionnaire, NARS) and social psychology (e.g., Big Five, Social Threat, Emotions). Results suggest that Number and Type of observed robots had an interaction effect on responses toward robots in general, leading to more positive responses for groups for some robot types, but more negative responses for others.
Sacrifice One For the Good of Many?: People Apply Different Moral Norms to Human and Robot Agents (pp. 117-124)
  Bertram F. Malle; Matthias Scheutz; Thomas Arnold; John Voiklis; Corey Cusimano
Moral norms play an essential role in regulating human interaction. With the growing sophistication and proliferation of robots, it is important to understand how ordinary people apply moral norms to robot agents and make moral judgments about their behavior. We report the first comparison of people's moral judgments (of permissibility, wrongness, and blame) about human and robot agents. Two online experiments (total N = 316) found that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a "utilitarian" choice), and they were blamed more than their human counterparts when they did not make that choice. Though the utilitarian sacrifice was generally seen as permissible for human agents, they were blamed more for choosing this option than for doing nothing. These results provide a first step toward a new field of Moral HRI, which is well placed to help guide the design of social robots.
Poor Thing! Would You Feel Sorry for a Simulated Robot?: A comparison of empathy toward a physical and a simulated robot (pp. 125-132)
  Stela H. Seo; Denise Geiskkovitch; Masayuki Nakane; Corey King; James E. Young
In designing and evaluating human-robot interactions and interfaces, researchers often use a simulated robot due to the high cost of robots and the time required to program them. However, it is important to consider how interaction with a simulated robot differs from that with a real robot; that is, do simulated robots provide authentic interaction? We contribute to a growing body of work that explores this question and maps out simulated-versus-real differences by explicitly investigating empathy: how people empathize with a physical or simulated robot when something bad happens to it. Our results suggest that people may empathize more with a physical robot than with a simulated one, a finding that has important implications for the generalizability and applicability of simulated HRI work. Empathy is particularly relevant to social HRI and is integral to, for example, companion and care robots. Our contribution additionally includes an original and reproducible HRI experimental design to induce empathy toward robots in laboratory settings, and an experimentally validated empathy-measuring instrument from psychology for use in HRI.
Observer Perception of Dominance and Mirroring Behavior in Human-Robot Relationships (pp. 133-140)
  Jamy Li; Wendy Ju; Cliff Nass
How people view relationships between humans and robots is an important consideration for the design and acceptance of social robots. Two studies investigated the effect of relational behavior in a human-robot dyad. In Study 1, participants watched videos of a human confederate discussing the Desert Survival Task with either another human confederate or a humanoid robot. Participants were less trusting of both the robot and the person in a human-robot relationship where the robot was dominant toward the person than when the person was dominant toward the robot; these differences were not found for a human pair. In Study 2, participants watched videos of a human confederate having an everyday conversation with either another human confederate or a humanoid robot. Participants who saw a confederate mirror the gestures of a robot found the robot less attractive than when the robot mirrored the confederate; the opposite effect was found for a human pair. Exploratory findings suggest that human-robot relationships are viewed differently than human dyads.

Session E: Robots as Social Agents

Would You Trust a (Faulty) Robot?: Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust (pp. 141-148)
  Maha Salem; Gabriella Lakatos; Farshid Amirabdollahian; Kerstin Dautenhahn
How do mistakes made by a robot affect its trustworthiness and acceptance in human-robot collaboration? We investigate how the perception of erroneous robot behavior may influence human interaction choices and the willingness to cooperate with the robot by following a number of its unusual requests. For this purpose, we conducted an experiment in which participants interacted with a home companion robot in one of two experimental conditions: (1) the correct mode or (2) the faulty mode. Our findings reveal that, while significantly affecting subjective perceptions of the robot and assessments of its reliability and trustworthiness, the robot's performance does not seem to substantially influence participants' decisions to (not) comply with its requests. However, our results further suggest that the nature of the task requested by the robot, e.g. whether its effects are revocable as opposed to irrevocable, has a significant impact on participants' willingness to follow its instructions.
Moderating a Robot's Ability to Influence People Through its Level of Sociocontextual Interactivity (pp. 149-156)
  Sonja Caraian; Nathan Kirchner; Peter Colborne-Veel
A range of situations exist in which it would be useful to influence people's behavior in public spaces, for example to improve the efficiency of passenger flow in congested train stations. We have identified our previously developed Robot Centric paradigm of Human-Robot Interaction (HRI), which positions robots as Interaction Peers, as a potentially suitable model to achieve more effective influence through defining and exploiting the interactivity of robots (that is, their ability to moderate their issued sociocontextual cues based on the behavioral information read from humans). In this paper, we investigate whether increasing a robot's interactivity will increase the effectiveness of its influence on people in public spaces. A two-part study (total n = 273) was conducted in both a major Australian public train station (n = 84 + 105) and a university (n = 84) where passersby encountered a robot, designed with various levels of interactivity, which attempted to influence their passage. The findings suggest that the Robot Centric HRI paradigm generalizes to other robots and application spaces, and enables deliberate moderation of a robot's interactivity, facilitating more nuanced, predictable and systematic influence, and thus yielding greater effectiveness.
Effects of Culture on the Credibility of Robot Speech: A Comparison between English and Arabic (pp. 157-164)
  Sean Andrist; Micheline Ziadee; Halim Boukaram; Bilge Mutlu; Majd Sakr
As social robots begin to enter our lives as providers of information, assistance, companionship, and motivation, it becomes increasingly important that these robots are capable of interacting effectively with human users across different cultural settings worldwide. A key capability in establishing acceptance and usability is the way in which robots structure their speech to build credibility and express information in a meaningful and persuasive way. Previous work has established that robots can use speech to improve credibility in two ways: expressing practical knowledge and using rhetorical linguistic cues. In this paper, we present two studies that build on prior work to explore the effects of language and cultural context on the credibility of robot speech. In the first study (n=96), we compared the relative effectiveness of knowledge and rhetoric on the credibility of robot speech between Arabic-speaking robots in Lebanon and English-speaking robots in the USA, finding the rhetorical linguistic cues to be more important in Arabic than in English. In the second study (n=32), we compared the effectiveness of credible robot speech between robots speaking either Modern Standard Arabic or the local Arabic dialect, finding the expression of both practical knowledge and rhetorical ability to be most important when using the local dialect. These results reveal nuanced cultural differences in perceptions of robots as credible agents and have important implications for the design of human-robot interactions across Arabic and Western cultures.
Evidence that Robots Trigger a Cheating Detector in Humans (pp. 165-172)
  Alexandru Litoiu; Daniel Ullman; Jason Kim; Brian Scassellati
Short et al. found that in a game between a human participant and a humanoid robot, the participant will perceive the robot as being more agentic and as having more intentionality if it cheats than if it plays without cheating. However, in that design, the robot that actively cheated also generated more motion than the other conditions. In this paper, we investigate whether the additional movement of the cheating gesture is responsible for the increased agency and intentionality or whether the act of cheating itself triggers this response. In a between-participant design with 83 participants, we disambiguate between these causes by testing (1) the cases of the robot cheating to win, (2) cheating to lose, (3) cheating to tie from a winning position, and (4) cheating to tie from a losing position. Despite the fact that the robot changes its gesture to cheat in all four conditions, we find that participants are more likely to report the gesture change when the robot cheated to win from a losing position, compared with the other conditions. Participants in that same condition are also far more likely to protest in the form of an utterance following the cheat and report that the robot is less fair and honest. It is therefore the adversarial cheat itself that causes the effect and not the change in gesture, providing evidence for a cheating detector that can be triggered by robots.
Will People Keep the Secret of a Humanoid Robot?: Psychological Intimacy in HRI (pp. 173-180)
  Peter H. Kahn, Jr.; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Solace Shen; Heather E. Gary; Jolina H. Ruckert
Will people keep the secret of a socially compelling robot who shares, in confidence, a "personal" (robot) failing? Toward answering this question, 81 adults participated in a 20-minute interaction with (a) a humanoid robot (Robovie) interacting in a highly social way as a lab tour guide, and (b) with a human being interacting in the same highly social way. As a baseline comparison, participants also interacted with (c) a humanoid robot (Robovie) interacting in a more rudimentary social way. In each condition, the tour guide asks the participant to keep a secret. Results showed that the majority of the participants (59%) kept the secret of the highly social robot, and did not tell the experimenter when asked directly, with the robot present. This percentage did not differ statistically from the percentage who kept the human's secret (67%). It did differ statistically when the robot engaged in the more rudimentary social interaction (11%). These results suggest that as humanoid robots become increasingly social in their interaction, people will form increasingly intimate and trusting psychological relationships with them. Discussion focuses on design principles (how to engender psychological intimacy in human-robot interaction) and norms (whether it is even desirable to do so, and if so in what contexts).
Robot Presence and Human Honesty: Experimental Evidence BIBAFull-Text 181-188
  Guy Hoffman; Jodi Forlizzi; Shahar Ayal; Aaron Steinfeld; John Antanitis; Guy Hochman; Eric Hochendoner; Justin Finkenaur
Robots are predicted to serve in environments in which human honesty is important, such as the workplace, schools, and public institutions. Can the presence of a robot facilitate honest behavior? In this paper, we describe an experimental study evaluating the effects of robot social presence on people's honesty. Participants completed a perceptual task, which is structured so as to allow them to earn more money by not complying with the experiment instructions. We compare three conditions between subjects: completing the task alone in a room; completing it with a non-monitoring human present; and completing it with a non-monitoring robot present. The robot is a new expressive social head capable of 4-DoF head movement and screen-based eye animation, specifically designed and built for this research. It was designed to convey social presence, but not monitoring. We find that people cheat in all three conditions, but cheat less, and to a similar extent, when either a human or a robot is in the room than when they are alone. We did not find differences in the perceived authority of the human and the robot, but did find that people felt significantly less guilty after cheating in the presence of a robot as compared to a human. This has implications for the use of robots in monitoring and supervising tasks in environments in which honesty is key.

Session F: Human-Robot Teams

Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks BIBAFull-Text 189-196
  Stefanos Nikolaidis; Ramya Ramakrishnan; Keren Gu; Julie Shah
We present a framework for automatically learning human user models from joint-action demonstrations that enables a robot to compute a robust policy for a collaborative task with a human. First, the demonstrated action sequences are clustered into different human types using an unsupervised learning algorithm. A reward function is then learned for each type through the employment of an inverse reinforcement learning algorithm. The learned model is then incorporated into a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer online the human type of a new user that was not included in the training set, and can compute a policy for the robot that will be aligned to the preference of this user. In a human subject experiment (n=30), participants agreed more strongly that the robot anticipated their actions when working with a robot incorporating the proposed framework (p<0.01), compared to manually annotating robot actions. In trials where participants faced difficulty annotating the robot actions to complete the task, the proposed framework significantly improved team efficiency (p<0.01). The robot incorporating the framework was also found to be more responsive to human actions compared to policies computed using a hand-coded reward function by a domain expert (p<0.01). These results indicate that learning human user models from joint-action demonstrations and encoding them in a MOMDP formalism can support effective teaming in human-robot collaborative tasks.
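The online type-inference step described in this abstract can be illustrated with a minimal Bayes-filter sketch. This is not the paper's method (which clusters demonstrations, learns rewards via inverse reinforcement learning, and plans in a MOMDP); the type names, actions, and probabilities below are invented purely to convey how a belief over a latent human type is updated from observed actions:

```python
# Illustrative sketch only: maintaining a belief over a latent "human type"
# from observed actions, in the spirit of treating type as a partially
# observable variable. Types, actions, and likelihoods are hypothetical.

def update_belief(belief, action, likelihoods):
    """One Bayes step: P(type | action) is proportional to P(action | type) * P(type)."""
    posterior = {t: belief[t] * likelihoods[t].get(action, 1e-6) for t in belief}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

# Two hypothetical human types with different action preferences.
likelihoods = {
    "efficient": {"reach_near": 0.8, "reach_far": 0.2},
    "cautious":  {"reach_near": 0.3, "reach_far": 0.7},
}

belief = {"efficient": 0.5, "cautious": 0.5}
for observed in ["reach_near", "reach_near"]:
    belief = update_belief(belief, observed, likelihoods)
# After two "reach_near" observations, belief shifts toward "efficient".
```

Once the belief concentrates on one type, a planner can select the policy aligned with that type's learned preferences.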
Bounds of Neglect Benevolence in Input Timing for Human Interaction with Robotic Swarms BIBAFull-Text 197-204
  Sasanka Nagavalli; Shih-Yi Chien; Michael Lewis; Nilanjan Chakraborty; Katia Sycara
Robotic swarms are distributed systems whose members interact via local control laws to achieve a variety of behaviors, such as flocking. In many practical applications, human operators may need to redirect a swarm from its current goal to a new goal due to dynamic changes in mission objectives. Two related but distinct capabilities are needed to supervise a robotic swarm: comprehension of the swarm's state and prediction of the effects of human inputs on the swarm's behavior. Both are very challenging. Prior work in the literature has shown that inserting the human input as soon as possible to divert the swarm from its original goal towards the new goal does not always result in optimal performance (measured by some criterion such as the total time required by the swarm to reach the second goal). This phenomenon has been called Neglect Benevolence, conveying the idea that in many cases it is preferable to neglect the swarm for some time before inserting human input. In this paper, we study how humans can develop an understanding of swarm dynamics so they can predict the effects of the timing of their input on the state and performance of the swarm. We developed the swarm configuration shape-changing Neglect Benevolence Task as a Human Swarm Interaction (HSI) reference task allowing comparison between human and optimal input timing in the control of swarms. Our results show that humans can learn to approximate optimal timing and that displays which make consensus variables perceptually accessible can enhance performance.
Interactive Hierarchical Task Learning from a Single Demonstration BIBAFull-Text 205-212
  Anahita Mohseni-Kabir; Charles Rich; Sonia Chernova; Candace L. Sidner; Daniel Miller
We have developed learning and interaction algorithms to support a human teaching hierarchical task models to a robot using a single demonstration in the context of a mixed-initiative interaction with bi-directional communication. In particular, we have identified and implemented two important heuristics for suggesting task groupings based on the physical structure of the manipulated artifact and on the data flow between tasks. We have evaluated our algorithms with users in a simulated environment and shown both that the overall approach is usable and that the grouping suggestions significantly improve the learning and interaction.
How Robot Verbal Feedback Can Improve Team Performance in Human-Robot Task Collaborations BIBAFull-Text 213-220
  Aaron St. Clair; Maja Mataric
We detail an approach to planning effective verbal feedback during pairwise human-robot task collaboration. The approach is motivated by social science literature as well as existing work in robotics and is applicable to a variety of task scenarios. It was implemented on a dynamic, synthetic task in an augmented reality environment, combining robot task control with speech production so that the robot can actively participate in the task and communicate with its teammate. A user study was conducted to experimentally validate the efficacy of the approach on a task in which a single user collaborates with an autonomous robot. The results demonstrate that the approach is capable of improving both objective measures of team performance and the user's subjective evaluation of both the task and the robot as a teammate.
OPTIMo: Online Probabilistic Trust Inference Model for Asymmetric Human-Robot Collaborations BIBAFull-Text 221-228
  Anqi Xu; Gregory Dudek
We present OPTIMo: an Online Probabilistic Trust Inference Model for quantifying the degree of trust that a human supervisor has in an autonomous robot "worker". Represented as a Dynamic Bayesian Network, OPTIMo infers beliefs over the human's moment-to-moment latent trust states, based on the history of observed interaction experiences. A separate model instance is trained on each user's experiences, leading to an interpretable and personalized characterization of that operator's behaviors and attitudes. Using datasets collected from an interaction study with a large group of roboticists, we empirically assess OPTIMo's performance under a broad range of configurations. These evaluation results highlight OPTIMo's advances in both prediction accuracy and responsiveness over several existing trust models. This accurate and near real-time human-robot trust measure makes possible the development of autonomous robots that can adapt their behaviors dynamically, to actively seek greater trust and greater efficiency within future human-robot collaborations.
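A much-simplified caricature of moment-to-moment trust tracking can be sketched as a bounded scalar update. OPTIMo itself infers a latent trust state in a Dynamic Bayesian Network trained per user; the update rule, weights, and outcomes below are invented for illustration only:

```python
# Illustrative only: a bounded scalar trust estimate updated from observed
# task outcomes. This toy rule merely conveys the flavor of tracking an
# operator's trust online; it is not the paper's probabilistic model.

def update_trust(trust, success, w_gain=0.10, w_loss=0.25):
    """Raise trust slightly on success, lower it more sharply on failure."""
    trust += w_gain if success else -w_loss
    return min(1.0, max(0.0, trust))  # keep the estimate in [0, 1]

outcomes = [True, True, False, True]  # hypothetical robot task outcomes
trust = 0.5
history = []
for ok in outcomes:
    trust = update_trust(trust, ok)
    history.append(round(trust, 2))
# history traces the trust estimate over the interaction.
```

A robot with access to such an estimate could, for example, fall back to more conservative behaviors whenever the estimate drops below a threshold.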
Using Robots to Moderate Team Conflict: The Case of Repairing Violations BIBAFull-Text 229-236
  Malte F. Jung; Nikolas Martelaro; Pamela J. Hinds
We explore whether robots can positively influence conflict dynamics by repairing interpersonal violations that occur during a team-based problem-solving task. In a 2 (negative trigger: task-directed vs. personal attack) x 2 (repair: yes vs. no) between-subjects experiment (N = 57 teams, 114 participants), we studied the effect of a robot intervention on affect, perceptions of conflict, perceptions of team members' contributions, and team performance during a problem-solving task. Specifically, the robot either intervened by repairing a task-directed or personal attack by a confederate or did not intervene. Contrary to our expectations, we found that the robot's repair interventions increased the groups' awareness of conflict after the occurrence of a personal attack thereby acting against the groups' tendency to suppress the conflict. These findings suggest that repair heightened awareness of a normative violation. Overall, our results provide support for the idea that robots can aid team functioning by regulating core team processes such as conflict.
Face the Music and Glance: How Nonverbal Behaviour Aids Human Robot Relationships Based in Music BIBAFull-Text 237-244
  Louis McCallum; Peter W. McOwan
It is our hypothesis that improvised musical interaction can provide the extended engagement that other approaches often fail to sustain during long-term Human Robot Interaction (HRI) trials. Our previous work found that simply framing sessions with our drumming robot Mortimer as social interactions increased both social presence and engagement, two factors we consider crucial to developing and maintaining a positive and meaningful relationship between human and robot. In this study we investigate the inclusion of additional social modalities, namely head pose and facial expression, as nonverbal behaviour has been shown to be an important conveyor of information in both social and musical contexts. Following a 6-week experimental study using automatic behavioural metrics, results demonstrate that participants exposed to nonverbal behaviours not only spent more time voluntarily with the robot, but actually increased the time they spent as the trial progressed. They also interrupted the robot less during social interactions and played for longer uninterrupted. Conversely, they looked at the robot less in both musical and social contexts. We take these results as support for open-ended musical activity providing a solid grounding for human-robot relationships, and for the improvement of this grounding through the inclusion of appropriate nonverbal behaviours.

Keynote Address

Chasing Our Science Fiction Future BIBAFull-Text 245
  Daniel H. Wilson
Engineers and researchers, particularly in the field of robotics and human-computer interaction, are often inspired by science fiction futures depicted in novels, on television, and in the movies. For example, Honda's Asimo humanoid robot is said to have been directly inspired by the Astro Boy manga series.
   In turn, public perception of science is also shaped by science fiction. For better or worse, broad technological expectations of the future (aesthetic and otherwise) are largely set by exposure to science fiction in popular culture. These depictions have a direct impact on attitudes toward new technology.
   We review some common tropes of science fiction (including the idea of the "singularity" and killer robots) and examine why certain archetypes might persist while others fall by the wayside. From the perspective of a scientist-turned-sci-fi-author, we discuss factors that go into the creation of science fiction and how these factors may or may not correspond to the needs and wants of the actual science community.
   Exposure to science fiction influences scientists and the general public, both to build and adopt new technologies. The inextricable link between science and science fiction helps to determine how and when those futures arrive.

Session G: Multi-modal Capabilities

Shaking Hands and Cooperation in Tele-present Human-Robot Negotiation BIBAFull-Text 247-254
  Chris Bevan; Danaë Stanton Fraser
A 3 x 2 between-subjects design examined the effect of shaking hands prior to engaging in a single-issue distributive negotiation, where one negotiator performed their role tele-presently through a 'Nao' humanoid robot. An additional third condition of handshaking with feedback examined the effect of augmenting the tele-present handshake with haptic and tactile feedback for the non tele-present and tele-present negotiators respectively.
   Results showed that the shaking of hands prior to negotiating resulted in increased cooperation between negotiators, reflected by economic outcomes that were more mutually beneficial. Despite the fact that the non tele-present negotiator could not see the real face of their counterpart, tele-presence did not affect the degree to which negotiators considered one another to be trustworthy, nor did it affect the degree to which negotiators self-reported as intentionally misleading one another. Negotiators in the more powerful role of buyer rated their impressions of their counterpart more positively, but only if they themselves conducted their negotiations tele-presently.
   Results are discussed in terms of their design implications for social tele-presence robotics.
Speech and Gesture Emphasis Effects for Robotic and Human Communicators: A Direct Comparison BIBAFull-Text 255-262
  Paul Bremner; Ute Leonards
Emphasis, by means of either pitch accents or beat gestures (rhythmic co-verbal gestures with no semantic meaning), has been shown to serve two main purposes in human communication: syntactic disambiguation and salience. To use beat gestures in this role, interlocutors must be able to integrate them with the speech they accompany. Whether such integration is possible when the multi-modal communication information is produced by a humanoid robot, and whether it is as efficient as for human communicators, are questions that need to be answered to further understanding of the efficacy of humanoid robots for naturalistic human-like communication.
   Here, we present an experiment which, using a fully within subjects design, shows that there is a marked difference in speech and gesture integration between human and robot communicators, being significantly less effective for the robot. In contrast to beat gestures, the effects of speech emphasis are the same whether that speech is played through a robot or as part of a video of a human. Thus, while integration of speech emphasis and verbal information do occur for robot communicators, integration of non-informative beat gestures and verbal information does not, despite comparable timing and motion profiles to human gestures.
Haptic Human-Robot Affective Interaction in a Handshaking Social Protocol BIBAFull-Text 263-270
  Mehdi Ammi; Virginie Demulier; Sylvain Caillou; Yoren Gaffary; Yacine Tsalamlal; Jean-Claude Martin; Adriana Tapus
This paper deals with haptic affective social interaction during a greeting handshake between a human and a humanoid robot. The goal of this work is to study how the haptic interaction conveys emotions and, more precisely, how it influences the perception of the dimensions of emotions expressed through the facial expressions of the robot. Moreover, we examine the benefits of multimodality (i.e., visuo-haptic) over monomodality (i.e., visual-only and haptic-only). The experimental results with the Meka robot show that multimodal conditions presenting high values of grasping force and joint stiffness are evaluated with higher values on the arousal and dominance dimensions than the visual-only condition. Furthermore, the results for the monomodal haptic condition showed that participants discriminated well between the dominance and arousal dimensions of haptic behaviours presenting low and high values of grasping force and joint stiffness.
Embodied Collaborative Referring Expression Generation in Situated Human-Robot Interaction BIBAFull-Text 271-278
  Rui Fang; Malcolm Doering; Joyce Y. Chai
To facilitate referential communication between humans and robots and mediate their differences in representing the shared environment, we are exploring embodied collaborative models for referring expression generation (REG). Instead of a single minimum description to describe a target object, episodes of expressions are generated based on human feedback during human-robot interaction. We particularly investigate the role of embodiment such as robot gesture behaviors (i.e., pointing to an object) and human's gaze feedback (i.e., looking at a particular object) in the collaborative process. This paper examines different strategies of incorporating embodiment and collaboration in REG and discusses their possibilities and challenges in enabling human-robot referential communication.
Bringing the Scene Back to the Tele-operator: Auditory Scene Manipulation for Tele-presence Systems BIBAFull-Text 279-286
  Chaoran Liu; Carlos T. Ishi; Hiroshi Ishiguro
In a tele-operated robot system, the reproduction of auditory scenes, conveying 3D spatial information of sound sources in the remote robot environment, is important for the transmission of remote presence to the tele-operator. We propose a tele-presence system able to reproduce and manipulate the auditory scenes of a remote robot environment, based on the spatial information of human voices around the robot, matched with the operator's head orientation. On the robot side, voice sources are localized and separated using multiple microphone arrays and human tracking technologies, while on the operator side, the operator's head movement is tracked and used to relocate the spatial positions of the separated sources. Interaction experiments with humans in the robot environment indicated that the proposed system had significantly higher accuracy rates for perceived direction of sounds, and higher subjective scores for sense of presence and listenability, compared to a baseline system using stereo binaural sounds obtained by two microphones located at the humanoid robot's ears. We also propose three different user interfaces for augmented auditory scene control. Evaluation results indicated higher subjective scores for sense of presence and usability in two of the interfaces (control of voice amplitudes based on virtual robot positioning, and amplification of voices in the frontal direction).
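The core relocation step, re-expressing source directions relative to the operator's head orientation, can be sketched as a simple azimuth rotation. This is a deliberate simplification (the actual system localizes voices with microphone arrays and renders them binaurally); the function below is a hypothetical illustration:

```python
# Illustrative sketch: re-expressing sound-source azimuths relative to the
# operator's current head yaw, wrapped into [-180, 180) degrees. The real
# system separates voices with microphone arrays and renders them spatially.

def relocate_sources(source_azimuths_deg, head_yaw_deg):
    """Rotate world-frame azimuths into the operator's head frame."""
    return [((az - head_yaw_deg + 180.0) % 360.0) - 180.0
            for az in source_azimuths_deg]

# A voice at 30 degrees appears dead ahead once the operator turns 30 degrees;
# a voice at -90 degrees then lies 120 degrees to the operator's left.
relative = relocate_sources([30.0, -90.0], 30.0)
```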
Environment Perception in the Presence of Kinesthetic or Tactile Guidance Virtual Fixtures BIBAFull-Text 287-294
  Samuel B. Schorr; Zhan Fan Quek; William R. Provancher; Allison M. Okamura
During multi-lateral collaborative teleoperation, where multiple human or autonomous agents share control of a teleoperation system, it is important to be able to convey individual user intent. One option for conveying the actions and intent of users or autonomous agents is to provide force guidance from one user to another. Under this paradigm, forces would be transmitted from one user to another in order to guide motions and actions. However, the use of force guidance to convey intent can mask environmental force feedback. In this paper we explore the possibility of using tactile feedback, in particular skin deformation feedback, to convey collaborative intent while preserving environmental force perception. An experiment was performed to test the ability of participants to use force guidance and skin deformation guidance to follow a path while interacting with a virtual environment. In addition, we tested the ability of participants to discriminate virtual environment stiffness when receiving either force guidance or skin deformation guidance. We found that skin deformation guidance resulted in a reduction of path-following accuracy, but increased the ability to discriminate environment stiffness when compared with force feedback guidance.

Session H: Human Behaviors, Activities, and Environments, Part 1

Robot-Centric Activity Prediction from First-Person Videos: What Will They Do to Me? BIBAFull-Text 295-302
  M. S. Ryoo; Thomas J. Fuchs; Lu Xia; J. K. Aggarwal; Larry Matthies
In this paper, we present a core technology to enable robot recognition of human activities during human-robot interactions. In particular, we propose a methodology for early recognition of activities from robot-centric videos (i.e., first-person videos) obtained from a robot's viewpoint during its interaction with humans. Early recognition, which is also known as activity prediction, is an ability to infer an ongoing activity at its early stage. We present an algorithm to recognize human activities targeting the camera from streaming videos, enabling the robot to predict intended activities of the interacting person as early as possible and take fast reactions to such activities (e.g., avoiding harmful events targeting itself before they actually occur). We introduce the novel concept of 'onset' that efficiently summarizes pre-activity observations, and design a recognition approach to consider event history in addition to visual features from first-person videos. We propose to represent an onset using a cascade histogram of time series gradients, and we describe a novel algorithmic setup to take advantage of such onset for early recognition of activities. The experimental results clearly illustrate that the proposed concept of onset enables better/earlier recognition of human activities from first-person videos collected with a robot.
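The "cascade histogram of time series gradients" idea can be caricatured with a toy 1-D sketch. The bin ranges, the two-level cascade over progressively shorter recent windows, and the example feature track below are all invented for illustration; the paper's descriptor operates on visual features extracted from first-person video:

```python
# Illustrative only: histogram the temporal gradients of a 1-D feature
# track, then concatenate histograms over progressively shorter recent
# windows so the most recent ("onset") observations are emphasized.

def gradient_histogram(series, n_bins=4, lo=-1.0, hi=1.0):
    """Histogram of frame-to-frame differences, clamped into [lo, hi)."""
    grads = [b - a for a, b in zip(series, series[1:])]
    hist = [0] * n_bins
    width = (hi - lo) / n_bins
    for g in grads:
        g = min(max(g, lo), hi - 1e-9)   # clamp into the histogram range
        hist[int((g - lo) / width)] += 1
    return hist

def cascade_histogram(series, levels=2):
    """Concatenate gradient histograms of the whole track, its recent half, ..."""
    out = []
    for lvl in range(levels):
        window = series[-(len(series) >> lvl):]  # assumes len(series) >> lvl >= 2
        out += gradient_histogram(window)
    return out

series = [0.0, 0.2, 0.5, 0.4, 0.1, 0.1]  # hypothetical feature track
desc = cascade_histogram(series)
```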
Mutual Modelling in Robotics: Inspirations for the Next Steps BIBAFull-Text 303-310
  Séverin Lemaignan; Pierre Dillenbourg
Mutual modelling, the reciprocal ability to establish a mental model of the other, plays a fundamental role in human interactions. This complex cognitive skill is however difficult to fully apprehend as it encompasses multiple neuronal, psychological and social mechanisms that are generally not easily turned into computational models suitable for robots. This article presents several perspectives on mutual modelling from a range of disciplines, and reflects on how these perspectives can be beneficial to the advancement of social cognition in robotics. We gather here both basic tools (concepts, formalisms, models) and exemplary experimental settings and methods that are of relevance to robotics. This contribution is expected to consolidate the corpus of knowledge readily available to human-robot interaction research, and to foster interest for this fundamentally cross-disciplinary field.
Learning to Interact with a Human Partner BIBAFull-Text 311-318
  Mayada Oudah; Vahan Babushkin; Tennom Chenlinangjia; Jacob W. Crandall
Despite the importance of mutual adaptation in human relationships, online learning is not yet used during most successful human-robot interactions. The lack of online learning in HRI to date can be attributed to at least two unsolved challenges: random exploration (a core component of most online-learning algorithms) and the slow convergence rates of previous online-learning algorithms. However, several recently developed online-learning algorithms have been reported to learn at much faster rates than before, which makes them candidates for use in human-robot interactions. In this paper, we explore the ability of these algorithms to learn to interact with people. In a user study, we show that these algorithms alone do not consistently learn to collaborate with human partners. Similarly, we observe that humans fail to consistently collaborate with each other in the absence of explicit communication. However, we demonstrate that one algorithm does learn to effectively collaborate with people when paired with a novel cheap-talk communication system. In addition to this technical achievement, this work highlights the need to address AI and HRI synergistically rather than independently.

Session I: Human Behaviors, Activities, and Environments, Part 2

Robots in the Home: Qualitative and Quantitative Insights into Kitchen Organization BIBAFull-Text 319-326
  Elizabeth Cha; Jodi Forlizzi; Siddhartha S. Srinivasa
In the future, we envision domestic robots playing a large role in our everyday lives. This requires robots that can anticipate our needs and preferences and adapt their behavior accordingly. Since current robotics research takes place primarily in laboratory settings, it often fails to take real users into account. In this work, we explore how organization occurs in the kitchen through a home study. Our analysis includes qualitative insights into robot behavior during kitchen organization, an open-source dataset of real-life kitchens, and a proof-of-concept application of this dataset to the problem of object return.
Are Robots Ready for Administering Health Status Surveys? First Results from an HRI Study with Subjects with Parkinson's Disease BIBAFull-Text 327-334
  Priscilla Briggs; Matthias Scheutz; Linda Tickle-Degnen
Facial masking is a symptom of Parkinson's disease (PD) in which humans lose the ability to quickly create refined facial expressions. This difficulty of people with PD can be mistaken for apathy or dishonesty by their caregivers and lead to a breakdown in social relationships. We envision future "robot mediators" that could ease tensions in these caregiver-client relationships by intervening when interactions go awry. However, it is currently unknown whether people with PD would even accept a robot as part of their healthcare processes. We thus conducted a first human-robot interaction study to assess the extent to which people with PD are willing to discuss their health status with a robot. We specifically compared a robot interviewer to a human interviewer in a within-subjects design that allowed us to control for individual differences of the subjects with PD caused by their individual disease progression. We found that participants overall reacted positively to the robot, even though they preferred interactions with the human interviewer. Importantly, the robot performed at a human level at maintaining the participants' dignity, which is critical for future social mediator robots for people with PD.
Measuring the Efficacy of Robots in Autism Therapy: How Informative are Standard HRI Metrics? BIBAFull-Text 335-342
  Momotaz Begum; Richard W. Serna; David Kontak; Jordan Allspaw; James Kuczynski; Holly A. Yanco; Jacob Suarez
A significant amount of robotics research over the past decade has shown that many children with autism spectrum disorders (ASD) have a strong interest in robots and robot toys, concluding that robots are potential tools for the therapy of individuals with ASD. However, clinicians, who have the authority to approve robots in ASD therapy, are not convinced about the potential of robots. One major reason is that the research in this domain does not have a strong focus on the efficacy of robots. Robots in ASD therapy are end-user oriented technologies, the success of which depends on their demonstrated efficacy in real settings. This paper focuses on measuring the efficacy of robots in ASD therapy and, based on the data from a feasibility study, shows that the human-robot interaction (HRI) metrics commonly used in this research domain might not be sufficient.
Interaction Expands Function: Social Shaping of the Therapeutic Robot PARO in a Nursing Home BIBAFull-Text 343-350
  Wan-Ling Chang; Selma Šabanovic
We use the "social shaping of technology and society" framework to qualitatively analyze data collected through observation of human-robot interaction (HRI) between social actors in a nursing home (staff, residents, visitors) and the socially assistive robot PARO. The study took place over the course of three months, during which PARO was placed in a publicly accessible space where participants could interact with it freely. Social shaping focuses attention on social factors that affect the use and interpretation of technology in particular contexts. We therefore aimed to understand how different social actors make sense of and use PARO in daily interaction. Our results show that participant gender, social mediation, and individual sense-making led to differential use and interpretation of the robot, which affected the success of human-robot interactions. We also found that exposure to others interacting with PARO affected the nursing staff's perceptions of robots and their potential usefulness in eldercare. This shows that social shaping theory provides a valuable perspective for understanding the implementation of robots in long-term HRI and can inform interaction design in this domain.