
Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction

Fullname: HRI'10 Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction
Editors: Pamela Hinds; Hiroshi Ishiguro; Takayuki Kanda; Peter Kahn
Location: Osaka, Japan
Dates: 2010-Mar-02 to 2010-Mar-05
Publisher: ACM
Standard No: ISBN: 1-4244-4893-X, 978-1-4244-4893-7; ACM DL: Table of Contents; hcibib: HRI10
Papers: 113
Pages: 384
Links: Conference Series Home Page
  1. HRI 2010 tutorials, workshops, & panels
  2. Paper session 1: robots in daily life
  3. Paper session 2: affect from appearance & motion
  4. Late-breaking abstracts session/poster session 1
  5. Late-breaking abstracts session/poster session 2
  6. Keynote
  7. Paper session 3: social & moral interaction with robots
  8. Paper session 4: teleoperation
  9. Paper session 5: natural language interaction
  10. Keynote
  11. Paper session 6: nonverbal interaction
  12. Paper session 7: social learning
  13. Video session
  14. Paper session 8: evaluation of interaction

HRI 2010 tutorials, workshops, & panels

Tutorial: cognitive analysis methods applied to human-robot interaction BIBAKFull-Text 1
  Julie A. Adams; Robin R. Murphy
This half-day tutorial will cover topics related to conducting cognitive task analysis and cognitive work analysis for the purpose of informing human-robot interaction design and development. The goal of the tutorial is to provide attendees with an overview and comparison of various cognitive task analysis and cognitive work analysis methods, an understanding of how to conduct these types of analyses and collect the necessary data, and real-world case studies of specific cognitive task and cognitive work analyses. The tutorial will include examples from actual analyses and data collection activities.
Keywords: human-robot interaction, task analysis
HRI 2010 workshop 1: what do collaborations with the arts have to say about HRI? BIBAKFull-Text 3
  William D. Smart; Annamaria Pileggi; Leila Takayama
Human-Robot Interaction researchers are beginning to reach out to fields not traditionally associated with interaction research, such as the performing arts, cartooning, and animation. These collaborations offer the potential for novel insights about how to get robots and people to interact more effectively, but they also involve a number of unique challenges. This full-day workshop will offer a venue for HRI researchers and their collaborators from these diverse fields to report on their work, to share insights about the collaboration process, and to help begin to define an exciting new area in HRI.
Keywords: arts, collaboration, human-robot interaction, performance
Interaction science perspective on HRI: designing robot morphology BIBAKFull-Text 5
  Angel P. del Pobil; S. Shyam Sundar
This workshop will address the impact of robot morphology on HRI from the perspective of Interaction Science, which encompasses theory and design of human interaction with technology. Anthropomorphic designs, which are common, have to be balanced with the "uncanny valley effect," since different morphologies suggest different affordances to users, triggering a variety of cognitive heuristics and thereby shaping their interactions with robots. We expect progress towards more human-acceptable interactions with robots by understanding the cognitive, behavioral, organizational, and contextual factors of morphology in HRI, as well as new meta-theories and design guidelines. We emphasize a highly multi-disciplinary approach, by involving participants from social sciences, engineering, and design.
   Topics of presentation include but are not limited to:
  • Engineering considerations in designing robot morphology
  • Empirical psychological considerations in designing robot morphology
  • Aesthetic parameters for transcending the uncanny valley effect (UVE) with static, dynamic and interactive robots
  • Physiological (fMRI) bases of UVE
  • Cognitive heuristics triggered by morphological cues on robot interfaces
  • Adaptation for multimodal robot interfaces
  • Cognitive Robotic Engine for Dependable HRI
  • Acceptance of Socially Interactive Robots
  • Evaluation frameworks for human-like robots
  • Robot appearances for social interactions among Autistic children
Keywords: human-robot interaction, morphology, robotics, uncanny valley
    HRI 2010 workshop 3: learning and adaptation of humans in HRI BIBAKFull-Text 7
      Hiroshi Ishiguro; Robin Murphy; Tatsuya Nomura
    As robots capable of communicating with humans begin to appear in daily life, we must consider how symbiosis between humans and robots can be achieved. Many existing studies have focused on how robots can learn from and adapt to humans. This full-day workshop focuses not only on this classical theme but also on how humans learn and adapt in environments where robots are acting. In particular, human learning from and adaptation to robots should be covered by interdisciplinary research fields including robotics, computer science, psychology, sociology, and pedagogy.
    Keywords: adaptation, human-robot interaction, learning
    HRI pioneers workshop 2010 BIBAKFull-Text 9
      Kate Tsui; Min Kyung Lee; Kristen Stubbs; Henriette Cramer; Laurel D. Riek; Ja-Young Sung; Osawa Hirotaka; Satoru Satake
    The field of human-robot interaction is new but growing rapidly. While there are now several established researchers in the field, many of the current human-robot interaction practitioners are students or recently graduated. This workshop, to be held in conjunction with the HRI 2010 Conference, aims to bring together graduate students to present their current research to an audience of their peers in a setting that is less formal and more interactive than the main HRI conference, to talk about the important issues in their field, and to hear about what their colleagues are doing. Participants are encouraged to actively engage in and form relationships with others by discussing fundamental topics in HRI and by engaging in hands-on group activities.
    Keywords: collaboration, human-robot interaction, multidisciplinary
    Panel 1: grand technical and social challenges in human-robot interaction BIBAFull-Text 11
      Nathan Freier; Minoru Asada; Pam Hinds; Gerhard Sagerer; Greg Trafton
    Robots are becoming part of people's everyday social lives -- and will increasingly become so. In future years, robots may become caretaking assistants for the elderly, or academic tutors for our children, or medical assistants, day care assistants, or psychological counselors. Robots may become our co-workers in factories and offices, or maids in our homes. They may become our friends. As we move to create our future with robots, hard problems in HRI exist, both technically and socially. The Fifth Annual Conference on HRI seeks to take up grand technical and social challenges in the field -- and speak to their integration. This panel brings together 4 leading experts in the field of HRI to speak on this topic.
    Panel 2: social responsibility in human-robot interaction BIBAFull-Text 11
      Nathan Freier; Aude Billard; Hiroshi Ishiguro; Illah Nourbakhsh
    At the 2008 ACM/IEEE Conference on Human-Robot Interaction, a provocative panel was held to discuss the complicated ethical issues that abound in the field of human-robot interaction. The panel members and the audience participation made it clear that the HRI community desires -- indeed, is in need of -- an ongoing discussion on the nature of social responsibility in the field of human-robot interaction. At the 2010 Conference, we will hold a panel on the issues of social responsibility in HRI, focusing on the unique features of robotic interaction that call for responsible action (e.g., value-specific domains such as autonomy, accountability, trust, and/or human dignity; and application areas such as military applications, domestic care, entertainment, and/or communication). As a young and rapidly growing field, we have a responsibility to conduct our research in such a way that it leads to human-robot interaction outcomes that promote rather than hinder the flourishing of humans across society. What does social responsibility within the HRI field look like, and how do we conduct our work while adhering to such an obligation? The panelists will be asked to address this and related questions as a means of continuing an ongoing conversation on social responsibility in human-robot interaction.
    Company talks BIBAFull-Text 13
      Y. Hosoda; N. Sumida; T. Mita; Y. Matsukawa; D. Yamamoto; N. Shibatani; L. Takayama
    The aim of the company talks is (1) to provide a good picture of forefront robot technologies related to human-robot interaction, and (2) to provide an opportunity to connect researchers with people from industry.
       Seven companies will each give an 8-minute talk to present their cutting-edge technologies.
       Instead of having Q&A time after each presentation, researchers and company presenters are encouraged to communicate with each other during the reception just after the company talks, where research posters will be presented.

    Paper session 1: robots in daily life

    MeBot: a robotic platform for socially embodied presence BIBAKFull-Text 15-22
      Sigurdur O. Adalgeirsson; Cynthia Breazeal
    Telepresence refers to a set of technologies that allow users to feel present at a distant location; telerobotics is a subfield of telepresence. This paper presents the design and evaluation of a telepresence robot which allows for social expression. Our hypothesis is that a telerobot that communicates more than simply audio or video but also expressive gestures, body pose and proxemics, will allow for a more engaging and enjoyable interaction. An iterative design process of the MeBot platform is described in detail, as well as the design of supporting systems and various control interfaces. We conducted a human subject study where the effects of expressivity were measured. Our results show that a socially expressive robot was found to be more engaging and likable than a static one. It was also found that expressiveness contributes to more psychological involvement and better cooperation.
    Keywords: embodied videoconferencing, human robot interaction, robot-mediated communication, telepresence
    Robots asking for directions: the willingness of passers-by to support robots BIBAKFull-Text 23-30
      Astrid Weiss; Judith Igelsböck; Manfred Tscheligi; Andrea Bauer; Kolja Kühnlenz; Dirk Wollherr; Martin Buss
    This paper reports on a human-robot interaction field trial conducted with the autonomous mobile robot ACE (Autonomous City Explorer) in a public place, where the ACE robot needs the support of human passers-by to find its way to a target location. Since the robot does not possess any prior map knowledge or GPS support, it has to acquire the missing information through interaction with humans. The robot thus has to initiate communication by asking for the way, and it retrieves information from passers-by who show the way by gestures (pointing) and by marking goal positions on a still image on the robot's touch screen. The aims of the field trial were threefold: (1) investigating the aptitude of the navigation architecture, (2) evaluating the intuitiveness of the interaction concept for the passers-by, and (3) assessing people's willingness to support the ACE robot in its task, i.e., assessing its social acceptability. The field trial demonstrates that the architecture enables successful autonomous path finding without any prior map knowledge, just by route directions given by passers-by. An additional street survey and observational data moreover attest to the intuitiveness of the interaction paradigm and the high acceptability of the ACE robot in the public place.
    Keywords: autonomous mobile robot, field trial, human-robot interaction, social acceptance
    A larger audience, please!: encouraging people to listen to a guide robot BIBAKFull-Text 31-38
      Masahiro Shiomi; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
    Tour guidance is a common task of social robots. Such a robot must be able to encourage the participation of people who are not directly interacting with it. We are particularly interested in encouraging people to overhear its interaction with others, since it has often been observed that even people who hesitate to interact with a robot are willing to observe its activity. To encourage such participation as bystanders, we developed a robot that walks backwards based on observations of human tour guides. Our developed system uses a robust human tracking system that enables a robot to guide people by walking forward/backward and allows us to scrutinize people's behavior after the experiment. We conducted a field experiment to compare the ratios of overhearing in "walking forward" and "walking backward." The experimental results revealed that in fact people do more often overhear the robot's conversation in the "walking backward" condition.
    Keywords: eliciting spontaneous participation, social human-robot interaction, tour-guide robot

    Paper session 2: affect from appearance & motion

    A study of a retro-projected robotic face and its effectiveness for gaze reading by humans BIBAKFull-Text 39-44
      Frédéric Delaunay; Joachim de Greeff; Tony Belpaeme
    Reading gaze direction is important in human-robot interaction as it supports, among other things, joint attention and non-linguistic interaction. While most previous work focuses on implementing gaze direction reading on the robot, little is known about how well the human partner in a human-robot interaction can read gaze direction from a robot. The purpose of this paper is twofold: (1) to introduce a new technology for implementing robotic faces using retro-projected animated faces and (2) to test how well this technology supports gaze reading by humans. We briefly present the robot design and discuss parameters influencing the ability to read gaze direction. We present an experiment assessing the user's ability to read gaze direction for a selection of different robotic face designs, using an actual human face as a baseline. Results indicate that it is hard to recreate human-human interaction performance: if the robot face is implemented as a semi-sphere, performance is worst, while robot faces with a human-like physiognomy and, perhaps surprisingly, video projected on a flat screen perform equally well, suggesting that these are good candidates for implementing joint attention in HRI.
    Keywords: eye gaze, human-robot interaction, joint attention, robotic face
    Judging a bot by its cover: an experiment on expectation setting for personal robots BIBAKFull-Text 45-52
      Steffi Paepcke; Leila Takayama
    Managing user expectations of personal robots becomes particularly challenging when the end-user just wants to know what the robot can do, and neither understands nor cares about its technical specifications. In describing what a robot can do to such an end-user, we explored the questions of (a) whether or not such users would respond to expectation setting about personal robots and, if so, (b) how such expectation setting would influence human-robot interactions and people's perceptions of the robots. Using a 2 (expectation setting: high vs. low) x 2 (robot type: Pleo vs. AIBO) between-participants experiment (N=24), we examined these questions. We found that people's initial beliefs about the robot's capabilities are indeed influenced by expectation setting tactics. Contrary to the hypotheses predicted by the Self-Fulfilling Prophecy and Confirmation Bias, we found that erring on the side of setting expectations lower rather than higher led to less disappointment and more positive appraisals of the robot's competence.
    Keywords: human-robot interaction, user expectations
    Perception of affect elicited by robot motion BIBAKFull-Text 53-60
      Martin Saerbeck; Christoph Bartneck
    Nonverbal behaviors serve as a rich source of information in human-human communication. In particular, motion cues can reveal details about a person's current physical and mental state. Research has shown that people interpret motion cues in these terms not only for humans, but also for animals and inanimate devices such as robots. In order to successfully integrate mobile robots into domestic environments, designers therefore have to take into account how the device will be perceived by the user.
       In this study we analyzed the relationship between the motion characteristics of a robot and perceived affect. Based on a literature study we selected two motion characteristics, namely acceleration and curvature, which appear to be the most influential for how motion is perceived. We systematically varied these motion parameters and recorded participants' interpretations in terms of affective content. Our results suggest a strong relation between motion parameters and the attribution of affect, while the type of embodiment had no effect. Furthermore, we found that the level of acceleration can be used to predict perceived arousal and that valence information is at least partly encoded in an interaction between acceleration and curvature. These findings are important for the design of behaviors for future autonomous household robots.
    Keywords: affective communication, expressive robotic behavior, nonverbal communication
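    The abstract above reports that acceleration predicts perceived arousal while valence is partly carried by an acceleration-curvature interaction. A toy sketch of that mapping as a linear model, with purely illustrative coefficients rather than the authors' fitted values:

      def predict_affect(acceleration: float, curvature: float) -> tuple[float, float]:
          """Return (arousal, valence) estimates in arbitrary units."""
          arousal = 0.8 * acceleration                 # arousal scales with acceleration
          valence = -0.3 * acceleration * curvature    # valence partly in the interaction term
          return arousal, valence

      for acc, curv in [(0.2, 0.1), (1.5, 0.1), (1.5, 0.9)]:
          print((acc, curv), predict_affect(acc, curv))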
    Cooperative gestures: effective signaling for humanoid robots BIBAKFull-Text 61-68
      Laurel D. Riek; Tal-Chen Rabinowitch; Paul Bremner; Anthony G. Pipe; Mike Fraser; Peter Robinson
    Cooperative gestures are a key aspect of human-human pro-social interaction. Thus, it is reasonable to expect that endowing humanoid robots with the ability to use such gestures when interacting with humans would be useful. However, while people are used to responding to such gestures expressed by other humans, it is unclear how they might react to a robot making them. To explore this topic, we conducted a within-subjects, video-based laboratory experiment, measuring time to cooperate with a humanoid robot making interactional gestures. We manipulated the gesture type (beckon, give, shake hands), the gesture style (smooth, abrupt), and the gesture orientation (front, side). We also employed two measures of individual differences: negative attitudes toward robots (NARS) and human gesture decoding ability (DANVA2-POS). Our results show that people cooperate with abrupt gestures more quickly than smooth ones and with front-oriented gestures more quickly than those made to the side, that people's speed at decoding robot gestures is correlated with their ability to decode human gestures, and that negative attitudes toward robots are strongly correlated with a decreased ability to decode human gestures.
    Keywords: affective robotics, cooperation, gestures, human-robot interaction

    Late-breaking abstracts session/poster session 1

    Towards robust human robot collaboration in industrial environments BIBAKFull-Text 71-72
      Batu Akan; Baran Çürüklü; Giacomo Spampinato; Lars Asplund
    In this paper we propose a system, driven through natural language, that allows operators to select and manipulate objects in the environment using an industrial robot. In order to hide the complexities of robot programming, we propose a natural language interface with which the user can control and jog the robot based on reference objects in the scene. We use semantic networks to relate the different types of objects in the scene.
    Keywords: human robot interaction, object selection, robot collaboration
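    A toy sketch of how a semantic network of scene relations could let a spoken reference pick out one object, in the spirit of the abstract above; the object names, relations, and query form are hypothetical, not the paper's actual representation:

      # Hypothetical scene relations, stored as (subject, relation, object) triples.
      scene = {
          ("pallet_1", "left_of", "box_1"),
          ("pallet_2", "right_of", "box_1"),
          ("box_1", "on", "workbench"),
      }

      def resolve(obj_type: str, relation: str, landmark: str) -> list[str]:
          """Return objects of the given type standing in `relation` to `landmark`."""
          return [s for (s, r, o) in scene
                  if r == relation and o == landmark and s.startswith(obj_type)]

      print(resolve("pallet", "left_of", "box_1"))   # -> ['pallet_1']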
    Similarities and differences in users' interaction with a humanoid and a pet robot BIBAKFull-Text 73-74
      Anja Austermann; Seiji Yamada; Kotaro Funakoshi; Mikio Nakano
    In this paper, we compare user behavior towards the humanoid robot ASIMO and the dog-shaped robot AIBO in a simple task, in which the user has to teach commands and give feedback to the robot.
    Keywords: human-robot interaction, humanoid, user study
    Create children, not robots! BIBAKFull-Text 75-76
      Christoph Bartneck
    This essay investigates the situation of young researchers in the HRI community. I argue that we need a more child-friendly environment to encourage young staff members to create children.
    Keywords: academia, birth rate, children, elderly
    Robots, children, and helping: do children help a robot in need? BIBAKFull-Text 77-78
      Tanya N. Beran; Alejandro Ramirez-Serrano
    This study examined the interactions between children and robots by observing whether children help a robot complete a task under five conditions, to determine which elicited the most help. Each condition had an experimental and a control group, with 20-32 children (even numbers of boys and girls) in each group. Visitors to a science centre located in a major Western Canadian city were invited to participate in an experiment set up at the centre. Their behaviors with a robot, a small five-degree-of-freedom robot arm, were observed. Results of chi-square analyses indicated that children are most likely to help a robot after being introduced to it, χ²(1) = 4.15, p = .04. This condition was the only one of the five tested that demonstrated a significant increase in children's helping behaviors. These results suggest that an adult's demonstrated positive introduction to a robot affects children's helping behaviors towards it.
    Keywords: children, prosocial behaviors, robotics
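    For readers unfamiliar with the statistic reported above, a minimal illustration of a chi-square test on a 2x2 helping table; the counts below are hypothetical and will not reproduce the reported value exactly:

      from scipy.stats import chi2_contingency

      observed = [[18, 12],   # introduced to robot: helped / did not help (hypothetical)
                  [9, 21]]    # control group:       helped / did not help (hypothetical)

      chi2, p, dof, _ = chi2_contingency(observed, correction=False)
      print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")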
    Learning context-based feature descriptors for object tracking BIBAKFull-Text 79-80
      Ali Borji; Simone Frintrop
    A major problem with previous object tracking approaches is adapting object representations to the scene context to account for changes in illumination, viewpoint, etc. To adapt our previous approach to deal with background changes, we first derive a set of clusters from a training sequence, along with the corresponding object representations for those clusters. Then, for each frame of a separate test sequence, its nearest background cluster is determined and the corresponding descriptor of that cluster is used for the object representation in that frame. Experiments show that the proposed approach tracks objects and persons in natural scenes more effectively.
    Keywords: clustering, descriptor adaptation, feature-based tracking, particle filter
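    A minimal sketch of the nearest-cluster descriptor lookup the abstract describes: cluster background features from a training sequence, then for each test frame use the object descriptor associated with the closest background cluster. The feature dimensionality, cluster count, and descriptors are illustrative assumptions:

      import numpy as np
      from scipy.cluster.vq import kmeans2

      rng = np.random.default_rng(0)
      train_backgrounds = rng.random((200, 16))       # stand-in background feature vectors
      centroids, _ = kmeans2(train_backgrounds, 4, seed=0)

      descriptors = {i: f"descriptor_{i}" for i in range(4)}   # one per cluster (hypothetical)

      def descriptor_for(frame_features: np.ndarray) -> str:
          """Pick the descriptor of the nearest background cluster for this frame."""
          nearest = int(np.argmin(np.linalg.norm(centroids - frame_features, axis=1)))
          return descriptors[nearest]

      print(descriptor_for(rng.random(16)))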
    RoboLeader: an agent for supervisory control of multiple robots BIBAKFull-Text 81-82
      Jessie Y. C. Chen; Michael J. Barnes; Zhihua Qu
    We developed an intelligent agent, RoboLeader, that can assist human operators in route planning for a team of ground robots. We compared the operators' target detection performance in 4-robot and 8-robot conditions. Results showed that the participants detected significantly fewer targets and had significantly worse situation awareness with 8 robots than in the 4-robot condition. Participants with higher spatial ability detected more targets than did those with lower spatial ability. Participants' self-assessed workload was affected by the number of robots under control, their gender, and their attentional control ability.
    Keywords: individual differences, intelligent agent, military, simulation, supervisory control
    Evaluation of on screen navigational methods for a touch screen device BIBAKFull-Text 83-84
      Andrew Ho Siyong; Chua Wei Liang Kenny
    This study involves the design and evaluation of control methods for a touch screen device to enable effective navigation of UGVs (Unmanned Ground Vehicles). Six different control methods were designed and evaluated. An experiment was conducted in which participants performed navigational tasks. The analysis considers the number of errors committed, task completion time, and user preference.
    Keywords: human factors, human robot interaction, interface design, touch screen
    Towards industrial robots with human-like moral responsibilities BIBAKFull-Text 85-86
      Baran Çürüklü; Gordana Dodig-Crnkovic; Batu Akan
    Robots do not have any capability for taking moral responsibility. At the same time, industrial robotics is entering a new era in which "intelligent" robots share a workbench with humans. Teams consisting of humans and industrial robots are no longer science fiction. The biggest worry in this scenario is the fear of humans losing control and robots running amok. We believe that the current way of implementing safety measures has shortcomings and cannot address the challenges of close collaboration between humans and robots. We propose that the "intelligent" industrial robots of the future should have moral responsibilities towards their human colleagues, and that the implementation of moral responsibility is radically different from standard safety measures.
    Keywords: ethics, human-robot interaction, industrial robots, moral responsibilities, safety
    Neel: an intelligent shopping guide -- using web data for rich interactions BIBAKFull-Text 87-88
      Chandan Datta; Ritukar Vijay
    The project Myneel and its portal myneel.com were together envisaged to provide us with crucial insights into the commercialization of service robots. In this paper we describe our system and propose an approach to developing an interactive conversational agent that can serve the shopping needs of visitors in a shopping mall.
    Keywords: interactive robots, mass collaboration, service robots, user interface
    An adaptive probabilistic graphical model for representing skills in pbd settings BIBAKFull-Text 89-90
      Haris Dindo; Guido Schillaci
    Understanding and efficiently representing skills is one of the most important problems in the general Programming by Demonstration (PbD) paradigm. We present Growing Hierarchical Dynamic Bayesian Networks (GHDBN), an adaptive variant of the general DBN model able to learn and represent complex skills. The structure of the model, in terms of the number of states and the possible transitions between them, need not be known a priori. Learning in the model is performed incrementally and in an unsupervised manner.
    Keywords: dynamic Bayesian networks, growing hierarchical dynamic Bayesian network, imitation learning, incremental learning, machine learning
    A midsummer night's dream: social proof in HRI BIBAKFull-Text 91-92
      Brittany A. Duncan; Robin R. Murphy; Dylan Shell; Amy G. Hopper
    The introduction of two types of unmanned aerial vehicles into a production of A Midsummer Night's Dream suggests that social proof informs untrained human groups. We describe the metaphors used in instructing actors, who were otherwise untrained and inexperienced with robots, in order to shape their expectations. Audience response to a robot crash depended on whether the audience had seen how the actors interacted with the robot "baby fairies." If they had not seen the actors treating a robot gently, an audience member would likely throw the robot expecting it to fly or handle it roughly. If they had seen the actors with the robots, the audience appeared to adopt the same gentle style and mechanisms for re-launching the micro-helicopter. The difference in audience behavior suggests that the principle of social proof will govern how untrained humans will react to robots.
    Keywords: human-robot interaction, performing arts, robotic theater, social interaction, social proof, uav-human interaction
    iForgot: a model of forgetting in robotic memories BIBAKFull-Text 93-94
      Cathal Gurrin; Hyowon Lee; Jer Hayes
    Much effort has been focused in recent years on developing more life-like robots. In this paper we propose a model of memory for robots based on human digital memories. Our model incorporates an element of forgetting to ensure that the robotic memory appears more human, and it can therefore address some of the challenges of human-robot interaction.
    Keywords: digital memories, forgetting, life experiences, robotics
    Exploring emotive actuation and its role in human-robot interaction BIBAKFull-Text 95-96
      John Harris; Ehud Sharlin
    In this paper, we present our research efforts in exploring the role of motion and actuation in human-robot interaction. We define Emotive Actuation, and briefly discuss its function and importance in social robotic interaction. We propose a suggested methodology for exploring Emotive Actuation in HRI, and present a robotic testbed we designed for this purpose. We conclude with informal results of a preliminary design critique we performed using our testbed.
    Keywords: emotive actuation, social human-robot interaction
    Multi-touch interaction for tasking robots BIBAKFull-Text 97-98
      Sean Timothy Hayes; Eli R. Hooten; Julie A. Adams
    The objective is to develop a mobile human-robot interface that is optimized for multi-touch input. Our existing interface was designed for mouse and keyboard input and was later adapted for voice and touch interaction. A new multi-touch interface permits multi-touch gestures, for example zooming and panning a map, as well as robot task-specific touch interactions. An initial user evaluation found that the multi-touch interface is preferred and yields superior performance.
    Keywords: human-robot interaction, multi-touch interaction
    Active navigation landmarks for a service robot in a home environment BIBAKFull-Text 99-100
      Kentaro Ishii; Akihiko Ishida; Greg Saul; Masahiko Inami; Takeo Igarashi
    This paper proposes a physical user interface for a user to teach a robot to navigate a home environment. The user places small devices containing infrared based communication functionality as landmarks in the environment. The robot follows these landmarks to navigate to a goal landmark. Active landmarks communicate with each other to map their spatial relationships. Our method allows the user to start using the system immediately after placing the landmarks without installing any global position sensing system or prior mapping by the robot.
    Keywords: active landmarks, end-user interface, infrared communication, navigation path, robot navigation
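    A minimal sketch of navigation over the landmark graph described above: the active landmarks report which landmarks they can reach, and the robot follows a breadth-first path from its current landmark to the goal landmark. The landmark names and adjacency are hypothetical:

      from collections import deque

      adjacency = {                       # spatial relations reported by the landmarks
          "door": ["hallway"],
          "hallway": ["door", "kitchen", "living_room"],
          "kitchen": ["hallway"],
          "living_room": ["hallway", "charger"],
          "charger": ["living_room"],
      }

      def landmark_path(start: str, goal: str) -> list[str]:
          """Breadth-first search over the landmark graph."""
          queue, seen = deque([[start]]), {start}
          while queue:
              path = queue.popleft()
              if path[-1] == goal:
                  return path
              for nxt in adjacency[path[-1]]:
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(path + [nxt])
          return []

      print(landmark_path("door", "charger"))   # ['door', 'hallway', 'living_room', 'charger']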
    Toward coactivity BIBAKFull-Text 101-102
      Matthew Johnson; Jeffrey M. Bradshaw; Paul J. Feltovich; Catholijn Jonker; Maarten Sierhuis; Birna van Riemsdijk
    This paper introduces the concept of Coactivity as a new focal point for Human-Robot Interaction to address the more sophisticated roles of partner or teammate envisioned for future human-robot systems. We propose that most approaches to date have focused on autonomy and suggest that autonomy is the wrong focal point. The envisioned roles, if properly performed, have a high level of interdependence that cannot be addressed solely by autonomy and necessitate a focus on the coactivity.
    Keywords: autonomy, coactive, coordination, interdependence
    A code of ethics for robotics engineers BIBAKFull-Text 103-104
      Brandon Ingram; Daniel Jones; Andrew Lewis; Matthew Richards; Charles Rich; Lance Schachterle
    The future of robotics engineering is in the hands of engineers, and it must be guided to ensure the safety of all people and the reputation of the field. We are in the process of developing a code of ethics for professional robotics engineers to serve as a guideline for the ethical development of the field. This document contains the current version of this code and describes the methodology used in developing it.
    Keywords: code, ethics, robotics engineering
    Sociable dining table: the effectiveness of a "konkon" interface for reciprocal adaptation BIBAKFull-Text 105-106
      Yuki Kado; Takanori Kamoda; Yuta Yoshiike; P. Ravindra S. De Silva; Michio Okada
    We developed a creature-based sociable dining table that can communicate through a knocking sound, which in Japanese is pronounced "KonKon." Our main focus was to create a minimal number of cues for proto-communication by establishing social interactions between a creature and a human. In particular, humans used the "KonKon" interface to communicate with a creature that demonstrates the social behaviors necessary to adapt to a person's intentions. The creature used a mutual adaptation model to achieve a more ideal adaptation during the interactions. Based on the experimental results, we discuss the concept of the creature and show the effectiveness of the "KonKon" communication protocol for mutual adaptation.
    Keywords: mutual adaptation, social cues, social dining table
    Effects of intergroup relations on people's acceptance of robots BIBAKFull-Text 107-108
      Yunkyung Kim; Sonya S. Kwak; Myung-suk Kim
    The objective of this study is to examine the effect of intergroup relations on robots through comparison with other objects. In an experiment, participants watched eight stimuli drawn from four types of objects (people vs. robots vs. animals vs. products) crossed with two types of intergroup relations (in-group vs. out-group) and rated each stimulus in terms of familiarity, reliability, and preference. Regarding familiarity and reliability, the effect of intergroup relations on robots was greater than that on animals or products, but smaller than that on people. The degree of the effect on reliability was larger than that on familiarity for all types of objects. In the case of preference, the effects of intergroup relations on people and robots and on animals and products were similar, and the effect on people and robots was greater than that on animals and products.
    Keywords: intergroup relations
    Choosing answerers by observing gaze responses for museum guide robots BIBAKFull-Text 109-110
      Yoshinori Kobayashi; Takashi Shibata; Yosuke Hoshi; Yoshinori Kuno; Mai Okada; Keiichi Yamazaki
    This paper presents a method for selecting an answerer from the audience for a museum guide robot. We performed preliminary experiments in which a robot distributed its gaze towards visitors to select an answerer, and we analyzed the visitors' responses. From these experiments, we found that visitors who are asked questions by the robot feel embarrassed when they have no prior knowledge about the question, and that a visitor's gaze during the question plays an important role in avoiding being asked. Based on these findings, we developed functions for a guide robot to select an answerer by observing the behaviors of multiple visitors. The visitors' head motions are tracked and recognized using an omni-directional camera and a laser range sensor. The robot detects the visitors' positive and negative responses by observing their head motions while asking questions. We confirmed the effectiveness of our method by experiments.
    Keywords: conversation analysis, gaze tracking, guide robot, non-verbal action, sensor fusion
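    A hedged sketch of the selection rule suggested above: classify each visitor's head motion during the question as a positive or negative response, then pick an answerer among the positives. The features (nod count, gaze aversion) and thresholds are illustrative, not the paper's actual recognizers:

      def classify_response(nod_count: int, gaze_averted: bool) -> str:
          if gaze_averted:
              return "negative"          # looking away to avoid being asked
          return "positive" if nod_count >= 1 else "neutral"

      def choose_answerer(visitors: dict) -> str | None:
          """visitors maps a name to (nod_count, gaze_averted) observations."""
          positives = [v for v, (nods, averted) in visitors.items()
                       if classify_response(nods, averted) == "positive"]
          return positives[0] if positives else None

      print(choose_answerer({"visitor_a": (0, True), "visitor_b": (2, False)}))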
    From cartoons to robots: facial regions as cues to recognize emotions BIBAKFull-Text 111-112
      Tomoko Koda; Yuka Nakagawa; Kyota Tabuchi; Zsofia Ruttkay
    This paper introduces preliminary results of a cross-cultural study on the facial regions used as cues to recognize virtual agents' facial expressions. We believe that providing research results on the perception of cartoonish virtual agents' facial expressions to the HRI research community is meaningful in order to minimize the effort needed to develop robots' facial expressions. The results imply that 1) the mouth region is more effective than the eye region in conveying the emotions of facial expressions, and 2) there are cultural differences between Hungary and Japan in the use of facial regions as cues to recognize cartoonish facial expressions.
    Keywords: cross-culture, facial expression
    Human training using HRI approach based on fuzzy ARTMap networks BIBAKFull-Text 113-114
      Felipe Machorro-Fernández; Vicente Parra-Vega; Ismael López-Juárez
    Based on recent studies establishing that skill acquisition requires not just the specification of motor skills, learning, and skill application but also the intervention of a human expert in certain phases only, we present an approach that encodes the human expert's demonstration into a teacher class based on a Fuzzy ARTMap network. The human novice trainee then produces approximate knowledge, which is in turn coded into a student class. The evaluation function introduces a class metric that simultaneously allows the student to refine motor commands, increasing the trainee's pace, while modifying the desired trajectory of the robot accordingly. Preliminary experiments indicate a high success rate in contact robotic tasks in a deterministic setting.
    Keywords: machine learning human robot interaction
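    A hedged sketch of the Fuzzy ART category test underlying Fuzzy ARTMAP, as one way to read "teacher class" above: the expert demonstration is summarized as a weight vector, and a trainee's normalized motion sample is accepted when it passes the standard choice and vigilance tests. The vectors and parameters are illustrative:

      import numpy as np

      def matches_teacher(sample: np.ndarray, teacher_w: np.ndarray,
                          alpha: float = 0.001, rho: float = 0.8) -> bool:
          """Fuzzy ART choice and vigilance tests against the teacher category."""
          overlap = np.minimum(sample, teacher_w).sum()   # fuzzy AND, then L1 norm
          choice = overlap / (alpha + teacher_w.sum())
          match = overlap / sample.sum()                  # vigilance criterion
          return match >= rho and choice > 0

      teacher = np.array([0.9, 0.1, 0.5, 0.5])            # from expert demos (hypothetical)
      print(matches_teacher(np.array([0.85, 0.15, 0.5, 0.5]), teacher))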
    Robot-assisted upper-limb rehabilitation platform BIBAKFull-Text 115-116
      Matteo Malosio; Nicola Pedrocchi; Lorenzo Molinari Tosatti
    This work presents a robotic platform for upper-limb rehabilitation. It integrates devices for human multi-sensorial feedback to enable engaging and immersive therapies. Its modular software design and architecture allow the implementation of advanced control algorithms for effective and customized rehabilitation. A flexible communication infrastructure allows straightforward device integration and system expandability.
    Keywords: human-robot interaction, open controller, rehabilitation robotics, virtual haptics
    The development of small size humanoid robot which is easy to use BIBAKFull-Text 117-118
      Hirofumi Niimi; Minoru Koike; Seiichi Takeuchi; Noriyoshi Douhara
    We designed humanoid robots based on the human skeleton. Designing them with human proportions made it easy to create motions. Motions such as crawling on the hands and knees, rolling over, and crawling on the back were produced using the humanoid robot SANDY-3.
    Keywords: crawl, humanoid robot, sandy
    Application of unexpectedness to the behavioral design of an entertainment robot BIBAKFull-Text 119-120
      Hyojung Oh; Sonya S. Kwak; Myung-Suk Kim
    The objectives of this study are to apply unexpectedness to the behavioral design of an entertainment robot and to evaluate the impression and satisfaction provided by the robot. Participants (N=44) observed four robot behaviors, distinguished by type of expectancy disconfirmation (positive disconfirmation, negative disconfirmation, simple confirmation, unexpected disconfirmation), and evaluated each behavior in terms of novelty, enjoyment, satisfaction, performance, and reliability. Participants perceived the unexpected disconfirmation behavior to be more novel and enjoyable, such that they preferred this type over the other types. On the other hand, they evaluated the positive disconfirmation behavior as more intelligent and reliable than the other types. These findings provide an essential basis for designing the behavior of an entertainment robot with the use of unexpectedness.
    Keywords: behavioral design, entertainment robot, expectancy disconfirmation, expectation incongruity, unexpectedness
    Design guidelines for industrial power assist robots for lifting heavy objects based on human's weight perception for better HRI BIBAKFull-Text 121-122
      S. M. Mizanoor Rahman; Ryojun Ikeura; Masaya Nobe; Hideki Sawai
    We hypothesized that weight perception (WP) due to inertia might be different from WP due to gravity when lifting an object with a power assist robot (PAR). Objects were lifted with a PAR independently under three different lifting schemes: unimanual, bimanual, and cooperative lifting. Psychophysical relationships between actual and power-assisted weights (PAWs), as well as the excess in load forces (LFs), were then determined for each scheme separately. A novel control strategy was introduced to reduce the excess in LFs for each scheme. Finally, we propose using the findings as design guidelines for PARs for lifting heavy objects in industry, which would improve HRI in terms of the human, the robot, and the system.
    Keywords: design guidelines, feedback position control, lifting objects, power assist robot, psychophysics, weight perception
    Psychological intimacy with robots?: using interaction patterns to uncover depth of relation BIBAKFull-Text 123-124
      Peter H. Kahn, Jr.; Jolina H. Ruckert; Takayuki Kanda; Hiroshi Ishiguro; Aimee Reichert; Heather Gary; Solace Shen
    This conceptual paper broaches possibilities and limits of establishing psychological intimacy in HRI.
    Keywords: authenticity, design methodology, human-robot interaction, interaction patterns, intimacy, social and moral development
    Exploring interruption in HRI using wizard of oz BIBAKFull-Text 125-126
      Paul Saulnier; Ehud Sharlin; Saul Greenberg
    We are interested in exploring how robots controlled using Wizard of Oz (WoO) should interrupt humans in various social settings. While there is considerable work on interruption and interruptibility in HCI, little has been done to explore how these concepts map to robotic interaction. As part of our efforts to investigate interruption and interruptibility in HRI, we used a WoO-based methodology to study robot behaviours in a simple interruption scenario. In this report we contribute a design critique that discusses this methodology and common concerns that could generalize to other social HRI experiments, as well as reflections on our future HRI interruption research.
    Keywords: component, human-robot interaction, interruption, methodology, robot behaviours, social
    Survivor buddy and SciGirls: affect, outreach, and questions BIBAKFull-Text 127-128
      Robin Murphy; Vasant Srinivasan; Negar Rashidi; Brittany Duncan; Aaron Rice; Zachary Henkel; Marco Garza; Clifford Nass; Victoria Groom; Takis Zourntos; Roozbeh Daneshvar; Sharath Prasad
    This paper describes the Survivor Buddy human-robot interaction project and how it was used by four middle-school girls to illustrate the scientific process for an episode of "SciGirls", a Public Broadcasting Service science reality show. Survivor Buddy is a four-degree-of-freedom robot head whose face is a MIMO 740 multi-media touch screen monitor. It is being used to explore consistency and trust in the use of robots as social mediums, where robots serve as intermediaries between dependents (e.g., trapped survivors) and the outside world (doctors, rescuers, family members). While the SciGirls experimentation was neither statistically significant nor rigorously controlled, the experience makes three contributions: it introduces the Survivor Buddy project and the social medium role, it illustrates that human-robot interaction is an appealing way to make robotics more accessible to the general public, and it raises interesting questions about the existence of a minimum set of degrees of freedom for sufficient expressiveness, the relative importance of voice versus non-verbal affect, and the range and intensity of robot motions.
    Keywords: assistive robots, gaze and gestures, human-robot interaction, interaction styles, robots, user interfaces
    Considering the bystander's perspective for indirect human-robot interaction BIBKFull-Text 129-130
      Katherine M. Tsui; Munjal Desai; Holly A. Yanco
    Keywords: experiment, social etiquette, trust
    Interactive story creation for knowledge acquisition BIBAKFull-Text 131-132
      Shohei Yoshioka; Takuya Maekawa; Yasushi Hirano; Shoji Kajita; Kenji Mase
    This paper proposes an agent system that semi-automatically creates stories about daily events detected by ubiquitous sensors. These stories capture knowledge of inhabitants' daily lives and may be useful for a human-friendly agent. Story flows in daily life are extracted from interaction between the sensor-room inhabitants and a symbiotic agent: the agent asks the inhabitants about causal relationships among daily events in order to create the story flow. Experimental results show that the created stories let inhabitants perceive the agent's intelligence.
    Keywords: humanoid robot, story creation, ubiquitous environment
    Showing robots how to follow people using a broomstick interface BIBAKFull-Text 133-134
      James E. Young; Kentaro Ishii; Takeo Igarashi; Ehud Sharlin
    Robots are poised to enter our everyday environments such as our homes and offices, contexts that present unique questions such as the style of the robot's actions. Style-oriented characteristics are difficult to define programmatically, a problem that is particularly prominent for a robot's interactive behaviors, those that must react accordingly to dynamic actions of people. In this paper, we present a technique for programming the style of how a robot should follow a person by demonstration, such that non-technical designers and users can directly create the style of following using their existing skill sets. We envision that simple physical interfaces like ours can be used by non-technical people to design the style of a wide range of robotic behaviors.
    Keywords: human-robot interaction, programming by demonstration
    Cues for sociable PC: coordinate and synchronize its cues based on user attention and activities on display BIBAKFull-Text 135-136
      Yuta Yoshiike; P. Ravindra S. De Silva; Michio Okada
    A sociable PC (SPC) is capable of engaging and interacting through social cues while users work with it. The SPC is a kind of artifact capable of coordinating and synchronizing its behaviors based on user attention and the information on the display. In particular, the SPC can exhibit behaviors that induce trust through social rapport while responding to the user's behaviors and activities on the PC. We used the concept of a minimalist design mechanism to create the SPC. The SPC's appearance is much like soft "Tofu," so the user can touch and sense it. The SPC can also provide feedback to the user through attractive social cues such as shaking its body, displaying attractive motions, and establishing joint attention with the user.
    Keywords: sociable pc, social cues, social rapport

    Late-breaking abstracts session/poster session 2

    Do children perceive robots as alive?: children's attributions of human characteristics BIBAKFull-Text 137-138
      Tanya N. Beran; Alejandro Ramirez-Serrano
    Centuries ago, the existence of life was explained by the presence of a soul [1]. Known as animism, this term was re-defined in the 1970s by Piaget as young children's belief that inanimate objects are capable of actions and have lifelike qualities. With the development of robots in the 21st century, researchers have yet to examine whether animism is apparent in children's impressions of robots. The purpose of this study was to examine children's perspectives on the cognitive, affective, and behavioral attributes of a robot. Visitors to a science centre located in a major Western Canadian city were invited to participate in an experiment set up at the centre. A total of 198 children ages 5 to 16 years (M = 8.18 years), with approximately even numbers of boys and girls, participated. Children were interviewed after observing a robot, a small five-degree-of-freedom robot arm, perform a block-stacking task. Answers to the six questions about the robot were scored according to whether they referenced humanistic qualities. Frequency and content analysis results suggest that a significant proportion of children ascribe cognitive, affective, and behavioral characteristics to robots.
    Keywords: animism, children, robotics
    Effects of operator spatial ability on uav-guided ground navigation BIBAKFull-Text 139-140
      Jessie Y. C. Chen
    We simulated a military reconnaissance environment and examined the performance of ground robotics operators who were instructed to use streaming video from an unmanned aerial vehicle (UAV) to navigate their ground robots to the locations of targets. We evaluated participants' spatial ability and examined whether it affected their performance or perceived workload. Results showed that participants with higher spatial ability performed significantly better on target mapping and reported less workload than those with lower spatial ability. Participants with a poor sense of direction performed significantly worse on the target search task in the night condition compared with those with a better sense of direction.
    Keywords: human-robot interaction, individual differences, military, navigation, reconnaissance, spatial ability, uav
    Effects of (in)accurate empathy and situational valence on attitudes towards robots BIBAKFull-Text 141-142
      Henriette Cramer; Jorrit Goddijn; Bob Wielinga; Vanessa Evers
    Empathy has great potential in human-robot interaction. However, the challenging nature of assessing the user's emotional state points to the importance of also understanding the effects of empathic behaviours incongruent with users' affective experience. A 3x2 between-subject video-based survey experiment (N=133) was conducted with empathic robot behaviour (empathically accurate, neutral, inaccurate) and valence of the situation (positive, negative) as dimensions. Trust decreased when empathic responses were incongruent with the affective state of the user. However, in the negative valence condition, reported perceived empathic abilities were greater when the robot responded as if the situation were positive.
    Keywords: emotional valence, empathy, human-robot interaction, social robots
    Using proxemics to evaluate human-robot interaction BIBAKFull-Text 143-144
      David Feil-Seifer; Maja Matarić
    Recent feasibility studies involving children with autism spectrum disorders (ASD) interacting with socially assistive robots have shown that children can have both positive and negative reactions to robots. These reactions can be readily identified by a human observer watching videos from an overhead camera. Our goal is to automate the process of such behavior analysis. This paper shows how a heuristic classifier can be used to discriminate between children that are attempting to interact socially with a robot and children that are not.
    Keywords: autism spectrum disorders, human-robot interaction
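    A minimal sketch of a heuristic classifier of the kind described above: from overhead tracking, label a child as attempting social interaction when they stay close to the robot and oriented toward it. The thresholds are illustrative assumptions, not the authors' tuned values:

      def is_interacting(dist_m: float, facing_deg: float,
                         max_dist: float = 1.5, max_angle: float = 45.0) -> bool:
          """dist_m: child-robot distance; facing_deg: angle between the child's
          heading and the child-to-robot direction (0 = facing the robot)."""
          return dist_m <= max_dist and abs(facing_deg) <= max_angle

      track = [(0.9, 10.0), (1.2, 30.0), (3.0, 170.0)]   # (distance, facing) samples
      print([is_interacting(d, a) for d, a in track])    # [True, True, False]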
    Is a telepresence-system an effective alternative to manned missions? BIBAKFull-Text 145-146
      Lena Geiger; Michael Popp; Berthold Färber; Jordi Artigas; Philipp Kremer
    Telepresence systems have the potential to take on an important role in on-orbit servicing scenarios. In comparison to manned missions, these systems offer a safer way to operate in outer space. One of the main goals of telepresence research is to learn whether immersive telepresence systems can achieve the efficiency of astronauts in typical mounting tasks, considering that astronauts' mobility is restricted by a range of factors including microgravity and space suits. In order to determine whether a telepresence system is more efficient at performing tasks than suited astronauts, an experimental study comparing both scenarios was conducted.
    Keywords: completion time, on-orbit servicing, simulated extra-vehicular activity, telepresence
    Specialization, fan-out, and multi-human/multi-robot supervisory control BIBAKFull-Text 147-148
      Jonathan M. Whetten; Michael A. Goodrich
    This paper explores supervisory control of multiple, heterogeneous, independent robots by operator teams. Experimental evidence is presented which suggests that two cooperating operators may have free capacity that can be used to improve primary task performance without increasing average fan-out.
    Keywords: human-robot interaction, multi-user interface
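    For the fan-out terminology above, one common formulation from the HRI literature estimates how many robots a single operator can supervise from neglect time (how long a robot remains effective unattended) and interaction time (how long servicing it takes); the numbers below are purely illustrative:

      def fan_out(neglect_time_s: float, interaction_time_s: float) -> float:
          """Fan-out estimate FO = NT / IT + 1 (one common formulation)."""
          return neglect_time_s / interaction_time_s + 1

      print(fan_out(120, 30))   # 5.0 robots per operator, illustratively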
    Practical evaluation of robots for elderly in Denmark: an overview BIBAKFull-Text 149-150
      Soren Tranberg Hansen; Hans Jorgen Andersen; Thomas Bak
    Robots for the elderly have drawn a great deal of attention; the topic is controversial and is being pushed forward by the dramatic increase in the elderly population expected in most western countries. Within the field of HRI, much research has been conducted on robots interacting with the elderly, and a number of commercial products have been introduced to the market. Since 2006, a number of projects have been launched in Denmark to evaluate robot technology in elder-care practice. This paper gives a brief overview of selected projects and outlines their characteristics and results. Finally, it discusses how HRI can benefit from them.
    Keywords: commercial robots, elderly, evaluation
    Human performance moderator functions for human-robot peer-based teams BIBAKFull-Text 151-152
      Caroline E. Harriott; Julie A. Adams
    Interaction between humans and robots in peer-based teams can be dramatically affected by human performance. Our research is focused on determining if existing human performance moderator functions apply to peer-based human-robot interaction and if not, how such functions must be modified. Our initial work focuses on modeling workload. Validation of the models will require human subject evaluations. Future work will incorporate larger numbers of performance moderator functions and will apply the results to distributing tasks to team members.
    Keywords: human performance modeling, human performance moderator functions, human-robot teams
    Photograph-based interaction for teaching object delivery tasks to robots BIBAKFull-Text 153-154
      Sunao Hashimoto; Andrei Ostanin; Masahiko Inami; Takeo Igarashi
    Personal photographs are important media for communication in our daily lives. People take photos to remember things about themselves and show them to others to share the experience. We expect that a photograph can be a useful tool for teaching a task to a robot, and we propose a novel photograph-based human-robot interaction. The user takes a photo to record a target real-world situation involved in a task and later shows it to the system to make it physically reproduce the task. We developed a prototype system in which the user takes a photo of a dish arrangement on a table and shows it to the system later to have a small robot deliver and arrange the dishes in the same way.
    Keywords: delivery robots, object arrangement, photograph-based interaction
    Human-robot collaboration for a shared mission BIBAKFull-Text 155-156
      Abir-Beatrice Karami; Laurent Jeanpierre; Abdel-Illah Mouaddib
    We are interested in collaboration domains in which a robot and a human partner share a common mission without explicit communication about their plans. The decision process of the robot agent should consider the presence of its human partner, and the robot's planning should be flexible with respect to the human's comfort and to all possible changes in the shared environment. To solve the problem of human-robot collaboration with no communication, we present a model that gives the robot the ability to build a belief over human intentions in order to predict the human's goals; this model relies mainly on observing the human's actions. We integrate this prediction into a Partially Observable Markov Decision Process (POMDP) model to achieve the most appropriate and flexible decisions for the robot.
    Keywords: human-robot collaboration, pomdp
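    A hedged sketch of the belief component the abstract describes: a Bayes update over a discrete set of candidate human goals, driven by observed human actions. The goals, actions, and likelihoods are hypothetical, and the surrounding POMDP policy is omitted:

      goals = ["fetch_tool", "clean_table"]
      belief = {g: 0.5 for g in goals}        # uniform prior over the partner's goal

      likelihood = {                          # P(observed action | goal), illustrative
          ("moves_to_toolbox", "fetch_tool"): 0.8,
          ("moves_to_toolbox", "clean_table"): 0.1,
          ("picks_up_cloth", "fetch_tool"): 0.1,
          ("picks_up_cloth", "clean_table"): 0.9,
      }

      def update(belief: dict, action: str) -> dict:
          """One Bayes step: reweight each goal by the action likelihood."""
          posterior = {g: likelihood[(action, g)] * p for g, p in belief.items()}
          z = sum(posterior.values())
          return {g: p / z for g, p in posterior.items()}

      belief = update(belief, "moves_to_toolbox")
      print(belief)   # belief now favors 'fetch_tool'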
    Humanoid robot vs. projector robot: exploring an indirect approach to human robot interaction BIBAKFull-Text 157-158
      Eun Kwon; Gerard Jounghyun Kim
    In this paper, we compare the efficiency in information transfer and user acceptance between the traditional humanoid robot and the projector robot.
    Keywords: humanoid, projector, robot, user acceptance
    Dona: urban donation motivating robot BIBAKFull-Text 159-160
      Min Su Kim; Byung Keun Cha; Dong Min Park; Sae Mee Lee; Sonya Kwak; Min Kyung Lee
    The rate of donations made by individuals is relatively low in Korea compared to other developed countries. To address this problem, we propose DONA, an urban donation-motivating robot prototype. The robot roams around a public space and solicits donations from passers-by by engaging them through a pet-like interaction. In this paper, we present the prototype of the robot and our design process.
    Keywords: charity, donation, emotion, human-robot interaction, interaction design, ludic experience, pet-like interaction
    Design targeting voice interface robot capable of active listening BIBAKFull-Text 161-162
      Yuka Kobayashi; Daisuke Yamamoto; Toshiyuki Koga; Sachie Yokoyama; Miwako Doi
    The EU, South Korea, and Japan have a pressing need to compensate for growing labor shortages in their aging societies. There is growing awareness that robotic technology has the potential to ameliorate this problem in terms of both physical and mental labor. To take an example of mental labor, a human therapist dealing with elderly people must be an active listener. In order to realize a robot capable of active listening, we adopt Ivey's basic listening sequence skills from microcounseling. In this paper, we describe a voice interface robot that realizes simple feedback, repeat feedback, and questions for Ivey's basic listening sequence. We conducted an experiment whose results show that 69% of feedback instances have adequate reflective words for the spoken sentences and that 56% of questions are adequate for these reflective words.
    Keywords: active listening, dialogue, interaction, microcounseling, non-verbal, verbal
    Users' reactions toward an on-screen agent appearing on different media BIBAKFull-Text 163-164
      Takanori Komatsu; Yuuki Seki
    We experimentally investigated users' reactions toward an on-screen agent appearing on three different types of media: a 42-inch television, a 17-inch display, and a 4.5-inch mobile PC. Specifically, we observed whether the users accepted the agent's invitation to a Shiritori game while they were engaged in given tasks. The results showed that most participants who received the invitation from the on-screen agent appearing on the 4.5-inch mobile PC accepted the agent's invitation, while most participants did not accept the invitation from the agent appearing on the other two formats. Therefore, the mobile PC appears to be an appropriate medium for an on-screen agent that needs to interact with users.
    Keywords: media terminals, on-screen agent, Shiritori game
    5w viewpoints associative topic search for networked conversation support system BIBAKFull-Text 165-166
      Yukitaka Kusumura; Hironori Mizuguchi; Dai Kusui; Yoshio Ishizawa; Yusuke Muraoka
    To build up spontaneous conversation, it is important to select topics without creating a feeling of strangeness. When someone notices others are not interested in a topic, he/she tries to find a new one: he/she thinks of viewpoints on the conversation and selects a topic associated with the current topic from those viewpoints. To automate viewpoint-based topic selection, we present the 5W viewpoint associative topic search. The method estimates the weights of the 5W viewpoints (who, what, where, when and why) from the conversation and uses an appropriate similarity measure to search for the next topic.
    Keywords: conversation support, topic search and associative search
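    A rough Python sketch of the viewpoint-weighted search (not the authors' system; the topics, attributes and weights below are invented examples):

        FIVE_W = ["who", "what", "where", "when", "why"]

        topics = [
            {"who": "alice", "what": "hiking",  "where": "nagano", "when": "autumn", "why": "holiday"},
            {"who": "bob",   "what": "hiking",  "where": "osaka",  "when": "summer", "why": "training"},
            {"who": "alice", "what": "cooking", "where": "home",   "when": "autumn", "why": "hobby"},
        ]

        def associative_search(current_topic, candidates, weights):
            """Rank candidates by weighted 5W attribute overlap with the current topic."""
            def score(cand):
                return sum(weights[w] for w in FIVE_W if cand[w] == current_topic[w])
            return max(candidates, key=score)

        # Suppose the conversation dwells on *where* and *when*: weight those viewpoints higher.
        weights = {"who": 0.1, "what": 0.2, "where": 0.4, "when": 0.2, "why": 0.1}
        print(associative_search(topics[0], topics[1:], weights))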
    Dialogue patterns of an Arabic robot receptionist BIBAKFull-Text 167-168
      Maxim Makatchev; Imran Fanaswala; Ameer Abdulsalam; Brett Browning; Wael Ghazzawi; Majd Sakr; Reid Simmons
    Hala is a bilingual (Arabic and English) culturally-sensitive robot receptionist located at Carnegie Mellon University in Qatar. We report results from Hala's deployment by comparing her English dialogue corpus to that of a similar monolingual robot (named "Tank") located at CMU's Pittsburgh campus. Specifically, we compare the average number of turns per interaction, duration of interactions, frequency of interactions with personal questions, rate of non-understandings, and rate of thanks after the robot's answer. We provide possible explanations for observed similarities and differences and highlight potential cultural implications on the interactions.
    Keywords: conversational agents, culture, human-robot interaction, natural language dialogue, social robots
    Modular control for human motion analysis and classification in human-robot interaction BIBAKFull-Text 169-170
      Juan Alberto Rivera-Bautista; Ana Cristina Ramirez-Hernandez; Virginia A. Garcia-Vega; Antonio Marin-Hernandez
    Trajectories followed by humans can be interpreted as attitude gestures. Based on this interpretation, an autonomous mobile robot can decide how to initiate interaction with a given human. In this work, we present a modular control system that analyzes human walking trajectories in order to engage a robot in human-robot interaction. When the robot detects a human with its vision system, a visual tracking module begins to work over the Pan/Tilt/Zoom (PTZ) camera unit. The camera parameter configuration and global robot localization are then used by another module to filter and track the human's legs over the laser range finder (LRF) data. The path followed by the human over the global reference frame is then processed by another module, which determines the kind of attitude shown by the human. Based on the result, the robot decides whether an interaction is needed and who is expected to begin it. At the moment, only three kinds of attitudes are used: confidence, curiosity and nervousness.
    Keywords: attitude interpretation, human walking gestures, human-robot interaction, sensor fusion
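    An illustrative Python sketch of the attitude-classification step (not the authors' module; the features and thresholds are invented placeholders, with the robot assumed at the origin of the global frame):

        import math

        def attitude_from_path(points):
            """points: [(x, y), ...] in the global frame, sampled at a fixed rate."""
            steps = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
            mean_step = sum(steps) / len(steps)
            var_step = sum((s - mean_step) ** 2 for s in steps) / len(steps)
            # Net progress toward the robot (assumed at the origin).
            approach = math.dist(points[0], (0, 0)) - math.dist(points[-1], (0, 0))

            if approach > 1.0 and var_step < 0.01:
                return "confidence"   # steady, direct approach
            if approach > 0.2:
                return "curiosity"    # hesitant, partial approach
            return "nervousness"      # keeps distance or retreats

        path = [(5.0, 0.0), (4.2, 0.1), (3.5, 0.0), (2.9, -0.1), (2.2, 0.0)]
        print(attitude_from_path(path))  # -> confidence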
    A panoramic vision system for human-robot interaction BIBAKFull-Text 171-172
      Ester Martínez; Angel P. del Pobil
    We propose a new approach to a fundamental issue in HRI: how to properly detect and identify people in everyday environments, where conditions can make this a difficult task. Fisheye cameras are used because they provide panoramic vision, so that one or two of them suffice to cover the whole workspace. A modified background maintenance approach was developed for fast, robust motion detection, while person identification for interaction is handled with a Viola-Jones classifier whose input, instead of the whole image, is composed only of the detected moving elements. Moreover, to avoid restricting the system's autonomy by requiring the person to face the system at all times, once a person is identified as a target for interaction, they are tracked using another purpose-designed method. We have also implemented the proposed approach and carried out a comparative experiment to assess its feasibility.
    Keywords: background maintenance, motion detection, segmentation, tracking
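    A minimal OpenCV (4.x) sketch of the motion-gated detection idea (not the authors' implementation: the fisheye-specific background maintenance and the follow-up tracker are omitted, and a webcam stands in for the fisheye camera):

        import cv2

        cap = cv2.VideoCapture(0)                  # stand-in for a fisheye camera
        bg = cv2.createBackgroundSubtractorMOG2()  # generic background maintenance
        faces = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = bg.apply(frame)                 # foreground (moving) pixels
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                if cv2.contourArea(c) < 500:       # ignore small blobs
                    continue
                x, y, w, h = cv2.boundingRect(c)
                roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
                # Run Viola-Jones only inside the detected moving region.
                for (fx, fy, fw, fh) in faces.detectMultiScale(roi):
                    cv2.rectangle(frame, (x + fx, y + fy),
                                  (x + fx + fw, y + fy + fh), (0, 255, 0), 2)
            cv2.imshow("detections", frame)
            if cv2.waitKey(1) == 27:               # Esc quits
                break
        cap.release()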
    Multimodal human-humanoid interaction using motions, brain NIRS and spike trains BIBAKFull-Text 173-174
      Yasuo Matsuyama; Nimiko Ochiai; Takashi Hatakeyama; Keita Noguchi
    Heterogeneous bio-signals, including human motions, brain NIRS and neural spike trains, are utilized for operating biped humanoids. A Bayesian network comprising Hidden Markov Models and Support Vector Machines is designed for the signal integration. By this method, the system complexity is reduced so that the total operation is within the scope of PCs. The designed system is capable of transducing one sensory meaning to another, which leads to applications in prosthesis, rehabilitation and gaming. In addition to the supervised mode, the humanoid can act autonomously for its own designed tasks.
    Keywords: brain NIRS, hmm/svm-embedded bn, human-humanoid interaction, motion recognition, multimodal, neural spike train, non-verbal, sensory transducing
    Changes of utterances in the skill acquisition of collaborative conveyer task BIBAKFull-Text 175-176
      Shuichi Nakata; Harumi Kobayashi; Satoshi Suzuki; Hiroshi Igarashi
    The importance of developing human-adaptive robots and systems is increasing. To accomplish this, we have to clarify features of human-human communication. In this study, we analyzed human speech during a computerized collaborative task to clarify how humans speak in collaborative work. We extracted questioning speech from the conversations using CLAN and classified it into four categories according to content. We investigated whether the number and ratio of each type of question changed over ten trials. The results indicate that the person in the leader role is sensitive to task planning.
    Keywords: collaborative task, corpus study, speech analysis
    Intuitive multimodal interaction for service robots BIBAKFull-Text 177-178
      Matthias Nieuwenhuisen; Jörg Stückler; Sven Behnke
    Domestic service tasks require three main skills from autonomous robots: robust navigation in indoor environments, flexible object manipulation, and intuitive communication with the users. In this report, we present the communication skills of our anthropomorphic service and communication robots Dynamaid and Robotinho. Both robots are equipped with an intuitive multimodal communication system, including speech synthesis and recognition, gestures, and facial expressions. We evaluate our systems in the @Home league of the RoboCup competitions and in a museum tour guide scenario.
    Keywords: anthropomorphism, multimodal human-robot-interaction, service robotics
    Toward the body image horizon: how do users recognize the body of a robot? BIBAKFull-Text 179-180
      Hirotaka Osawa; Yuji Matsuda; Ren Ohmura; Michita Imai
    In this study, we investigated the boundary for recognizing robots. Many anthropomorphic robots are used for interactions with users. These robots show various body forms and appearances, which are recognized by their users. This ability to recognize a variety of robotic appearances suggests that a user can recognize a wide range of imaginary body forms compared with the native human appearance. We attempted to determine the boundary for the recognition of robot appearances. On the basis of our previous studies, we hypothesized that the discrimination of robot appearances depends on the order of the parts: if the body parts of a robot are placed in order from top to bottom, the user can recognize the assembly as a robot body. We performed a human-robot experiment in which we compared the results for robots with ordered parts with those for robots with inverted parts. The results showed that the users' perception of the robot's body differed between the two groups, confirming our hypothesized boundary for the recognition of robot appearances.
    Keywords: anthropomorphization, design, human agent interaction, human interface, human-robot interaction
    Solving ambiguities with perspective taking BIBAKFull-Text 181-182
      Raquel Ros; Emrah Akin Sisbot; Rachid Alami; Jasmin Steinwender; Katharina Hamann; Felix Warneken
    Humans constantly generate and solve ambiguities while interacting with each other in their everyday activities. Hence, having a robot that is able to solve ambiguous situations is essential if we aim at achieving fluent and acceptable human-robot interaction. We propose a strategy that combines three mechanisms to clarify ambiguous situations generated by the human partner. We implemented our approach and successfully performed validation tests in several different situations, both in simulation and with the HRP-2 robot.
    Keywords: human-robot interaction, perspective taking
    Validating interaction patterns in HRI BIBAKFull-Text 183-184
      Peter H., Jr. Kahn; Brian T. Gill; Aimee L. Reichert; Takayuki Kanda; Hiroshi Ishiguro; Jolina H. Ruckert
    In recent work, "interaction patterns" have been proposed as a means to characterize essential features of human-robot interaction. A problem arises, however, in knowing whether the interaction patterns generated are valid. The same problem arises when researchers in HRI propose other broad conceptualizations that seek to structure social interaction. In this paper, we address this general problem by distinguishing three ways of establishing the validity of interaction patterns. The first form of validity seeks to establish whether the conclusions about interaction patterns are warranted from the data. The second seeks to establish whether the interaction patterns account for the data. And the third seeks to provide sound reasons for the labels of the patterns themselves. Often these three forms of validity are confused in discussions about conceptual categories in HRI.
    Keywords: human-robot interaction, interaction patterns, validity
    A study of three interfaces allowing non-expert users to teach new visual objects to a robot and their impact on learning efficiency BIBAKFull-Text 185-186
      Pierre Rouanet; Pierre-Yves Oudeyer; David Filliat
    We developed three interfaces that allow non-expert users to teach names for new visual objects, and we compared the interfaces through user studies in terms of learning efficiency.
    Keywords: human-robot interaction, interfaces, joint attention, learning, social robotics
    Help me help you: interfaces for personal robots BIBKFull-Text 187-188
      Ian J. Goodfellow; Nate Koenig; Marius Muja; Caroline Pantofaru; Alexander Sorokin; Leila Takayama
    Keywords: hri, information theory, mobile user interface
    The hesitation of a robot: a delay in its motion increases learning efficiency and impresses humans as teachable BIBAKFull-Text 189-190
      Kazuaki Tanaka; Motoyuki Ozeki; Natsuki Oka
    If robots are to learn new actions through human-robot interaction, it is important that they can utilize rewards as well as instructions, to reduce humans' efforts. Additionally, an "interval" that allows humans to give instructions and evaluations is also important. We hence focused on "delays in initiating actions" and changed them according to the progress of learning: long delays at early stages, and short delays at later stages. We compared the proposed varying delay with a constant delay in an experiment. The result demonstrated that the varying delay significantly improves learning efficiency and impresses humans as teachable.
    Keywords: delay, hesitation, learning efficiency, teachability
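    A minimal Python sketch of such a progress-dependent hesitation schedule (the schedule shape and constants are invented for illustration, not taken from the paper):

        def action_delay(progress, max_delay=3.0, min_delay=0.5):
            """progress in [0, 1]: 0 = learning just started, 1 = fully learned."""
            return max_delay - (max_delay - min_delay) * progress

        for p in (0.0, 0.5, 1.0):
            print(f"progress={p:.1f} -> hesitate {action_delay(p):.1f}s before acting")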
    Can a robot deceive humans? BIBAKFull-Text 191-192
      Kazunori Terada; Akira Ito
    In the present study, we investigated whether a robot is able to deceive a human by producing behavior that runs against his/her prediction. A feeling of being deceived by a robot would be a strong indicator that the human treats the robot as an intentional entity. We conducted a psychological experiment in which a subject played Darumasan ga Koronda, a Japanese children's game, with a robot. The main strategy for deceiving a subject was to make him/her believe that the robot was too stupid to be able to move quickly. The experimental result indicated that an unexpected change in the robot's behavior gave rise to an impression of being deceived by the robot.
    Keywords: deception, intention attribution, theory of mind
    Developing heuristics for assistive robotics BIBKFull-Text 193-194
      Katherine M. Tsui; Kareem Abu-Zahra; Renato Casipe; Jason M'Sadoques; Jill L. Drury
    Keywords: heuristic evaluation, human-robot interaction
    Effect of social robot's behavior in collaborative learning BIBAKFull-Text 195-196
      Hirohide Ushida
    This paper describes the effect of a social robot's behavior on human performance. The robot behaves based on an artificial mind model and expresses emotions according to the situation. In this research, we consider the case where a human and the robot learn cooperatively: the robot reacts emotionally to the joint learner's successes and failures. The experimental result shows that the social behavior of the robot influences the performance of human learners.
    Keywords: human-robot interaction, personality, social robot
    STB: human-dependent sociable trash box BIBAKFull-Text 197-198
      Yuto Yamaji; Taisuke Miyake; Yuta Yoshiike; P. Ravindra S. De Silva; Michio Okada
    We developed the Sociable Trash Box (STB), a robot that is assisted by children in collecting trash and that conveys its intentional stance to them. The STB is capable of engaging in manifold affiliation behaviors to build social rapport with children while collecting the trash around their environment. In particular, the STB is a child-dependent robot that walks alone in a public space, tracing humans and trash for the purpose of trash collection. The robot is incapable of collecting the trash by itself; instead, it uses interactive behaviors and vocalizations to create a social coupling with children, anticipating their help in accomplishing its goal. The present experiment investigates how effective the STB's behaviors are in conveying its intentions, evoking children's social interactions, and assisting the collection of trash in their environment.
    Keywords: intentional stance, sociable trash box, social coupling
    Relationships between user experiences and children's perceptions of the education robot BIBAKFull-Text 199-200
      Eunja Hyun; Hyunmin Yoon; Sooryun Son
    The purpose of this study is to investigate young children's biological, mental, social, moral, and educational perceptions of the intelligent robot iRobiQ and to explore the effects of user experience on these perceptions. Interviews were conducted with 111 five-year-old children attending two kindergartens and two childcare centers in which iRobiQ had been purchased and in use since March 2009. The children interacted with the robot for one hour or less every day over a period of two weeks or less. The robot contents were related to the socio-emotional perceptions of robots and involved a high level of human-robot interaction, such as "Talking with the Robot" or "Attendance Check." Children who experienced the "voice" and "touch screen" functions of the robot showed higher educational perception. The social and educational perceptions were higher when the robot was placed in a classroom than when it was placed in the hallway or in the office. The results indicate that robot content focusing on socio-emotional characteristics should be developed for educational purposes and that a robot should be placed in the classroom for individual use.
    Keywords: education robot, perception of robot, user experience

    Keynote

    Dance partner robot: an engineering approach to human-robot interaction BIBAKFull-Text 201
      Kazuhiro Kosuge
    A Dance Partner Robot, PBDR (Partner Ball Room Dance Robot), dances a waltz as a female dancer together with a human male dancer. The waltz, a ballroom dance, is usually performed by a male dancer and a female dancer, and consists of a certain number of steps and transitions between the steps. The dance is led by the male dancer based on the transition rule of the dance; the female dance partner estimates the following step through physical interactions with the male dancer. The dance partner robot has a database of the waltz and its transition rule for estimating the following dance step and generating an appropriate step motion. The step estimation is done based on time-series data of the force/torque applied by the male dancer to the robot's upper body. The robot motion is generated for the estimated step using the step motion in the database, compliantly against the interface force/moment between the human dancer and the robot in real time. The development of the dance partner robot has revealed to us many important issues for robots interacting with a human. Why we are developing the dance partner robot and how the concept will be applied to other robot systems will be discussed in the presentation.
    Keywords: dance partner robot, human robot interaction

    Paper session 3: social & moral interaction with robots

    Gracefully mitigating breakdowns in robotic services BIBAKFull-Text 203-210
      Min Kyung Lee; Sara Kiesler; Jodi Forlizzi; Siddhartha Srinivasa; Paul Rybski
    Robots that operate in the real world will make mistakes. Thus, those who design and build systems will need to understand how best to provide ways for robots to mitigate those mistakes. Building on diverse research literatures, we consider how to mitigate breakdowns in services provided by robots. Expectancy-setting strategies forewarn people of a robot's limitations so people will expect mistakes. Recovery strategies, including apologies, compensation, and options for the user, aim to reduce the negative consequence of breakdowns. We tested these strategies in an online scenario study with 317 participants. A breakdown in robotic service had severe impact on evaluations of the service and the robot, but forewarning and recovery strategies reduced the negative impact of the breakdown. People's orientation toward services influenced which recovery strategy worked best. Those with a relational orientation responded best to an apology; those with a utilitarian orientation responded best to compensation. We discuss robotic service design to mitigate service problems.
    Keywords: error recovery, human-robot interaction, robot breakdown, robot error, service recovery, services, social robot
    Critic, compatriot, or chump?: responses to robot blame attribution BIBAKFull-Text 211-218
      Victoria Groom; Jimmy Chen; Theresa Johnson; F. Arda Kara; Clifford Nass
    As their abilities improve, robots will be placed in roles of greater responsibility and specialization. In these contexts, robots may attribute blame to humans in order to identify problems and help humans make sense of complex information. In a between-participants experiment with a single factor (blame target) and three levels (human blame vs. team blame vs. self blame) participants interacted with a robot in a learning context, teaching it their personal preferences. The robot performed poorly, then attributed blame to either the human, the team, or itself. Participants demonstrated a powerful and consistent negative response to the human-blaming robot. Participants preferred the self-blaming robot over both the human and team blame robots. Implications for theory and design are discussed.
    Keywords: blame attribution, face-threatening acts, human-robot interaction, politeness
    No fair!!: an interaction with a cheating robot BIBAKFull-Text 219-226
      Elaine Short; Justin Hart; Michelle Vu; Brian Scassellati
    Using a humanoid robot and a simple children's game, we examine the degree to which variations in behavior result in attributions of mental state and intentionality. Participants play the well-known children's game "rock-paper-scissors" against a robot that either plays fairly, or that cheats in one of two ways. In the "verbal cheat" condition, the robot announces the wrong outcome on several rounds which it loses, declaring itself the winner. In the "action cheat" condition, the robot changes its gesture after seeing its opponent's play. We find that participants display a greater level of social engagement and make greater attributions of mental state when playing against the robot in the conditions in which it cheats.
    Keywords: affective & emotional responses, beliefs about robots, mental models of robot behavior

    Paper session 4: teleoperation

    UAV video coverage quality maps and prioritized indexing for wilderness search and rescue BIBAKFull-Text 227-234
      Bryan S. Morse; Cameron H. Engh; Michael A. Goodrich
    Video-equipped mini unmanned aerial vehicles (mini-UAVs) are becoming increasingly popular for surveillance, remote sensing, law enforcement, and search and rescue operations, all of which rely on thorough coverage of a target observation area. However, coverage is not simply a matter of seeing the area (visibility) but of seeing it well enough to allow detection of targets of interest, a quality we here call "see-ability". Video flashlights, mosaics, or other geospatial compositions of the video may help place the video in context and convey that an area was observed, but not necessarily how well or how often. This paper presents a method for using UAV-acquired video georegistered to terrain and aerial reference imagery to create geospatial video coverage quality maps and indices that indicate relative video quality based on detection factors such as image resolution, number of observations, and variety of viewing angles. When used for offline post-analysis of the video, or for online review, these maps also enable geospatial quality-filtered or prioritized non-sequential access to the video. We present examples of static and dynamic see-ability coverage maps in wilderness search-and-rescue scenarios, along with examples of prioritized non-sequential video access. We also present the results of a user study demonstrating the correlation between see-ability computation and human detection performance.
    Keywords: coverage quality maps, unmanned aerial vehicles, video indexing, wilderness search and rescue
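    A schematic numpy sketch of a see-ability-style coverage map (not the paper's exact formulation; the normalization constants and combination weights are invented):

        import numpy as np

        H, W = 100, 100
        count = np.zeros((H, W))      # observations per terrain cell
        best_res = np.zeros((H, W))   # best ground resolution seen (pixels/meter)
        angles = [[set() for _ in range(W)] for _ in range(H)]  # 45-degree view bins

        def observe(cells, pixels_per_meter, view_angle_deg):
            """Register one video frame's footprint over a list of (row, col) cells."""
            for r, c in cells:
                count[r, c] += 1
                best_res[r, c] = max(best_res[r, c], pixels_per_meter)
                angles[r][c].add(int(view_angle_deg // 45) % 8)

        def seeability():
            diversity = np.array([[len(angles[r][c]) / 8.0 for c in range(W)]
                                  for r in range(H)])
            # Saturating observation count + normalized resolution + angle diversity.
            return (0.4 * np.minimum(count / 5.0, 1.0)
                    + 0.4 * np.minimum(best_res / 50.0, 1.0)
                    + 0.2 * diversity)

        observe([(10, 10), (10, 11)], pixels_per_meter=30, view_angle_deg=90)
        print(seeability()[10, 10])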
    Single operator, multiple robots: an eye movement based theoretic model of operator situation awareness BIBAKFull-Text 235-242
      Raj M. Ratwani; J. Malcolm McCurry; J. Gregory Trafton
    For a single operator to effectively control multiple robots, operator situation awareness is a critical component of the human-robot system. There are three levels of situation awareness: perception, comprehension, and projection into the future [1]. We focus on the perception level to develop a theoretic model of the perceptual-cognitive processes underlying situation awareness. Eye movement measures were developed as indicators of cognitive processing and these measures were used to account for operator situation awareness on a supervisory control task. The eye movement based model emphasizes the importance of visual scanning and attention allocation as the cognitive processes that lead to operator situation awareness and the model lays the groundwork for real-time prediction of operator situation awareness.
    Keywords: eye tracking, human-robot system, situation awareness, supervisory control
    Multimodal interaction with an autonomous forklift BIBAKFull-Text 243-250
      Andrew Correa; Matthew R. Walter; Luke Fletcher; Jim Glass; Seth Teller; Randall Davis
    We describe a multimodal framework for interacting with an autonomous robotic forklift. A key element enabling effective interaction is a wireless, handheld tablet with which a human supervisor can command the forklift using speech and sketch. Most current sketch interfaces treat the canvas as a blank slate. In contrast, our interface uses live and synthesized camera images from the forklift as a canvas, and augments them with object and obstacle information from the world. This connection enables users to "draw on the world," enabling a simpler set of sketched gestures. Our interface supports commands that include summoning the forklift and directing it to lift, transport, and place loads of palletized cargo. We describe an exploratory evaluation of the system designed to identify areas for detailed study.
       Our framework incorporates external signaling to interact with humans near the vehicle. The robot uses audible and visual annunciation to convey its current state and intended actions. The system also provides seamless autonomy handoff: any human can take control of the robot by entering its cabin, at which point the forklift can be operated manually until the human exits.
    Keywords: autonomous, forklift, interaction, robotic, tablet

    Paper session 5: natural language interaction

    Following directions using statistical machine translation BIBAKFull-Text 251-258
      Cynthia Matuszek; Dieter Fox; Karl Koscher
    Mobile robots that interact with humans in an intuitive way must be able to follow directions provided by humans in unconstrained natural language. In this work we investigate how statistical machine translation techniques can be used to bridge the gap between natural language route instructions and a map of an environment built by a robot. Our approach uses training data to learn to translate from natural language instructions to an automatically-labeled map. The complexity of the translation process is controlled by taking advantage of physical constraints imposed by the map. As a result, our technique can efficiently handle uncertainty in both map labeling and parsing. Our experiments demonstrate the promising capabilities achieved by our approach.
    Keywords: human-robot interaction, instruction following, natural language, navigation, statistical machine translation
    Toward understanding natural language directions BIBAKFull-Text 259-266
      Thomas Kollar; Stefanie Tellex; Deb Roy; Nicholas Roy
    Speaking using unconstrained natural language is an intuitive and flexible way for humans to interact with robots. Understanding this kind of linguistic input is challenging because diverse words and phrases must be mapped into structures that the robot can understand, and elements in those structures must be grounded in an uncertain environment. We present a system that follows natural language directions by extracting a sequence of spatial description clauses from the linguistic input and then inferring the most probable path through the environment given only information about the environmental geometry and detected visible objects. We use a probabilistic graphical model that factors into three key components. The first component grounds landmark phrases such as "the computers" in the perceptual frame of the robot by exploiting co-occurrence statistics from a database of tagged images such as Flickr. Second, a spatial reasoning component judges how well spatial relations such as "past the computers" describe a path. Finally, verb phrases such as "turn right" are modeled according to the amount of change in orientation in the path. Our system follows 60% of the directions in our corpus to within 15 meters of the true destination, significantly outperforming other approaches.
    Keywords: direction understanding, route instructions, spatial language
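    A toy Python sketch of the three-factor scoring idea (not the authors' graphical model; the candidate paths and probabilities are invented placeholders):

        def path_score(p_landmark, p_spatial, p_verb):
            """P(path | directions) modeled as a product of the three factors."""
            return p_landmark * p_spatial * p_verb

        # e.g. "go past the computers and turn right"
        candidates = {
            "path_A": path_score(p_landmark=0.8, p_spatial=0.7, p_verb=0.9),
            "path_B": path_score(p_landmark=0.8, p_spatial=0.2, p_verb=0.9),
        }
        print(max(candidates, key=candidates.get))  # -> path_A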
    Robot-directed speech: using language to assess first-time users' conceptualizations of a robot BIBAKFull-Text 267-274
      Sarah Kriz; Gregory Anderson; J. Gregory Trafton
    It is expected that in the near-future people will have daily natural language interactions with robots. However, we know very little about how users feel they should talk to robots, especially users who have never before interacted with a robot. The present study evaluated first-time users' expectations about a robot's cognitive and communicative capabilities by comparing robot-directed speech to the way in which participants talked to a human partner. The results indicate that participants spoke more loudly, raised their pitch, and hyperarticulated their messages when they spoke to the robot, suggesting that they viewed the robot as having low linguistic competence. However, utterances show that speakers often assumed that the robot had humanlike cognitive capabilities. The results suggest that while first-time users were concerned with the fragility of the robot's speech recognition system, they believed that the robot had extremely strong information processing capabilities.
    Keywords: human-robot communication, natural language, spatial language
    Robust spoken instruction understanding for HRI BIBAKFull-Text 275-282
      Rehj Cantrell; Matthias Scheutz; Paul Schermerhorn; Xuan Wu
    Natural human-robot interaction requires different and more robust models of language understanding (NLU) than non-embodied NLU systems. In particular, architectures are required that (1) process language incrementally in order to be able to provide early backchannel feedback to human speakers; (2) use pragmatic contexts throughout the understanding process to infer missing information; and (3) handle the underspecified, fragmentary, or otherwise ungrammatical utterances that are common in spontaneous speech. In this paper, we describe our attempts at developing an integrated natural language understanding architecture for HRI, and demonstrate its novel capabilities using challenging data collected in human-human interaction experiments.
    Keywords: dialogue interactions, integrated architecture, natural human-robot interaction, natural language processing

    Keynote

    Action understanding and gesture acquisition in the great apes BIBAKFull-Text 283
      Josep Call
    A growing number of scholars have suggested that gestural communication may have been especially important in the early stages of language origins. Of special interest in this debate is the communication of other primates, especially those most closely related to humans, the great apes. The aim of this talk is to explore the interrelations between instrumental actions, action understanding and gesture generation in humans and other apes. In doing so, I will contrast the similarities and differences in the use and comprehension of gestures in humans and apes. Like humans, apes use gestures flexibly and they can even learn new gestures. Unlike humans, however, imitative learning does not seem to be the main mechanism underlying gesture acquisition in great apes. Instead, apes seem to learn many of their gestures in social interaction with others via processes of ontogenetic ritualization, by means of which instrumental actions are transformed into gestures. Like humans, apes can extract information about the goals contained in the actions of others, but there is much less evidence that they also grasp some of the representational properties of certain kinds of gestures and the communicative intentions behind them.
    Keywords: gesture acquisition, great apes

    Paper session 6: nonverbal interaction

    Reconfiguring spatial formation arrangement by robot body orientation BIBAKFull-Text 285-292
      Hideaki Kuzuoka; Yuya Suzuki; Jun Yamashita; Keiichi Yamazaki
    An information-presenting robot is expected to establish an appropriate spatial relationship with people. Drawing upon sociological studies of spatial relationships involving "F-formation" and "body torque," we examined the effect of a robot rotating its body on the reconfiguration of the F-formation arrangement. The results showed that a robot can change the position of a visitor by rotating its body. We also confirmed that to reconfigure the F-formation arrangement, it is more effective to rotate the whole body of the robot than only its head.
    Keywords: body torque, communication robot, f-formation, human-robot interaction
    Head motions during dialogue speech and nod timing control in humanoid robots BIBAKFull-Text 293-300
      Carlos T. Ishi; ChaoRan Liu; Hiroshi Ishiguro; Norihiro Hagita
    Head motion naturally occurs in synchrony with speech and may carry paralinguistic information, such as intention, attitude and emotion, in dialogue communication. With the aim of verifying the relationship between head motion and the dialogue acts carried by speech, analyses were conducted on motion-captured data for several speakers during natural dialogues. The analysis results first confirmed the trends of our previous work, showing that regardless of the speaker, nods frequently occur during speech utterances, not only for expressing dialogue acts such as agreement and affirmation, but also appearing at the last syllable of the phrase, in strong phrase boundaries, especially when the speaker is talking confidently, or expressing interest in the interlocutor's talk. Inter-speaker variability indicated that the frequency of head motion may vary according to the speaker's age or status, while intra-speaker variability indicated that the frequency of head motion also differs depending on the inter-personal relationship with the interlocutor. A simple model for generating nods based on rules inferred from the analysis results was proposed and evaluated in two types of humanoid robots. Subjective scores showed that the proposed model could generate head motions with naturalness comparable to the original motions.
    Keywords: dialogue act, head motion, humanoid robot, nodding generation
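    A minimal rule-based nod generator in the spirit of the analysis above (a sketch, not the authors' model; the phrase representation and rule set are simplified):

        def nod_times(phrases):
            """phrases: dicts with 'end_time', 'boundary', 'dialogue_act'."""
            nods = []
            for ph in phrases:
                if ph["boundary"] == "strong" or ph["dialogue_act"] in ("agree", "affirm"):
                    nods.append(ph["end_time"])  # nod aligned with the final syllable
            return nods

        utterance = [
            {"end_time": 1.2, "boundary": "weak",   "dialogue_act": "inform"},
            {"end_time": 2.8, "boundary": "strong", "dialogue_act": "inform"},
            {"end_time": 3.4, "boundary": "weak",   "dialogue_act": "agree"},
        ]
        print(nod_times(utterance))  # -> [2.8, 3.4]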
    Pointing to space: modeling of deictic interaction referring to regions BIBAKFull-Text 301-308
      Yasuhiko Hato; Satoru Satake; Takayuki Kanda; Michita Imai; Norihiro Hagita
    In daily conversation, we sometimes observe a deictic interaction scene that refers to a region in a space, such as saying "please put it over there" with pointing. How can such an interaction be possible with a robot? Is it enough to simulate people's behaviors, such as utterance and pointing? Instead, we highlight the importance of simulating human cognition. In the first part of our study, we empirically demonstrate the importance of simulating human cognition of regions when a robot engages in a deictic interaction by referring to a region in a space. The experiments indicate that a robot with simulated cognition of regions improves efficiency of its deictic interaction. In the second part, we present a method for a robot to computationally simulate cognition of regions.
    Keywords: cognition of regions, communicating about regions, pointing gesture, social robots, spatial deixis

    Paper session 7: social learning

    Investigating multimodal real-time patterns of joint attention in an hri word learning task BIBAKFull-Text 309-316
      Chen Yu; Matthias Scheutz; Paul Schermerhorn
    Joint attention -- the idea that humans make inferences from the observable behaviors of other humans by attending to the objects and events that these other humans attend to -- has been recognized as a critical component in human-robot interactions. While various HRI studies have shown that having robots behave in ways that support human recognition of joint attention leads to better behavioral outcomes on the human side, there are no studies that investigate the detailed time course of interactive joint attention processes.
       In this paper, we present the results from an HRI study that investigates the exact time course of human multi-modal attentional processes during an HRI word learning task in an unprecedented way. Using novel data analysis techniques, we are able to demonstrate that the temporal details of human attentional behavior are critical for understanding human expectations of joint attention in HRI and that failing to do so can force humans into assuming unnatural behaviors.
    Keywords: human-robot interaction, joint attention
    Transparent active learning for robots BIBAKFull-Text 317-324
      Crystal Chao; Maya Cakmak; Andrea L. Thomaz
    This research aims to enable robots to learn from human teachers. Motivated by human social learning, we believe that a transparent learning process can help guide the human teacher to provide the most informative instruction. We believe active learning is an inherently transparent machine learning approach because the learner formulates queries to the oracle that reveal information about areas of uncertainty in the underlying model. In this work, we implement active learning on the Simon robot in the form of nonverbal gestures that query a human teacher about a demonstration within the context of a social dialogue. Our preliminary pilot study data show potential for transparency through active learning to improve the accuracy and efficiency of the teaching process. However, our data also seem to indicate possible undesirable effects from the human teacher's perspective regarding balance of the interaction. These preliminary results argue for control strategies that balance leading and following during a social learning interaction.
    Keywords: active learning, human-robot interaction, interactive learning, social robots, socially guided machine learning
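    An illustrative Python sketch of uncertainty-driven query selection, one common active-learning strategy (not necessarily the Simon robot's criterion; the candidate demonstrations and label probabilities are invented):

        import math

        def entropy(probs):
            return -sum(p * math.log(p) for p in probs if p > 0)

        candidates = {
            "demo_red_block":   [0.95, 0.05],  # model is confident
            "demo_green_block": [0.55, 0.45],  # model is uncertain -> good query
            "demo_blue_block":  [0.80, 0.20],
        }

        query = max(candidates, key=lambda c: entropy(candidates[c]))
        print(f"Robot gestures toward '{query}' to query the teacher.")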
    From manipulation to communicative gesture BIBAKFull-Text 325-332
      Shichao Ou; Roderic Grupen
    This paper advocates an approach for learning communicative actions and manual skills in the same framework. We exploit a fundamental relationship between the structure of motor skills, intention, and communication. Communicative actions are acquired using the same learning framework and the same primitive states and actions that the robot uses to construct manual behavior for interacting with other objects in the environment. A prospective behavior algorithm is used to acquire modular policies for conveying intention and goals to nearby human beings and recruiting their assistance. The learning framework and a preliminary case study are presented in which a humanoid robot learns expressive communicative behavior incrementally by discovering the manual affordances of human beings. Results from interactions with 16 people provide support for the hypothesized benefits of this approach. Behavior reuse makes learning from relatively few interactions possible. This approach complements other efforts in the field by grounding social behavior, and proposes a mechanism for negotiating a communicative vocabulary between humans and robots.
    Keywords: communicative behavior, developmental robotics, human-robot interaction, knowledge acquisition

    Video session

    A trial English class with a teaching assistant robot in elementary school BIBAKFull-Text 335-336
      Jeonghye Han; Seungmin Lee; Bokhyun Kang; Sungju Park; Jungkwan Kim; Myungsook Kim; Mihee Kim
    Various studies propose that robots can be an effective tool for language teaching and learning; they have been remarkably successful especially in elementary English classes [1][2][3][4]. The purpose of this study was to investigate some effects of a teaching assistant robot, Langbot, in elementary English classes in Korea. We adopted IROBIQ as Langbot for a pilot study.
       We designed some activities for elementary English classes using the teaching assistant robot Langbot: introduction, look and listen, listen and say, look and say, act out, and song and chant. The introduction includes the birth story of Langbot, since children want to know where the robot comes from, how old it is, why it came to their classroom, and so on, and since Hur and Han (2009) found that robot storytelling increased children's tolerance toward a robot's recognition failures [2].
    Keywords: English class, language learning, robot storytelling, teaching assistant robot
    Sociable trash box BIBAKFull-Text 337-338
      Yuta Yoshiike; Yuto Yamaji; Taisuke Miyake; P. Ravindra S. De Silva; Michio Okada
    The STB is capable of engaging in manifold affiliation behaviors to build social rapport toward the goal of collecting trash around an environment. In particular, the STB is a child-dependent robot that walks alone in a public space, tracing humans and trash in order to collect the trash. In a crowded space, STBs move toward the trash while engaging with an attractive twisting motion and vocal interaction to convey the STB's intention to children. Our STB robot is incapable of collecting the trash by itself; in this sense, children have to infer the robot's intentional stance or expectation in order to interact with the STB. Collecting trash while creating social rapport with children is a novel concept. The STB engages with twisting and bowing motions when children put trash into its container.
    Keywords: child-dependent robot, sociable trash box, social rapport
    Dona: urban donation motivating robot BIBAKFull-Text 339-340
      Min Su Kim; Byung Keun Cha; Dong Min Park; Sae Mee Lee; Sonya Kwak; Min Kyung Lee
    The rate of donations made by individuals is relatively low in Korea compared to other developed countries. To address this problem, we propose DONA, an urban donation-motivating robot prototype. The robot roams around in a public space and solicits donations from passers-by by engaging them through a pet-like interaction. In this paper, we present the prototype of the robot and our design process.
    Keywords: charity, donation, emotion, human-robot interaction, interaction design, ludic experience, pet-like interaction
    FusionBot: a barista robot -- fusionbot serving coffees to visitors during technology exhibition event BIBAKFull-Text 341-342
      Dilip Kumar Limbu; Yeow Kee Tan; Lawrence T. C. Por
    This video shows a service robot named FusionBot autonomously serving coffee to visitors on request during a two-day experiment at the TechFest 2008 event. The coffee-serving task involves taking a coffee order from a visitor, identifying a cup and a smart coffee machine, moving towards the coffee machine, communicating with the coffee machine, and delivering the coffee cup to the visitor.
       The main purpose of this experiment is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. Before conducting the experiments, visitors were given general procedural instructions and a simple introduction on how the FusionBot works. Visitors then performed the experiment task, i.e., ordering a cup of coffee. Afterwards, the visitors were asked to fill out satisfaction questionnaires to capture their reactions to and perceptions of the FusionBot. Of just over 100 survey questionnaires handed out, sixty-eight (68) valid responses (i.e., 68%) were received. Overall, with regard to task satisfaction, more than half of the respondents were satisfied with what the FusionBot can do. Nearly one quarter of the respondents indicated that it was not easy to communicate with the FusionBot; this could be due to various background noises, which were falsely picked up by the FusionBot as speech input from the visitor. Similarly, less than one quarter indicated that it was not easy to learn how to use the FusionBot, which could be due to not knowing what to do with the FusionBot and what the FusionBot does.
       The experiment was successful in two main dimensions: 1) the robot demonstrated the ability to interact with visitors and perform a challenging real-world task autonomously, and 2) it provided some evidence for the feasibility of using an autonomous service robot and smart coffee machine to serve drinks in a reception area or home, or to act as a host in an organization. While preliminary, the experiment also suggests that a service robot under development 1) needs a carefully considered static appearance, 2) requires robust speech recognition and vision understanding, and 3) requires comprehensive training on speech and vision with the respective data.
    Keywords: barista robot, human-robot interaction, service robot, social robotics
    The step-on interface (SOI) on a mobile platform: basic functions BIBAKFull-Text 343-344
      Takafumi Matsumaru; Yuichi Ito; Wataru Saitou
    This video shows the basic functions of HFAMRO-2, a mobile robot equipped with the step-on interface (SOI). In the SOI, the projected screen is used as a bilateral interface: it not only presents information from the equipment to the user but also delivers instructions from the user to the equipment. HFAMRO is intended to embody a concept of robots that interact with users through play -- for example, playing 'tag' with light, similar to 'shadow' tag. The HFAMRO-2 mobile robot, developed to study the SOI's application with mobility, carries two SOI sets, each consisting of a projector and a range scanner, on a mobile platform. The projector displays a direction screen on a travel surface, and the two-dimensional range scanner detects and measures the user's stepping to identify the selected button.
    Keywords: friendly-amusing mobile (fam) function, human-robot interaction, projector, range scanner, step-on interface (soi)
    The step-on interface (SOI) on a mobile platform: rehabilitation of the physically challenged BIBAKFull-Text 345-346
      Takafumi Matsumaru; Yuichi Ito; Wataru Saitou
    The rehabilitation of the physically challenged is one of the trial applications of the step-on interface (SOI) on a mobile platform as the friendly amusing mobile (FAM) function. This video shows the result of the preliminary trial.
    Keywords: friendly-amusing mobile (fam) function, human-robot interaction, physically challenged, rehabilitation, step-on interface (soi)
    Robot rescue!: an HRI engineering outreach activity BIBAKFull-Text 347-348
      Jonathan T. Morgan; Sarah Kriz
    This video is an example of an engineering outreach activity that we designed to illustrate some of the core issues in HRI research. High school students attending the University of Washington 2009 Summer Math Academy were given a disaster scenario and were asked to think about how a robot could help a victim trapped by fallen rubble during an earthquake. Students were led through a series of thought questions that encouraged them to consider the type of information the robot would need to give to the victim, the victim's family, and the rescue team. They also considered how a victim might respond to a robot, and the behaviors the robot should display during the rescue. The students then created a script for the rescue scenario, writing not only their own lines but also the behavior and communication of the robot. All relevant consent forms were obtained from the participants prior to the outreach event.
    Keywords: disaster scenarios, engineering outreach, human-robot interaction
    Mysterious machines BIBAKFull-Text 349-350
      Billy Schonenberg; Christoph Bartneck
    Alan Turing proposed a test for the intelligence of machines in 1950 [1]. Despite great efforts, no computer has passed this test so far. Each year, chat bots compete for the Loebner Prize, the first formal instantiation of a Turing Test, yet no contender has been able to fool the jury. Major problems for the chat bots are the lack of common knowledge and of logical consistency in a dialogue.
       We explore a new approach to chat bots by focusing on non-logical conversation topics: mysticism. The founding books of the major religions are widely acknowledged examples of mystical topics. We selected the New Testament, the Koran and Rigveda as the knowledge base for our conversational robots.
       The robots are able to autonomously talk to each other and to humans about their religious beliefs. Each robot represents a belief, but we do not reveal their convictions. This ambiguity forces observers to follow the actual conversations instead of quickly applying stereotypes.
    Keywords: chatbot, exhibition, religion, Turing
    Olivia @ TechFest 09: receptionist robot impressed visitors with lively interactions BIBAKFull-Text 351-352
      Lawrence T. C. Por; Adrian Tay; Dilip Kumar Limbu
    Olivia 2.0 is a social robot designed to interact and serve in an office environment as a robotic receptionist. It is the fourth service robot model developed by the A*STAR Robotics Team in Singapore.
       For a start, Olivia's occupation and background story as a receptionist set common ground between human and robot for interaction around topics fitting the job. Vision technology enables Olivia to detect the presence of a visitor standing in front of her so that she can initiate a dialogue. Through speech recognition technology and a carefully designed dialogue management system, visitors are able to converse with Olivia to learn more about the amenities in the Fusionopolis building as well as to engage in small talk.
       Taking the persona of a five-year-old kid, with a cute face and a child's voice, coupled with nice decorations, sets the stage for a fun interaction, as Olivia proceeds to engage the visitor in a simple game that showcases object recognition and tracking capabilities.
       Olivia is built with an advanced mechatronic design with 13 degrees of freedom for head, body and hand motions. The advanced motion control algorithms and imitation learning software trained her to display humanlike hand gestures and upper body movements. We noticed that lively gestures coupled with an expressive robotic voice are crucial for drawing human attention and sustaining engagement with a social robot.
       We knew that this would be a valuable field trial opportunity, as many visitors were having their first encounter with a service robot. We took the opportunity to study social acceptance by video-recording every human-robot interaction and then inviting visitors to participate in on-site feedback gathering with questionnaires. Of more than 100 questionnaires completed, 62% gave an overall rating of good or above, several respondents noted that the robot's response was slow, and 75.5% found that the robot was able to recognize their speech without any difficulty. The top three robot features people would like to have are: fast response, a clear way of talking, and delivery of relevant information.
       By the end of the two-day technology exhibition, more than 100 visitors had interacted with Olivia for information enquiries and game playing. They were greatly impressed by her capabilities, and above all they had a lot of fun interacting with her.
    Keywords: human-robot interaction, receptionist robot, service robot, social robotics
    actDresses: interacting with robotic devices -- fashion and comics BIBAKFull-Text 353-354
      Rob Tieben; Ylva Fernaeus; Mattias Jacobsson
    Robotic devices, such as the Roomba vacuum cleaner, are customised and personalised by their users, using for example signs, stickers and clothes.
       The actDresses project explores how these metaphors from fashion and comics can be used in novel interactions with robotic devices. This movie shows one explorative prototype, where clothes and accessories are used to program the Roomba's behaviour.
       The clothes influence personality characteristics of the Roomba; the accessories, iconic flags, determine movement characteristics. Combined, the clothes and flags allow the user to create different types of behaviour.
    Keywords: exploration, interaction, physical languages, robots, semiotics, tangible interaction
    Selecting and commanding individual robots in a vision-based multi-robot system BIBAKFull-Text 355-356
      Alex Couture-Beil; Richard T. Vaughan; Greg Mori
    This video presents a computer vision based system for interaction between a single human and multiple robots. Face contact and motion-based gestures are used as two different non-verbal communication channels; a user first selects a particular robot by simply looking at it, then assigns it a task by waving his or her hand.
    Keywords: distributed system, face detection, gesture-controlled robot, human-robot interaction, multi-robot system, task assignment
    The articulated head pays attention BIBAKFull-Text 357-358
      Christian Kroos; Damith C. Herath; A Stelarc
    The Articulated Head (AH) is an artistic installation that consists of an LCD monitor, mounted on an industrial robot arm (Fanuc LR Mate 200iC), displaying the head of a virtual human. It was conceived as the next step in the evolution of Embodied Conversational Agents (ECAs), transcending virtual reality into the physical space shared with the human interlocutor. Recently, an attention module has been added as part of a behavioural control system for non-verbal interaction between the robot/ECA and a human.
       Unstructured incoming perceptual information (currently originating from a custom acoustic localisation algorithm and commercial people-tracking software) is narrowed down to the most salient aspects, allowing the generation of a single motor response. The requirements of the current task determine what "salient" means at any point in time; that is, the rules and associated thresholds and weights of the attention system are modified by the requirements of the current task, while the task itself is specified by the central control system depending on the overall state of the AH with respect to the ongoing interaction. The attention system determines a single attended event using a winner-takes-all strategy and relays it to the central control system. It also directly generates a motor goal and forwards it to the motor system.
       The video shows how the robot's attention system drives its behaviour, (1) when there is no stimulus over an extended period of time, (2) when a person moves within its visual field, and (3) when a sudden loud auditory event attracts attention during an ongoing visually-based interaction (auditory-visual attention conflict). The subtitles are direct mappings from numeric descriptions of the central control system's internal states to slightly more entertaining English sentences.
    Keywords: attention model, embodied conversational agent, multimodal, robot arm
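    A schematic Python sketch of the winner-takes-all step described above (not the installation's code; the event fields and task weights are invented):

        def attend(events, weights):
            """events: dicts with per-modality salience cues in [0, 1]."""
            def salience(e):
                return sum(weights[k] * e.get(k, 0.0) for k in weights)
            return max(events, key=salience)  # winner-takes-all

        events = [
            {"name": "person_moving", "visual": 0.6, "auditory": 0.0},
            {"name": "loud_bang",     "visual": 0.0, "auditory": 0.9},
        ]
        # The current task favors auditory events (e.g. during conversation).
        print(attend(events, {"visual": 0.4, "auditory": 0.6})["name"])  # -> loud_bang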

    Paper session 8: evaluation of interaction

    When in Rome: the role of culture & context in adherence to robot recommendations BIBAKFull-Text 359-366
      Lin Wang; Pei-Luen Patrick Rau; Vanessa Evers; Benjamin Krisper Robinson; Pamela Hinds
    In this study, we sought to clarify the effects of users' cultural background and cultural context on human-robot team collaboration by investigating attitudes toward and the extent to which people changed their decisions based on the recommendations of a robot collaborator. We report the results of a 2×2 experiment with nationality (Chinese vs. US) and communication style (implicit vs. explicit) as dimensions. The results confirm expectations that when robots behave in more culturally normative ways, subjects are more likely to heed their recommendations. Specifically, subjects with a Chinese vs. a US cultural background changed their decisions more when collaborating with robots that communicated implicitly vs. explicitly. We also found evidence that Chinese subjects were more negative in their attitude to robots and, as a result, relied less on the robot's advice. These findings suggest that cultural values affect responses to robots in collaborative situations and reinforce the importance of culturally sensitive design in HRI.
    Keywords: cross-cultural design, human robot collaboration
    Lead me by the hand: evaluation of a direct physical interface for nursing assistant robots BIBAKFull-Text 367-374
      Tiffany L. Chen; Charles C. Kemp
    When a user is in close proximity to a robot, physical contact becomes a potentially valuable channel for communication. People often use direct physical contact to guide a person to a desired location (e.g., leading a child by the hand) or to adjust a person's posture for a task (e.g., a dance instructor working with a dancer). Within this paper, we present an implementation and evaluation of a direct physical interface for a human-scale anthropomorphic robot. We define a direct physical interface (DPI) to be an interface that enables a user to influence a robot's behavior by making contact with its body. Human-human interaction inspired our interface design, which enables a user to lead our robot by the hand and position its arms. We evaluated this interface in the context of assisting nurses with patient lifting, which we expect to be a high-impact application area. Our evaluation consisted of a controlled laboratory experiment with 18 nurses from the Atlanta area of Georgia, USA. We found that our DPI significantly outperformed a comparable wireless gamepad interface in both objective and subjective measures, including number of collisions, time to complete the tasks, workload (Raw Task Load Index), and overall preference. In contrast, we found no significant difference between the two interfaces with respect to the users' perceptions of personal safety.
    Keywords: assistive robotics, direct physical interface, healthcare robotics, nursing, user study
    Recognizing engagement in human-robot interaction BIBAKFull-Text 375-382
      Charles Rich; Brett Ponsler; Aaron Holroyd; Candace L. Sidner
    Based on a study of the engagement process between humans, we have developed and implemented an initial computational model for recognizing engagement between a human and a humanoid robot. Our model contains recognizers for four types of connection events involving gesture and speech: directed gaze, mutual facial gaze, conversational adjacency pairs and backchannels. To facilitate integrating and experimenting with our model in a broad range of robot architectures, we have packaged it as a node in the open-source Robot Operating System (ROS) framework. We have conducted a preliminary validation of our computational model and implementation in a simple human-robot pointing game.
    Keywords: conversation, dialogue, nonverbal communication
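    A toy Python sketch of one of the four recognizers named above, directed gaze (not the released ROS node; the gaze-sample format and the one-second window are invented for illustration):

        def directed_gaze_events(human_gaze, robot_gaze, window=1.0):
            """Each gaze stream: list of (timestamp, target) tuples.
            An event fires when the robot looks at the human's target within the window."""
            events = []
            for t_h, target in human_gaze:
                if any(tgt == target and 0 <= t_r - t_h <= window
                       for t_r, tgt in robot_gaze):
                    events.append((t_h, target))
            return events

        human = [(0.5, "cup"), (3.0, "door")]
        robot = [(1.1, "cup"), (5.0, "window")]
        print(directed_gaze_events(human, robot))  # -> [(0.5, 'cup')]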