
Extended Abstracts of the 2015 ACM/IEEE International Conference on Human-Robot Interaction

Fullname: Extended Abstracts of the 10th ACM/IEEE International Conference on Human-Robot Interaction
Editors: Julie A. Adams; William Smart; Bilge Mutlu; Leila Takayama
Location: Portland, Oregon
Dates: 2015-Mar-02 to 2015-Mar-05
Volume: 2
Publisher: ACM
Standard No: ISBN: 978-1-4503-3318-4; ACM DL: Table of Contents; hcibib: HRI15-2
Papers: 155
Pages: 311
Links: Conference Website
  1. HRI 2015-03-02 Volume 2
    1. Late-Breaking Reports -- Session 1
    2. Late-Breaking Reports -- Session 2
    3. Late-Breaking Reports -- Session 3
    4. HRI Pioneers -- Poster Session 1
    5. HRI Pioneers -- Poster Session 2
    6. HRI Pioneers -- Poster Session 3
    7. Workshops
    8. Videos
    9. Demonstrations -- Session 1
    10. Demonstrations -- Session 2
    11. Demonstrations -- Session 3

HRI 2015-03-02 Volume 2

Late-Breaking Reports -- Session 1

Toward Museum Guide Robots Proactively Initiating Interaction with Humans BIBAFull-Text 1-2
  M. Golam Rashed; R. Suzuki; A. Lam; Y. Kobayashi; Y. Kuno
This paper describes current work toward the design of a guide robot system. We present a method to recognize people's interest and intention from their walking trajectories in indoor environments, which enables a service robot to proactively approach people to provide services. We conducted observational experiments in a museum as a target test environment, in which participants were asked to visit the museum. From these experiments, we identified three main kinds of walking trajectory patterns among the participants, which depend on their interest in the exhibits. Based on these findings, we developed a method to identify visitors who may need guidance, and we confirmed its effectiveness through experiments.
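The abstract does not specify how trajectories are mapped to interest; as a rough illustration of the general idea, the following Python sketch classifies a visitor by dwell time near an exhibit (the thresholds, labels, and sample track are invented, not from the paper):

    import math

    def classify_trajectory(points, exhibit, stop_speed=0.2, near_dist=2.0):
        # points: list of (t, x, y) samples; exhibit: (x, y) position.
        # Thresholds (m/s, m, ratios) are illustrative only.
        total = near = dwell = 0.0
        for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
            dt = t1 - t0
            speed = math.hypot(x1 - x0, y1 - y0) / max(dt, 1e-6)
            total += dt
            if math.hypot(x1 - exhibit[0], y1 - exhibit[1]) < near_dist:
                near += dt
                if speed < stop_speed:
                    dwell += dt  # lingering in front of the exhibit
        if total == 0.0:
            return "unknown"
        if dwell / total > 0.3:
            return "interested"  # stopped at the exhibit: approach and offer help
        if near / total > 0.3:
            return "browsing"    # walked close but did not stop
        return "passing"

    # A visitor who walks up to the exhibit at (0, 0) and then lingers there:
    track = [(i * 0.5, max(0.0, 5.0 - i * 0.5), 0.0) for i in range(20)]
    print(classify_trajectory(track, (0.0, 0.0)))  # interested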
Human Compliance with Task-oriented Dialog in Social Robot Interaction BIBAFull-Text 3-4
  Eunji Kim; Jonathan Sangyun Lee; Sukjae Choi; Ohbyung Kwon
This study empirically investigates the factors affecting compliance with robot requests in task-oriented environments, such as registration guide services in a hospital setting, where compliance is important for patient treatment. We examine the relative impact of interaction time, task understanding, and homophily on compliance. The results suggest that task understanding and interaction time are negatively related to intention to comply. However, homophily is not significantly related to intention to comply.
Improving the Expressiveness of a Social Robot through Luminous Devices BIBAFull-Text 5-6
  Raúl Pérula-Martínez; Esther Salichs; Irene P. Encinar; Álvaro Castro-González; Miguel A. Salichs
During human-robot interaction, social robots have to follow certain behavioral norms. To improve the expressiveness of a robot, this work focuses on visual non-verbal expressive capabilities. Our robot has been equipped with two eyes, two cheeks, a mouth, and a heart (some of them allowing expressive modes that do not exist in humans). Each of these parts enables the robot to express different emotions or states, or even to communicate non-verbally with users.
Trust in Unmanned Driving System BIBAFull-Text 7-8
  Jae-Gil Lee; Jihyang Gu; Dong-Hee Shin
This study proposes a between-subjects experiment with four conditions representing different levels of anthropomorphism and automation embedded in unmanned driving systems. Participants will be exposed to either a humanoid robot (high anthropomorphism) or a smartphone (low anthropomorphism) acting as an independent driving agent, each with either a high or low level of automation. The study argues that an agent with a high level of anthropomorphism and a low level of automation is more likely to trigger greater feelings of trust and perceived safety, which in turn lead to positive perceptions of the system.
Social Group Interactions in a Role-Playing Game BIBAFull-Text 9-10
  Marynel Vázquez; Elizabeth J. Carter; Jo Ana Vaz; Jodi Forlizzi; Aaron Steinfeld; Scott E. Hudson
We present initial findings from an experiment in which participants played Mafia, an established role-playing game, with our robot. In one condition, the robot played like the rest of the participants and, in the other, the robot moderated the game. We discuss general aspects of the interaction, participants' perceptions, and the potential of this scenario for studying group spatial behavior from robotic platforms.
Robot as a Facilitator in Language Conversation Class BIBAFull-Text 11-12
  Jae-eun Shin; Dong-Hee Shin
With growing interest in robotics applications in education, robot-assisted language learning (RALL) has attracted the attention of second language learning researchers. This study examines the effect of RALL, compared to existing computer-assisted language learning (CALL), on students' affective states and engagement in an English conversation class. For the field study, a non-equivalent-groups quasi-experiment was conducted with 66 Korean middle school students divided between a CALL class and a RALL class. The results revealed a marginally significant difference in motivation, and significant differences in both participation and satisfaction, between the robot and computer conditions. These results correspond with both previous theoretical studies in SLA and empirical studies in HRI. This study suggests that a robot can act as a facilitator in language conversation classes.
Therabot: The Initial Design of a Robotic Therapy Support System BIBAFull-Text 13-14
  Dexter Duckworth; Zachary Henkel; Stephanie Wuisan; Brendan Cogley; Christopher Collins; Cindy Bethel
Therabot is an assistive-robotic therapy system designed to provide support during counseling sessions and home therapy practice to patients diagnosed with conditions associated with trauma. Studies were conducted to determine the features desired by potential end-users of the system, such as clinicians, with feedback from survivors of trauma guiding the participatory design process. The results from a survey of 1,045 respondents revealed a preferred form factor of a floppy-eared dog with coloring similar to that of a beagle. The most requested features were a size comfortable to fit in a person's lap and a covering that was soft, durable, and had multiple textures.
Investigating User Perceptions of HRI: A Marketing Approach BIBAFull-Text 15-16
  Willy Barnett; Kathy Keeling; Thorsten Gruber
In this paper we highlight a complementary approach to examining users' preferences surrounding robot interaction. We introduce widely used concepts and methods from the field of marketing in order to gain deeper insights into user decision-making processes. The study focuses on potential interactions between older adults and robots. The preliminary results show that the new approach can serve both as a means to augment current needs-based analyses in HRI and to enable users to provide more detailed responses to technology they may be unfamiliar with or afraid of.
Heuristic Evaluation of Swarm Metrics' Effectiveness BIBAFull-Text 17-18
  Matthew D. Manning; Caroline E. Harriott; Sean T. Hayes; Julie A. Adams; Adriane E. Seiffert
Typical visualizations of robot swarms (greater than 50 entities) display each individual entity; however, it is immensely difficult to maintain accurate position information for each member in real-world situations with limited communications. Generally, it will be difficult for humans to maintain an awareness of all individual entities. Further, the swarm's tasks may impact the desired visualization. Thus, an open question is how best to visualize a swarm given various swarm tasks. This paper presents a heuristic evaluation that analyzes the application of swarm metrics to different swarm visualizations and tasks. A brief overview of the visualizations is provided, along with a description of the heuristic metrics and the analysis.
The Impact of User Control Design Types on People's Perception of a Robot BIBAFull-Text 19-20
  Jee Yoon Lee; Jung Ju Choi; Sonya S. Kwak
This study suggests user control design as a way to increase the social acceptance and usability of a robot. We executed a 3 (user control design: anthropomorphic control vs. non-anthropomorphic control vs. remote controller control) within-participants experiment design (N=24). When participants controlled a robot more anthropomorphically, they perceived the robot as more sociable and were more satisfied with the service it provided. This study provides evidence that user control design can effectively increase the social acceptance as well as the usability of a robot. Implications for the design of human-robot interaction are discussed.
Multimodal Manipulator Control Interface using Speech and Multi-touch Gesture Recognition BIBAFull-Text 21-22
  Tetsushi Oka; Keisuke Matsushima
In this study, we describe a novel multimodal interface to control a manipulator that uses speech and multi-touch gesture recognition. In addition, we describe our prototype system and discuss findings from a preliminary study that employs the system. The interface operates in three control modes that allow the user of a manipulator to translate, rotate, open, and close the gripper using touch gestures. The user can employ multimodal commands to switch among modes and control the manipulator. In our study, inexperienced users were able to control a 7-degree-of-freedom manipulator using the prototype interface.
Choreographing Robot Behaviors by Means of Japanese Onomatopoeias BIBAFull-Text 23-24
  Takanori Komatsu
In Japanese, onomatopoeias are used when one cannot describe certain phenomena or events literally, and it is said that one's ambiguous, intuitive feelings are embedded in them. An interface system that accepts onomatopoeia as input could therefore comprehend such feelings. I previously proposed the basic concept for such an interface system: preparing mapping rules between quantified onomatopoeic expressions and the physical features of a given target. In this paper, I briefly introduce a concrete application based on this concept that extracts users' ambiguous feelings from their onomatopoeias and reflects these feelings in a robot's behaviors.
Evaluation of Interfaces for 3D Pointing BIBAFull-Text 25-26
  Daniel A. Lazewatsky; William D. Smart
A variety of tasks with robots require directing the robot to interact with objects or locations in the world. While many interfaces currently exist for such interactions, in this paper we focus on inputs which can be categorized as pointing. Specifically, we look at two ways of using the head as a pointing input: Google Glass, and a head pose estimation technique which uses RGBD data. While both of these input modalities have their own advantages and disadvantages, we evaluate them simply as pointing devices, looking at how device characteristics affect pointing performance. This is evaluated in a user study in which participants perform a series of object designation tasks. We then use distance, time, and object size data to evaluate the input devices using Fitts' Law.
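A Fitts' Law evaluation of this kind typically reduces each trial to an index of difficulty and a throughput; a minimal Python sketch using the Shannon formulation (the trial tuples below are hypothetical, not the study's data) is:

    import math

    def fitts_id(distance, width):
        # Shannon formulation of the index of difficulty, in bits.
        return math.log2(distance / width + 1.0)

    def throughput(trials):
        # trials: (distance, target_width, movement_time_s) tuples.
        # Mean per-trial throughput in bits/s.
        return sum(fitts_id(d, w) / t for d, w, t in trials) / len(trials)

    # Hypothetical trials for two head-pointing inputs (metres, seconds):
    glass_trials = [(1.5, 0.20, 1.9), (2.0, 0.15, 2.6), (1.0, 0.30, 1.2)]
    rgbd_trials = [(1.5, 0.20, 2.3), (2.0, 0.15, 3.1), (1.0, 0.30, 1.5)]
    print(throughput(glass_trials), throughput(rgbd_trials))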
Human-Centric Assistive Remote Control for Co-located Mobile Robots BIBAFull-Text 27-28
  Akansel Cosgun; Arnold Maliki; Kaya Demir; Henrik Christensen
Autonomous navigation is an essential capability for domestic service robots; however, at times direct remote control may be desired when the robot and user are co-located. In this work, we propose a remote control method that allows a user to control the robot with smartphone gestures. The robot moves with respect to the user's coordinate frame and avoids obstacles if a collision is imminent. We expect that interpreting commands from the human's perspective decreases the cognitive load of the user, thereby allowing efficient operation.
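Interpreting commands in the user's coordinate frame amounts to rotating each velocity command by the relative heading between user and robot. A minimal 2-D Python sketch, assuming both headings are known in a shared map frame (how they are obtained is not specified in the abstract), is:

    import math

    def user_frame_to_robot_frame(vx_u, vy_u, user_yaw, robot_yaw):
        # Rotate a planar velocity command from the user's frame to the
        # robot's frame. Yaws are headings in a shared map frame (radians).
        a = user_yaw - robot_yaw
        return (math.cos(a) * vx_u - math.sin(a) * vy_u,
                math.sin(a) * vx_u + math.cos(a) * vy_u)

    # A "move to my left" gesture (positive y in the user's frame) maps to a
    # different robot-frame direction depending on the relative heading:
    print(user_frame_to_robot_frame(0.0, 0.3, user_yaw=math.pi / 2, robot_yaw=0.0))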
Exploring the Potential of Information Gathering Robots BIBAFull-Text 29-30
  Michael Jae-Yoon Chung; Andrzej Pronobis; Maya Cakmak; Dieter Fox; Rajesh P. N. Rao
Autonomous mobile robots equipped with a number of sensors will soon be ubiquitous in human populated environments. In this paper we present an initial exploration into the potential of using such robots for information gathering. We present findings from a formative user survey and a 4-day long Wizard-of-Oz deployment of a robot that answers questions such as "Is there free food on the kitchen table?" Our studies allow us to characterize the types of information that InfoBots might be most useful for.
Shared Displays for Remote Rover Science Operations BIBAFull-Text 31-32
  Electa A. Baker; Julie A. Adams; Terry Fong; Hyunjung Kim; Young-Woo Park
Robotic rovers are expected to play a major role in future lunar in-situ resource prospecting. Prospecting missions will involve a ground control team of planetary scientists and rover operators. These ground controllers will need to evaluate prospecting data gathered by a rover and make operational decisions in real-time. In October 2014, the NASA Ames Research Center conducted a lunar analog robotic prospecting mission in the Mojave Desert to study how to support such operations. This paper describes the roles within the Science Operations Team during this analog mission, as well as preliminary findings regarding the scientists' use of shared displays.
User Tracking in HRI Applications with the Human-in-the-loop BIBAFull-Text 33-34
  Silvia Rossi; Mariacarla Staffa; Maurizio Giordano; Massimo De Gregorio; Antonio Rossi; Anna Tamburro; Civita Vellucci
In HRI applications, tracking performance should not be evaluated as a passive sensing behavior, but by considering it as an active process in which the human is involved within the loop. We foresee that the presence of the human being, actively participating in the interaction, improves tracker performance with limited additional effort. We tested a tracking approach in an HRI scenario, modeled as a game, measuring both quantitative and qualitative performance.
Head Pose Estimation is an Inadequate Replacement for Eye Gaze in Child-Robot Interaction BIBAFull-Text 35-36
  James Kennedy; Paul Baxter; Tony Belpaeme
Gaze analysis of human-robot interactions can reveal much about the dynamics of the interaction and be a useful step in establishing levels of engagement and attention. Currently, much of this work has to be conducted manually through post-hoc video coding due to current limitations in non-invasive, real-time gaze tracking solutions. This paper assesses whether real-time head pose estimation from an RGB-D camera may be used in place of manual post-hoc coding of gaze direction. Using data collected from an experiment 'in the wild', it is found that the proposed RGB-D based pose estimation method is neither accurate nor consistent enough to provide a reliable measure of gaze within human-robot interactions.
Metrics for Assessing Human Skill When Demonstrating a Bimanual Task to a Robot BIBAFull-Text 37-38
  Ana-Lucia Pais Ureche; Aude Billard
One of the major challenges in Programming by Demonstration is deciding who to imitate. In this paper we propose a set of metrics for assessing how skilled a user is when demonstrating a bimanual task to a robot that requires both coordinated motion of the arms and proper contact forces. We record successful demonstrations relative to the task goal and evaluate user performance with respect to three measures: the ability to maneuver the tool, the consistency in teaching, and the degree of coordination between the two arms. We present preliminary results on a scooping task.
The Acoustic-Phonetics Change of English Learners in Robot Assisted Learning BIBAFull-Text 39-40
  Jiyoung In; Jeonghye Han
This study verifies the effectiveness of robot TTS technology in helping Korean English language learners acquire a native-like accent by correcting the prosodic errors they commonly make. Child English language learners' F0 range, a prosodic variable, will be measured and analyzed for any changes in accent. From an acoustic-phonetic viewpoint, we examined whether a robot with currently available TTS technology is as effective as a telepresence robot relaying a native speaker.
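As an illustration of how an F0 range might be measured for such an analysis, the sketch below uses librosa's pYIN pitch tracker; the file name, frequency bounds, and percentile-based range are assumptions, not the authors' procedure:

    import numpy as np
    import librosa

    def f0_range_semitones(wav_path):
        # Estimate a speaker's F0 range in semitones from one recording.
        # Bounds (75-500 Hz) and robust 5th/95th percentiles are assumptions.
        y, sr = librosa.load(wav_path, sr=None)
        f0, voiced_flag, _ = librosa.pyin(y, fmin=75.0, fmax=500.0, sr=sr)
        f0 = f0[voiced_flag]                 # keep voiced frames only
        lo, hi = np.percentile(f0, [5, 95])
        return 12.0 * np.log2(hi / lo)

    # Usage (hypothetical file): f0_range_semitones("learner_session1.wav")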
Are Tangibles Really Better?: Keyboard and Joystick Outperform TUIs for Remote Robotic Locomotion Control BIBAFull-Text 41-42
  Geoff M. Nagy; James E. Young; John E. Anderson
Prior work has suggested that tangible user interfaces (TUIs) may be more natural and easier to learn than conventional interfaces. We present study results that suggest an opposite effect: we found user performance, satisfaction, and ease of use to be higher with more common-place input methods (keyboard and joystick) than two novel TUIs.
Evaluating Stereoscopic Video with Head Tracking for Immersive Teleoperation of Mobile Telepresence Robots BIBAFull-Text 43-44
  Sven Kratz; Jim Vaughan; Ryota Mizutani; Don Kimber
Our research focuses on improving the effectiveness and usability of driving mobile telepresence robots by increasing the user's sense of immersion during the navigation task. To this end we developed a robot platform that allows immersive navigation using head-tracked stereoscopic video and an HMD. We present the results of an initial user study that compares System Usability Scale (SUS) ratings of a robot teleoperation task using head-tracked stereo vision with a baseline fixed video feed, as well as the effect of a low or high placement of the camera(s). Our results show significantly higher ratings for the fixed video condition and no effect of camera placement. Future work will focus on examining the reasons for the lower ratings of stereo video and on exploring further visual navigation interfaces.
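The System Usability Scale score referenced here is computed from ten alternating positively/negatively worded Likert items; a standard scoring function (with made-up example responses) looks like this:

    def sus_score(responses):
        # Standard SUS scoring: ten 1-5 Likert items, odd items positively
        # worded (score r - 1), even items negatively worded (score 5 - r).
        assert len(responses) == 10
        total = sum((r - 1) if i % 2 == 1 else (5 - r)
                    for i, r in enumerate(responses, start=1))
        return total * 2.5  # final score on a 0-100 scale

    print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # hypothetical -> 85.0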
Super-Low-Latency Telemanipulation Using High-Speed Vision and High-Speed Multifingered Robot Hand BIBAFull-Text 45-46
  Yugo Katsuki; Yuji Yamakawa; Yoshihiro Watanabe; Masatoshi Ishikawa; Makoto Shimojo
We developed a super-low-latency telemanipulation system using a high-speed vision system and a high-speed robot hand with a high-speed tactile sensor. This system does not require users to wear sensors. Also, since it has latency lower than the sampling rate of visual recognition by humans, the latency is not recognizable by humans. Low-latency telemanipulation systems are needed to perform tasks that require high speed, such as catching falling objects or fast-moving objects. We evaluated the latency of our telemanipulation system and successfully demonstrated catching of falling objects using our system, in contrast to a conventional vision system operating at 30 fps, which failed the task.
Beaming the Gaze of a Humanoid Robot BIBAFull-Text 47-48
  Gérard Bailly; Frédéric Elisei; Miquel Sauze
We propose to use immersive teleoperation of a humanoid robot by a human pilot to artificially provide the robot with social skills. This so-called beaming approach to learning by demonstration (the robot passively experiences social behaviors that can be further modeled and used for autonomous control) offers a unique way to study embodied cognition, i.e., a human cognition driving a controllable robotic body.
EMG-Based Analysis of the Upper Limb Motion BIBAFull-Text 49-50
  Iason Batzianoulis; Sahar El-Khoury; Silvestro Micera; Aude Billard
In a human-robot interaction scenario, predicting the human motion intention is essential for avoiding inconvenient delays and for a smooth reactivity of the robotic system. In particular, when dealing with hand prosthetic devices, an early estimation of the final hand gesture is crucial for smooth control of the robotic hand. In this work we develop an electromyography (EMG) based learning approach that decodes the grasping intention at an early stage of the reach-to-grasp motion, i.e., before the final grasp/hand preshape takes place. EMG electrodes are used to record the arm muscle activities, and a cyberglove is used to measure the finger joints during the reach and grasp motion. Results show that we can correctly classify three typical grasps with 90% accuracy before the onset of the hand pre-shape. Such an early detection of the grasp intention allows the robotic hand to be controlled simultaneously with the motion of the subject's arm, generating no delay between the natural arm motion and the artificial hand motion.
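The abstract does not name the classifier, so the following sketch only illustrates a generic version of the pipeline: windowed RMS features from multi-channel EMG fed to a linear classifier. The synthetic data, electrode count, window sizes, and the LDA choice are all assumptions:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def window_rms(emg, win=200, step=50):
        # emg: (samples, channels); one RMS feature vector per sliding window.
        return np.array([np.sqrt(np.mean(emg[s:s + win] ** 2, axis=0))
                         for s in range(0, len(emg) - win + 1, step)])

    # Synthetic stand-in for recorded trials: 8 electrodes, 3 grasp types
    # whose activation levels differ (real data would be the cyberglove-
    # labelled reach-to-grasp recordings described in the abstract).
    rng = np.random.default_rng(0)
    X, y = [], []
    for grasp in range(3):
        for _ in range(20):
            emg = rng.normal(0.0, 1.0 + 0.5 * grasp, size=(1000, 8))
            feats = window_rms(emg)
            X.append(feats)
            y.extend([grasp] * len(feats))
    clf = LinearDiscriminantAnalysis().fit(np.vstack(X), np.array(y))
    print(clf.score(np.vstack(X), np.array(y)))  # accuracy on the toy data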
Visualisation of Sound Source Location in a Teleoperation Interface for a Mobile Robot BIBAFull-Text 51-52
  François Ferland; Aurélien Reveleau; François Michaud
Representing the location of sound sources may be helpful when teleoperating a mobile robot. To evaluate this modality, we conducted trials in which the graphical user interface (GUI) displays a blue right icon on the video stream where the sound is located. Results show that such a visualization modality provides a clear benefit when a user has to distinguish between multiple sound sources.
Command Robots from Orbit with Supervised Autonomy: An Introduction to the Meteron Supvis-Justin Experiment BIBAFull-Text 53-54
  Neal Y. Lii; Daniel Leidner; André Schiele; Peter Birkenkampf; Benedikt Pleintinger; Ralph Bayer
The on-going work at German Aerospace Center (DLR) and European Space Agency (ESA) on the Meteron Supvis-Justin space telerobotic experiment utilizing supervised autonomy is presented. The Supvis-Justin experiment will employ a tablet UI for an astronaut on the International Space Station (ISS) to communicate task level commands to a service robot. The goal is to explore the viability of supervised autonomy for space telerobotics. For its validation, survey, navigation, inspection, and maintenance tasks will be commanded to DLR's service robot, Rollin' Justin, to be performed in a simulated extraterrestrial environment constructed at DLR. The experiment is currently slated for late 2015-2016.
Auditory Immersion with Stereo Sound in a Mobile Robotic Telepresence System BIBAFull-Text 55-56
  Andrey Kiselev; Mårten Scherlund; Annica Kristoffersson; Natalia Efremova; Amy Loutfi
Auditory immersion plays a significant role in generating a good feeling of presence for users driving a telepresence robot. In this paper, one of the key characteristics of auditory immersion -- sound source localization (SSL) -- is studied from the perspective of those who operate telepresence robots from remote locations. A prototype capable of delivering a soundscape to the user through the Interaural Time Difference (ITD) and Interaural Level Difference (ILD), using the ORTF stereo recording technique, was developed. The prototype was evaluated in an experiment, and the results suggest that the developed method is sufficient for sound source localization tasks.
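One common way to estimate the Interaural Time Difference mentioned above is to cross-correlate the two channels within a physically plausible lag window; the minimal sketch below uses a synthetic stereo signal and conventions chosen purely for illustration:

    import numpy as np

    def estimate_itd(left, right, sr, max_itd=0.0007):
        # Cross-correlate the channels inside a plausible lag window
        # (~0.7 ms for human-like microphone spacing) and return the lag
        # in seconds; with this convention the lag is negative when the
        # 'left' channel leads.
        max_lag = int(max_itd * sr)
        corr = np.correlate(left, right, mode="full")
        mid = len(corr) // 2
        window = corr[mid - max_lag:mid + max_lag + 1]
        return (np.argmax(window) - max_lag) / sr

    sr = 44100
    t = np.arange(sr) / sr
    sig = np.sin(2 * np.pi * 440 * t)
    delay = 20                               # samples, ~0.45 ms
    left, right = sig[delay:], sig[:-delay]  # source closer to the left mic
    print(estimate_itd(left, right, sr))     # ~ -0.00045 s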
Evaluation of a Mobile Robotic Telepresence System in a One-on-One Meeting Scenario BIBAFull-Text 57-58
  Mathis Lauckner; Dejan Pangercic; Serkan Tuerker
Despite the increased popularity and availability of mobile robotic telepresence systems in recent years, there has been little research that systematically compares these systems against more traditional systems, such as teleconference systems. In this work, we present a 40-person user study in a simulated one-on-one meeting scenario. In a between-subjects design, participants performed the Desert Survival Task with an unknown confederate of the examiner either calling in using a conventional phone or beaming in using the Beam system from Suitable Technologies [1]. In the study we also simulated a typical meeting disturbance. Several aspects of the discussion's effectiveness were assessed (e.g., connectivity, disturbance, problem solving, ease of collaboration) using questionnaires as well as video observations. Our findings consistently corroborated a significantly more effective, natural, and likeable interaction when using Beam. Though our scenario is real and a true pain point for many large corporations, further studies will need to be carried out comparing such telepresence systems to video conference systems.
Video Manipulation Techniques for the Protection of Privacy in Remote Presence Systems BIBAFull-Text 59-60
  Alexander Hubers; Emily Andrulis; William D. Scott; Levi Scott; Tanner Stirrat; Duc Tran; Ruonan Zhang; Ross Sowell; Cindy Grimm
Systems that give control of a mobile robot to a remote user raise privacy concerns about what the remote user can see and do through the robot. We aim to preserve some of that privacy by manipulating the video data that the remote user sees. Through two user studies, we explore the effectiveness of different video manipulation techniques at providing different types of privacy. We simultaneously examine task performance in the presence of privacy protection. In the first study, participants were asked to watch a video captured by a robot exploring an office environment and to complete a series of observational tasks under differing video manipulation conditions. Our results show that using manipulations of the video stream can lead to fewer privacy violations for different privacy types. Through a second user study, it was demonstrated that these privacy-protecting techniques were effective without diminishing the task performance of the remote user.

Late-Breaking Reports -- Session 2

A Tool to Diagnose Autism in Children Aged Between Two to Five Years Old: An Exploratory Study with the Robot QueBall BIBAFull-Text 61-62
  Julie Golliot; Catherine Raby-Nahas; Mark Vezina; Yves-Marie Merat; Audrée-Jeanne Beaudoin; Mélanie Couture; Tamie Salter; Bianca Côté; Cynthia Duclos; Maryse Lavoie; François Michaud
QueBall is a spherical robot capable of motion and equipped with touch sensors, multi-colored lights, sounds, and a wireless interface with an iOS device. While these capabilities may be useful in assisting the early diagnosis of autism, no detailed guidelines have yet been established to achieve this. In this report, we describe an exploratory study conducted with an interdisciplinary research team to adapt QueBall's capabilities so that clinicians can observe how children interact with the robot. This is the preliminary phase in designing an experimental protocol to evaluate the use of QueBall in diagnosing autism in children from two to five years of age.
Why Do Children Abuse Robots? BIBAFull-Text 63-64
  Tatsuya Nomura; Takayuki Uratani; Takayuki Kanda; Kazutaka Matsumoto; Hiroyuki Kidokoro; Yoshitaka Suehiro; Sachie Yamada
We found that children sometimes abuse a social robot in a hallway of a shopping mall. They spoke bad words, repeatedly obstructed the robot's path, and sometimes even kicked and punched the robot. To investigate why they abused it, we conducted a field study in which we let visiting children freely interact with the robot and interviewed those who engaged in serious abusive behavior, including physical contact. In total, we obtained valid interviews from twenty-three children, aged between five and nine, over 13 days of observations. Adults and older children were rarely involved. We interviewed the children to learn whether they perceived the robot as a human-like other, why they abused it, and whether they thought that the robot would suffer from their abusive behavior. We found that 1) the majority of the children abused the robot because they were curious about its reactions or enjoyed abusing it, while considering it human-like, and 2) about half of the children believed in the capability of the robot to perceive their abusive behaviors.
The Interplay of Robot Language Level with Children's Language Learning during Storytelling BIBAFull-Text 65-66
  Jacqueline Kory Westlund; Cynthia Breazeal
Children's oral language skills in preschool can predict their success in reading, writing, and academics in later schooling. Helping children improve their language skills early on could lead to more children succeeding later. As such, we examined the potential of a sociable robotic learning/teaching companion to support children's early language development. In a microgenetic study, 17 children played a storytelling game with the robot eight times over a two-month period. We evaluated whether a robot that "leveled" its stories to match the child's current abilities would lead to greater learning and language improvements than a robot that was not matched. All children learned new words, created stories, and enjoyed playing. Children who played with a matched robot used more words, and more diverse words, in their stories than unmatched children. Understanding the interplay between the robot's and the children's language will inform future work on robot companions that support children's education through play.
Social Robot Toolkit: Tangible Programming for Young Children BIBAFull-Text 67-68
  Michal Gordon; Edith Ackermann; Cynthia Breazeal
Teaching children how to program has gained broad interest in the last decade. Approaches range from visual programming languages, tangible programming, as well as programmable robots. We present a novel social robot toolkit that extends common approaches along three dimensions. (i) We propose a tangible programming approach that is suitable for young children with reusable vinyl stickers to represent rules for the robot to perform. (ii) We make use of social robots that are designed to interact directly with children. (iii) We focus the programming tasks and activities around social interaction. In other words, children teach an expressive relational robot how to socially interact by showing it a tangible sticker rulebook that they create. To explore various activities and interactions, we teleoperated the robot's sensors. We present qualitative analysis of children's engagement in and uses of the social robot toolkit and show that they learn to create new rules, explore complex computational concepts, and internalize the mechanism with which robots can be programmed.
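At its core, a sticker rulebook is a set of if-trigger-then-action pairs; a toy Python sketch of such a rule engine (trigger and behavior names are invented) is:

    # Each vinyl sticker rule is an if-trigger-then-action pair; the names
    # below are invented for illustration, not taken from the toolkit.
    rulebook = [
        ("sees_smile", "wag_ears"),
        ("hears_hello", "say_hello"),
        ("sees_frown", "play_sad_sound"),
    ]

    def react(perception, rules):
        # All behaviors the child's rulebook fires for one perceived event.
        return [action for trigger, action in rules if trigger == perception]

    print(react("sees_smile", rulebook))  # ['wag_ears']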
The 5-Step Plan: A Holistic Approach to Investigate Children's Ideas on Future Robotic Products BIBAFull-Text 69-70
  Lara Lammer; Astrid Weiss; Markus Vincze
Many educational robotics activities involve children with bottom-up approaches and pre-set robot tasks. However, robotics for education can be much more if used in holistic, non-task-deterministic ways, such as when children develop design concepts for their favorite robots. The 5-step plan offers a simple yet effective structure for this creative process. Researchers as well as educators can use it to introduce many children to robotics, not only the ones interested in becoming engineers or scientists, while at the same time exploring the ideas and needs for a wide range of future robotic products and services from a children's perspective.
A Cognitive and Affective Architecture for Social Human-Robot Interaction BIBAFull-Text 71-72
  Wafa Johal; Damien Pellier; Carole Adam; Humbert Fiorino; Sylvie Pesty
Robots show up frequently in new applications in our daily lives, where they interact more and more closely with the human user. Despite a long history of research, existing cognitive architectures are still too generic and hence not tailored enough to meet the specific needs of social HRI. In particular, interaction-oriented architectures must handle emotions, language, social norms, and more. In this paper, we present an overview of a Cognitive and Affective Interaction-Oriented Architecture for social human-robot interaction, abbreviated CAIO. This architecture parallels the BDI (Belief, Desire, Intention) architecture that comes from Bratman's philosophy of action. CAIO integrates complex emotions and planning techniques. It aims to contribute to cognitive architectures for HRI by enabling the robot to reason on the mental states (including emotions) of its interlocutors, and to act physically, emotionally, and verbally.
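For readers unfamiliar with BDI, the loop below sketches the classic perceive-deliberate-act cycle the abstract builds on; it is a generic illustration with invented percepts and actions, not the CAIO implementation:

    # Generic BDI-style cycle (illustrative only; not the CAIO architecture).
    class BDIAgent:
        def __init__(self, plans):
            self.beliefs = set()   # facts the agent currently holds
            self.plans = plans     # (precondition, action) pairs
            self.intentions = []   # actions the agent has committed to

        def perceive(self, percepts):
            self.beliefs |= set(percepts)      # revise beliefs

        def deliberate(self):
            # Commit to actions whose preconditions are believed.
            for precondition, action in self.plans:
                if precondition in self.beliefs and action not in self.intentions:
                    self.intentions.append(action)

        def act(self):
            for action in self.intentions:
                print("executing:", action)
            self.intentions.clear()

    agent = BDIAgent([("user_sad", "express_empathy"),
                      ("user_greets", "greet_back")])
    agent.perceive({"user_greets"})
    agent.deliberate()
    agent.act()  # executing: greet_back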
Design of Emotional Conversations with a Child for a Role Playing Robot BIBAFull-Text 73-74
  Sang Hoon Ji; Su Jeong You; Hye-Kyung Cho
Children who suffer from psychological and emotional disorders are often unaccustomed to cooperation, shared meaning, sympathy, empathy, and magnanimity. Recently, several attempts have been made to increase children's social skills through emotional role-playing games with robots, because a robotic system can offer dynamic, adaptive, and autonomous interaction for learning imitation skills, with real-time performance evaluation and feedback. But there are limits to robot technologies: in particular, it is very difficult to understand a child's words and take behaviors suited to the child's intents. Therefore, in this paper we suggest a method of guiding an emotional role-playing robot's conversations with a child. For this purpose, we design human-robot interaction software and a special human intervention device (HID), and we implement the suggested method on a commercial humanoid robot.
How Anthropomorphism Affects Human Perception of Color-Gender-Labeled Pet Robots BIBAFull-Text 75-76
  Kyung-Mi Chung; Dong-Hee Shin
The aim of this study is to examine whether six color-gender-labeled pet robots draw repulsive responses from participants based on the measurement of five key concepts in human-robot interaction: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. In total, 60 male and 69 female undergraduate and graduate students aged 18 to 37 years participated in the experiment. The results show that anthropomorphism and animacy can be conceptualized as a composite and extended concept. In a plot of results, all visual targets were positioned at the top of the upward curve, and not plotted in the valley. Another finding of this study is that, when confronted with a pet robot PLEO with manipulated gender-related social cues, participants responded automatically to the robots, applying human-human social attraction rules to them.
Nao as an Authority in the Classroom: Can Nao Help the Teacher to Keep an Acceptable Noise Level? BIBAFull-Text 77-78
  Patricia Bianca Lyk; Morten Lyk
We are researching whether Nao could work as an authority figure in the classroom, and whether it could help the teacher keep the sound volume at an acceptable level. We also investigate whether children perceive the robot differently if they have worked with it before, and whether this influences Nao's standing as an authority figure. Furthermore, we examine whether there is a connection between the pupils' perception of Nao as living/not-living and their willingness to accept it as an authority figure. This is studied through a two-part experiment with a 5th grade class, where one half of the pupils had worked with the robot prior to it acting as an assistant to the teacher in a normal class.
Design and Architecture of a Robot-Child Speech-Controlled Game BIBAFull-Text 79-80
  Samer Al Moubayed; Jill Lehman
We describe the conceptual design, architecture, and implementation of a multimodal, robot-child dialogue system in a fast-paced, speech-controlled collaborative game. In Mole Madness, two players (a user and an anthropomorphic robot) work together to move an animated mole character through its environment via speech commands. Using a combination of speech recognition systems and a microphone array, the system can accommodate children's natural behavior in real time. We also briefly present the details of a recent data collection with children, ages 5 to 9, and some of the challenging behaviors the system elicited that we intend to explore.
Children's Responses to Genuine Child Synthesized Speech in Child-Robot Interaction BIBAFull-Text 81-82
  Anara Sandygulova; Gregory M. P. O'Hare
This paper presents a study of children's responses to the perceived gender and age of a humanoid robot Nao which communicated with four genuine synthesized child voices. Results indicate that manipulations are successful for all voice conditions. Also, voices of UK English are preferred by children in Ireland for Child-Robot Interaction (cHRI).
Ms. An, Feasibility Study with a Robot Teaching Assistant BIBAFull-Text 83-84
  Karina R. Liles; Jenay M. Beer
In this feasibility study, we present a socially interactive robot teaching assistant to engage 5th grade rural minority students in practicing multiplication. We discovered that students perceived the robot as a sociable agent, and students preferred their interaction with the robot assistant over other kinds of study support.
Smart Presence for Retirement Community Employees BIBAFull-Text 85-86
  Karina R. Liles; Allison Kacmar; Rachel E. Stuck; Jenay M. Beer
The goal of this study was to understand what employees of continuing care retirement communities (CCRC) think about smart presence technology. To better understand their perceptions of the benefits, concerns, and adoption criteria for smart presence systems, we conducted a needs assessment with CCRC employees who were given first-hand experience operating the BEAM as both pilot and local users. Participants indicated there is potential for smart presence technology in retirement communities and shared an equal number of benefits and concerns. The benefits mentioned included convenience, effort/time saving, visualization, and socialization, whereas the concerns mentioned included limitations of the system, emotional harm to others/residents, and physical harm to others. It is important to understand such attitudes toward technology, because they are predictive of adoption.
Leading a Person Using Ethologically Inspired Autonomous Robot Behavior BIBAFull-Text 87-88
  Soh Takahashi; Márta Gácsi; Péter Korondi; Hideki Hashimoto; Mihoko Niitsuma
This study considers the leading behavior of a robot. To lead a person whose attention is initially elsewhere, the robot's behavior needs to be designed so that it seeks the person's attention and seamlessly brings him or her to the target location. We therefore implement a leading behavior for a robot inspired by a dog's action sequence, and we evaluate the robot's behavior through an experiment.
Fundamental Study of Robot Behavior that Encourages Human to Tidy up Table BIBAFull-Text 89-90
  Manabu Gouko; Chyon Hae Kim
In this study, we investigate the influence of robot behaviors that motivate humans to tidy up. In this scenario, a robot can accomplish tidying-up tasks effectively through human-robot cooperation (HRC). We developed a system that can tidy up a table through HRC. To determine which behaviors effectively encourage humans to tidy up, we conducted a preliminary experiment with 8 male participants aged 21-23. This paper describes the preliminary results.
How Would You Describe Assistive Robots to People Who are Blind or Low Vision? BIBAFull-Text 91-92
  Byung-Cheol Min; Aaron Steinfeld; M. Bernardine Dias
Assistive robots can enhance the safety, efficiency, and independence of people who are blind or low vision (B/LV) during urban travel. However, a clear understanding of how best to introduce and describe an assistive robot to B/LV persons in a way that facilitates effective human-robot interaction is still lacking. The goal of this study was to understand how different people would describe an assistive robot to a B/LV traveler. Our preliminary results showed that participants described the robot in a similar order (i.e., the robot's appearance, function, and capability), but with different focuses in their descriptions. This pilot study will lead to better descriptions of assistive robots for B/LV users, supporting more effective interaction in our future real-world deployments.
Selecting Popular Topics for Elderly People in Conversation-based Companion Agents BIBAFull-Text 93-94
  Kazufumi Tsukada; Yutaka Takase; Yukiko I. Nakano
In aging societies, supporting elderly people is a critical issue, and companion agents that can function as a conversational partner are expected to provide social support to isolated older adults. Aiming at improving companionship dialogues with these agents, this study proposes a topic selection mechanism using blog articles written by the elderly. By categorizing the nouns extracted from blogs using Wikipedia, we defined 219 topic categories consisting of about 3,000 topic words that the elderly discuss in their daily life. The topic selection mechanism is implemented into a companion agent and used to generate the agent's utterances.
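Such a mechanism can be reduced to voting extracted nouns against a category dictionary; the miniature Python sketch below stands in for the paper's 219 categories and ~3,000 topic words with invented entries:

    from collections import Counter

    # Miniature stand-in for the topic dictionary (the paper reports 219
    # categories and ~3,000 topic words mined from elderly-authored blogs).
    TOPICS = {
        "gardening": {"rose", "seedling", "pruning"},
        "grandchildren": {"grandson", "granddaughter", "school"},
        "health": {"checkup", "clinic", "walking"},
    }

    def select_topic(nouns):
        # Vote for the category covering the most nouns in the user's turn.
        votes = Counter()
        for noun in nouns:
            for category, words in TOPICS.items():
                if noun in words:
                    votes[category] += 1
        return votes.most_common(1)[0][0] if votes else None

    print(select_topic(["rose", "pruning", "walking"]))  # gardening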
An Interactive Robot Facilitating Social Skills for Children BIBAFull-Text 95-96
  Sang-Seok Yun; JongSuk Choi; Sung-Kee Park
In this paper, we propose an interactive robot system to facilitate the improvement of children's social capabilities, with robot-assisted interventions effectively offering social skill training for children with autism. This is achieved through therapeutic protocols with therapy, encouragement, and pause modes, which are determined by the behavioral responses of children. Furthermore, the robot evaluates the level of children's reactivity in the child-robot interaction through recognition modules for frontal face and touch, and it generates appropriate training tasks through the combination of kinesic acts and displayable contents. From experiments on interplay training with autistic and non-autistic children, we verified that the proposed system has positive effects on the social development of children with autism spectrum disorders.
Body Language for Mood Induction Procedures BIBAFull-Text 97-98
  Cristina Diaz; Angel Pascual Del Pobil; Azucena Garcia; Diana Castilla; Ignacio Miralles
Following the principles of positive psychology (PP), social learning, therapeutic robotics, and mood induction procedures (MIPs), we have developed an application to be used as part of a positive MIP in a psychological treatment context. We used the inexpensive humanoid robot Nao because of its ease of use, which allows proper interaction with therapists and helps them on a regular basis. Our hypothesis is that a rich body language set can compensate for the lack of facial expressions in such robots. We ran a pilot study in the context of cognitive behavioral therapy for the treatment of fibromyalgia. This work introduces a new way to contribute to MIPs and to human-robot interaction (HRI).
Formative Work Analysis to Design Caregiver Robots BIBAFull-Text 99-100
  Keith S. Jones; Barbara Cherry; Mohan Sridharan
This paper describes recent developments in a research project that seeks to explore and describe how caregiving robots should function by analyzing caregiving in elders' homes, creating a detailed account of current elder care practices, and translating this account into design recommendations for caregiving robots.
Social Personalized Human-Machine Interaction for People with Autism: Defining User Profiles and First Contact with a Robot BIBAFull-Text 101-102
  Pauline Chevalier; Adriana Tapus; Jean-Claude Martin; Brice Isableu
Our research aims to develop a new personalized social interaction model between a humanoid robot and/or a virtual agent and an individual with Autism Spectrum Disorder (ASD), so as to enhance his/her social and communication skills. Because of the intra-individual variability of the ASD population, our objective is to propose a customized social interaction for each individual. In light of the impact of ASD on vision and motor processing [1], [2], and in order to define each individual's profile, we posit that an individual's reliance on proprioceptive and kinematic visual cues will affect the way he/she interacts with a social agent. A first experiment defining each participant's perceptivo-cognitive and sensorimotor profile with respect to the integration of visual inputs has already been conducted. We also introduced the Nao robot to 4 children with ASD and analyzed their behavior in relation to their profiles. First results are promising.
A Social Robot to Mitigate Stress, Anxiety, and Pain in Hospital Pediatric Care BIBAFull-Text 103-104
  Sooyeon Jeong; Deirdre E. Logan; Matthew S. Goodwin; Suzanne Graca; Brianna O'Connell; Honey Goodenough; Laurel Anderson; Nicole Stenquist; Katie Fitzpatrick; Miriam Zisook; Luke Plummer; Cynthia Breazeal; Peter Weinstock
Children and their parents may undergo challenging experiences when admitted for inpatient care at pediatric hospitals. While most hospitals make efforts to provide socio-emotional support for patients and their families during care, gaps still exist between human resource supply and demand. The Huggable project aims to close this gap by creating a social robot able to mitigate stress, anxiety, and pain in pediatric patients by engaging them in playful interactive activities. In this paper, we introduce a larger experimental design to compare the effects of the Huggable robot to a virtual character on a screen and a plush teddy bear, and provide initial qualitative analyses of patients' and parents' behaviors during intervention sessions collected thus far. We demonstrate preliminarily that children are more eager to emotionally connect with and be physically activated by a robot than a virtual character, illustrating the potential of social robots to provide socio-emotional support during inpatient pediatric care.
Enhancing Long-term Children to Robot Interaction Engagement through Cloud Connectivity BIBAFull-Text 105-106
  Jordi Albo-Canals; Adso Fernández-Baena; Roger Boldu; Alex Barco; Joan Navarro; David Miralles; Cristobal Raya; Cecilio Angulo
In this paper, we introduce a cloud-based structure to enhance long-term engagement in a pet-robot companion treatment intended to reduce stress and anxiety in hospitalized children. Cloud connectivity makes it possible to combine human intervention with intelligent multi-agent software to bias the robot companion's behavior, fostering better engagement and decreasing dropout during the treatment.
Designing a Robot Guide for Blind People in Indoor Environments BIBAFull-Text 107-108
  Catherine Feng; Shiri Azenkot; Maya Cakmak
Navigating indoors is challenging for blind people and they often rely on assistance from sighted people. We propose a solution for indoor navigation involving multi-purpose robots that will likely reside in many buildings in the future. In this report, we present a design for how robots can guide blind people to an indoor destination in an effective and socially-acceptable way. We used participatory design, creating a design team with three designers and five non-designers. All but one member of the team had a visual impairment. Our resulting design specifies how the robot and the user initially meet, how the robot guides the user through hallways and around obstacles, and how the robot and user conclude their session.
Robot Trustworthiness: Guidelines for Simulated Emotion BIBAFull-Text 109-110
  David J. Atkinson
Well-justified human evaluations of autonomous robot trustworthiness require evidence from a variety of sources, including observation of robot behavior. Displays of affect by a robot that reflect important internal states not otherwise overtly visible could provide useful evidence for evaluation of robot agent trustworthiness. As an analogy, the human limbic system, sometimes described as an ancient sub-cognitive system, drives human display of affect in a manner that is largely independent of purposeful behavior arising from cognition. Such displays of affect and corresponding attributions of emotion provide important social information that aids understanding and prediction of human behavior. Could an "artificial limbic system" provide similar useful insight into a robot's internal state? The value of affect signals for evaluation of robot trustworthiness depends on three crucial factors that require investigation: 1) Correlation of affective signals to trust-related, measurable attributes of robot agent internal state, 2) Fidelity in portrayal of emotion by the robot agent such that affective signals evoke human anthropomorphic social recognition, and 3) Correct human interpretation of the affective signals for justifiable modulation of beliefs about the robot agent. This paper discusses these three factors as principles to guide robotic simulation of emotion for increasing human ability to make reasonable assessments of robot trustworthiness and appropriate reliance.
Robotic Sonification for Promoting Emotional and Social Interactions of Children with ASD BIBAFull-Text 111-112
  Ruimin Zhang; Myounghoon Jeon; Chung Hyuk Park; Ayanna Howard
Deficiency in social interaction is one of the most crucial issues for children with Autism Spectrum Disorder (ASD). To foster their emotional and social communication, we have developed an orchestration robot platform. After describing our concepts of the use of sonification in the intervention sessions, we describe our efforts in developing a facial expression detection system and implementing a platform-free sonification server system.
Effects of SMILE Emotional Model on Humanoid Robot User Interaction BIBAFull-Text 113-114
  Elise Russell; Andrew B. Williams
Naturalistic conversation and emotions, while difficult to approximate in robots, facilitate interactions with non-expert users and serve to make robots more relatable and predictable. This paper describes the implementation and evaluation of two major improvements upon an existing interface, the SMILE app for the MU-L8 humanoid robot. The original version of the app is compared to a version in which popups and extraneous user touches are removed, and they are both compared to a third version in which the robot's emotions decay with time. These versions are tested in terms of ease of use, user engagement, and naturalness of interaction. User feedback and observer ratings are collected for 15 participants, and their results are described. These improvements contribute advances in the field of smartphone humanoid robotics interfaces toward a more ideal emotional and conversational model.
Provisions of Human-Robot Friendship BIBAFull-Text 115-116
  Sean A. McGlynn; Wendy A. Rogers
In this paper, we provide an overview of theories on human-robot relationship development with an emphasis on equity relationships. Specifically, we discuss the potential for robots and humans to engage in a communal cost/reward relationship structure that is characteristic of friendship. The "Provisions of Friendship" have been proposed as being necessary for satisfying human-human relationships. We provide insights into what will be required of a robot at each stage in a dynamic relationship development process for a human to treat it as a friend.
High-speed Human / Robot Hand Interaction System BIBAFull-Text 117-118
  Yugo Katsuki; Yuji Yamakawa; Masatoshi Ishikawa
We propose an entirely new human hand / robot hand interaction system designed with a focus on high speed. The speed of this system, from input via a high-speed vision system to output by a high-speed multifingered robot hand, exceeds the visual recognition speed of humans. Therefore, the motion of the interaction system cannot be recognized by the human eye. As an application, we created a system called "Rock-Paper-Scissors robot system with 100% winning rate", based on this interaction system. This system always beats human players in the Rock-Paper-Scissors game due to the high speed of our interaction system. We also discuss the future possibilities of this system.
From a Robotic Vacuum Cleaner to Robot Companion: Acceptance and Engagement in Domestic Environments BIBAFull-Text 119-120
  Maria Luce Lupetti; Stefano Rosa; Gabriele Ermacora
This paper presents preliminary results of the DR4GHE project (Domestic Robot 4 Gaming, Health and Eco-sustainability). The main purpose is to develop a robot companion for domestic applications able to advise users and suggest good practices. Interaction and engagement with the user are introduced by giving the robot vacuum cleaner (RVC) additional intelligence and leveraging its existing level of acceptance. Morphological aspects, in addition to behavioral traits, play a key role in the perceptual transition of the RVC from object to subject. Human-robot interaction takes place on two levels: direct interaction, in particular through visual and sound signals; and mediated interaction, through a GUI for smartphones and tablets.
The Effect of Robot Appearance Types and Task Types on Service Evaluation of a Robot BIBAFull-Text 121-122
  Jung Ju Choi; Sonya S. Kwak
A robot's appearance can be classified into two types: human-oriented and product-oriented. A human-oriented robot resembles a human in appearance, whereas a product-oriented robot is an intelligent product in which robotic technologies are integrated into an existing product. In this study, we investigated the impact of the two robot appearance types and two task types on the service evaluation of a robot. We executed a 2 (robot appearance type: human-oriented vs. product-oriented) x 2 (robot task type: social context vs. task-oriented context) mixed-participants experiment design (N=48). In the social context, people evaluated the service provided by a human-oriented robot better than that provided by a product-oriented robot, while in the task-oriented context they evaluated the service provided by a product-oriented robot more positively than that provided by a human-oriented robot. Implications for the design of human-robot interaction are discussed.
Do People Purchase a Robot Because of Its Coolness? BIBAFull-Text 123-124
  Gyu-Ri Kim; Kyung-Mi Chung; Dong-Hee Shin
The purpose of this study is to verify a research model in which coolness and perceived usefulness are predictor variables, attitude is a mediating variable, and purchase intention is the criterion variable. In total, 41 respondents with no prior exposure to the robot JIBO completed an online survey after watching a scenario movie explaining its usage. Coolness and perceived usefulness are significant predictors of attitude, and attitude has a positive impact on purchase intention. Based on the results, a "cool" product is more likely to arouse potential consumers' purchase intention, both directly and indirectly, in the market. Theoretical and practical implications are discussed in detail.

Late-Breaking Reports -- Session 3

Museum Guide Robot by Considering Static and Dynamic Gaze Expressions to Communicate with Visitors BIBAFull-Text 125-126
  Kaname Sano; Keisuke Murata; Ryota Suzuki; Yoshinori Kuno; Daijiro Itagaki; Yoshinori Kobayashi
Human eyes not only serve the function of enabling us "to see" something, but also perform the vital role of allowing us "to show" our gaze for non-verbal communication. We have investigated the static design and dynamic behaviors of robot heads for suitable gaze communication with humans while giving a friendly impression. In this paper, we focus on how the robot's impression is affected by its eye blinks and eyeball movement synchronized with head turning. Through experiments with human participants, we found that robot head turning with eye blinks gives a friendly impression, while robot head turning without eye blinks is suitable for making people shift their attention toward the robot's gaze direction. These findings are very important for communication robots such as museum guide robots. To demonstrate our approach, we therefore developed a museum guide robot system employing suitable facial design and gaze behavior based on all of our findings.
Controlling Robot's Gaze according to Participation Roles and Dominance in Multiparty Conversations BIBAFull-Text 127-128
  Takashi Yoshino; Yutaka Takase; Yukiko I. Nakano
A robot's gaze behaviors are indispensable in allowing the robot to participate in multiparty conversations. To build a robot that can convey appropriate attentional behavior in multiparty human-robot conversations, this study proposes robot head gaze models in terms of participation roles and dominance in a conversation. By implementing such models, we developed a robot that can determine appropriate gaze behaviors according to its conversational roles and dominance.
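A role-dependent gaze model of this kind can be summarized as a probability table over gaze targets per participation role; the sketch below uses invented probabilities purely for illustration (the paper's models are derived from human multiparty conversations):

    import random

    # Invented gaze-target distributions per participation role.
    GAZE_MODEL = {
        "speaker":          {"addressee": 0.60, "side_participant": 0.25, "away": 0.15},
        "addressee":        {"speaker": 0.70, "side_participant": 0.20, "away": 0.10},
        "side_participant": {"speaker": 0.50, "addressee": 0.40, "away": 0.10},
    }

    def pick_gaze_target(role):
        # Sample the robot's next gaze target given its current role.
        dist = GAZE_MODEL[role]
        targets, weights = zip(*dist.items())
        return random.choices(targets, weights=weights)[0]

    print(pick_gaze_target("speaker"))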
I am Interested in What You are Saying: Role of Nonverbal Immediacy Cues in Listening BIBAFull-Text 129-130
  Seongmi Jeong; Jihyang Gu; Dong-Hee Shin
Immediacy plays a key role in interpersonal communication. Some immediate behaviors in human-human interaction (e.g., gaze and nodding) have received much attention in HRI; however, others (e.g., body posture) have not. This study investigates whether a robot's posture (leaning forward vs. upright) and nodding manner (small and fast vs. large and slow) can affect perception of the robot. The current study argues that leaning forward and nodding manner are likely to have significant effects on psychological and behavioral outcomes, including the perceived empathy, human-likeness, and likability of the robot.
A Gaze Controller for Coordinating Mutual Gaze During Conversational Turn-Taking in Human-Robot Interaction BIBAFull-Text 131-132
  Frank Broz; Hagen Lehmann
This report proposes a method for modelling conversational mutual gaze from human interaction data in a way that allows the resulting gaze controller to be used in combination with any dialog manager that controls turn-taking. It also describes a set of experimental measures that will be used to investigate the effect of the gaze controller on people's impressions of the robot and the quality of the interaction.
Do People Spontaneously Take a Robot's Visual Perspective? BIBAFull-Text 133-134
  Xuan Zhao; Corey Cusimano; Bertram F. Malle
This study takes a novel approach to the topic of perspective taking in HRI. In a human behavioral experiment, we examined whether and in what circumstances people spontaneously take a humanoid robot's visual perspective. We found that specific nonverbal behaviors displayed by a robot -- namely, referential gaze and goal-directed reaching -- led human viewers to take the robot's visual perspective, though marginally less frequently than when they encounter the same behaviors displayed by another human. This project identifies specific features of robot behavior that trigger spontaneous social-cognitive processes in human viewers and informs the design of interactive robots in the future.
Layering Laban Effort Features on Robot Task Motions BIBAFull-Text 135-136
  Heather Knight; Reid Simmons
Motion is an essential area of social communication that has the potential to enable robots and people to collaborate naturally, develop rapport, and seamlessly share environments. The Laban Effort System is a well-known methodology from dance and acting training that has been in active use for over fifty years, teaching performers to overlay sequences of motion with expressivity. We present our methodology for layering expression on robot base motions, using the same set of joints for both procedural task completion and expressive communication, followed by early results on the legibility of our Effort implementations and how their settings affect robot projections of state.
A Gaze-contingent Dictating Robot to Study Turn-taking BIBAFull-Text 137-138
  Alessandra Sciutti; Lars Schillingmann; Oskar Palinko; Yukie Nagai; Giulio Sandini
In this paper we describe a human-robot interaction scenario designed to evaluate the role of gaze as an implicit signal for turn-taking in a robotic teaching context. In particular, we propose a protocol to assess the impact of different timing strategies in a common teaching task (English dictation). The task is designed to compare the effects of a teaching behavior whose timing depends on the student's gaze with the more standard fixed-timing approach. An initial validation indicates that this scenario could represent a functional tool for investigating the positive and negative impacts that personalized timing might have on different subjects.
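As a minimal sketch of the contrast between the two timing strategies the protocol compares (the function below and its gaze-based readiness signal are illustrative assumptions, not the authors' implementation):

```python
import time

def wait_before_next_word(student_ready, strategy="gaze",
                          fixed_delay=4.0, gaze_timeout=10.0):
    """Pacing policy for a dictating robot.

    student_ready: callable returning True when the student looks back up
                   at the robot (gaze-based readiness signal)
    strategy:      "fixed" sleeps a constant delay; "gaze" waits for the
                   readiness signal, with a timeout so dictation never stalls

    Returns the reason dictation proceeded: "fixed", "gaze", or "timeout".
    """
    if strategy == "fixed":
        time.sleep(fixed_delay)
        return "fixed"
    start = time.time()
    while time.time() - start < gaze_timeout:
        if student_ready():
            return "gaze"        # student signalled readiness
        time.sleep(0.05)
    return "timeout"             # fall back: proceed anyway
```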
Exception Handling for Natural Language Control of Robots BIBAFull-Text 139-140
  Lanbo She; Yunyi Jia; Ning Xi; Joyce Y. Chai
Enabling natural language control of robots is challenging, since human users are often not familiar with the underlying robotic system, and its capabilities and limitations. Many exceptions may occur when natural language commands are translated into lower-level robot actions. This paper gives a brief introduction to three levels of exceptions and discusses how dialogue can be applied to handle these exceptions during human-robot interaction.
Learning Bimanual Coordinated Tasks From Human Demonstrations BIBAFull-Text 141-142
  Ana-Lucia Pais Ureche; Aude Billard
In robot programming by demonstration, dealing with the high-dimensional data that comes from human demonstrations often requires embedding prior knowledge of which variables should be retained and why. This paper proposes an approach for automating robot learning through the detection of causalities in the set of variables recorded during demonstration. This allows us to infer a notion of coherence and coordination between multiple systems that apparently work independently. We test the approach on a bimanual scooping task consisting of multiple phases. We detect the coordination between the two arms, between the arms and the hands, and between the fingers of each hand, and observe how these coordination patterns change throughout the task.
Negotiating Instruction Strategies during Robot Action Demonstration BIBAFull-Text 143-144
  Lars C. Jensen; Kerstin Fischer; Dadhichi Shukla; Justus Piater
This paper describes the kinds of instruction strategies naïve users of an industrial robotic platform employ and analyzes how these strategies are adjusted based on the robot's feedback. The study shows that users' actions are contingent on the robot's response to such a degree that users will try out alternative instruction strategies if they do not see an effect in the robot within a time frame of two seconds. Thus, the timing of the robot's actions (or inactions) influences how users instruct the robot.
Human Smile Distinguishes between Collaborative and Solitary Tasks in Human-Robot Interaction BIBAFull-Text 145-146
  Franziska Kirstein; Kerstin Fischer; Özgür Erkent; Justus Piater
In this paper, we analyze the smiling behavior of participants as they instruct a robot to assist them in assembling a wooden toolbox. The results show that participants smile more when interacting with the robot than when they assemble the box. Thus, human tutors' smiling behavior can be used as an indicator to distinguish between collaborative and solitary phases during human-robot collaborative work.
Knowledge Acquisition with Selective Active Learning for Human-Robot Interaction BIBAFull-Text 147-148
  Batbold Myagmarjav; Mohan Sridharan
This paper describes an architecture for robots interacting with non-expert humans to incrementally acquire domain knowledge. Contextual information is used to generate candidate questions that are ranked using measures of information gain, ambiguity, and human confusion, with the objective of maximizing the potential utility of the response. We report results of preliminary experiments evaluating the architecture in a simulated environment.
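The abstract names three ranking ingredients (information gain, ambiguity, human confusion). A minimal sketch of how candidate questions could be scored by combining them, with the weights and the linear combination assumed purely for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def question_score(prior, posteriors, ambiguity, confusion,
                   w_gain=1.0, w_amb=0.5, w_conf=0.5):
    """Score a candidate question by the expected utility of its answer.

    prior:      distribution over hypotheses before asking
    posteriors: list of (answer_probability, posterior_distribution) pairs
    ambiguity:  0..1 estimate that the question can be misread (assumed)
    confusion:  0..1 estimate that it will confuse the human (assumed)

    Expected information gain rewards questions whose answers shrink
    uncertainty; the penalties discourage questions a non-expert is
    likely to answer unreliably.
    """
    expected_post_entropy = sum(p * entropy(dist) for p, dist in posteriors)
    info_gain = entropy(prior) - expected_post_entropy
    return w_gain * info_gain - w_amb * ambiguity - w_conf * confusion

# Example: a yes/no question that halves a uniform four-hypothesis prior
# yields one bit of expected gain, minus small ambiguity/confusion costs.
prior = [0.25] * 4
posteriors = [(0.5, [0.5, 0.5, 0.0, 0.0]), (0.5, [0.0, 0.0, 0.5, 0.5])]
print(question_score(prior, posteriors, ambiguity=0.2, confusion=0.1))
```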
Fidgebot: Working Out while Working BIBAFull-Text 149-150
  Jürgen Brandstetter; Noah Liebman; Kati London
More and more people suffer from chronic health issues related to posture and lack of movement in their office work. We developed a novel approach to motivate employees to be more physically active during office work. Our approach focuses on using the social characteristics of the NAO robot platform to deliver social cues for motivation. Acting like a coworker who is highly motivated to exercise, NAO invites employees to do short "micro-exercises" along with it. This approach has multiple advantages compared to conventional notification systems. Our pilot study shows that employees found it easy and enjoyable to perform micro-exercises with NAO. According to our qualitative data, NAO's social appearance was essential for employee motivation.
Human-Robot Teamwork in USAR Environments: the TRADR Project BIBAFull-Text 151-152
  Joachim de Greeff; Koen Hindriks; Mark A. Neerincx; Ivana Kruijff-Korbayova
The TRADR project aims at developing methods and models for human-robot teamwork, enabling robots to operate in search & rescue environments alongside humans as teammates, rather than as tools. Through a user-centered cognitive engineering method, human-robot teamwork is analyzed, modeled, implemented and evaluated in an iterative fashion. Central is the notion of persistence: rather than treating each sortie as a separate instance for which the build-up of situation awareness and exploration starts from scratch, the objective of the TRADR project is to provide robotic support in an ongoing, fluent manner. This paper provides a short overview of important aspects of human-robot teaming, such as human-robot teamwork coordination and joint situation awareness.
Enabling Synchronous Joint Action in Human-Robot Teams BIBAFull-Text 153-154
  Samantha Rack; Tariq Iqbal; Laurel D. Riek
Joint action is an increasing area of interest for HRI researchers. To be effective team members, robots need to be able to understand, anticipate, and react appropriately to high-level human social behavior. We have designed a new approach to enable an autonomous robot to act fluently within a synchronous human-robot team. We present an initial description and validation study of this approach. Using a synchronous dance scenario as an experimental testbed, we found that our robot was able to execute appropriate actions using our method. Moving forward, we aim to extend this method by developing predictions for the robot's actions using an understanding of the group's dynamics. Our method will be helpful to other researchers working to achieve fluency of action within human-robot groups.
Sliding Autonomy in Cloud Robotics Services for Smart City Applications BIBAFull-Text 155-156
  Gabriele Ermacora; Stefano Rosa; Basilio Bona
The aim of this paper is to present a sliding autonomy approach for Unmanned Aerial Vehicles (UAVs) in the context of the Fly4SmartCity project. The project consists of the implementation of a cloud robotics service in which small UAVs are employed for emergency management, monitoring and surveillance in a smart city scenario. Human-robot interaction is mediated by the cloud robotics platform. We envision three main levels of autonomy for UAVs: full autonomy, mixed initiative, and teleoperation. We then propose different scenarios in which we analyze the level of autonomy and the sliding autonomy approach. All services use shared knowledge (crowdsourcing and other data sources available on the Internet) for the management and control of the UAVs.
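The three named levels suggest a simple state-switching policy. A sketch under assumed rules (the actual Fly4SmartCity sliding policy is not described in the abstract):

```python
from enum import Enum

class Autonomy(Enum):
    TELEOPERATION = 1      # human pilots the UAV directly
    MIXED_INITIATIVE = 2   # human and autonomy share control
    FULL_AUTONOMY = 3      # UAV executes the mission on its own

def slide_autonomy(link_quality, operator_available, anomaly):
    """Choose the next autonomy level for a cloud-managed UAV.

    link_quality:       0..1 quality of the network link to the cloud
    operator_available: whether a human operator can take over
    anomaly:            whether the UAV has detected an on-board anomaly

    Illustrative rules: a degraded link pushes toward autonomy (since
    teleoperation needs the link), while an anomaly with an available
    operator and a good link pushes toward manual control.
    """
    if anomaly and operator_available and link_quality > 0.7:
        return Autonomy.TELEOPERATION
    if link_quality < 0.3 or not operator_available:
        return Autonomy.FULL_AUTONOMY
    return Autonomy.MIXED_INITIATIVE

print(slide_autonomy(link_quality=0.9, operator_available=True, anomaly=True))
```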
Efficient Space Utilization for Improving Navigation in Congested Environments BIBAFull-Text 157-158
  Moondeep Shrestha; Hayato Yanagawa; Erika Uno; Shigeki Sugano
In this paper, we look into two behaviors that humans use for efficient space utilization in congested situations. From observations, we noticed that 'last minute avoidance' and 'shoulder turning' are two crucial behaviors for achieving efficient navigation in crowded environments. The presented study verifies the shoulder-turning behavior and investigates typical values for these behaviors. These results will form an initial framework for a local avoidance planner in future research.
Towards a Human Factors Model for Underwater Robotics BIBAFull-Text 159-160
  Xian Wu; Rachel E. Stuck; Ioannis Rekleitis; Jenay M. Beer
The goal of this study is to understand the interaction factors between humans and semi-autonomous underwater vehicles (sAUVs) from an HRI perspective. A subject-matter expert (SME) interview approach was used to analyze video data of operators interacting with sAUVs. The results suggest considerations for the capabilities and limitations of the human and robot in relation to the dynamic demands of the task and environment. We propose a preliminary human factors model to depict these components and discuss how they interact.
Automated Planning for Peer-to-peer Teaming and its Evaluation in Remote Human-Robot Interaction BIBAFull-Text 161-162
  Vignesh Narayanan; Yu Zhang; Nathaniel Mendoza; Subbarao Kambhampati
Human factors studies on remote human-robot interaction are often restricted to various forms of supervision, in which the robot is essentially used as a smart mobile manipulation platform with sensing capabilities. In this study, we investigate the incorporation of a general planning capability into the robot to facilitate peer-to-peer human-robot teaming, in which the human and robot are viewed as teammates that are physically separated. One intriguing question is to what extent humans may feel uncomfortable with such robot autonomy and lose situation awareness, which can potentially reduce teaming performance. Our results suggest that peer-to-peer teaming is preferred by humans and leads to better performance. Furthermore, our results show that peer-to-peer teaming reduces cognitive load on objective measures (even though subjects did not report this in their subjective evaluations), and it does not reduce situation awareness for short-term tasks.
Visual Robot Programming for Generalizable Mobile Manipulation Tasks BIBAFull-Text 163-164
  Sonya Alexandrova; Zachary Tatlock; Maya Cakmak
General-purpose robots present the opportunity to be programmed for a specific purpose after deployment. This requires tools for end-users to quickly and intuitively program robots to perform useful tasks in new environments. In this paper, we present a flow-based visual programming language (VPL) for mobile manipulation tasks, demonstrate the generalizability of tasks programmed in this VPL, and present a preliminary user study of a development tool for this VPL.
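As an illustration of the flow-based idea (nodes are actions, edges carry control flow), here is a toy program representation and interpreter; the node names and dispatch mechanism are assumptions, not the paper's actual VPL primitives:

```python
# A linear mobile-manipulation program expressed as a flow graph.
program = {
    "nodes": {
        "n1": {"action": "navigate_to",   "args": {"location": "kitchen"}},
        "n2": {"action": "detect_object", "args": {"object": "mug"}},
        "n3": {"action": "pick_up",       "args": {"object": "mug"}},
    },
    "edges": [("n1", "n2"), ("n2", "n3")],  # control flows source -> target
}

def run(program, execute):
    """Walk the (here: linear) flow graph, dispatching each node.

    `execute` maps (action, args) onto real robot primitives; rebinding
    the values in `args` is what would let one program generalize to new
    objects and locations.
    """
    order = [src for src, _ in program["edges"]] + [program["edges"][-1][1]]
    for node_id in order:
        node = program["nodes"][node_id]
        execute(node["action"], node["args"])

run(program, lambda action, args: print(action, args))
```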
A Shared Autonomy Interface for Household Devices BIBAFull-Text 165-166
  Matthew Rueben; William D. Smart
As robots begin to enter our homes and workplaces, they will have to deal with the devices and appliances that are already there. Unfortunately, devices that are easy for humans to operate often cause problems for robots [3]. In teleoperation settings, the lack of tactile feedback often makes manipulation of buttons and switches awkward and clumsy [7]. Also, the robot's gripper often occludes the control, making teleoperation difficult. In the autonomous setting, perception of small buttons and switches is often difficult due to sensor limitations and poor lighting conditions. Adding depth information does not help much, since many of the controls we want to manipulate are small, and often close to the noise threshold of currently-available depth sensors typically installed on a mobile robot. This makes it extremely difficult to segment the controls from the other parts of the device.
   In this paper, we present a shared autonomy approach to the operation of physical device controls. A human operator gives high-level guidance, helps identify controls and their locations, and sequences the actions of the robot. Autonomous software on our robot performs the lower-level actions that require closed-loop control, and estimates the exact positions and parameters of controls. We describe the overall system, and then give the results of our initial evaluations, which suggest that the system is effective in operating the controls on a physical device.
Understanding Second-Order Theory of Mind BIBAFull-Text 167-168
  Laura M. Hiatt; J. Gregory Trafton
Theory of mind is a key factor in the effectiveness of robots and humans working together as a team. Here, we further our understanding of theory of mind by extending a theory of mind model to account for a more complicated, second-order theory of mind task. Ultimately, this will provide robots with a deeper understanding of their human teammates, enabling them to better perform in human-robot teams.
Facial Expression Synthesis on Robots: An ROS Module BIBAFull-Text 169-170
  Maryam Moosaei; Cory J. Hayes; Laurel D. Riek
We present a generalized technique for easily synthesizing facial expressions on robotic faces. In contrast to other work, our approach works in near real time with a high level of accuracy, does not require any manual labeling, is a fully open-source ROS module, and can enable the research community to perform objective and systematic comparisons between the expressive capabilities of different robots.
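A minimal sketch of what a node in such a module might look like, publishing facial action-unit intensities at near-real-time rates; the topic name and flat message layout are assumptions, not the module's actual interface:

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32MultiArray

def publish_expression(intensities):
    """Continuously publish a vector of facial action-unit activations.

    intensities: list of AU activations in 0..1; the mapping of indices
    to action units is assumed here purely for illustration.
    """
    rospy.init_node('expression_synthesizer')
    pub = rospy.Publisher('/face/action_units', Float32MultiArray,
                          queue_size=1)
    rate = rospy.Rate(30)                  # ~30 Hz, i.e. near real time
    msg = Float32MultiArray(data=intensities)
    while not rospy.is_shutdown():
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    # e.g., cheek raiser + lip corner puller approximating a smile
    publish_expression([0.0, 0.8, 0.0, 0.9])
```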
Experimental Verification of Learning Strategy Fusion for Varying Environments BIBAFull-Text 171-172
  Akihiko Yamaguchi; Masahiro Oshita; Jun Takamatsu; Tsukasa Ogasawara
We investigate the real-robot applicability of our method: general-purpose behavior learning for high-degree-of-freedom robots in varying environments. Our method is based on the learning strategy fusion proposed in [Yamaguchi et al. 2011] and extended theoretically in [Yamaguchi et al. 2013]. This report discusses its applicability to real robot systems and demonstrates some positive experimental results.
A Cloud Robotic System based on Robot Companions for Children with Autism Spectrum Disorders to Perform Evaluations during LEGO Engineering Workshops BIBAFull-Text 173-174
  Jordi Albo-Canals; Danielle Feerst; Daniel de Cordoba; Chris Rogers
In this paper, we propose a non-invasive way to measure the level of anxiety and stress of participants with Autism Spectrum Disorders without using wearable devices. This measurement is done through a robot companion, which logs children's behavior during social skills training sessions based on building LEGO robotics. All of this data can be uploaded to a cloud system for future comparison and research. The work presented is a validation of the proposed technology and the session layout.
Cloud based VR System with Immersive Interfaces to Collect Human Gaze and Body Motion Behaviors BIBAFull-Text 175-176
  Yoshinobu Hagiwara; Yoshiaki Mizuchi; Yongwoon Choi; Tetsunari Inamura
In this study, we present a cloud VR system with immersive interfaces to collect human gaze and body behaviors. Humans can log in to a VR space and communicate with a robot through the proposed system. Oculus Rift and Kinect provide immersive visualization and motion control, respectively. Human gaze and body behaviors are collected into a database during the human-robot interaction. An application experiment on sharing object concepts demonstrates the utility of the proposed system.
Achieving the Vision of Effective Soldier-Robot Teaming: Recent Work in Multimodal Communication BIBAFull-Text 177-178
  Susan G. Hill; Daniel Barber; Arthur W. Evans, III
The U.S. Army Research Laboratory (ARL) Autonomous Systems Enterprise has a vision for the future of effective Soldier-robot teaming. Our research program focuses on three primary thrust areas: communications, teaming, and shared cognition. Here we discuss a recent study in communications, where we collected data using a multimodal interface comprising speech, gesture, touch, and a visual display to command a robot to perform semantically-based tasks. Observations on usability and participant expectations with respect to the interaction with the robot were obtained. Initial observations are reported, showing that the speech-gesture-visual multimodal interface was liked and performed well. Areas for improvement were noted.
Effects of Agent Transparency on Operator Trust BIBAFull-Text 179-180
  Michael W. Boyce; Jessie Y. C. Chen; Anthony R. Selkowitz; Shan G. Lakhmani
We conducted a human-in-the-loop robot simulation experiment examining the effects of displaying transparency information in the interface of an autonomous robot on operator trust. Participants were assigned to one of three transparency conditions, and trust was measured prior to observing the autonomous robotic agent's progress and post observation. Results demonstrated that participants who received more transparency information reported higher trust in the autonomous robotic agent. Overall findings indicate that displaying SAT (Situation awareness-based Agent Transparency) model-based transparency information on a robotic interface is effective for appropriate trust calibration in an autonomous robotic agent.
Modeling Human-Robot Collaboration in a Simulated Environment BIBAFull-Text 181-182
  Jekaterina Novikova; Leon Watts; Tetsunari Inamura
In this paper, we describe a project that explores SIGVerse, an open-source enhanced robot simulator, for researching social human-robot interaction. Research on high-level social human-robot interaction systems that include collaboration and emotional intercommunication between people and robots requires a large amount of data from embodied interaction experiments. However, the cost of developing real robots and performing many experiments can be very high. On the other hand, virtual robot simulators are very limited in terms of interaction between simulated robots and real people. We therefore propose using SIGVerse, an enhanced human-robot interaction simulator that enables users to join the virtual world occupied by simulated robots through an immersive user interface. In this paper, we describe a collaborative human-robot interaction task in which a virtual human agent is controlled remotely by human subjects to interact with an autonomous virtual robot with implemented artificial emotional reactions. Our project sets the first steps towards exploring the potential of an enhanced human-robot interaction simulator for building socially interactive robots that can serve in educational, team-building, and collaborative task-solving applications.
Human Safety and Efficiency of a Robot Controlled by Asymmetric Velocity Moderation BIBAFull-Text 183-184
  Gustavo Alfonso Garcia Ricardez; Akihiko Yamaguchi; Jun Takamatsu; Tsukasa Ogasawara
Maintaining human safety during HRI is key to the integration of humanoids into our daily lives. With this in mind, we previously proposed Asymmetric Velocity Moderation (AVM) as a way of restricting the robot's speed when interacting with humans. With AVM, the robot reduces its speed according to the distance to the human and the direction of the motion. In this paper, we propose a new way of calculating the speed restriction which solves a problem of previous proposals, where human safety was sacrificed due to unexpectedly weak restrictions. We focus on a detailed investigation of how AVM treats situations where a humanoid could endanger a human, and test it using different calculation methods for the speed restriction. Finally, we evaluate the efficiency of the humanoid HRP-4 in terms of task completion time by performing simulation experiments in simple HRI scenarios.
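The abstract gives the idea (speed restriction from distance and motion direction) but not the formula. A sketch of the asymmetry under assumed thresholds and a linear ramp, not the authors' actual calculation:

```python
import numpy as np

def moderated_velocity(v_cmd, part_pos, human_pos, d_stop=0.2, d_free=1.5):
    """Asymmetrically moderate a commanded Cartesian velocity near a human.

    v_cmd:     commanded velocity of a robot part (3-vector, m/s)
    part_pos:  position of that part (3-vector, m)
    human_pos: position of the human (3-vector, m)
    d_stop:    distance at which approach motion halts (assumed)
    d_free:    distance beyond which no restriction applies (assumed)

    Motion toward the human is ramped down with proximity; motion away
    from the human is left unrestricted, which is the asymmetry.
    """
    v_cmd = np.asarray(v_cmd, dtype=float)
    to_human = np.asarray(human_pos, dtype=float) - np.asarray(part_pos, dtype=float)
    dist = float(np.linalg.norm(to_human))
    approaching = float(np.dot(v_cmd, to_human)) > 0.0
    if not approaching or dist >= d_free:
        return v_cmd                         # moving away: unrestricted
    scale = max(0.0, (dist - d_stop) / (d_free - d_stop))
    return v_cmd * scale                     # linear ramp down to a stop

print(moderated_velocity([0.5, 0.0, 0.0], [0, 0, 0], [1.0, 0, 0]))
```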

HRI Pioneers -- Poster Session 1

Personal Space Invaders: Exploring Robot-initiated Touch-based Gestures for Collaborative Robotics BIBAFull-Text 185-186
  Jeff Allen; Karon E. MacLean
Robots have been physically interacting with people for many years, but almost exclusively it is people who initiate the physical contact. Robots able to initiate touch will enable a large new category of human interaction, for example collaboration in noisy environments, and can help bridge cultural and language barriers. In this paper, we outline a research plan with the goal of developing a framework for robot-initiated physical communication, towards enabling robots to work and collaborate safely with people in close proximity. Physical touch-based interaction that is acceptable and understandable to all people is a challenging goal. Instead, we aim to develop robots that start with a repertoire of general touch behaviours which are adapted to individual people's preferences as each person interacts with the robot.
Skill Demonstration Transfer for Learning from Demonstration BIBAFull-Text 187-188
  Tesca Fitzgerald; Andrea L. Thomaz
Learning from Demonstration is an effective method for interactively teaching skills to a robot learner. However, a skill learned via demonstrations is often learned within a particular environment and uses a specific set of objects, and thus may not be immediately applicable for use in unfamiliar environments. Transfer learning addresses this problem by enabling a robot to apply learned skills to unfamiliar environments. We describe our ongoing work to develop a system which enables transfer learning by representing skill demonstrations according to the level of similarity between the source and target environments.
Encouraging User Autonomy through Robot-Mediated Intervention BIBFull-Text 189-190
  Jillian Greczek; Maja Mataric
Robots Interacting with Style BIBAFull-Text 191-192
  Wafa Johal
Our research goal is to identify ways to adapt the non-verbal behavior and skills of a companion robot for children. We present an experiment considering parents' and children's perception of role changing and behavioral style in an interactive scenario with children. Behavioral styles are nonverbal parameters that affect the way a robot expresses itself within a specific task. The results of this ongoing experiment aim to determine the influence of role changing and styles in terms of the perceived credibility and engagement of the child interacting with the robot.
Fostering Learning Gains Through Personalized Robot-Child Tutoring Interactions BIBAFull-Text 193-194
  Aditi Ramachandran; Brian Scassellati
Social robots can be used to tutor children in one-on-one interactions. It would be most beneficial for these robots to adapt their behavior to suit the individual learning needs of children. Each child is different; they learn at their own pace and respond better to certain types of feedback and exercises. Furthermore, being able to detect various affective signals during an interaction with a social robot would allow the robot to adaptively change its behavior to counter negative affective states that occur during learning, such as confusion or boredom. This type of adaptive behavior based on perceived signals from the child (such as facial expressions, body posture, etc.) will create more effective tutoring interactions between the robot and child. We propose that a robotic tutoring system that can leverage both affective signals as well as progress through a learning task will lead to greater engagement and learning gains from the child in a one-on-one tutoring interaction.
Tactile Skin Deformation Feedback for Conveying Environment Forces in Teleoperation BIBAFull-Text 195-196
  Samuel B. Schorr; Zhan Fan Quek; William R. Provancher; Allison M. Okamura
Teleoperated robots are used in a variety of applications. The immersive nature of the teleoperated experience is often limited by a lack of haptic information. However, in many applications there are difficulties conveying force information due to limitations in hardware fidelity and the inherent tradeoffs between stability and transparency. In situations where force feedback is limited, it is possible to use sensory substitution methods to convey this force information via other sensory modalities. We hypothesize that skin stretch feedback is a useful substitute for kinesthetic force feedback in force-sensitive teleoperated tasks. We created and tested a tactile device that emulates the natural skin deformation present during tool mediated manual interaction. With this device, experiment participants performed teleoperated palpation to determine the orientation of a stiff region in a surrounding artificial tissue using skin stretch, force, reduced gain force, graphic, or vibration feedback. Participants using skin stretch feedback were able to determine the orientation of the region as accurately as when using force feedback and significantly better than when using vibration feedback, but also exhibited higher interaction forces. Thus, skin stretch feedback may be useful in scenarios where force feedback is reduced or infeasible.
When is it Better to Give Up?: Towards Autonomous Action Selection for Robot Assisted ASD Therapy BIBAFull-Text 197-198
  Emmanuel Senft; Paul Baxter; James Kennedy; Tony Belpaeme
Robot Assisted Therapy (RAT) for children with ASD has found promising applications. In this paper, we outline an autonomous action selection mechanism to extend current RAT approaches. This will include the ability to revert control of the therapeutic intervention to the supervising therapist. We suggest that in order to maintain the goals of therapy, sometimes it is better if the robot gives up.
Analyzing Human Behavior and Bootstrapping Task Constraints from Kinesthetic Demonstrations BIBAFull-Text 199-200
  Ana Lucia Pais Ureche; Aude Billard
In robot Programming by Demonstration (PbD), the interaction with the human user is key to collecting good demonstrations, learning and finally achieving a good task execution. We therefore take a dual approach in analyzing demonstration data. First we automatically determine task constraints that can be used in the learning phase. Specifically we determine the frame of reference to be used in each part of the task, the important variables for each axis and a stiffness modulation factor. Additionally for bi-manual tasks we determine arm-dominance and spatial or force coordination patterns between the arms. Second we analyze human behavior during demonstration in order to determine how skilled the human user is and what kind of feedback is preferred during the learning interaction. We test this approach on complex tasks requiring sequences of actions, bi-manual or arm-hand coordination and contact on each end effector.
Toward More Natural Human-Robot Dialogue BIBFull-Text 201-202
  Tom Williams
A Robotized Environment for Improving Therapist Everyday Work with Children with Severe Mental Disabilities BIBAFull-Text 203-204
  Igor Zubrycki; Grzegorz Granosik
The burnout rate is very high among therapists who work with mentally disabled children, especially those with autism. To address this issue, our group of researchers from different fields conducted a series of interviews and participant observations in a workplace and proposed a programmable, robotized environment which therapists can use in their everyday work. Our research question is to what extent such an environment will be programmed and used in practice, and whether it will improve therapists' well-being.

HRI Pioneers -- Poster Session 2

Information Management for Map-Based Visualizations of Robot Teams BIBAFull-Text 205-206
  Electa A. Baker
Complex human-machine systems, including remotely deployed mobile robots and sensors, can generate an overwhelming amount of data. Filtering the available geospatial information is necessary to make the most time-critical information salient to the system operators. The General Visualization and Abstraction (GVA) algorithm abstracts the presented information in order to reduce visual clutter and has been shown to reduce the cognitive demands and perceived workload of a single operator tasked with supervising teams of multiple robots with high levels of autonomy [1, 2]. My research focuses on significantly extending the GVA algorithm to support multiple human operators who share a common high-level goal, but have role-specific subgoals for their designated human-robot teams.
An Adaptive Robotic Tablet Gaming System for Post-Stroke Hand Function Rehabilitation BIBAFull-Text 207-208
  Brittney A. English; Ayanna M. Howard
Physical therapy is a common treatment for the rehabilitation of hemiparesis, or the weakness of one side of the body. Stroke is a common cause of hemiparesis. Stroke survivors regularly struggle with motivation and engagement, especially in-between sessions when the therapist is absent from the exercising process. As a solution, we have developed a robotic tablet gaming system to facilitate post-stroke hand function rehabilitation. Healthy subject pilot studies have been completed to verify that this system increases engagement and is capable of encouraging specific therapeutic motions. In the future, a learning model algorithm will be added to the system to assess the patient's progress and optimize the recovery time.
Extended Virtual Presence of Therapists through Home Service Robots BIBAFull-Text 209-210
  Hee-Tae Jung; Yu-kyong Choe; Roderic Grupen
The use of robots in rehabilitation is an increasingly viable option, given the shortage of well-trained therapists who can address individual patients' needs and priorities. Despite the acknowledged importance of customized therapy for individual patients, the means to realize it has received less research attention. Many approaches rely on rehabilitation robots, such as InMotion [3], where therapy customization is achieved by physically assisting patients when they cannot complete expected exercise movements. Consequently, it is important to accurately detect patients' unsuccessful efforts to make exercise movements using various signals; an example that utilizes electromyography signals can be found in Dipietro et al. [1]. However, these approaches lack adaptive therapy programs: generic exercise targets do not necessarily address the specific needs and deficits of individual patients, nor do they impose appropriate challenges.
Robotic Coaching of Complex Physical Skills BIBAFull-Text 211-212
  Alexandru Litoiu; Brian Scassellati
The research area of using robots to coach complex physical skills is underserved. Whereas robots have been used extensively in the form of robotic orthoses to rehabilitate early trauma patients, there is more that can be done to develop robots that help children, the elderly and late-stage rehabilitation patients to excel at physical skills. In order to do this, we must develop robots that do not actuate on the students, but coach them through hands-off modalities such as verbal advice and demonstrations. This approach requires sophisticated perception, and modeling of the student's movement in order to deliver effective advice. Preliminary results suggest that these goals can be achieved with consumer-grade sensing hardware. We present planned future work towards achieving this vision.
An Unsupervised Learning Approach for Classifying Sequence Data for Human Robotic Interaction Using Spiking Neural Network BIBAFull-Text 213-214
  Banafsheh Rekabdar; Monica Nicolescu; Mircea Nicolescu
The goal of this research is to enable robots to learn spatio-temporal patterns from a human's demonstration. We propose an approach based on Spiking Neural Networks. The method brings the following contributions: First, it enables the encoding of patterns in an unsupervised manner. Second, it requires a very small number of training examples. Third, the approach is invariant to scale and translation. We validated our method on a dataset of hand-movement gestures representing the drawing of digits 0 to 9 in front of a camera. We compared the proposed approach with other standard pattern recognition approaches. The results indicate the superiority of the proposed method over the other approaches.
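The full architecture (unsupervised encoding, scale and translation invariance) is beyond an abstract-sized sketch, but the basic building block of such networks, a leaky integrate-and-fire (LIF) neuron converting an input signal into a spike train, can be shown compactly; all parameters below are assumed:

```python
import numpy as np

def lif_spike_train(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire encoding of an input current into spikes.

    inputs: 1-D array of input currents over time
    tau:    membrane time constant (ms); dt: time step (ms)

    The membrane potential leaks toward rest while integrating the input;
    crossing the threshold emits a spike and resets the potential.
    """
    v, spikes = v_reset, []
    for i in inputs:
        v += dt * (-(v - v_reset) / tau + i)   # leaky integration
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset                        # fire and reset
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: a constant drive yields a regular spike train whose rate
# encodes the input magnitude.
print(lif_spike_train(np.full(50, 0.12)))
```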
Creating Interactive Robotic Characters: Through a combination of artificial intelligence and professional animation BIBAFull-Text 215-216
  Tiago Guiomar Ribeiro; Ana Paiva
We are integrating artificially intelligent agents with generic animation systems in order to provide socially interactive robots with expressive behavior defined by animation artists. Animators will therefore be able to apply principles of traditional and 3D animation to such robotic systems, and thus achieve the illusion of life in robots. Our work requires studies and interactive scenario development alongside the artists.
Context-Aware Assistive Interfaces for Persons with Severe Motor Disabilities BIBAFull-Text 217-218
  Matthew Rueben
Persons with severe motor disabilities have a great need for assistive robots, but also struggle to communicate these needs in ways that a robot can understand. I propose an interface that will make it possible to communicate with robots using limited movements. This will be done using contextual information from the robot's semantic model of the world. I also describe the state-of-the-art hardware and personal collaborations that equip our lab for this research. Assistive robotic interfaces also evoke concerns that a robot could violate personal privacy expectations, particularly if a remote operator can see the robot's video stream. This is especially important for persons with disabilities because it may be harder for them to monitor the robot's whereabouts. I describe ongoing work on two interfaces that help make it possible for robots to be privacy conscious. Answers for privacy concerns need to be developed alongside the new interface technologies prior to in-home deployment.
Affect and Inference in Bayesian Knowledge Tracing with a Robot Tutor BIBAFull-Text 219-220
  Samuel Spaulding; Cynthia Breazeal
In this paper, we present work to construct a robotic tutoring system that can assess student knowledge in real time during an educational interaction. Like a good human teacher, the robot draws on multimodal data sources to infer whether students have mastered language skills. Specifically, the model extends the standard Bayesian Knowledge Tracing algorithm to incorporate an estimate of the student's affective state (whether he/she is confused, bored, engaged, smiling, etc.) in order to predict future educational performance. We propose research to answer two questions: First, does augmenting the model with affective information improve the computational quality of inference? Second, do humans display more prominent affective signals in an interaction with a robot, compared to a screen-based agent? By answering these questions, this work has the potential to provide both algorithmic and human-centered motivations for further development of robotic systems that tightly integrate affect understanding and complex models of inference with interactive, educational robots.
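Standard Bayesian Knowledge Tracing updates a mastery belief from correct/incorrect observations using guess, slip, and learn probabilities. A sketch of one update step, with the affect term added only as an illustrative stand-in for the paper's extension (which is not specified in the abstract):

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15,
               engagement=0.5):
    """One step of Bayesian Knowledge Tracing with a toy affect term.

    p_know:     current belief that the student has mastered the skill
    correct:    whether the latest answer was correct
    engagement: 0..1 affect estimate; here it is *assumed* to lower the
                slip probability (an engaged student slips less), which
                is illustrative rather than the authors' actual model.
    """
    slip = p_slip * (1.0 - 0.5 * engagement)
    if correct:
        num = p_know * (1.0 - slip)
        den = num + (1.0 - p_know) * p_guess
    else:
        num = p_know * slip
        den = num + (1.0 - p_know) * (1.0 - p_guess)
    posterior = num / den                           # Bayes rule on the answer
    return posterior + (1.0 - posterior) * p_learn  # learning transition

# Example: belief in mastery after a correct answer from an engaged student.
print(bkt_update(p_know=0.4, correct=True, engagement=0.9))
```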
Towards Efficient Collaborations with Trust-Seeking Adaptive Robots BIBAFull-Text 221-222
  Anqi Xu; Gregory Dudek
We are interested in asymmetric human-robot teams, where a human supervisor occasionally takes over control to aid an autonomous robot in a given task. Our research aims to optimize team efficiency by improving the robot's task performance, decreasing the human's workload, and building trust in the team. We envision synergistic collaborations where the robot adapts its behaviors dynamically to optimize efficacy, reduce manual interventions, and actively seek greater trust. We describe recent works that study two facets of this trust-seeking adaptive methodology: modeling human-robot trust dynamics, and developing interactive behavior adaptation techniques. We also highlight ongoing efforts to combine these works, which will enable future human-robot teams to be maximally trusting and efficient.
The Effect of Robot Appearance Types and Task Types on Service Evaluation of a Robot BIBAFull-Text 223-224
  Jung Ju Choi; Sonya S. Kwak
Robots' appearances can be classified into two types: human-oriented and product-oriented. A human-oriented robot resembles a human in appearance, whereas a product-oriented robot is an intelligent product in which robotic technologies are integrated into an existing product. In this study, we investigated the impact of two robot appearance types and two task types on the service evaluation of a robot. We executed a 2 (robot appearance type: human-oriented vs. product-oriented) x 2 (robot task type: social context vs. task-oriented context) mixed-participants experiment design (N=48). In the social context, people evaluated the service provided by a human-oriented robot better than that provided by a product-oriented robot, while in the task-oriented context, they evaluated the service provided by a product-oriented robot more positively than that provided by a human-oriented robot. Implications for the design of human-robot interaction are discussed.

HRI Pioneers -- Poster Session 3

Error Feedback for Robust Learning from Demonstration BIBAFull-Text 225-226
  Maria Vanessa aus der Wieschen; Kerstin Fischer; Norbert Krüger
The present study applies a user-centered approach to investigating feedback modalities for robot teleoperation by naïve users. It identifies the reasons why novice users need feedback and evaluates feedback modalities by employing participatory design. Moreover, drawing on document design theory, it studies which design guidelines need to be followed in the creation of legible error feedback screens.
Studying Socially Assistive Robots in Their Organizational Context: Studies with PARO in a Nursing Home BIBAFull-Text 227-228
  Wan-Ling Chang; Selma Šabanovic
We explore human-robot interaction (HRI) with socially assistive robots within a broader social context rather than one-on-one interaction. In this paper, we describe two in situ studies of the socially assistive robot PARO in a local nursing home -- one in a controlled small-group setting, and one in free-form interaction in a public space -- as well as our future research agenda to facilitate socially situated exploration of assistive robotics in the wild. We particularly focus on how people and institutions scaffold successful HRI, and identify how social mediation, individual sensemaking, and other social factors affect the success of HRI.
Social Personalized Human-Machine Interaction for People with Autism BIBAFull-Text 229-230
  Pauline Chevalier
My PhD research aims to develop a new personalized social interaction model between a humanoid robot and/or a virtual agent and an individual (child and/or adult) with Autism Spectrum Disorder (ASD), so as to enhance his/her social and communication skills. Because of the variability of the syndrome among the ASD population, our objective is to propose a customized social interaction for each individual. Previous studies explored the link between an individual's integration of proprioceptive and visual feedback and their communication and interaction skills and emotion recognition [1], [2]. In light of the impact of ASD on vision and motor processing [3], [4], and in order to define an individual's profile, we posit that the individual's reliance on proprioceptive and kinematic visual cues will affect the way he/she interacts with a social agent. In our work, a first experiment that defines each participant's perceptivo-cognitive and sensorimotor profile with respect to the integration of visual inputs has already been conducted. Our next work will focus on developing appropriate agent behaviors that fit the user's profile.
Towards Analyzing Cooperative Brain-Robot Interfaces Through Affective and Subjective Data BIBAFull-Text 231-232
  Chris S. Crawford; Juan E. Gilbert
Several single-user Brain-Computer Interface (BCI) systems currently exist. These systems are often used to provide input to robots. Although these systems are useful in some applications, they often cause issues such as high cognitive workload and fatigue. The presented research investigates an alternative approach, which consists of dividing cognitive tasks amongst multiple users. The primary goal of this research is to investigate the effectiveness of cooperative Brain-Robot Interfaces (cBRI) by analyzing affective data (engagement) provided by a BCI device and subjective data collected from participants.
Developing Learning from Demonstration Techniques for Individuals with Physical Disabilities BIBAFull-Text 233-234
  William Curran
Learning from demonstration research often assumes that the demonstrator can quickly give feedback or demonstrations. Individuals with severe motor disabilities are often slow and prone to human errors in demonstrations while teaching. Our work develops tools to allow persons with severe motor disabilities, who stand to benefit most from assistive robots, to train these systems. To accommodate slower feedback, we will develop a movie-reel style learning from demonstration interface. To handle human error, we will use dimensionality reduction to develop new reinforcement learning techniques.
Autonomy, Embodiment, and Obedience to Robots BIBAFull-Text 235-236
  Denise Geiskkovitch; Stela Seo; James E. Young
We conducted an HRI obedience experiment comparing an autonomous robotic authority to (i) a remote-controlled robot and (ii) robots of varying embodiments during a deterrent task. The results suggest that half of people will continue to perform a tedious task under the direction of a robot, even after expressing a desire to stop. Further, we failed to find an impact of robot embodiment or perceived robot autonomy on obedience. Rather, the robot's perceived authority status may be more strongly correlated with obedience.
Open Learner Modelling with a Robotic Tutor BIBAFull-Text 237-238
  Aidan Jones; Susan Bull; Ginevra Castellano
This paper describes research exploring how personalisation in a robot tutor using an open learner model (OLM) based approach impacts the effectiveness of children's learning. An OLM is a visualisation of a learner's inferred knowledge state. We address the feasibility of using social robotics to present an OLM to a learner. Results to date indicate that a robotic tutor can increase trust in the explanations of an OLM over text-based representations. We outline the remaining work to create and evaluate an autonomous robotic tutor that will use an OLM to scaffold reflection.
Challenges in Developing a Collaborative Robotic Assistant for Automotive Assembly Lines BIBAFull-Text 239-240
  Vaibhav Vasant Unhelkar; Julie A. Shah
Industrial robots are on the verge of emerging from their cages and entering final assembly to work alongside humans. Towards this, we are developing a collaborative robot capable of assisting humans in final automotive assembly. Several algorithmic and design challenges arise when robots enter the unpredictable, human-centric, and time-critical environment of final assembly. In this work, we briefly discuss a few of these challenges along with developed solutions and proposed methodologies, and their implications for improving human-robot collaboration.
Co-Adaptive Optimal Control Framework for Human-Robot Physical Symbiosis BIBFull-Text 241-242
  Ker-Jiun Wang; Mingui Sun; Ruiping Xia; Zhi-Hong Mao
Bidirectional Learning of Handwriting Skill in Human-Robot Interaction BIBAFull-Text 243-244
  Hang Yin; Aude Billard; Ana Paiva
This paper describes the design of a robot agent and associated learning algorithms to help children in handwriting acquisition. The main issue lies in how to program a robot to obtain human-like handwriting and then exploit it to teach children. We propose to address this by integrating the learning-from-demonstration paradigm, which allows the robot to extract a task index from intuitive expert (e.g., adult) demonstrations. We present our work on the development of an algorithm, as well as its validation by learning compliant robotic writing motions from the extracted index. Also discussed is the synthesis of the learned task in the prospective work of transferring the task skill to users, especially in terms of learning by teaching. Ongoing work on the design of a sensor-embedded pen is also introduced; this will be used as an intuitive interface for recording various handwriting-related information during the interaction.

Workshops

HRI Education Workshop: How to Design and Teach Courses in Human-Robot Interaction BIBAFull-Text 245-246
  Carlotta A. Berry; Cindy Bethel; Selma Šabanovic
This workshop aims to share best practices for teaching courses in Human-Robot Interaction (HRI). The main focus is on undergraduate and graduate education and training, but K-12 and informal learning environments are also of interest. HRI is still a relatively new field with no standardized textbook or curriculum. Furthermore, HRI education requires an interdisciplinary approach, which poses challenges for both students and instructors. This workshop will bring together researchers and educators to discuss strategies for designing and teaching HRI to students with diverse backgrounds and skill sets.
The Emerging Policy and Ethics of Human Robot Interaction BIBAFull-Text 247-248
  Laurel D. Riek; Woodrow Hartzog; Don A. Howard; AJung Moon; Ryan Calo
As robotics technology forays into our daily lives, research, industry, and government professionals in the field of human-robot interaction (HRI) must grapple with significant ethical, legal, and normative questions. Many leaders in the field have suggested that "the time is now" to start drafting ethical and policy guidelines for our community to guide us forward into this new era of robots in human social spaces. However, thus far, discussions have been skewed toward the technology side or the policy side, with few opportunities for cross-disciplinary conversation, creating problems for the community. Policy researchers can be concerned about robot capabilities that are scientifically unlikely to ever come to fruition (like the singularity), and technologists can be vehemently opposed to ethics and policy encroaching on their professional space, concerned it will impede their work. This workshop aims to build a cross-disciplinary bridge that will ensure mutual education and grounding, and has three main goals: 1) cultivate a multidisciplinary network of scholars who might not otherwise have the opportunity to meet and collaborate, 2) serve as a forum for guided discussion of relevant topics that have emerged as pressing ethical and policy issues in HRI, and 3) create a working consensus document for the community that will be shared broadly.
Workshop on Enabling Rich, Expressive Robot Animation BIBAFull-Text 249-251
  Elizabeth Jochum; David Nuñez
HRI researchers and practitioners often need to generate complex, rich, expressive movement from machines to facilitate effective interaction. Techniques often include live puppeteering via Wizard-of-Oz setups, sympathetic interfaces, or custom control software. Often, animation is accomplished by playing back pre-rendered movement sequences generated by offline animators, puppeteers, or actors providing input to motion capture systems. Roboticists have also explored real-time parametric animation, affective motion planning, mechanical motion design, or blends of offline and live methods. Generating robot animation is not always straightforward and can be time-consuming, costly, or even counter-productive when human-robot interaction breaks down due to inadequate animation. This workshop addresses the need to compare the various approaches to animating robots, to identify when particular techniques are most appropriate, and to explore opportunities for further experimentation and tool-building.
Workshop on Behavior Coordination between Animals, Humans and Robots BIBAFull-Text 253-254
  Hagen Lehmann; Luisa Damiano; Lorenzo Natale
This workshop intends to bring together researchers investigating one or more aspects of behavior coordination in three different research domains: human-human interaction, human-animal interaction, human-robot interaction.
HRI Workshop on Human-Robot Teaming BIBAFull-Text 255-256
  Bradley Hayes; Matthew C. Gombolay; Malte F. Jung; Koen Hindriks; Joachim de Greeff; Catholijn Jonker; Mark Neerincx; Jeffrey M. Bradshaw; Matthew Johnson; Ivana Kruijff-Korbayova; Maarten Sierhuis; Julie A. Shah; Brian Scassellati
Developing collaborative robots that can productively and safely operate out of isolation in uninstrumented, human-populated environments is an important goal for the field of robotics. The development of such agents, those that handle the dynamics of human environments and the complexities of interpreting human interaction, is a strong focus within Human-Robot Interaction and involves underlying research questions deeply relevant to the broader robotics community. "Human-Robot Teaming" is a full-day workshop bringing together peer-reviewed technical and position paper contributions spanning a multitude of topics within the domain of human-robot teaming. This workshop seeks to bring together researchers from a wide array of human-robot interaction research topics with the focus of enabling humans and robots to better work together towards common goals. The morning session is devoted to gaining insight from invited speakers and contributed papers, while the afternoon session heavily emphasizes participant interaction via poster presentations, breakout sessions, and an expert panel discussion.
HRSI 2015: Workshop on Human-Robot Spatial Interaction BIBAFull-Text 257-258
  Marc Hanheide; Christian Dondrup; Ute Leonards; Tamara Lorenz; David Lu
With mobile robots moving into day-to-day life in private homes, in care settings, and in public spaces and outdoors as robotic guides, Human-Robot Spatial Interaction (HRSI) -- the study of the joint movement of humans and robots and the social signals governing the interaction -- becomes more and more important. The main focus of the workshop lies in incorporating social norms, like proxemics, and social signals, like eye contact and motioning, into current navigation approaches, be they constraint-based, learned from observation/demonstration, or interactive. Further topics include how to evaluate the quality of HRSI using novel feedback measures and devices, and how to ground HRSI in empirical experiments focusing on human-human spatial interaction. The overall aim is to bring together participants from different fields, looking at all aspects of HRSI, by presenting their work and identifying the main problems and questions on the way to a holistic, integrated HRSI system.
FJA@HRI15: Towards a Framework for Joint Action BIBAFull-Text 259-260
  Aurélie Clodic; Rachid Alami; Cordula Vesper; Elisabeth Pacherie; Bilge Mutlu; Julie A. Shah
The HRI 2015 Workshop "Towards a Framework for Joint Action" is a full-day workshop held in conjunction with the 10th ACM/IEEE International Conference on Human-Robot Interaction, in Portland (USA) on March 2nd, 2015. The first edition of the workshop took place at RO-MAN 2014. This workshop aims at bringing together researchers from several disciplines to discuss the development of frameworks for analyzing and designing human-robot joint action. It is meant to provide the opportunity for researchers interested in joint action (roboticists, but also philosophers and psychologists) to discuss the topic in depth and to contribute to the elaboration of a framework for human-robot joint action. To achieve this goal, we propose that the community tackle a COMMON EXAMPLE (as is sometimes done in robotics planning competitions), with the goal of identifying the capacities and skills needed for the successful performance of the joint action. This should enable us to build upon each other's experience to further develop ongoing work. The proposed example is described on the workshop website: fja.sciencesconf.org.
Cognition: A Bridge between Robotics and Interaction BIBAFull-Text 261-262
  Alessandra Sciutti; Katrin S. Lohan; Yukie Nagai
A key feature of humans is the ability to anticipate what other agents are going to do and to plan a collaborative action accordingly. This skill, derived from being able to entertain models of other agents, compensates for the intrinsic delays of human motor control and is a primary support for efficient and fluid interaction. Moreover, the awareness that other humans are cognitive agents who combine sensory perception with internal models of the environment and others enables easier mutual understanding and coordination [1]. Cognition therefore represents an ideal link between different disciplines, such as robotics and the interaction studies performed by neuroscientists and psychologists. From a robotics perspective, the study of cognition is aimed at implementing cognitive architectures leading to efficient interaction with the environment and other agents (e.g., [2,3]). From the perspective of the human disciplines, robots could represent an ideal stimulus for studying which fundamental robot properties are necessary to make a robot perceived as a cognitive agent, enabling natural human-robot interaction (e.g., [4,5]). Ideally, the implementation of cognitive architectures may raise new and interesting questions for psychologists, and the behavioral and neuroscientific results of human-robot interaction studies could validate or provide new inputs for robotics engineers. The aim of this workshop is to provide a venue for researchers of different disciplines to discuss possible points of contact and to highlight the issues and advantages of bridging different fields for the study of cognition for interaction.

Videos

Telling Stories with Green the DragonBot: A Showcase of Children's Interactions Over Two Months BIBAFull-Text 263
  Jacqueline Kory Westlund
The language skills of young children can predict their academic success in later schooling. We may be able to help more children succeed by helping them improve their early language skills: a prime time for intervention is during preschool. Furthermore, because language lives in a social, interactive, and dialogic context, ideal interventions would not only teach vocabulary, but would also engage children as active participants in meaningful dialogues. Social robots could potentially have great impact in this area. They merge the benefits of using technology -- such as accessibility, customization and easy addition of new content, and student-paced, adaptive software -- with the benefits of embodied, social agents -- such as sharing physical spaces with us, communicating in natural ways, and leveraging social presence and social cues.
   To this end, we developed a robotic learning/teaching companion to support children's early language development. We performed a microgenetic field study in which we took this robot to two Boston-area preschools for two months. We asked two main questions: Could a robot companion support children's long-term oral language development through play? How might children build a relationship with and construe the robot over time?
Robot in Charge: A Relational Study Investigating Human-Robot Dyads with Differences in Interpersonal Dominance BIBAFull-Text 265
  Jamy Li; Wendy Ju; Clifford Nass
We present a controlled experiment exploring how people respond to video stimuli that depict relationships between humans and robots. Using a "relational" study methodology, we investigated how participants perceived differences in interpersonal dominance within a human-robot pair. Participants were more trusting of, and more attracted to, both the robot and the person when the robot was less dominant than the person, compared with the reverse. These differences were not found in a human-pair control condition, in which participants watched the same sequence of videos with two human confederates. Exploratory findings suggest that observers may prefer a person to be in charge and that human-robot relationships may be viewed differently than interpersonal ones.
Gaming Humanoids for Facilitating Social Interaction among People BIBAFull-Text 267
  Junya Hirose; Masakazu Hirokawa; Kenji Suzuki
This study proposes a novel approach to Human-Robot Interaction (HRI), called "Gaming Humanoids", that uses multiple humanoid robots in a video gaming environment. In this scenario, the robots autonomously play a typical tennis video game with humans. By varying each robot's role (e.g., teammate or opponent), various interactions can be realized.
The CoWriter Project: Teaching a Robot how to Write BIBAFull-Text 269
  Deanna Hood; Séverin Lemaignan; Pierre Dillenbourg
This video, which accompanies the paper "When Children Teach a Robot to Write: An Autonomous Teachable Humanoid Which Uses Simulated Handwriting" by the same authors (also presented at this conference), presents the first results of the EPFL CoWriter project. The project aims to build a robotic partner that children can teach handwriting. The system allows the learning-by-teaching paradigm to be employed in the interaction, so as to stimulate meta-cognition, empathy, and increased self-esteem in the child user. It is hypothesised that the use of a humanoid robot in such a system could not only engage an unmotivated student, but could also let children experience the physically-induced benefits encountered during human-led handwriting interventions, such as motor mimicry.
Using Robots to Moderate Team Conflict: The Case of Repairing Violations BIBAFull-Text 271
  Nikolas Martelaro; Malte Jung; Pamela Hinds
The video shows interactions between a robot and a team of people during a short group problem-solving task framed as a bomb-defusal scenario. We explore how a robot can influence conflict dynamics by repairing negative violations within the team. The video shows three samples of interactions between two participants, a confederate delivering personal violations, and a robot attempting to moderate the team dynamics. These samples highlight interactions from a larger 2 (negative trigger: task-directed vs. personal attack) x 2 (repair: yes or no) between-subjects experiment (N = 57 teams, 114 participants). Specifically, the video provides a qualitative look at our finding that a team's sense of personal conflict increases when the robot identifies and intervenes after a personal violation.
Low-Body-Part Detection using RGB-D camera BIBAFull-Text 273
  Jigwan Park; Kijin An; JongSuk Choi
The reliable perception of a human in a dynamic environment is a critical issue for interactive human-robot services. In human-robot interaction, a camera mounted on a robot naturally captures the lower body of a human, because robots are usually shorter than humans. Conventionally, a two-dimensional laser range finder is used for low-body-part detection [1, 2], but such methods may produce errors in the presence of structures that resemble legs. This video demonstrates a low-body-part detection scheme that exploits not only the three-dimensional characteristics but also the RGB features of the lower body. We build low-body-part candidates by clustering from the legs up to the hip; spurious candidates are then eliminated by the proposed method.
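   The clustering step can be pictured with a short sketch. The following Python fragment is purely illustrative and is not the authors' implementation: it clusters depth points below an assumed hip height with DBSCAN and keeps clusters whose size and horizontal extent are plausible for legs. All thresholds are invented for the example, and the RGB-feature filtering mentioned in the abstract is omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# All thresholds below are assumptions for illustration, not paper values.
HIP_HEIGHT_M = 1.0    # assumed cutoff between the lower body and the rest
MIN_POINTS = 50       # assumed minimum cluster size for a leg candidate
MAX_WIDTH_M = 0.45    # assumed maximum horizontal extent of a leg

def lower_body_candidates(points: np.ndarray) -> list:
    """points: (N, 3) array of (x, y, z), with z = height above the floor."""
    lower = points[points[:, 2] < HIP_HEIGHT_M]
    if len(lower) == 0:
        return []
    labels = DBSCAN(eps=0.1, min_samples=10).fit_predict(lower)
    candidates = []
    for label in set(labels) - {-1}:                     # -1 is DBSCAN noise
        cluster = lower[labels == label]
        width = np.ptp(cluster[:, :2], axis=0).max()     # horizontal extent
        if len(cluster) >= MIN_POINTS and width <= MAX_WIDTH_M:
            candidates.append(cluster)
    return candidates
```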
Mechanical Ottoman: Engaging and Taking Leave BIBAFull-Text 275
  David Sirkin; Brian Mok; Stephen Yang; Wendy Ju
This video introduces a robotic footstool -- the mechanical ottoman -- which explores how non-humanlike robots can coordinate joint action. It approaches seated people and offers to support their feet, then attempts to take leave during the interaction.
AEROSTABILES: A new approach to HRI researches BIBAFull-Text 277
  David St-Onge; Nicolas Reeves; Philippe Giguère; Inna Sharf; Gregory Dudek; Ioannis Rekleitis; Pierre-Yves Brèches; Patrick Abouzakhm; Philippe Babin
Initiated as a research-creation project by professor and artist Nicolas Reeves, the Aerostabile project quickly expanded to include researchers and artists from a wide range of disciplines. Its current phase brings together four robotics and research-creation labs with varied expertise in unstable and dynamic environments. The first group, under the direction of professor Inna Sharf, is based in the department of mechanical engineering at McGill University. It works on the control and modeling of autonomous blimps for satellite emulation. The second group is headed by professor Philippe Giguère from Université Laval. It focuses on localization systems for robots operating in unknown outdoor environments. The third group is also from McGill, but from the computer science department. Headed by professor Gregory Dudek, it investigates the challenges presented by autonomous underwater robots and by their interactions with human divers. The last team is based at the UQAM school of design. It is headed by professor Nicolas Reeves and engineer David St-Onge, and works on installations and performances in digital and algorithmic arts, and on the impact of new media and technologies on the fields of art, architecture and design. The Aerostabile project pushes the boundaries of engineering and art by proposing a close hybridization of the two disciplines. It redefines the human-robot interaction paradigm, working specifically on the new interfaces required by the specific nature and context of emerging robotic systems. Multidisciplinary approaches are required to seamlessly integrate aesthetics, grace and precision.
   Amongst the tools and strategies developed by the research team, one of the most important is the organization of regular meetings similar to art residencies, structured around engineering software and hardware integration workshops. During these meetings, which occur twice a year, the four groups work together with engineers and artists from different disciplines. These intense collaborative events take place in spaces large enough to fit at least two 225-cm floating robotic cubes called "Tryphons", the latest models in a series of flying automata developed by Reeves and St-Onge. Fruitful questions and discussions emerge from these residencies, leading to new development axes in both art and engineering. Whenever possible, the residencies happen in public spaces, allowing direct contact with all kinds of audiences and with inspiring media artists and creator-researchers. The specific constraints of out-of-the-lab environments raise new problems for the engineers, while the encounter between different academic cultures influences the development priorities. In addition to the engineering papers that will be published, we aim to produce the first hybrid performance involving four performers interacting with four fully autonomous aerobots.
Robots + Agents for MOOCs: What if Scott Klemmer were a Robot? BIBAFull-Text 279
  Jamy Li; Wendy Ju
Online course lectures often consist of presentation slides with an inset "talking-head" video of the instructor. As the time and financial costs associated with producing these lectures are often high, employing a robot or a digital agent in lieu of an instructor could radically decrease the time and costs required. This video submission describes an initial study in which agent-based alternatives to a "talking-head" video are assessed. University students who viewed a lecture with a robot had similar recall scores but significantly lower ratings for likeability than those who viewed a lecture with a person, perhaps because the robot's voice was a negative social cue. Preliminary results suggest that appropriately designed agents may be useful for online lectures.
A Verifiable and Correct-by-Construction Controller for Robots in Human Environments BIBAFull-Text 281
  Lavindra de Silva; Rongjie Yan; Felix Ingrand; Rachid Alami; Saddek Bensalem
With the increasing use of domestic and service robots alongside humans, it is becoming crucial to be able to verify whether robot software is safe, dependable, and correct. Indeed, in the near future it may well be necessary for robot-software developers to provide safety certifications guaranteeing, e.g., that a hospital nursebot will not move too fast while a person is leaning on it, that the arm of a service robot will not unexpectedly open its gripper while holding a glass, or that there will never be a software deadlock while a robot is navigating in an office. To this end, we have provided a framework and software engineering methodology for developing safe and dependable real-world robotic architectures, with a focus on the functional level -- the lowest level of a typical layered robotic architecture -- which hosts all the basic action and perception capabilities, such as image processing, obstacle avoidance, and motion control. Unlike past work, we address the formal verification of the functional level, which allows us to guarantee that it will not take steps leading to undesirable or disastrous outcomes.
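   As a concrete illustration of the kind of property at stake, the sketch below encodes the two example guarantees from the abstract as a simple state invariant in Python. It is a hypothetical illustration only: the work described here verifies such properties formally over the functional level, rather than checking them at runtime as this toy monitor does, and all names and thresholds are assumptions.

```python
from dataclasses import dataclass

MAX_SPEED_LEANING = 0.1   # assumed speed cap (m/s) while a person leans on the robot

@dataclass
class RobotState:
    speed: float          # current base speed in m/s
    person_leaning: bool  # contact/lean sensor reading
    holding_object: bool  # e.g., a glass in the gripper
    gripper_closed: bool

def invariant_holds(s: RobotState) -> bool:
    """True iff the state satisfies both example safety properties."""
    speed_ok = (not s.person_leaning) or s.speed <= MAX_SPEED_LEANING
    grip_ok = (not s.holding_object) or s.gripper_closed
    return speed_ok and grip_ok

# Example: moving fast while someone leans on the robot violates the invariant.
assert not invariant_holds(RobotState(0.8, True, False, False))
```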
RRADS: Real Road Autonomous Driving Simulation BIBAFull-Text 283
  Sonia Baltodano; Srinath Sibi; Nikolas Martelaro; Nikhil Gowda; Wendy Ju
This video introduces a methodology for simulating an autonomous vehicle on open public roads, and showcases participant reaction footage collected in the RRADS (Real Road Autonomous Driving Simulator). Although our study using this simulator did not employ overt deception -- the consent form clearly states that a licensed driver is operating the vehicle -- the protocol was designed to support suspension of disbelief. Several participants who did not read the consent form closely came to believe strongly that the vehicle was autonomous. This provides a lens onto the attitudes and concerns that people in real-world autonomous vehicles might have, and also points to ways that a protocol deliberately using misdirection could gain ecologically valid reactions from study participants.
The Empathic Robotic Tutor: Featuring the NAO Robot BIBAFull-Text 285
  Tiago Ribeiro; Patrícia Alves-Oliveira; Eugenio Di Tullio; Sofia Petisca; Pedro Sequeira; Amol Deshmukh; Srinivasan Janarthanam; Mary Ellen Foster; Aidan Jones; Lee J. Corrigan; Fotios Papadopoulos; Helen Hastie; Ruth Aylett; Ginevra Castellano; Ana Paiva
We present an autonomous empathic robotic tutor to be used in classrooms as a peer in a virtual learning environment. The system merges a virtual agent design with HRI features, consisting of a robotic embodiment, a multimedia interactive learning application and perception sensors that are controlled by an artificial intelligence agent.
MiRAE: My Inner Voice BIBAFull-Text 287
  Logan Doyle; Casey C. Bennett; Selma Šabanovic
This video presents the interactions between MiRAE, an interactive robotic face, and visitors to an art exhibition at which it was displayed. The robot operated eight hours a day, six days a week, for three weeks in Spring 2014 and interacted with over 700 people across 300 interactions. The robot was fully autonomous and researchers were not present on site during the exhibit, so people interacted in a free-form manner, both individually and in groups. During the exhibit, video recordings were taken of people's responses to the robot. This video depicts a series of resulting interactions, with MiRAE's interpretation of the events.
A Robotic Companion for Social Support of Isolated Older Adults BIBAFull-Text 289
  Candace Sidner; Charles Rich; Mohammad Shayganfar; Timothy Bickmore; Lazlo Ring; Zessie Zhang
We demonstrate interaction with a relational agent, embodied as a robot, to provide social support for isolated older adults. Our robot supports multiple activities, including discussing the weather, playing cards and checkers socially, maintaining a calendar, talking about family and friends, discussing nutrition, recording life stories, exercise coaching and making video calls.
Collaboration with Robotic Drawers BIBAFull-Text 291
  Brian Mok; Stephen Yang; David Sirkin; Wendy Ju
In this video, we explore how everyday household robots should behave when performing collaborative tasks with human users. We ran a Wizard of Oz study (N=20) that utilized a set of robotic drawers. Participants were asked to assemble a cube by working together with the drawers, which contained the tools needed to accomplish the task. We conducted a between-subjects test, varying two binary variables (expressivity and proactivity) in a 2x2 factorial design.
Video on the Social Robot Toolkit BIBAFull-Text 293
  Michal Gordon; Cynthia Breazeal
The video presents the Social Robot Toolkit, a new tangible interface for teaching preschool children how to program social robots. The toolkit extends common approaches along three dimensions. (i) We propose a tangible programming approach suitable for young children, in which reusable vinyl stickers represent rules for the robot to perform. (ii) We make use of social robots that are designed to interact directly with children. (iii) We focus the programming tasks and activities around social interaction. In other words, children teach an expressive relational robot how to socially interact by showing it a tangible sticker rulebook that they create. To explore various activities and interactions, we teleoperated the robot's sensors. We present a qualitative analysis of children's engagement with and uses of the Social Robot Toolkit, and show that children learn to create new rules, explore complex computational concepts, and internalize the mechanism by which robots can be programmed.
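   To make the rule mechanism concrete, here is a guess at the kind of trigger-action structure a sticker rulebook encodes, written as a short Python sketch. The names and the matching scheme are illustrative assumptions, not the toolkit's actual representation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: str   # a sensed event, e.g., "sees_smile" (names are made up)
    action: str    # an expressive behavior, e.g., "wiggle"

def run_rules(rulebook: list, sensed_event: str,
              perform: Callable[[str], None]) -> None:
    """Fire the action of every rule whose trigger matches the event."""
    for rule in rulebook:
        if rule.trigger == sensed_event:
            perform(rule.action)

# Example: a child's two-sticker rulebook.
rulebook = [Rule("sees_smile", "wiggle"), Rule("hears_clap", "dance")]
run_rules(rulebook, "sees_smile", print)   # prints: wiggle
```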

Demonstrations -- Session 1

Introducing the Cuddlebot: A Robot that Responds to Touch Gestures BIBAFull-Text 295
  Jeff Allen; Laura Cang; Michael Phan-Ba; Andrew Strang; Karon MacLean
We present the Cuddlebot, a cat-sized robot equipped with a full-body, flexible-fabric, pressure-sensitive touch sensor. Cuddlebot can move its head, arch its back, purr, and change how it "breathes." Our research explores how touch interactions can mitigate stress and anxiety, and we demonstrate our current gesture recognition system, which lets users connect sensed gestures to response behaviours.
Mechanical Ottoman: Up Close and Personal BIBAFull-Text 297
  David Sirkin; Brian Mok; Stephen Yang; Wendy Ju
This demonstration presents a robotic footstool -- the mechanical ottoman -- which approaches seated people and offers to support their feet, or can alternatively serve as a seat or side table, and then bids to take leave once it has engaged in the interaction.
Responsive Mouth: Enhancing Your Emotional Skill with Partial Agency BIBAFull-Text 299
  Hirotaka Osawa
The author developed a wearable mouth robot that supports a user's emotional labor. The robot detects people's age, gender, and emotions, and displays mouth gestures on the wearable display to support the expression of emotions.

Demonstrations -- Session 2

Intelligent Product Design BIBAFull-Text 301
  Han Nwi Lee; Yeseul Namkoung; Jinhee Kim; Seul Lee; Daun Jeong; Hyunji Seo; Soyeon Park; Kyeongah Lee; Sunbin Yang; Jimin Choi; Yeeun Kim; Jung Ju Choi; Sonya S. Kwak
Robots' appearances can be classified into two types: human-oriented robots and product-oriented robots. A human-oriented robot resembles a human in appearance and behavior, whereas a product-oriented robot is an existing product augmented with robotic technologies [1]. In Kwak et al.'s study [1], customers categorized a human-oriented robot as a robot and a product-oriented robot as a member of an existing product category, and the product-oriented robot was more effective than the human-oriented robot in terms of consumers' evaluations and purchase intentions toward robots. On this basis, we developed several intelligent products, including intelligent slippers, intelligent Christmas tree blocks, an intelligent piggy bank, an intelligent clothespin, an intelligent grass protection mat, and an intelligent frame (see Figure 1).
Adventures of an Adolescent Trash Barrel BIBAFull-Text 303
  Stephen Yang; Brian Mok; David Sirkin; Wendy Ju
Our demonstration presents the roving trash barrel, a robot that we developed to understand how people perceive and respond to a mobile trashcan that offers its service in public settings. In a field study, we found that considerable coordination is involved in actively collecting trash, including capturing someone's attention, signaling an intention to interact, acknowledging the willingness -- or implicit signs of unwillingness -- to interact, and closing the interaction. In post-interaction interviews, we discovered that people believed that the robot was intrinsically motivated to collect trash, and attributed social mishaps to higher levels of autonomy.
Papetto: Crafting Embodied Co-Presence in Video Chat BIBAFull-Text 305
  Hidekazu Saegusa; Kerem Özcan; Daniela K. Rosner
In this paper, we describe Papetto, a lightweight robotic arm that uses face detection to mirror facial movements such as head shaking, leaning, and tilting. Using this system, we examine the role of the "frame" in video chat, and how embodied co-presence is defined and bounded through lightweight robotic mirroring to enable new forms of engagement in remote communication.
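   For readers curious how such face-driven mirroring might work, the sketch below tracks the largest face in a webcam frame with OpenCV's stock Haar cascade and maps its horizontal offset to a servo angle. This is a minimal sketch under stated assumptions, not Papetto's implementation: the servo interface and the angle mapping are hypothetical.

```python
import cv2

# Stock OpenCV frontal-face detector; the robot-arm side is hypothetical.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Follow the largest face; mirror its horizontal offset as a tilt.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        offset = (x + w / 2) / frame.shape[1] - 0.5   # -0.5 (left) .. 0.5 (right)
        angle = 90 + offset * 60                      # assumed +/-30 degree sweep
        # send_to_servo(angle)  # hypothetical call; depends on the arm hardware
    cv2.imshow("papetto-sketch", frame)
    if cv2.waitKey(1) == 27:                          # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```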

Demonstrations -- Session 3

Therabot™: A Robot Therapy Support System in Action BIBAFull-Text 307
  Christopher Collins; Dexter Duckworth; Zachary Henkel; Stephanie Wuisan; Cindy L. Bethel
Therabot™ is an assistive-robotic therapy system designed to provide support during counseling sessions and home therapy practice to patients diagnosed with conditions associated with trauma. It has the form factor of a floppy-eared dog with coloring similar to that of a beagle, and comfortably fits in a person's lap.
Performing Collaborative Tasks with Robotic Drawers BIBAFull-Text 309
  Brian Mok; Stephen Yang; David Sirkin; Wendy Ju
In this demonstration, we explore how everyday household robots -- in particular, expressive robotic drawers -- should behave when performing collaborative tasks with human users. We ran a Wizard of Oz study where participants assembled a cube while collaborating with the drawers, which contained the tools needed to complete the task. The demonstration will reproduce the setting for the study, augmented with other activities and drawer contents, such as following a recipe using cooking utensils.
Empathic Robotic Tutors: Map Guide BIBAFull-Text 311
  Amol Deshmukh; Aidan Jones; Srinivasan Janarthanam; Mary Ellen Foster; Tiago Ribeiro; Lee Joseph Corrigan; Ruth Aylett; Ana Paiva; Fotios Papadopoulos; Ginevra Castellano
In this demonstration we describe a scenario developed in the EMOTE project, whose overall goal is to develop an empathic robot tutor for 11-13 year old school students in an educational setting. In this scenario, the robot tutor teaches map-reading skills on a touch-screen device.