
Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction

Fullname:Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction
Editors:Gerhard Sagerer; Michita Imai; Tony Belpaeme; Andrea Thomaz
Location:Bielefeld, Germany
Dates:2014-Mar-03 to 2014-Mar-06
Standard No:ISBN: 978-1-4503-2658-2; ACM DL: Table of Contents; hcibib: HRI14
Links:Conference Website
  1. Sociable robots
  2. Situated dialogue
  3. Keynote
  4. Anthropomorphism
  5. Human-robot teams
  6. Video session
  7. HRI2014 late breaking reports poster
  8. Demonstration session
  9. Keynote
  10. Social behaviour generation
  11. Living with robots
  12. Keynote
  13. Robot teachers and learners
  14. Motivation and assistive robotics
  15. Proxemics
  16. Workshops

Sociable robots

Robot responsiveness to human disclosure affects social impression and appeal BIBAFull-Text 1-8
  Guy Hoffman; Gurit E. Birnbaum; Keinan Vanunu; Omri Sass; Harry T. Reis
In human relationships, responsiveness -- behaving in a sensitive manner that is supportive of another person's needs -- plays a major role in any interaction that involves effective communication, caregiving, and social support. Perceiving one's partner as responsive has been tied to both personal and relationship well-being. In this work, we examine whether and how a robot's behavior can instill a sense of responsiveness, and the effects of a robot's perceived responsiveness on the human's perception of the robot. In an experimental between-subject study (n=34), a desktop non-anthropomorphic robot performed either positive or negative responsiveness behaviors across two modalities (simple gestures and written text) in response to participants' negative event disclosure. We found that perceived partner responsiveness, positive human-like traits, and robot attractiveness were higher in the positively responsive condition. This has design implications for interactive robots, in particular for robots in caregiving roles.
Would you like to play with me?: how robots' group membership and task features influence human-robot interaction BIBAFull-Text 9-16
  Markus Häring; Dieta Kuchenbrandt; Elisabeth André
In the present experiment, we investigated how robots' social category membership and characteristics of an HRI task affect humans' evaluative and behavioral reactions toward robots. Participants (N = 38) played a card game together with two robots, one belonging to participants' social in-group and the other one being a social out-group member. Furthermore, participants were either asked to cooperate with the in- and to compete with the out-group robot (congruent condition), or they were asked to cooperate with the out-group robot while competing with the in-group robot (incongruent condition). The results largely support our hypotheses: Participants showed more positive evaluative reactions toward the in-group (vs. the out-group) robot and they anthropomorphized it more strongly, independent of the congruence or incongruence of the HRI. Moreover, if required, participants cooperated with both the in- and the out-group robot, whereas their cooperativeness was more pronounced toward the in-group robot. Finally, participants indicated more difficulties with the HRI in the incongruent vs. the congruent condition. The theoretical and practical implications of the findings are discussed.
Culturally variable preferences for robot design and use in South Korea, Turkey, and the United States BIBAFull-Text 17-24
  Hee Rin Lee; Selma Sabanovic
Based on the results of an online survey conducted with participants in South Korea (N=73), Turkey (N=46), and the United States (N=99), we show that people's perceptions and preferences regarding acceptable designs and uses for robots are culturally variable on a number of dimensions, including general attitudes towards robots, preferences for robot form, interactivity, intelligence, and sociality. We also explore correlations between these design and use characteristics and factors cited as having an effect on user perceptions and acceptance of robots, such as religious beliefs and media exposure. Our research suggests that culturally variable attitudes and preferences toward robots are not simply reducible to these factors, rather they relate to more specific social dynamics and norms. In conclusion, we discuss potential design and research implications of culturally variable and universally accepted user preferences regarding robots.

Situated dialogue

Conversational gaze aversion for humanlike robots BIBAFull-Text 25-32
  Sean Andrist; Xiang Zhi Tan; Michael Gleicher; Bilge Mutlu
Gaze aversion -- the intentional redirection of gaze away from the face of an interlocutor -- is an important nonverbal cue that serves a number of conversational functions, including signaling cognitive effort, regulating a conversation's intimacy level, and managing the conversational floor. In prior work, we developed a model of how gaze aversions are employed in conversation to perform these functions. In this paper, we extend the model to apply to conversational robots, enabling them to achieve some of these functions in conversations with people. We present a system that addresses the challenges of adapting human gaze aversion movements to a robot with very different affordances, such as a lack of articulated eyes. This system, implemented on the NAO platform, autonomously generates and combines three distinct types of robot head movements with different purposes: face-tracking movements to engage in mutual gaze, idle head motion to increase lifelikeness, and purposeful gaze aversions to achieve conversational functions. The results of a human-robot interaction study with 30 participants show that gaze aversions implemented with our approach are perceived as intentional, and robots can use gaze aversions to appear more thoughtful and effectively manage the conversational floor.
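The three head-movement types described in this abstract suggest a simple priority arbitration scheme. The sketch below is a minimal illustration of that idea; the class, priorities, and timings are assumptions for illustration, not the authors' implementation:

```python
# Priority arbiter combining three head-movement types: purposeful gaze
# aversions override face tracking, which overrides idle motion.
# Priorities and timings are illustrative assumptions.
PRIORITY = {"aversion": 2, "face_tracking": 1, "idle": 0}

class GazeController:
    def __init__(self):
        self.aversion_until = 0.0  # time at which the current aversion ends

    def start_aversion(self, now, duration):
        """Schedule a purposeful aversion (e.g. to signal cognitive effort)."""
        self.aversion_until = now + duration

    def select_behavior(self, now, face_visible):
        """Pick the applicable behavior with the highest priority."""
        candidates = ["idle"]
        if face_visible:
            candidates.append("face_tracking")  # mutual gaze when a face is seen
        if now < self.aversion_until:
            candidates.append("aversion")       # timed aversion overrides the rest
        return max(candidates, key=PRIORITY.__getitem__)
```

While an aversion is scheduled the controller reports "aversion" even if a face is visible, and falls back to face tracking once the aversion window elapses.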
Collaborative effort towards common ground in situated human-robot dialogue BIBAFull-Text 33-40
  Joyce Y. Chai; Lanbo She; Rui Fang; Spencer Ottarson; Cody Littley; Changsong Liu; Kenneth Hanson
In situated human-robot dialogue, although humans and robots are co-present in a shared environment, they have significantly mismatched capabilities in perceiving the shared environment. Their representations of the shared world are misaligned. In order for humans and robots to communicate with each other successfully using language, it is important for them to mediate such differences and to establish common ground. To address this issue, this paper describes a dialogue system that aims to mediate a shared perceptual basis during human-robot dialogue. In particular, we present an empirical study that examines the role of the robot's collaborative effort and the performance of natural language processing modules in dialogue grounding. Our empirical results indicate that in situated human-robot dialogue, a low collaborative effort from the robot may lead its human partner to believe a common ground is established. However, such beliefs may not reflect true mutual understanding. To support truly grounded dialogues, the robot should make an extra effort by making its partner aware of its internal representation of the shared world.
Situational context directs how people affectively interpret robotic non-linguistic utterances BIBAFull-Text 41-48
  Robin Read; Tony Belpaeme
This paper presents an experiment investigating the influence that a situational context has upon how people affectively interpret Non-Linguistic Utterances made by a social robot. Subjects were presented with five video conditions showing the robot making both a positive and a negative utterance, the robot being subject to an action (e.g. receiving a kiss, or a slap), and then two videos showing the combination of the action and the robot reacting with both the positive and negative utterances. For each video an affective rating of valence was provided based upon how the subjects thought the robot felt given what had happened in the video. This was repeated for 5 different action scenarios. Results show that the affective interpretation of an action appears to override that of an utterance, regardless of the affective charge of the utterance. Furthermore, it is shown that if the meanings of the action and utterance are aligned, the overall interpretation is amplified. These findings are considered with respect to the practical use of utterances during social HRI.
Deliberate delays during robot-to-human handovers improve compliance with gaze communication BIBAFull-Text 49-56
  Henny Admoni; Anca Dragan; Siddhartha S. Srinivasa; Brian Scassellati
As assistive robots become popular in factories and homes, there is a greater need for natural, multi-channel communication during collaborative manipulation tasks. Non-verbal communication such as eye gaze can provide information without overloading more taxing channels like speech. However, certain collaborative tasks may draw attention away from these subtle communication modalities. For instance, robot-to-human handovers are primarily manual tasks, and human attention is therefore drawn to robot hands rather than to robot faces during handovers. In this paper, we show that a simple manipulation of a robot's handover behavior can significantly increase both awareness of the robot's eye gaze and compliance with that gaze. When eye gaze communication occurs during the robot's release of an object, delaying object release until the gaze is finished draws attention back to the robot's head, which increases conscious perception of the robot's communication. Furthermore, the handover delay increases people's compliance with the robot's communication over a non-delayed handover, even when compliance results in counterintuitive behavior.
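The delayed-release manipulation described in this abstract can be read as a small state machine: the robot withholds the object until its gaze communication has completed. The sketch below is a hypothetical illustration of that logic, not the authors' controller; all names and events are assumptions:

```python
class HandoverController:
    """Sketch of the 'deliberate delay' idea: hold the object until the
    robot's gaze communication has finished. Illustrative only."""

    def __init__(self, delay_release=True):
        self.delay_release = delay_release  # experimental condition flag
        self.gaze_done = False
        self.human_grasping = False
        self.released = False

    def on_gaze_finished(self):
        self.gaze_done = True

    def on_human_grasp(self):
        self.human_grasping = True

    def maybe_release(self):
        """Release only when the human grasps and (if delaying) gaze is done."""
        if self.human_grasping and (self.gaze_done or not self.delay_release):
            self.released = True
        return self.released
```

In the delayed condition the object is withheld until the gaze event finishes, which is what redirects the human's attention to the robot's head.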
Learning-based modeling of multimodal behaviors for humanlike robots BIBAFull-Text 57-64
  Chien-Ming Huang; Bilge Mutlu
In order to communicate with their users in a natural and effective manner, humanlike robots must seamlessly integrate behaviors across multiple modalities, including speech, gaze, and gestures. While researchers and designers have successfully drawn on studies of human interactions to build models of humanlike behavior and to achieve such integration in robot behavior, developing such models involves a laborious process: inspecting data to identify patterns within or across modalities of behavior and representing these patterns as "rules" or heuristics that can be used to control a robot's behaviors. This process provides little support for validation, extensibility, and learning. In this paper, we explore how a learning-based approach to modeling multimodal behaviors might address these limitations. We demonstrate the use of a dynamic Bayesian network (DBN) for modeling how humans coordinate speech, gaze, and gesture behaviors in narration and for achieving such coordination with robots. The evaluation of this approach in a human-robot interaction study shows that this learning-based approach is comparable to conventional modeling approaches in enabling effective robot behaviors while reducing the effort involved in identifying behavioral patterns and providing a probabilistic representation of the dynamics of human behavior. We discuss the implications of this approach for designing natural, effective multimodal robot behaviors.
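The DBN idea can be illustrated with a minimal two-slice model in which a belief over the robot's gaze target is filtered forward, conditioned on the current speech act. The state labels and probabilities below are invented for illustration, not values learned in the paper:

```python
# Minimal two-slice DBN sketch: P(gaze_t | gaze_{t-1}, speech_t).
# States and conditional probability tables are made-up illustrations.
STATES = ["listener", "object", "away"]

# Transition table conditioned on the current speech act (assumed labels).
CPT = {
    "refer_to_object": {s: {"listener": 0.1, "object": 0.8, "away": 0.1}
                        for s in STATES},
    "address_listener": {s: {"listener": 0.7, "object": 0.2, "away": 0.1}
                         for s in STATES},
}

def filter_step(belief, speech_act):
    """One step of forward filtering over the gaze-target variable."""
    new_belief = {s: 0.0 for s in STATES}
    for prev, p_prev in belief.items():
        for nxt, p_trans in CPT[speech_act][prev].items():
            new_belief[nxt] += p_prev * p_trans
    return new_belief
```

A learned model would estimate these tables from annotated narration data rather than hand-code them, which is the point of the paper's approach.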


Keynote

From interaction science to cognitive interaction technology BIBAFull-Text 65
  Helge Ritter
A cascade of revolutions has transformed robotics into a new science that begins to link physical concepts of control and interaction with qualities and concepts analyzed so far mainly in disciplines such as psychology, biology, linguistics or the social sciences.
   A visible manifestation of this transformation is the rapid evolution of robot body morphologies within the last decade: robots with different degrees of humanoid appearance -- ranging from a coarse analogy of humanoid body structure to amazingly detailed replications of human appearance -- are now being developed in numerous labs, and more than a few of them have become commercially available.
   This has completely reshaped the interface between robots and people: expressive faces require a rethinking of the way robots and humans can interact. Human-like hands offer new levels of interaction, provided we can find ways to coordinate their rich degrees of freedom to bring their skills closer to what we see in humans. And progress in processing power, storage density and sensing technologies allows us to create and maintain rich representations to cope with contact at all levels from the physical to the social, thereby turning robots from the "contact minimizers" of the past into interactive agents that can assist people in natural ways and in natural environments.
   These developments have shaped much of CITEC's vision of cognitive interaction technology and the associated research agenda along the four topic pillars of motion intelligence, attentive systems, situated communication, and memory and learning. We describe some of the underlying methodology and how we have mapped it into a dedicated lab infrastructure to investigate cognitive interaction processes across a wide spectrum of spatial and temporal scales. We present selected examples of cognitive interaction that are the focus of major CITEC research lines: the Bielefeld expressive robot head Flobi, which offers a versatile platform for finding new ways to make human-robot interaction more natural and to enrich it with new layers of usefulness; the advancement of touch as the modality where social and physical contact come together, for instance through sensors that are more skin-like and ways to use such sensors for haptic exploration of surfaces or as research tools for studying human haptics; and interdisciplinary work towards a deeper understanding of "manual intelligence" and its replication in bimanual robots to enable them to flexibly manipulate everyday objects and to cooperate more naturally with people. Finally, we report on the status of the newly started project FAMULA, which connects 10 CITEC groups to study and replicate the process of familiarization with novel objects through the combined use of embodied manipulation and language.
   Turning to our field's future perspectives, we argue that we can take inspiration from the systematic elucidation of principles and mechanisms of interaction that enabled physics to advance to a new level. Taking guidance from this success story may put us on a similarly grand path: creating a science for elucidating the inner workings of cognitive interaction that arises when we take the step from physical particles to cognitive agents, and using our insights to create better bridges between humans and tomorrow's technology.


Anthropomorphism

Dimensions of anthropomorphism: from humanness to humanlikeness BIBAFull-Text 66-73
  Jakub Zlotowski; Ewald Strasser; Christoph Bartneck
In HRI, anthropomorphism has been considered to be a uni-dimensional construct. However, social psychological studies of the potentially reverse process to anthropomorphization -- known as dehumanization -- indicate that there are two distinct senses of humanness, with different consequences for people who are dehumanized by being deprived of aspects of either dimension. These attributes are crucial for the perception of others as humans. Therefore, we hypothesized that the same attributes could be used to anthropomorphize a robot in HRI, and that only two-dimensional measures would be suitable to distinguish between different forms of making a robot more humanlike. In a study where participants played a quiz based on the TV show "Jeopardy!", we manipulated a NAO robot's intelligence and emotionality. The results suggest that only emotionality, not intelligence, leads robots to be perceived as more humanlike. Furthermore, we found some evidence that anthropomorphism is a multi-dimensional phenomenon.
Marhaba, how may i help you?: effects of politeness and culture on robot acceptance and anthropomorphization BIBAFull-Text 74-81
  Maha Salem; Micheline Ziadee; Majd Sakr
How do politeness strategies and cultural aspects affect robot acceptance and anthropomorphization across native speakers of English and Arabic? Previous work in cross-cultural HRI studies has mostly focused on Western and East Asian cultures. In contrast, Middle Eastern attitudes and perceptions of robot assistants are a barely researched topic. We investigated culture-specific determinants of robot acceptance and anthropomorphization by conducting a between-subjects study in Qatar. A total of 92 native speakers of either English or Arabic interacted with a receptionist robot in two different interaction tasks. We further manipulated the robot's verbal behavior in experimental sub-groups to explore different politeness strategies. Our results suggest that Arab participants perceived the robot more positively and anthropomorphized it more than English speaking participants. In addition, the use of positive politeness strategies and the change of interaction task had an effect on participants' HRI experience. Our findings complement the existing body of cross-cultural HRI research with a Middle Eastern perspective that will help to inform the design of robots intended for use in cross-cultural, multi-lingual settings.

Human-robot teams

Comparative performance of human and mobile robotic assistants in collaborative fetch-and-deliver tasks BIBAFull-Text 82-89
  Vaibhav V. Unhelkar; Ho Chit Siu; Julie A. Shah
There is an emerging desire across manufacturing industries to deploy robots that support people in their manual work, rather than replace human workers. This paper explores one such opportunity, which is to field a mobile robotic assistant that travels between part carts and the automotive final assembly line, delivering tools and materials to the human workers. We compare the performance of a mobile robotic assistant to that of a human assistant to gain a better understanding of the factors that impact its effectiveness. Statistically significant differences emerge based on the type of assistant, human or robot. Interaction times and idle times are statistically significantly higher for the robotic assistant than for the human assistant. We report additional differences in participants' subjective responses regarding team fluency, situational awareness, comfort and safety. Finally, we discuss how results from the experiment inform the design of a more effective assistant.
Human-swarm interactions based on managing attractors BIBAFull-Text 90-97
  Daniel S. Brown; Sean C. Kerman; Michael A. Goodrich
Leveraging the abilities of multiple affordable robots as a swarm is enticing because of the resulting robustness and emergent behaviors of a swarm. However, because swarms are composed of many different agents, it is difficult for a human to influence the swarm by managing individual agents. Instead, we propose that human influence should focus on (a) managing the higher level attractors of the swarm system and (b) managing trade-offs that appear in mission-relevant performance. We claim that managing attractors theoretically allows a human to abstract the details of individual agents and focus on managing the collective as a whole. Using a swarm model with two attractors, we demonstrate this concept by showing how limited human influence can cause the swarm to switch between attractors. We further claim that using quorum sensing allows a human to manage trade-offs between the scalability of interactions and mitigating the vulnerability of the swarm to agent failures.
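The attractor-management idea can be illustrated with a toy bistable model: each agent carries a spin (+1/-1, standing in for, say, clockwise vs counter-clockwise rotation), majority alignment creates two attractors, and influencing enough agents tips the collective from one attractor to the other. This is a deliberately simplified sketch, not the swarm model used in the paper:

```python
# Toy bistable swarm: agents align with the majority, giving two attractors
# (all +1 or all -1). A human influences a subset of agents to switch the
# collective state. Model, sizes, and thresholds are illustrative.
def step(spins, influenced, forced_value):
    mean = sum(spins) / len(spins)
    new = []
    for i, _ in enumerate(spins):
        if i in influenced:
            new.append(forced_value)            # human-influenced agents
        else:
            new.append(1 if mean >= 0 else -1)  # align with the majority
    return new

def simulate(n=20, n_influenced=12, steps=5):
    spins = [1] * n                             # start in the +1 attractor
    influenced = set(range(n_influenced))
    for _ in range(steps):
        spins = step(spins, influenced, forced_value=-1)
    for _ in range(steps):                      # release influence, let it settle
        spins = step(spins, set(), forced_value=0)
    return spins
```

With enough influenced agents the swarm settles into the opposite attractor and stays there after the influence is released; with too few, it relaxes back, which mirrors the trade-off between interaction effort and collective response.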

Video session

Human -- robot swarm interaction for entertainment: from animation display to gesture based control BIBAFull-Text 98
  Javier Alonso-Mora; Roland Siegwart; Paul Beardsley
This work shows experimental results with three systems that take real-time user input to direct a robot swarm formed by tens of small robots: real-time drawing, gesture-based interaction with an RGB-D sensor, and control via a hand-held tablet computer.
Daedalus: a sUAV for social environments BIBAFull-Text 99
  Dante Arroyo; Cesar Lucho; Silvia Julissa Roncal; Francisco Cuellar
Due to their wide functionality in military and civil applications, small unmanned aerial vehicles (sUAVs) are becoming more commonly operated in public places, causing direct and indirect human-robot interaction. As these particular robots promise widespread integration into humans' social contexts, it is important to understand how people will perceive them if positive interaction is to be achieved. In this video, we introduce a sUAV for social environments. "Daedalus" is a quadcopter that has been designed with a special focus on expressing emotional states through interaction features, such as head movements and eye color variation. In order to analyze the robot's capacity to express emotions and people's perception of these states, an experiment was conducted. The sUAV is initially located in a stationary module; the subject approaches the robot and sits down in front of it, maintaining an interaction distance between 0.5 -- 0.7 m. The robot performs 4 different expressions combining head movement and eye color display. After each expression, a psychologist asks the subject two questions: How do you feel? and: What do you think the robot is feeling? For evaluation, subjects must select one of seven cards associated with basic emotions to express and evaluate emotions according to the questions. The preliminary results of this experiment are presented in this video.
The development and real-world deployment of FROG, the fun robotic outdoor guide BIBAFull-Text 100
  Vanessa Evers; Nuno Menezes; Luis Merino; Dariu Gavrila; Fernando Nabais; Maja Pantic; Paulo Alvito; Daphne Karreman
This video details the development of an intelligent outdoor Guide robot. The main objective is to deploy an innovative robotic guide which is not only able to show information, but to react to the affective states of the users, and to offer location-based services using augmented reality. The scientific challenges concern autonomous outdoor navigation and localization, robust 24/7 operation, affective interaction with visitors through outdoor human and facial feature detection as well as engaging interactive behaviors in an ongoing non-verbal dialogue with the user.
Hello robot can you come here?: using ROS4iOS to provide remote perceptual capabilities for visual location, speech and speaker recognition BIBAFull-Text 101
  François Ferland; Ronan Chauvin; Dominic Létourneau; François Michaud
Mobile devices such as smartphones and tablets can provide additional sensing and interacting capabilities to a mobile robot, even extending its senses to remote locations. To do so, we developed ROS4iOS, a native port of ROS that allows data from mobile iOS devices to be seamlessly processed on a robot. To demonstrate this capability, this video presentation illustrates how ROS4iOS has been used to implement an assistance scenario: a person in a remote location asks our IRL-1 robot for assistance, and IRL-1 must recognize the person's voice, identify the remote location using images taken from the mobile device, navigate to the identified location and interact vocally with the person through the mobile device. When communication with the robot is established by launching a specific iOS application, audio from the mobile device is published on a single topic and directed toward two ROS nodes on IRL-1. PocketSphinx, a speech recognition toolkit, is used to obtain the person's vocal commands. WISS, a speaker identification system, is used for speaker identification. From a vocal request made through the mobile device, the robot can identify the person and request images of the remote location taken with the rear-facing camera of the mobile device. RTABMap, a loop-closure detection system for visual location recognition, is used to locate the person. A map was previously built with the robot's laser range finder using the gmapping SLAM algorithm, thus permitting reuse of the ROS navigation stack to plan a path and follow it safely from the robot's current location to the requested one. Depending on its current state, the robot can also display the most suited view (i.e., phone, camera, navigation) on the iOS application. ROS4iOS can also be used to teleoperate IRL-1 through the mobile device or to converse remotely with the robot, using the same ROS modules. ROS4iOS opens up a rich set of possibilities for HRI, making ROS-compatible code accessible on mobile devices.
In future work, we plan to integrate a dialog management system for other assistance scenarios, such as image-based object fetching and delivery.
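The topic routing described in this abstract (a single audio topic fanning out to a speech-recognition node and a speaker-identification node) can be sketched with a toy publish/subscribe dispatcher. This is not the ROS middleware or its API, just an illustration of the pattern:

```python
# Toy topic-based publish/subscribe bus: one published audio message is
# delivered to every subscriber of that topic, mirroring how a single
# audio topic can feed both ASR and speaker-ID nodes.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on `topic`."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, msg):
        """Deliver `msg` to all callbacks subscribed to `topic`."""
        for cb in self.subscribers[topic]:
            cb(msg)
```

For example, two nodes subscribed to a hypothetical "/audio" topic each receive every published audio frame, which is the fan-out the scenario relies on.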
Human-robot interaction through 3D vision and force control BIBAFull-Text 102
  Aleksandar Jevtic; Guillaume Doisy; Saša Bodiroža; Yael Edan; Verena V. Hafner
The video shows the interaction with a customized Kompai robot. The robot consists of Robosoft's robuLAB10 platform, a tablet PC, and a Microsoft Kinect camera mounted on a pan-tilt system. A visual control algorithm provides continuous person tracking. The newly developed robot features include gesture recognition, person following, navigation by pointing, and force control, which were integrated with Robosoft's robuBOX SDK and the Karto SLAM algorithms. The video demonstrates all the features and puts the robot to use in an everyday home scenario.
The chatbot strikes back BIBFull-Text 103
  James Kennedy; Joachim de Greeff; Robin Read; Paul Baxter; Tony Belpaeme
Semi-autonomous cooperative driving for mobile robotic telepresence systems BIBAFull-Text 104
  Andrey Kiselev; Giovanni Mosiello; Annica Kristoffersson; Amy Loutfi
Mobile robotic telepresence (MRP) has been introduced to allow communication from remote locations. Modern MRP systems offer rich capabilities for human-human interaction. However, simply driving a telepresence robot can become a burden, especially for novice users, leaving no room for interaction at all. In this video we introduce a project which aims to incorporate advanced robotic algorithms into manned telepresence robots in a natural way, allowing human-robot cooperation for safe driving. It also shows a first implementation of cooperative driving based on extracting a safe drivable area in real time from the image stream received from the robot.
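Extracting a safe drivable area can be sketched on a pre-segmented occupancy grid: for each image column, count free cells from the robot outward until the first obstacle. A real system would first segment the camera stream into free space and obstacles; the grid here is an assumed input, not the project's algorithm:

```python
# Sketch of "safe drivable area" extraction on an occupancy grid derived
# from the robot's camera stream. grid[row][col] is True for an obstacle;
# row 0 is farthest from the robot, the last row is nearest.
def safe_depths(grid):
    """For each column, return how many free cells extend from the robot
    (bottom row) outward before the first obstacle is hit."""
    rows = len(grid)
    cols = len(grid[0])
    depths = []
    for c in range(cols):
        depth = 0
        for r in range(rows - 1, -1, -1):  # scan from nearest row outward
            if grid[r][c]:
                break
            depth += 1
        depths.append(depth)
    return depths
```

The resulting per-column depths describe a free corridor that a cooperative driving layer could use to veto or reshape unsafe user steering commands.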
EFAA: a companion emerges from integrating a layered cognitive architecture BIBAFull-Text 105
  Stéphane Lallée; Vasiliki Vouloutsi; Sytse Wierenga; Ugo Pattacini; Paul Verschure
In this video, we present the human-robot interaction generated by applying the DAC cognitive architecture on the iCub robot. We demonstrate how the robot reacts and adapts to its environment within the context of a continuous interactive scenario including different games. We also emphasize that the artificial agent maintains a self-model in terms of emotions and drives, and how those are expressed in order to affect the social interaction.
Integrating multi-modal interfaces to command UAVs BIBAFull-Text 106
  Valiallah (Mani) Monajjemi; Shokoofeh Pourmehr; Seyed Abbas Sadat; Fei Zhan; Jens Wawerla; Greg Mori; Richard Vaughan
We present an integrated human-robot interaction system that enables a user to select and command a team of two Unmanned Aerial Vehicles (UAVs) using voice, touch, face engagement and hand gestures. This system integrates multiple human multi-robot interaction interfaces as well as a navigation and mapping algorithm in a coherent semi-realistic scenario. The task of the UAVs is to explore and map a simulated Mars environment.
Human-robot cooperation: fast, interactive learning from binary feedback BIBFull-Text 107
  Jawad Nagi; Hung Ngo; Jürgen Schmidhuber; Luca Maria Gambardella; Gianni A. Di Caro
Emotional cyborg: human extension with agency for emotional labor BIBAFull-Text 108
  Hirotaka Osawa
The author developed a wearable eyeglasses robot that supports the user's emotional labor. The glasses display the user's eye gestures on their surface and produce expressions during the user's communication.
Perceiving people from a low-lying viewpoint BIBFull-Text 109
  Armando Pesenti Gritti; Oscar Tarabini; Alessandro Giusti; Jerome Guzzi; Gianni A. Di Caro; Vincenzo Caglioti; Luca M. Gambardella
The NAO goes to camp BIBAFull-Text 110
  Noel Wigdor; Aafke Fraaije; Lara Solms; Joachim de Greeff; Joris Janssen; Olivier Blanson Henkemans
ALIZ-E is a Europe-wide project focusing on long-term child-robot interaction, specifically as a means of educating diabetic children on their condition. This video showcases a recent field study at "SugarKidsClub", a camp devoted to helping 7-12 year-olds handle type-1 diabetes. A wide range of CRI activities developed by ALIZ-E were employed, including a large "SandTray" touch table running a tile-sorting game and a "Handshake" touch-inducing activity designed to strengthen the child-robot bond. Apart from helping kids with their unfortunate affliction, the day at "SugarKidsClub" provided us with a chance to use new technologies developed for the aforementioned activities as well as to further our relationship with our primary stakeholders. This playful video highlights some of the footage taken that day within an entertaining story centered on Charlie, one of the NAO robots used for our field study, with a penchant for battery theft.
The fugitive: a robot in the wild BIBAFull-Text 111
  Mary-Anne Williams; Xun Wang; Pramod Parajuli; Shaukat Abedi; Michelle Youssef; Wei Wang
The aim of the movie is to highlight some of the key challenges facing social robots in the wild. The opening scene shows a PR2 leaving a research laboratory and venturing into the real world alone in search of meaning. Each subsequent scene in the movie raises important research questions highlighting problems that need to be addressed in the field of social service robotics. When will robots wander around buildings unsupervised? How will they navigate and localize among glass walls? This research problem is exposed when a robot finds itself having to move around a real building.
   The robot is independent and has a sense of self. It wants to engage in society. It solves this problem by finding a job in a cafe where it is assigned menial tasks, but it aspires to be a barista, raising the question of whether PR2 robots are suited to working with hot, steaming liquids. Still, the robot can dream, why not.
   The robot realizes in order to progress it needs to learn some new skills and it is shown teaching itself a new skill and practicing to improve its performance. When it is time to put the new skill into practice, the robot has a revelation, discovering in the act of doing that there can be preconditions attached to the enaction of skills, i.e. people do not need peanut butter until they have bread to spread it on.
   The robot demonstrates his robust understanding of social etiquette by not only offering the peanut butter to the female-human first, but chastising a male-human for not observing this important social protocol.
   The story ends with the recaptured robot being dragged back to the lab. The robot appears to be mortified by its loss of freedom and looks utterly dejected and dispirited. The robot's behavior generates empathy in the human minder, but the robot is only pretending to be disheartened, and is deceitfully planning its next escapade as a Jedi Knight! Deception is a highly sophisticated cognitive skill: a capability enabled by a theory of mind, which is necessary for communication, social interaction and collaboration, all critically important skills for a service robot.

HRI2014 late breaking reports poster

Human-robot collaborative tutoring using multiparty multimodal spoken dialogue BIBAFull-Text 112-113
  Samer Al Moubayed; Jonas Beskow; Bajibabu Bollepalli; Joakim Gustafson; Ahmed Hussen-Abdelaziz; Martin Johansson; Maria Koutsombogera; José David Lopes; Jekaterina Novikova; Catharine Oertel; Gabriel Skantze; Kalin Stefanov; Gül Varol
In this paper, we describe a project that explores a novel experimental setup towards building a spoken, multimodally rich, and human-like multiparty tutoring robot. A human-robot interaction setup is designed, and a human-human dialogue corpus is collected. The corpus targets the development of a dialogue system platform to study verbal and nonverbal tutoring strategies in multiparty spoken interactions with robots capable of spoken dialogue. The dialogue task centers on two participants working together to solve a card-ordering game. Alongside the participants sits a tutor (robot) that helps them perform the task and organizes and balances their interaction. Different multimodal signals, captured and auto-synchronized by several audio-visual capture technologies such as a microphone array, Kinects, and video cameras, were coupled with manual annotations. These are used to build a situated model of the interaction based on the participants' personalities, their state of attention, their conversational engagement and verbal dominance, and how these correlate with the verbal and visual feedback, turn-management, and conversation-regulatory actions generated by the tutor. Driven by the analysis of the corpus, we also present the detailed design methodology for an affective, multimodally rich dialogue system that incrementally measures the attention state and dominance of each participant, allowing the robot head Furhat to maintain a well-coordinated, balanced, and engaging conversation that attempts to maximize agreement and each participant's contribution toward solving the task.
   This project takes the first steps toward exploring the potential of multimodal dialogue systems for building interactive robots that can serve in educational, team-building, and collaborative task-solving applications.
Speaker identification using three signal voice domains during human-robot interaction BIBAFull-Text 114-115
  Fernando Alonso Martín; Arnaud Ramey; Miguel Ángel Salichs
This LBR describes a novel method for user recognition in HRI, based on analyzing the peculiarities of users' voices and specifically designed for use in a robotic system. The method is inspired by acoustic fingerprinting techniques and consists of two phases: a) enrollment in the system, where the features of the user's voice are stored in files called voiceprints; and b) a searching phase, where the features extracted in real time are compared against the voiceprints using a pattern-matching method to find the most likely user (match).
   The audio samples are described by features in three different signal domains: time, frequency, and time-frequency. Combining these three domains has enabled significant increases in the accuracy of user identification compared to existing techniques. Several tests using an independent user voice database show that only half a second of voice is enough to identify the speaker. The recognition is text-independent: users do not need to say a specific sentence (key-pass) to be identified by the robot.
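The two-phase scheme (enrollment, then nearest-voiceprint search) can be sketched as below. This is a minimal illustration, not the authors' implementation: the feature vectors, mean-vector voiceprints, Euclidean metric, and user names are all assumptions standing in for the paper's three-domain acoustic features.

```python
import math

# Hypothetical sketch of voiceprint enrollment and matching.
# Short feature vectors stand in for the time / frequency /
# time-frequency descriptors used in the paper.

class VoiceprintMatcher:
    def __init__(self):
        self.voiceprints = {}  # user name -> mean feature vector

    def enroll(self, user, samples):
        """Enrollment phase: store the mean of the user's feature
        vectors as that user's voiceprint."""
        dims = len(samples[0])
        mean = [sum(s[d] for s in samples) / len(samples) for d in range(dims)]
        self.voiceprints[user] = mean

    def identify(self, features):
        """Searching phase: return the enrolled user whose voiceprint
        is closest to the features extracted in real time."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(self.voiceprints, key=lambda u: dist(self.voiceprints[u], features))

matcher = VoiceprintMatcher()
matcher.enroll("alice", [[1.0, 0.1], [1.2, 0.0]])
matcher.enroll("bob", [[0.0, 1.0], [0.1, 1.1]])
print(matcher.identify([1.1, 0.05]))  # → alice
```

A real system would extract the features from short audio windows; the half-second identification time reported above suggests the search is cheap relative to feature extraction.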
Daedalus: a sUAV for human-robot interaction BIBAFull-Text 116-117
  Dante Arroyo; Cesar Lucho; Silvia Julissa Roncal; Francisco Cuellar
This paper presents an interface for enabling a sUAV robot (Daedalus) to communicate and simulate emotional states. Since meaning is the result of interpreting emotional states, an experimental study was performed to evaluate how people recognize emotion-state responses when interacting with the sUAV. Four expressions were programmed by combining Daedalus' interaction features, such as head movement and eye-color display. During the assessment, subjects were asked to interpret the expressions performed by the robot and assign an emotional state to it through card selection. Our research aims to investigate the impact of simulating emotional states in robots and its potential during interaction. The results presented are part of an experimental study laying the groundwork for future exploratory research on human-robot interaction.
Human and robot interaction based on safety zones in a shared work environment BIBAFull-Text 118-119
  Svante Augustsson; Linn Gustavsson Christiernin; Gunnar Bolmsjö
In this paper, early work on how to implement flexible safety zones is presented. In the case study an industrial robot cell emulates the environment at a wall construction site, with a robot performing nailing routines. Tests are performed with humans entering the safety zones of a SafetyEye system. The zone violation is detected, and new warning zones initiated. The robot retracts but continues its work tasks with reduced speed and within a safe distance of the human operator. Interaction is achieved through simultaneous work on the same work piece and the warning zones can be initiated and adjusted in a flexible way.
Intuitive error resolution strategies during robot demonstration BIBAFull-Text 120-121
  Maria Vanessa aus der Wieschen; Kerstin Fischer; Kamil Kuklinski
While robot learning from demonstration comes with great benefits [5], intuitive interaction between naïve users and robots also poses challenges. For instance, users need to be prevented from causing damage and enabled to recover from errors. We studied the error resolution strategies of 28 lay users performing simple assembly tasks via teleoperation of a robotic arm. The two most common problems were excessive pressure and singularity. Even though users were given instructions in a video on how to recover from singularity, they did not always do so successfully. In contrast, excessive pressure, when noticed, was mostly resolved correctly by lifting the peg or letting it drop into the hole rather than inserting it. Finally, users were quite clueless about how to resolve self-collision and over-rotation.
A new control architecture for physical human-robot interaction based on haptic communication BIBAFull-Text 122-123
  Yusuf Aydin; Nasser Arghavani; Cagatay Basdogan
In the near future, humans and robots are expected to perform collaborative tasks involving physical interaction in various environments such as homes, hospitals, and factories. One important research topic in physical Human-Robot Interaction (pHRI) is developing tacit and natural haptic communication between the partners. Although there are already several studies in the area of Human-Robot Interaction, the number of studies investigating physical interaction between partners, and in particular haptic communication, is limited, and the interaction in such systems is still artificial compared to natural human-human collaboration. Although tasks involving physical interaction, such as table transportation, can be planned and executed naturally and intuitively by two humans, there are unfortunately no robots on the market that can collaborate with us to perform the same tasks. In this study, we propose a new controller for the robotic partner, designed to a) detect the intentions of the human partner through the haptic channel using a fuzzy controller, b) adjust its contribution to the task via a variable impedance controller, and c) resolve conflicts during task execution by controlling the internal forces. The results of simulations performed in Simulink/Matlab show that the proposed controller is superior to stand-alone standard/variable impedance controllers.
Engagement based on a customization of an iPod-LEGO robot for a long-term interaction for an educational purpose BIBAFull-Text 124-125
  Alex Barco; Jordi Albo-Canals; Carles Garriga
The aims of the study presented in this paper are to find evidence that customizing a robot increases engagement when interacting with it, and that adapting a non-social robotic platform such as LEGO Robotics is possible. The study was conducted with seven-year-old primary-school children who did daily homework activities guided by the robot for one month. Results showed higher interaction with, and adaptation to, the social robot when it was customized.
Tracking gaze over time in HRI as a proxy for engagement and attribution of social agency BIBAFull-Text 126-127
  Paul Baxter; James Kennedy; Anna-Lisa Vollmer; Joachim de Greeff; Tony Belpaeme
In this contribution, we describe a method of analysing and interpreting the direction and timing of a human's gaze over time towards a robot whilst interacting. Based on annotated video recordings of the interactions, this post-hoc analysis can be used to determine how this gaze behaviour changes over the course of an interaction, following from the observation that humans change their behaviour towards the robot on the time-scale of individual interactions. We posit that given these circumstances, this measure may be used as a proxy (among others) for engagement in the interaction or the human's attribution of social agency to the robot. Application of this method to a sample of unstructured child-robot interactions demonstrates its use, and justifies its utilisation in future studies.
Intuitive control of small flying robots BIBAFull-Text 128-129
  Christian Blum; Oswald Berthold; Philipp Rhan; Verena V. Hafner
In this paper we show a new perspective on human-robot interaction by presenting a system for intuitive interaction with flying robots. This goes beyond the usual remote control of these robots, by having an interactive space where people can physically interact with flying robots in an intuitive and safe way. The presented system has various ways of interaction and provides the prerequisites for many interesting future applications.
Attentional top-down regulation and dialogue management in human-robot interaction BIBAFull-Text 130-131
  Riccardo Caccavale; Alberto Finzi; Lorenzo Lucignano; Silvia Rossi; Mariacarla Staffa
We propose a framework where the human-robot interaction is modeled as a multimodal dialogue which is regulated by an attentional system that guides the system towards the execution of structured tasks. We introduce a simple case study to illustrate the system at work in different conditions considering top-down regulations and dialogue flows in synergic and conflicting situations.
Head pose behavior in the human-robot interaction space BIBAFull-Text 132-133
  Sonja Caraian; Nathan Kirchner
Visual Focus of Attention is an important mechanism to support successful interactions. In order to communicate effectively and intentionally (issuing cues when a person is paying attention, for example), a robot must have an understanding of this Visual Focus of Attention behavior in the Human-Robot Interaction space. A real-world interaction study was conducted with 24 unsolicited participants to explore attention behavior towards robots in this space. The results suggest there is no generalizable attention pattern between people, and thus that online, in situ Visual Focus of Attention estimation would be advantageous to Human-Robot Interaction.
Effects of speech on perceived capability BIBFull-Text 134-135
  Elizabeth Cha; Anca Dragan; Jodi Forlizzi; Siddhartha Srinivasa
Pre-school children's first encounter with a robot BIBFull-Text 136-137
  Elizabeth Cha; Anca Dragan; Siddhartha Srinivasa
Are you embarrassed?: the impact of robot types on emotional engagement with a robot BIBAFull-Text 138-139
  Jung Ju Choi; Yunkyung Kim; Sonya S. Kwak
The objective of this study is to examine the effect of robot type on emotional communication between a person and a robot. We executed a 2 (robot type: autonomous robot vs. tele-operated robot) within-participants experiment (N=36). Participants were interviewed by either autonomous or tele-operated robot interviewers and asked how much social presence they felt from the robot interviewers and how much embarrassment they felt toward them. Participants felt a stronger social presence from tele-operated robots than from autonomous robots. Moreover, participants felt more embarrassment when interviewed by tele-operated robots than by autonomous robots.
Mixing implicit and explicit probes: finding a ground truth for engagement in social human-robot interactions BIBAFull-Text 140-141
  Lee J. Corrigan; Christina Basedow; Dennis Küster; Arvid Kappas; Christopher Peters; Ginevra Castellano
In our work we explore the development of a computational model capable of automatically detecting engagement in social human-robot interactions from real-time sensory and contextual input. To train the model, however, we need to establish ground truths of engagement from a large corpus of data collected in a study involving task and social-task engagement. Here, we intend to advance the current state of the art by reducing the need for unreliable post-experiment questionnaires and costly, time-consuming annotation through the novel introduction of implicit probes: a non-intrusive, pervasive, and embedded method of collecting informative data at different stages of an interaction.
Recognizing gaze pattern for human robot interaction BIBAFull-Text 142-143
  Dipankar Das; Md. Golam Rashed; Yoshinori Kobayashi; Yoshinori Kuno
In this paper, we propose a human-robot interaction system in which the robot detects and classifies the target human's gaze pattern as either spontaneous looking or scene-relevant looking. If the gaze pattern is detected as spontaneous looking, the robot waits for the target human without disturbing his/her attention. However, if the gaze pattern is detected as scene-relevant looking, the robot establishes a communication channel with him/her in order to explain the scene. We have implemented the proposed system on a Robovie-R3 robot acting as a museum guide and tested it to confirm its effectiveness.
Expectation setting and personality attribution in HRI BIBAFull-Text 144-145
  Maartje M. A. de Graaf; Somaya Ben Allouch
People tend to treat robots as social actors and assign personality attributes to them. This study investigates the influence of expectation setting on users' attribution of personality traits to a robot and their impressions of that robot. Results show that personality attribution depends on people's prior expectations of the interaction. Moreover, people evaluate a robot better when they attribute a complementary personality to it and when they had high prior expectations of that robot.
Users' preferences of robots for domestic use BIBAFull-Text 146-147
  Maartje M. A. de Graaf; Somaya Ben Allouch
This study identifies design preferences for domestic robots. In an online survey, participants rated 16 robot pictures on several evaluation criteria. Results show that, overall, anthropomorphic robots are rated more positively than zoomorphic, caricatured, or functional robots. Moreover, negative feelings towards robots (e.g., negative attitudes and anxiety) negatively affected the design evaluations. Providing future users with positive information about domestic robots could improve their feelings towards robots, which, in turn, raises their ratings of robotic designs.
Child-robot interaction in the wild: field testing activities of the ALIZ-E project BIBAFull-Text 148-149
  Joachim de Greeff; Olivier Blanson Henkemans; Aafke Fraaije; Lara Solms; Noel Wigdor; Bert Bierman; Joris B. Janssen; Rosemarijn Looije; Paul Baxter; Mark A. Neerincx; Tony Belpaeme
A field study was conducted in which CRI activities developed by the ALIZ-E project were tested with the project's primary user group: children with diabetes. The field study yielded new insights into the modalities and roles a robot aimed at CRI in a healthcare setting might utilise, while also (re-)assessing some practices and technologies established within the project. Furthermore, it served as a means of strengthening bonds with the project's principal stakeholders. The study illustrates on the one hand the feasibility of the activities developed within the project, while on the other hand highlighting the importance of engaging with primary users in an ongoing, incremental fashion.
Exploring socially intelligent recharge behaviour for human-robot interaction BIBAFull-Text 150-151
  Amol A. Deshmukh; Ruth Aylett
In this paper we try to highlight the need for social intelligence during the recharge activity of mobile robots and report a study performed to investigate people's attitude towards recharge behaviour of an office robot.
A week-long study on robot-visitors spatial relationships during guidance in a sciences museum BIBAFull-Text 152-153
  Marta Díaz; Dennys Paillacho; Cecilio Angulo; Oriol Torres; Jonathan González; Jordi Albo-Canals
In order to observe spatial relationships in social human-robot interactions, a field trial was carried out at the CosmoCaixa Science Museum in Barcelona. The "follow me" episodes studied showed that the space configurations formed by guide and visitors walking together did not always fit the robot's social affordances and navigation requirements for performing the guidance successfully; thus, additional communication prompts are considered to regulate effectively the walking-together and follow-me behaviours.
Hesitation signals in human-robot head-on encounters: a pilot study BIBAFull-Text 154-155
  Christian Dondrup; Christina Lichtenthäler; Marc Hanheide
We present a pilot study to identify hesitation signals in Human-Robot Spatial Interaction, which we aim to employ to evaluate the quality of the robot's executed behaviour. The study focuses on head-on encounters between a human and a robot in pass-by scenarios. Our results indicate that these hesitation signals can be found and therefore represent a form of implicit feedback.
Analyzing human high-fives to create an effective high-fiving robot BIBAFull-Text 156-157
  Naomi T. Fitter; Katherine J. Kuchenbecker
Creating a robot that can teach humans simple interactive tasks such as high-fiving requires research at the intersection of physical human-robot interaction (PHRI) and socially assistive robotics. This paper shows how observation of natural human-human interaction can improve the design of requirements for social-physical robots and form a framework for autonomous execution of interactive physical tasks. Eleven pairs of human subjects were recruited to perform a set of high-fiving games; a magnetic motion tracker and an accelerometer were mounted to each person's hand for the duration of the experiment, and each subject completed several questionnaires about the experience. The results reveal valuable clues about the generally positive feelings of the participants and the movement of their hands during play. We discuss how we plan to use these results to create a robot that can teach humans similar high-fiving games.
Towards action selection under uncertainty for a socially aware robot bartender BIBAFull-Text 158-159
  Mary Ellen Foster; Simon Keizer; Oliver Lemon
We describe how the state representation of a socially aware robot is being extended to handle uncertainty. It incorporates the full range of information provided by the input sensors, including the confidence of all hypotheses. We also show how the Interaction Manager is being updated to make use of the extended representation.
Robot gossip: effects of mode of robot communication on human perceptions of robots BIBAFull-Text 160-161
  Marlena R. Fraune; Selma Šabanovic
With robots becoming more prevalent, it is important to understand human attitudes toward robots not only when humans directly interact with the robots as most research examines, but when robots are performing nonsocial tasks (e.g., cleaning) within sight and hearing of humans. This study examined how presumed robot communication style in such situations of human-robot co-location affects human perceptions of a group of robots. Results suggest that communication style of robots did not affect perceptions of robots, but further studies should use different techniques to manipulate supposed communication style.
Damping robot's head movements affects human-robot interaction BIBAFull-Text 162-163
  Guillaume Gibert; Florian Lance; Maxime Petit; Gregoire Pointeau; Peter Ford Dominey
A new research platform has been developed to study human-robot interaction and communication. In this setup, a humanoid robot is used as a proxy between two humans involved in dyadic interactions. An experimenter is coupled with the humanoid robot: he can control the robot's eye and face/head movements with his own movements, in real time and without wearing sensors, and can perceive the scene as if he were the robot. Manipulations can be applied in real time to any movement, leaving the rest of the dynamics untouched. For instance, we have started investigating the effect of damping head movements during dyadic interaction. Preliminary results show that naive subjects' head nods increase when attenuation is applied to the robot's head movements.
Asking rank queries in pose learning BIBAFull-Text 164-165
  Víctor Gonzalez-Pacheco; Maria Malfaz; Miguel A. Salichs
This paper presents a system in which a robot uses Active Learning (AL) to improve its learning capabilities for pose recognition. We propose a sub-type of Feature Queries, Rank Queries (RQ), in which the user states the relevance of a characteristic of the learning space. In the case of pose learning, these queries refer to the relevance of a single limb for a certain pose. We test the use of RQ with 24 users to learn 3 pointing poses and compare the learning accuracy against a passive learning approach. Our results show that RQ can increase the robot's learning accuracy.
Online learning of exploratory behavior through human-robot interaction BIBAFull-Text 166-167
  Manabu Gouko; Yuichi Kobayashi; Chyon Hae Kim
Currently, many studies have been conducted on robot interactions with humans. Object recognition and feature extraction are essential functions for such robots. Discernment behavior is a type of exploratory behavior that supports object feature extraction. We have proposed an active perception model that autonomously learns discernment behaviors. We have shown the effectiveness of our model using a mobile robot simulation. In this study, we applied our model to a real humanoid robot and confirmed that the robot successfully learns exploratory behaviors. We show that the robot can learn suitable exploratory behaviors by online learning applicable to real-world environments.
Avoiding robot faux pas: using social context to teach robots behavioral propriety BIBAFull-Text 168-169
  Cory J. Hayes; Maria F. O'Connor; Laurel D. Riek
Contextual cues strongly influence the behavior of people in social environments, and people are very adept at interpreting and responding to these cues. While robots are becoming increasingly present in these spaces, they do not yet share humans' essential sense of contextually-bounded social propriety. However, it is essential for robots to be able to modify their behavior depending on context so that they operate appropriately across a variety of situations. In our work, we are building models of context for social robots that operate on real-world, naturalistic, noisy data, across multi-context and multi-person settings. In this paper, we discuss one aspect of this work, which concerns teaching a robot an appropriateness function for interrupting a person in a public space. We trained a support-vector machine (SVM) to learn an association between contextual cues and the reactions of people being interrupted by a robot across three different contexts. Overall, our results are promising, and further work on integrating context models into social robots could lead to interesting and impactful findings across the HRI community.
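The idea of a learned appropriateness function can be sketched with a toy linear classifier. Everything below is an assumption for illustration: the feature names are invented, and a simple perceptron stands in for the paper's SVM (both learn a linear decision boundary from labeled examples).

```python
# Illustrative stand-in for a learned appropriateness function:
# a perceptron (substituting for the paper's SVM) maps invented
# contextual cues to an interrupt / do-not-interrupt decision.

def train_perceptron(data, epochs=20, lr=0.1):
    """data: list of (feature_vector, label) with label in {-1, +1}."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # misclassified: nudge the decision boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def should_interrupt(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

# Hypothetical cues: [person_is_talking, person_facing_robot, noise_level]
training = [
    ([1.0, 0.0, 0.8], -1),  # talking to someone else: do not interrupt
    ([1.0, 0.1, 0.9], -1),
    ([0.0, 1.0, 0.2], +1),  # idle and facing the robot: interrupting is OK
    ([0.1, 0.9, 0.1], +1),
]
w, b = train_perceptron(training)
print(should_interrupt(w, b, [0.0, 1.0, 0.15]))  # → True
```

In the actual work the labels would come from observed reactions of interrupted people, and an SVM with an appropriate kernel would replace this linear toy.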
Human-swarm interaction: sources of uncertainty BIBAFull-Text 170-171
  Sean T. Hayes; Julie A. Adams
Human-swarm interaction seeks to significantly increase the number of robots, which also increases uncertainty. Most prior research ignores uncertainty. Sources of uncertainty based on biological swarms are presented.
Sky writer: sketch-based collaboration for UAV pilots and mission specialists BIBAFull-Text 172-173
  Zachary Henkel; Jesus Suarez; Brittany Duncan; Robin R. Murphy
Sky Writer is a collaborative communication medium that augments the traditional display of a UAV pilot and allows other stakeholders to communicate their needs and intentions to the pilot. UAV pilots engaging in time-critical missions, such as urban disaster responses, often must allocate most of their cognitive capacity towards flight tasks, making communication and collaboration with other stakeholders difficult or dangerous. Sky Writer addresses the needs of stakeholders while requiring minimal cognitive effort from the UAV pilot. The application presents stakeholders with an interface that provides contextual flight information and a live video stream of the flight. Stakeholders are able to sketch directly on the video stream or use a spotlight indicator that is mirrored across all displays in the system, including the pilot's display. The application can be used in any modern web browser and works with traditional and touch devices. Concept experimentation performed at Disaster City with two pilots indicated that the spotlight feature was particularly useful while the UAV was in motion, and the sketching features were most useful while the UAV was stationary. The system will be tested with professional responders soon to determine its efficacy in a simulated response, and to inform the ongoing design process.
A doll-type interface for real-time humanoid teleoperation in robot-assisted activity BIBAFull-Text 174-175
  Masakazu Hirokawa; Atsushi Funahashi; Yasushi Itoh; Kenji Suzuki
This paper introduces a doll-type interface for real-time teleoperation of a humanoid robot in Robot-Assisted Activity (RAA) for children with Autism Spectrum Disorders (ASD). We developed a prototype of the interface and verified its usability by conducting RAA sessions with children with ASD.
Controlling a high speed robotic hand using a brain computer interface BIBFull-Text 176-177
  Stephen Huang; Alejandro Ramirez-Serrano
Entrainment effect caused by joint attention of two robots BIBAFull-Text 178-179
  Takashi Ichijo; Nagisa Munekata; Kazuo Hiraki; Tetsuo Ono
In this study, we investigate two effects of joint attention, relationship building and recognition sharing, in group interaction consisting of two robots and one person. Relationship building focuses on entrainment resulting from the two robots gazing at one person, while recognition sharing refers to an original effect of joint attention in group interaction featuring individuals' shared focus on one object. Experimental results on relationship building showed that when two robots gazed at a person, he/she tended to be more immersed in communication with the robots. For recognition sharing, when the two robots gazed at the target synchronously, the person could share it more correctly than when they did so asynchronously.
Towards a serious game playing empathic robotic tutorial dialogue system BIBAFull-Text 180-181
  Srinivasan Janarthanam; Helen Hastie; Amol Deshmukh; Ruth Aylett
There are several challenges in applying conversational social robots to Technology Enhanced Learning and Serious Gaming. In this paper, we focus in particular on the dialogue management issues in building an empathic robotic tutor that plays a multi-person serious game with students to help them learn and understand the underlying educational concepts.
Building an automated engagement recognizer based on video analysis BIBAFull-Text 182-183
  Minsu Jang; Cheonshu Park; Hyun-Seung Yang; Jae-Hong Kim; Young-Jo Cho; Dong-Wook Lee; Hye-Kyung Cho; Young-Ae Kim; Kyoungwha Chae; Byeong-Kyu Ahn
This paper presents a data-driven process for building a classifier that recognizes the engagement of children in a robot-based math quiz game. The process consists of collecting video recordings from HRI experiments; annotating social signals and engagement states via video analysis; and extracting feature vectors from the annotations to train classifiers. We conducted an experiment with 7 participants aged 10 -- 11 years using the android robot EveR-4. With three coders annotating the video recordings and features extracted with a snapshot model using a 1-second time window, we achieved 84.83% recall.
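The snapshot step, turning per-frame annotations into one feature vector per 1-second window, can be sketched as follows. The signal names and mean-pooling aggregation are illustrative assumptions; the paper does not specify its exact features.

```python
# Hypothetical sketch of snapshot feature extraction: per-frame
# annotation values are aggregated into 1-second windows, each
# yielding one feature vector for the classifier.

def snapshot_features(frames, window=1.0):
    """frames: list of (time_sec, {signal: value}), time-ordered.
    Returns one mean feature vector per window of `window` seconds."""
    if not frames:
        return []
    signals = sorted(frames[0][1])     # fixed feature ordering
    end = frames[-1][0]
    vectors = []
    t = frames[0][0]
    while t <= end:
        in_win = [f for ts, f in frames if t <= ts < t + window]
        if in_win:  # mean-pool each signal over the window
            vectors.append([sum(f[s] for f in in_win) / len(in_win)
                            for s in signals])
        t += window
    return vectors

frames = [(0.0, {"gaze_on_robot": 1.0, "smile": 0.0}),
          (0.5, {"gaze_on_robot": 1.0, "smile": 1.0}),
          (1.2, {"gaze_on_robot": 0.0, "smile": 0.0})]
print(snapshot_features(frames))  # → [[1.0, 0.5], [0.0, 0.0]]
```

Each resulting vector would be paired with the coders' engagement label for that window before training the classifier.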
Sound over matter: the effects of functional noise, robot size and approach velocity in human-robot encounters BIBAFull-Text 184-185
  Michiel Joosse; Manja Lohse; Vanessa Evers
In our previous work we introduced functional noise as a modality for robots to communicate intent [6]. In this follow-up experiment, we replicated the first study with a robot which was taller in order to find out if the same results would apply to a tall vs. a short robot. Our results show a similar trend: a robot using functional noise is perceived more positively compared with a robot that does not.
Study of Nao's impact on a memory game BIBAFull-Text 186-187
  Céline Jost; Marine Grandgeorge; Brigitte Le Pévédic; Dominique Duhaut
This paper presents an experiment which evaluated the added value of a robot in a memory game in three conditions: tablet and robot, robot alone, and tablet alone. Results show that robots may increase game interest. In our experiment, the presence of a robot did not imply additional workload. It seems that people judged themselves more positively when they interacted with the robot. Moreover, people displayed more positive facial expressions with the robot.
No joking aside: using humor to establish sociality in HRI BIBAFull-Text 188-189
  Peter H. Kahn, Jr.; Jolina H. Ruckert; Takayuki Kanda; Hiroshi Ishiguro; Heather E. Gary; Solace Shen
This paper shows how humor can be used as an interaction pattern to help establish sociality in human-robot interaction. Drawing illustratively from our published research on people interacting with ATR's humanoid robot Robovie, we highlight four forms of humor that we successfully implemented: wit and the ice-breaker, corny joke, subtle humor, and dry humor and self-deprecation.
Will humans mutually deliberate with social robots? BIBAFull-Text 190-191
  Peter H. Kahn, Jr.; Jolina H. Ruckert; Takayuki Kanda; Hiroshi Ishiguro; Heather E. Gary; Solace Shen
We offer three illustrative examples from one of our recent studies in HRI to suggest that it's possible for people to engage in mutual deliberation with a social robot. Each example illustrates discourse and argument, but ends through different means: Accepting the Robot as Arbitrator; Delegitimizing the Robot; and Agreeing to Disagree.
Learning hand-eye coordination for a humanoid robot using SOMs BIBAFull-Text 192-193
  Ivana Kajic; Guido Schillaci; Saša Bodiroza; Verena V. Hafner
Hand-eye coordination is an important motor skill acquired in infancy which precedes pointing behavior. Pointing facilitates social interactions by directing attention of engaged participants. It is thus essential for the natural flow of human-robot interaction. Here, we attempt to explain how pointing emerges from sensorimotor learning of hand-eye coordination in a humanoid robot. During a body babbling phase with a random walk strategy, a robot learned mappings of joints for different arm postures. Arm joint configurations were used to train biologically inspired models consisting of SOMs. We show that such a model implemented on a robotic platform accounts for pointing behavior while humans present objects out of reach of the robot's hand.
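The sensorimotor mapping described above can be illustrated with a minimal one-dimensional SOM trained on synthetic "joint configurations." This is a generic SOM sketch under stated assumptions (two joints, a 5-unit map, invented cluster data), not the authors' biologically inspired model.

```python
import math
import random

# Minimal 1-D SOM sketch clustering arm-joint configurations;
# the two-joint data and map size are invented for illustration.

def train_som(data, n_units=5, epochs=30, lr0=0.5, radius0=2.0, seed=0):
    rng = random.Random(seed)
    dims = len(data[0])
    weights = [[rng.random() for _ in range(dims)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        radius = max(radius0 * (1 - t / epochs), 0.5)
        for x in data:
            # best-matching unit: closest weight vector to the sample
            bmu = min(range(n_units),
                      key=lambda i: sum((weights[i][d] - x[d]) ** 2
                                        for d in range(dims)))
            for i in range(n_units):
                # Gaussian neighborhood pulls nearby units toward the sample
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dims):
                    weights[i][d] += lr * h * (x[d] - weights[i][d])
    return weights

def quantization_error(weights, data):
    return sum(min(math.dist(w, x) for w in weights) for x in data) / len(data)

# Two synthetic clusters of (shoulder, elbow) angles in radians
data = [[0.2 + random.Random(i).random() * 0.1, 1.0] for i in range(10)] + \
       [[1.5, 0.3 + random.Random(i).random() * 0.1] for i in range(10)]
weights = train_som(data)
print(round(quantization_error(weights, data), 3))
```

After body babbling, the trained map associates regions of the joint space with map units, which is the kind of sensorimotor mapping from which pointing behavior could then be read out.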
Interaction control for postural correction on a riding simulation system BIBAFull-Text 194-195
  Sangseung Kang; Kyekyung Kim; Suyoung Chi; Jaehong Kim
A horseback riding simulator is a robotic machine that simulates the motion of horseback riding. In this paper, we present an interaction control system for a horseback riding simulator. The proposed system provides a postural correction function suited to the user based on their historical log and specialized posture coaching data. The system has adopted certain schemes for posture detection and recognition of the identified user as a way to recommend customized exercise modes. Our experiments show that including these techniques will help users maintain good posture while riding.
Robot etiquette: how to approach a pair of people? BIBAFull-Text 196-197
  Daphne Karreman; Lex Utama; Michiel Joosse; Manja Lohse; Betsy van Dijk; Vanessa Evers
Research has been carried out on robots approaching one person [1, 3, 4]. However, further research is needed on robots approaching groups of people. In the study reported in this paper, participants were paired up for a task and we assessed their perceptions and behaviors as a robot approached them from various angles. On an individual level, participants liked frontal approaches and disliked being approached from the back. However, we found that the presence of a task-partner influenced participants' comfort with an approaching robot (e.g., when the robot approaches while one is standing behind the task-partner). Apart from the positioning of the individuals, the layout of the room, including the position of furniture and doors, also seemed to influence their experience. This pilot study was performed with a limited number of participants (N=30), but it offers preliminary insights into the factors that influence the choice of approach direction when a robot approaches a pair of people focused on a task.
Children comply with a robot's indirect requests BIBAFull-Text 198-199
  James Kennedy; Paul Baxter; Tony Belpaeme
Compliance studies in human-robot interaction (HRI) tend to consist of direct requests from the robot to the human. It is suggested that indirect requests are considered more polite, which has been positively correlated with learning gains. An experiment is conducted to explore compliance with indirect robot requests in teaching interactions. A comparison is made across embodiment conditions, but no significant differences are found. Overall, children comply with the robot's requests, which is used to support the hypothesis that given a well-defined context, children will infer the indirect meaning of a suggestion from a robot.
Probabilistic multiparty dialogue management for a game master robot BIBAFull-Text 200-201
  Casey Kennington; Kotaro Funakoshi; Yuki Takahashi; Mikio Nakano
We present our ongoing research on multiparty dialogue management for a game master robot which engages multiple human participants in a quiz game. The robot invites passing people to join the game, instructs participants on the rules, and leads them through the game. The robot has to manage people leaving and arriving at arbitrary times. Our approach maintains a dialogue manager for each participant, and a module responsible for deciding what, to whom, and when to speak takes a final action in each decision cycle. We have implemented the dialogue managers with a probabilistic-rules approach [4] and carried out preliminary evaluations on multiparty human-robot game dialogue data collected in a Wizard-of-Oz fashion.
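The per-participant managers and the final-action module are not specified in the abstract; the following is a toy sketch of such an arbitration cycle (class names, actions, and scores are all hypothetical):

```python
class ParticipantDM:
    """One dialogue manager per participant: proposes an action with a score."""
    def __init__(self, name):
        self.name = name

    def propose(self, state):
        # Toy policy: greeting a newcomer is more urgent than quizzing.
        if state.get(self.name) == "new":
            return ("greet", self.name, 0.9)
        return ("ask_question", self.name, 0.4)

def arbitrate(proposals):
    """Final-action module: pick the highest-scoring proposal this cycle,
    i.e. decide what to say, to whom, and when."""
    return max(proposals, key=lambda p: p[2])

dms = [ParticipantDM("alice"), ParticipantDM("bob")]
state = {"alice": "playing", "bob": "new"}
action = arbitrate([dm.propose(state) for dm in dms])
# → ("greet", "bob", 0.9): the newcomer is addressed before the quiz continues.
```

Keeping one manager per participant lets people join or leave without resetting the others' dialogue state, which matches the requirement stated in the abstract.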
Inertial motion capture based reference trajectory generation for a mobile manipulator BIBAFull-Text 202-203
  Yerbolat Khassanov; Nursultan Imanberdiyev; Huseyin Atakan Varol
This paper presents a human-robot interaction system integrating a mobile manipulator with a full-body inertial motion capture suit. The framework aims to provide an intuitive, effective and easy-to-deploy teleoperation interface. User control intent is acquired by the motion capture system in real-time, processed and relayed to the mobile manipulator wirelessly. Specifically, the body center of mass and the right-hand kinematic data of the user are used to generate position and orientation references for the robot base and manipulator, respectively. The left arm provides high-level user commands such as "Manipulator On/Off", "Base On/Off" and "Manipulator Pause/Resume". The efficacy of the presented system was demonstrated in real-time teleoperation experiments in which a KUKA youBot mobile manipulator accomplished pick-and-place tasks.
Real-time gesture recognition for the high-level teleoperation interface of a mobile manipulator BIBAFull-Text 204-205
  Yerbolat Khassanov; Nursultan Imanberdiyev; Huseyin Atakan Varol
This paper describes an inertial motion capture based arm gesture recognition system for the high-level control of a mobile manipulator. Left-arm kinematic data of the user is acquired by an inertial motion capture system (Xsens MVN) in real-time and processed to extract supervisory user interface commands such as "Manipulator On/Off", "Base On/Off" and "Operation Pause/Resume" for a mobile manipulator system (KUKA youBot). Principal Component Analysis and Linear Discriminant Analysis are employed for dimension reduction and classification of the user kinematic data, respectively. The classification accuracy for the six-class gesture recognition problem is 95.6 percent. To increase the reliability of the gesture recognition framework in real-time operation, a consensus voting scheme over the last ten classification results is implemented. During the five-minute-long teleoperation experiment, a total of 25 high-level commands were recognized correctly by the consensus-voting-enhanced gesture recognizer. The experimental subject stated that the user interface was easy to learn and did not require extensive mental effort to operate.
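The consensus voting scheme described above, a majority vote over the last ten per-frame classification results, can be sketched as follows (a hypothetical reconstruction, not the authors' code; label names are illustrative):

```python
from collections import Counter, deque

class ConsensusVoter:
    """Smooths a stream of per-frame gesture labels by majority vote
    over a sliding window of the most recent classifications."""
    def __init__(self, window=10):
        self.buffer = deque(maxlen=window)  # old labels evicted automatically

    def update(self, label):
        self.buffer.append(label)
        # Emit the most frequent label currently in the window.
        return Counter(self.buffer).most_common(1)[0][0]

voter = ConsensusVoter(window=10)
stream = ["pause", "pause", "base_on", "pause", "pause"]
smoothed = [voter.update(label) for label in stream]
# A single spurious "base_on" frame is outvoted by the surrounding "pause" frames.
```

The trade-off is latency: a genuine new gesture must accumulate a majority in the window before it is emitted, which is why the window length (here ten, as in the paper) is kept short.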
Behavioral analysis of a touch-based interaction between humans and an egg-shaped robot according to protrusions BIBAFull-Text 206-207
  Jintae Kim; Hyunsoo Song; Dong-Soo Kwon
For a service robot, touch is one of the major interaction channels. However, because robot form and touch-sensor placement are decided by the manufacturer, users' actual touch interactions often do not match the sensors implemented in the robot. It is therefore necessary to investigate practical human touch patterns when users have no prior knowledge of the robot's touch sensors. This study measures the influence of pointed and smooth-shaped protrusions on touch interactions with an egg-shaped robot. The results show that a pointed protrusion creates distributed touch interaction, while a smooth-shaped protrusion concentrates touch interaction.
Is a robot better than video for initiating remote social connections among children? BIBAFull-Text 208-209
  Nuri Kim; Jeonghye Han; Wendy Ju
To investigate how children interact differently when communication is mediated by screen-based video versus a robot, we conducted a study with elementary students in Korea, comparing the use of both technologies to introduce classroom students to peer-aged individuals in America. Our findings show that the classroom children displayed more positive emotion during certain tasks and exhibited more interest in the remote participants with robot-mediated communication than with video-mediated communication.
The effect of robot appearance types on motivating donation BIBAFull-Text 210-211
  Ran Hee Kim; Yeop Moon; Jung Ju Choi; Sonya S. Kwak
According to the Hawthorne effect, when people feel someone staring at them, they tend to behave more morally because of psychological pressure. We applied the Hawthorne effect to the appearance design of robots and examined the effect of robot appearance type on motivating donation. We executed a 2-level (appearance type: anthropomorphic vs. functional) within-participants experiment (N=20). Participants perceived stronger social presence in the anthropomorphic robot than in the functional robot. In addition, participants were more willing to donate when interacting with the anthropomorphic robot than with the functional robot.
Human embodiment creates problems for robot learning by demonstration using a control panel BIBAFull-Text 212-213
  Franziska Kirstein; Kerstin Fischer; Dorthe Sølvason
In this paper, problems in instructing an industrial robot by means of a control panel are investigated. For the robot to learn as much and as fast as possible from demonstration, the teacher's demonstration needs to be as precise as possible. Usability studies constitute a useful methodology for investigating in which situations users provide the robot with exact trajectories and, if not, why they face difficulties. Results show that movements involving only the lowest joint of the robot arm are straightforward and very exact. In contrast, fine movements that involve joints of the upper arm cause considerable problems. The analysis shows that users have to employ separate actions instead of focusing on the target, and therefore need to consciously plan the action, since they cannot match the robot's movements with those of their own embodiment.
The effect of field of view on social interaction in mobile robotic telepresence systems BIBAFull-Text 214-215
  Andrey Kiselev; Annica Kristoffersson; Amy Loutfi
One goal of mobile robotic telepresence for social interaction is to design robotic units that are easy to operate for novice users and promote good interaction between people. This paper presents an exploratory study on the effect of camera orientation and field of view on the interaction between a remote and local user. Our findings suggest that limiting the width of the field of view can lead to better interaction quality as it encourages remote users to orient the robot towards local users.
Embodied visual programming for robot control BIBAFull-Text 216-217
  Stasinos Konstantopoulos; Andreas Lydakis; Antonios-Emmanouil Gkikakis
In this paper we motivate and present our embodied visual programming research plan, aiming at allowing end-users without any technical expertise to define complex robot behaviours. The core idea is to combine visual programming with the sensing and actuation capabilities of robots and with teleoperated demonstrations of what needs to be achieved.
Does anthropomorphism reduce stress in HRI? BIBAFull-Text 218-219
  Dieta Kuchenbrandt; Nina Riether; Friederike Eyssel
In an experiment, we tested whether anthropomorphism would reduce psychological stress associated with human-robot interactions (HRI). Participants anticipated (vs. did not anticipate) an interaction with a humanlike versus machinelike robot type, and their electrodermal activity (EDA), a physiological indicator of psychological stress, was measured. Results indicate that anticipating an HRI increased psychological stress independent of robot type.
Can robots be sold?: the effects of robot designs on the consumers' acceptance of robots BIBAFull-Text 220-221
  Sonya S. Kwak; Jun San Kim; Jung Ju Choi
This study explores the effect of robot design approaches on consumers' acceptance of robots. We executed a 2-level (robot design: human-oriented vs. product-oriented) between-participants experiment (N=52). Participants categorized a human-oriented robot and a product-oriented robot differently, and the product-oriented robot was more effective than the human-oriented robot in terms of consumers' evaluation of, and purchase intention toward, robots.
A pilot study of using touch sensing and robotic feedback for children with autism BIBAFull-Text 222-223
  Jaeryoung Lee; Goro Obinata; Hirofumi Aoki
It is an advantage to use robots in autism therapy since they provide repetitive stimuli during the learning of social skills. Detailed exploration is required to design more effective assistive robots for autism therapy. This study investigated the effectiveness of an interactive robot's feedback during autism therapy for improving specific social skills. In the experiment described here, children with autism engaged with an interactive system to perform affective touch behaviour and received the robot's visual and auditory feedback. Results showed that the robot's feedback triggered more effective touch behaviour from the children.
Boosting human-to-human interaction for collaborative learning: social exchange theory perspective BIBAFull-Text 224-225
  Ohbyung Kwon; Namyeon Lee
In this paper, a method of robot-assisted exchange between team members is proposed. This method is based on Social Exchange Theory, which originates from communication research. To boost human-to-human interaction in the context of collaborative learning, the robot collects activity data in order to determine the learners' psychological state and the perceived benefits and costs of interaction as indicators of intention to interact. Using an estimate of intention, the robot then selects strategies to improve the interaction quality.
The dynamics of anthropomorphism in robotics BIBFull-Text 226-227
  Séverin Lemaignan; Julia Fink; Pierre Dillenbourg
Goal-predictability vs. trajectory-predictability: which legibility factor counts BIBAFull-Text 228-229
  Christina Lichtenthäler; Alexandra Kirsch
With the work at hand we investigate the legibility factors goal-predictability and trajectory-predictability in a human-robot path crossing scenario and their correlation with other HRI properties like safety, comfort, and reliability in order to assess which factor is more important for a safe and comfortable interaction.
Towards automated execution and evaluation of simulated prototype HRI experiments BIBAFull-Text 230-231
  Florian Lier; Ingo Lütkebohle; Sven Wachsmuth
Autonomous robots are highly relevant targets for interaction studies, but can exhibit behavioral variability that confounds experimental validity. Currently, testing on real systems is the only means to prevent this, but remains very labour-intensive and often happens too late. To improve this situation, we are working towards early testing by means of partial simulation, with automated assessment, and based upon continuous software integration to prevent regressions. We will introduce the concept and describe a proof-of-concept that demonstrates fast feedback and coherent experiment results across repeated trials.
Useful and motivating robots: the influence of task structure on human-robot teamwork BIBAFull-Text 232-233
  Manja Lohse; Vanessa Evers
Robots have recently started to leave their safety cages and be used in close vicinity to humans. This also changes the nature of the tasks that robots and humans solve together, i.e., the degree of structure of the tasks. While traditional industrial tasks were highly structured, the new tasks often have a low level of structure. We present a user study that compares a highly structured and a loosely structured task in a text-based computer game played by human-robot teams. The results suggest that users find robots useful and motivating not only in highly structured tasks where they depend on the robots' help, but also in loosely structured tasks that they could solve on their own.
Ghost-in-the-machine: initial results BIBAFull-Text 234-235
  Sebastian Loth; Manuel Giuliani; Jan P. de Ruiter
We describe the design of the newly developed Ghost-in-the-Machine paradigm and present initial results of an experiment addressing the initiation of service interactions at a bar. To develop policies for a robotic bartender, we investigated which sensor modalities were most informative to humans, and which actions humans selected as a socially appropriate response. The results showed that participants used two nonverbal cues for their initial response to a new customer: the distance to the bar, and whether the customer's torso was directed towards the bar. To acknowledge a new customer, the participants typically responded nonverbally by looking and smiling at the customer. All results can be directly transferred into robotic decision policies.
Real-time gender recognition based on 3D human body shape for human-robot interaction BIBAFull-Text 236-237
  Ren C. Luo; Xiehao Wu
Gender roles influence behavior in social interactions, so real-time gender recognition is valuable in Human-Robot Interaction (HRI) for providing timely gender information that improves the interaction experience. Considering the HRI scenario, a gender recognition method based on 3D human body shape is investigated. The 3D information is obtained by processing the depth image from an RGB-D camera, and a machine learning method based on a Support Vector Machine (SVM) is applied for classification. The experimental results show that our system achieves accurate real-time gender recognition, enriching the diversity of existing methods for HRI applications.
Initial phases of design-based research into the educational potentials of NAO-robots BIBAFull-Text 238-239
  Gunver Majgaard; Lykke Brogaard Bertel
In this paper, we describe our initial research using the humanoid robot NAO in primary and secondary schools. How does a programmable humanoid enrich teaching, and how do we prepare the teachers? Ten school classes are using the robot for creative programming. So far, we have found that the robot enriches the learning process by combining the auditory, visual and kinaesthetic modalities.
Confidence metrics improve human-autonomy integration BIBAFull-Text 240-241
  Amar R. Marathe; Brent J. Lance; Kaleb McDowell; William D. Nothwang; Jason S. Metcalfe
Control frameworks for human-autonomy integration (HAI) often treat human sources of information as highly reliable [1]. However, data from human sensing are quite variable, both because of human nature and because of limitations of sensor technologies. This paper focuses on estimating the degree of uncertainty in human-sensed data, i.e., developing confidence metrics, and on implementing those metrics in an HAI system for target recognition. We demonstrate that applying such confidence estimates to sensed human data can mitigate the effects of variability in human sensing and improve HAI performance.
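The paper's actual metrics are not given in the abstract; as an illustration only, a confidence-weighted fusion of a human estimate and an autonomous estimate (function and parameter names hypothetical) might look like:

```python
def fuse(human_est, human_conf, auto_est, auto_conf):
    """Weight each source's estimate by its stated confidence instead of
    treating the human input as perfectly reliable."""
    w = human_conf / (human_conf + auto_conf)
    return w * human_est + (1 - w) * auto_est

# An unsure human report (conf 0.3) is partly overridden by a more
# confident autonomous detector (conf 0.6).
fused = fuse(human_est=0.9, human_conf=0.3, auto_est=0.4, auto_conf=0.6)
```

The point of the paper is precisely this shift: once a confidence estimate is attached to the human-sensed data, downstream fusion no longer has to assume the human channel is noise-free.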
Reasons for singularity in robot teleoperation BIBAFull-Text 242-243
  Ilka Marhenke; Kerstin Fischer; Thiusius Rajeeth Savarimuthu
In this paper, the causes of singularity of a robot arm during teleoperation for robot learning from demonstration are analyzed. Singularity is an alignment of robot joints that prevents the inverse kinematics from being solved. Inspired by users' own hypotheses, we investigated speed and delay as possible causes. The results show that delay causes problems during teleoperation, though not in direct control with a control panel, because users expect a different, more intuitive control in teleoperation. Speed, on the other hand, was not found to affect the occurrence of singularity.
Progressive development of an autonomous robot for children through parallel comparison of two robots BIBAFull-Text 244-245
  Shizuko Matsuzoe; Hideaki Kuzuoka; Fumihide Tanaka
This study proposes and demonstrates a progressive development method for an autonomous robot that is used for childhood education. The main concept in this method is to iteratively explore new behavioral factors and use them to progressively develop the robot by concurrently using and comparing two robots in a classroom. Usually, in classrooms for young children, it is practically difficult to recruit a sufficient number of participants and conduct many experiments. The parallel comparison employed in the proposed method enables rapid development to deal with these difficulties.
Therapeutic robots for older adults: investigating the potential of paro BIBAFull-Text 246-247
  Sean McGlynn; Braeden Snook; Shawn Kemple; Tracy L. Mitzner; Wendy A. Rogers
As the population ages, there is an increasing need for socio-emotional support for older adults. Therapeutic pets have been used to meet this need, but there are limitations in the practicality of placing pets in older adults' living environments. More recently therapeutic robots have been proposed as a solution. However, there is limited research on the efficacy of deploying therapeutic robots to provide socio-emotional support to older adults. This study investigates the potential of the Paro robot seal to support older adults' needs while avoiding some of the limitations of current pet therapy approaches. Our results will provide insight about how robots can be designed and deployed to increase therapeutic efficacy.
Learning partial ordering constraints from a single demonstration BIBAFull-Text 248-249
  Anahita Mohseni-Kabir; Charles Rich; Sonia Chernova
Current approaches to learning partial ordering constraints by demonstration require demonstrating all (or almost all) possible completion orders. We have developed an algorithm that, for plans involving relative placement of objects, learns the partial ordering constraints from a single demonstration by letting the user specify naturally conceived reference frame information. This work is an example of a broader research agenda that involves applying principles of human collaboration to robot learning from demonstration.
Empathy: interactions with emotive robotic drawers BIBAFull-Text 250-251
  Brian K. Mok; Stephen Yang; David Sirkin; Wendy Ju
The role of human-robot interaction is becoming more important as everyday robotic devices begin to permeate into our lives. In this study, we video-prototyped a user's interactions with a set of robotic drawers. The user and robot each displayed one of five emotional states -- angry, happy, indifferent, sad, and timid. The results of our study indicated that the participants of our online questionnaire preferred empathetic drawers to neutral ones. They disliked robotic drawers that displayed emotions orthogonal to the user's emotions. This showed the importance of displaying emotions, and empathy in particular, when designing robotic devices that share our living and working spaces.
HRI in the sky: controlling UAVs using face poses and hand gestures BIBAFull-Text 252-253
  Jawad Nagi; Alessandro Giusti; Gianni A. Di Caro; Luca M. Gambardella
As a first step towards interaction between humans and multiple UAVs, we present a novel method for humans to interact with flying UAVs using their on-board video cameras. Using machine vision techniques, our approach enables human operators to command and control Parrot drones with simple hand gestures, giving them directions to move. When a direction to move is given, the robot controller estimates the angle and distance to move with the help of a face-score system and the estimated hand direction. This approach gives mobile robots the ability to localize human operators and provides UAVs/UGVs with a better perception of the environment around the human.
Breatter: a simulation of living presence with breath that corresponds to utterances BIBAFull-Text 254-255
  Yukari Nakatani; Tomoko Yonezawa
We propose an expressive breathing method for a robot that accords with its utterances. We expect this synchronized expression to act as a new physical modality that makes the robot feel like a living presence. Especially when the human-robot distance is small, as in intimate communication, anthropomorphized presences should naturally imitate the activities of living beings, such as breathing. A stuffed-toy robot contains a speaker and a fan motor in its head to express breathing and vocal sounds simultaneously. The strength of the breath is determined by the total volume and the power in the high-frequency band.
Embodied computation in soft gripper BIBAFull-Text 256-257
  Stig Anton Nielsen; Alexandru Dancu
We designed, created and tested an underactuated soft gripper able to hold everyday objects of various shapes and sizes without using complex hardware or control algorithms, but rather by combining sheets of flexible plastic materials and a single servo motor. Starting with a prototype where simple actuation performs complex varied gripping operations solely through the material system and its inherent physical computation, the paper discusses how embodied computation might exist in a material aggregate by tuning and balancing its morphology and material properties.
Is an accelerating robot perceived as energetic or as gaining in speed? BIBAFull-Text 258-259
  Matthijs L. Noordzij; Martin Schmettow; Melle R. Lorijn
Previous studies have found that a robot's basic movement characteristics influence the emotional attributes people perceive, independent of the embodiment of the motion (e.g., iCat vs. Roomba). Here, with a very simple LEGO robot, we replicate these associations between levels of acceleration and curvature and the extent to which positive and negative emotions are attributed. Importantly, we also show that these associations might not be valid: prior to the emotional questionnaires, participants were asked neutral questions about what they deemed relevant observations of the different robot motions, and only 3% of their remarks coincided with the emotional terms in the questionnaires. HRI researchers interested in what people attribute to robot motion should be mindful of participant heuristics and experimenter biases. We provide some suggestions on how to create experiments that are robust against these biases.
Empathy and yawn contagion: can we (humans) catch yawns from robots? BIBAFull-Text 260-261
  Mohammad Obaid; Dieta Kuchenbrandt; Christoph Bartneck
Empathy plays an important role in the interaction between humans and robots. The contagious effect of yawning is moderated by the degree of social closeness and empathy. We propose to analyse the contagion of yawns as an indicator for empathy. We conducted pilot studies to test different experimental procedures for this purpose. We hope to be able to report on experimental results in the near future.
Modeling and analysis of task complexity in single-operator multi-robot teams BIBAFull-Text 262-263
  Arif Tuna Özgelen; Elizabeth I. Sklar
A model is presented for analyzing the complexity of task assignment in a human/multi-robot team from the perspective of the human team member, or "operator". The focus is on complex domains where tasks have inter-dependencies and/or require tightly coupled coordination among robots. Preliminary validation of the model's metrics has been carried out through an experiment involving human subjects. Results suggest a significant effect of key aspects of the model on the subjects' cognitive workload.
Encoding bi-manual coordination patterns from human demonstrations BIBAFull-Text 264-265
  Ana Lucia Pais; Aude Billard
Humans perform tasks such as bowl mixing bi-manually, but programming such tasks on a robot can be challenging, especially when they require force control or on-line stiffness modulation. In this paper, we first propose a user-friendly setup for demonstrating bi-manual tasks while collecting complementary information on the motion and forces sensed on a robotic arm, as well as the human hand configuration and grasp information. Secondly, for learning the task, we propose a method for extracting task constraints for each arm and coordination patterns between the arms. We use a statistical encoding of the data based on the extracted constraints and reproduce the task using a Cartesian impedance controller.
"You are green": a touch-to-name interaction in an integrated multi-modal multi-robot HRI system BIBAFull-Text 266-267
  Shokoofeh Pourmehr; Valiallah (Mani) Monajjemi; Seyed Abbas Sadat; Fei Zhan; Jens Wawerla; Greg Mori; Richard Vaughan
We present a multi-modal multi-robot interaction whereby a user can identify an individual or a group of robots using haptic stimuli, and name them using a voice command (e.g. "You two are green"). Subsequent commands can be addressed to the same robot(s) by name (e.g. "Green! Take off!"). We demonstrate this as part of a real-world integrated system in which a user commands teams of autonomous robots in a coordinated exploration task.
Older adults' reactions to a robot's appearance in the context of home use BIBAFull-Text 268-269
  Akanksha Prakash; Charles C. Kemp; Wendy A. Rogers
Robots are being designed to assist older adults in their homes. However, we lack a clear understanding of which aspects of robot appearance older adults pay attention to and consider important for accepting a robot in their homes. The goal of this study was to systematically assess older adults' reactions to a specific robot's appearance in the context of home use. Independently living older adults interacted with the PR2 and were interviewed about its appearance. In general, there was an expectation of a small robot that fits into the home and is easy to control. Desired human-likeness varied, although some human-like characteristics might be acceptable to most.
Model driven software development for human-machine interaction systems BIBAFull-Text 270-271
  Arunkumar Ramaswamy; Bruno Monsuez; Adriana Tapus
In a typical Human-Machine Interaction (HMI) system, a task is performed through cooperation between the human and the automation component. The system adopts a cognitive architecture to model human psychology and makes optimal decisions on dynamic task allocation between the human and the machine depending on the context. However, such architectures do not define how those systems are implemented in software. The various models involved in a Model-Driven Software Development (MDSD) approach to developing HMI systems are presented. This paper proposes a metamodel for modeling Non-Functional Properties (NFP) in HMI systems and provides a case study on assistive lane keeping in automobiles to demonstrate the approach.
Morphological gender recognition by a social robot and privacy concerns: late breaking reports BIBAFull-Text 272-273
  Arnaud Ramey; Miguel A. Salichs
An intuitive and robust user recognition system is key to natural interaction between a social robot and its users. The gender of a new user can be guessed without explicitly asking for it, and can then be used to personalize the interaction flow. In this LBR, a novel algorithm is used to estimate the gender of a person based on his or her morphological shape. More specifically, the vertical outline of the user's breast is used to estimate his or her gender, based on similar shapes seen during training.
   On early benchmarks with databases that represent well the diversity of human body shapes, the accuracy rate is close to 90% and outperforms a state-of-the-art algorithm. Our algorithm provides a fast and seamless estimation flow and needs limited computational resources, which tailors it for HRI. Its usefulness has been demonstrated by integrating it into a social robot. However, its use raises privacy concerns among users, which will lead to further study.
Towards a social and mobile humanoid exercise coach BIBAFull-Text 274-275
  Darryl Ramgoolam; Elise Russell; Andrew B. Williams
In the near future, humanoid robots may be available to act as personal health coaches that can socially interact and exercise with people to increase their physical activity and improve their nutritional habits. While prior work has demonstrated the long-term effects of using a robot to motivate and record exercise and nutrition data, we are developing a social and mobile humanoid health coach that explains and performs physical exercises along with the human in an effort to increase their physical activity. In this paper, we describe a pilot study comparing the effects on young adults of coaching delivered by a social and mobile humanoid robot health coach versus a human health coach. While data analysis revealed no statistically significant effect of coach type on daily activity level, the results showed encouraging trends and suggest further research with a larger sample size.
Non-linguistic utterances should be used alongside language, rather than on their own or as a replacement BIBAFull-Text 276-277
  Robin Read; Tony Belpaeme
This paper presents the results of a small experiment aimed at determining whether people are comfortable with a social robot that uses robotic Non-Linguistic Utterances alongside natural language, rather than as a replacement. The results suggest that while people most prefer a robot that uses only natural language, a robot that combines NLUs and natural language is preferred over a robot that only employs NLUs. This suggests that there is potential for NLUs to be used in combination with natural language. In light of this, potential utilities and motivations for using NLUs in such a manner are outlined.
Behavioral accommodation towards a dance robot tutor BIBAFull-Text 278-279
  Raquel Ros; Alexandre Coninx; Yiannis Demiris; Georgios Patsis; Valentin Enescu; Hichem Sahli
We report first results on children's adaptive behavior towards a dance tutoring robot. We observe that children's behavior rapidly evolves over a few sessions in order to accommodate the robotic tutor's rhythm and instructions.
Introducing a Rasch-type anthropomorphism scale BIBAFull-Text 280-281
  Peter A. M. Ruijten; Diane H. L. Bouten; Dana C. J. Rouschop; Jaap Ham; Cees J. H. Midden
In human-robot interaction research, much attention is given to the extent to which people perceive humanlike attributes in robots. Generally, the concept of anthropomorphism is used to describe this process. Anthropomorphism is defined in different ways, with much focus on either typical human attributes or uniquely human attributes. This difference has led to the development of different measurement tools. We argue that anthropomorphism can best be described as a continuum ranging from low to high human likeness, and should be measured accordingly. We found that anthropomorphic characteristics can be invariantly ordered according to the ease with which they can be ascribed to robots.
Information-theoretic measures as a generic approach to human-robot interaction: application in CORBYS project BIBAFull-Text 282-283
  Christoph Salge; Cornelius Glackin; Danijela Ristic-Durrant; Martin Greaves; Daniel Polani
The objective of the CORBYS project is to design and implement a robot control architecture that allows the integration of high-level cognitive control modules, such as a semantically-driven self-awareness module and a cognitive framework for anticipation of, and synergy with, human behaviour based on biologically-inspired information-theoretic principles. CORBYS aims to provide a generic control architecture to benefit a wide range of applications where robots work in synergy with humans, ranging from mobile robots such as robotic followers to gait rehabilitation robots. The behaviour of each of the two demonstrators used to validate this architecture will be driven by a combination of task-specific algorithms and generic cognitive algorithms. In this paper we focus on the generic algorithms based on information theory.
Investigating the impact of gender development in child-robot interaction BIBAFull-Text 284-285
  Anara Sandygulova; Mauro Dragone; Gregory M. P. O'Hare
In order to inform the design of robotic applications for children, in this paper we describe and report the results of an experiment we conducted in a primary school. Our work investigates the effects of the robot's perceived gender and age on levels of engagement and acceptance of the robot by children across different age and gender groups. Our results show that children of different ages relate differently to the robot's perceived age and gender.
Multiple robotic wheelchair system able to move with a companion using map information BIBAFull-Text 286-287
  Yoshihisa Sato; Ryota Suzuki; Masaya Arai; Yoshinori Kobayashi; Yoshinori Kuno; Mihoko Fukushima; Keiichi Yamazaki; Akiko Yamazaki
In order to reduce the burden of caregivers facing an increased demand for care, particularly for the elderly, we developed a system whereby multiple robotic wheelchairs can automatically move alongside a companion. This enables a small number of people to assist a substantially larger number of wheelchair users effectively. This system utilizes an environmental map and an estimation of position to accurately identify the positional relations between the caregiver (or a companion) and each wheelchair. The wheelchairs are consequently able to follow along even if the caregiver cannot be directly recognized. Moreover, the system is able to establish and maintain appropriate positional relations.
Mobile teleoperation interfaces with adjustable autonomy for personal service robots BIBAFull-Text 288-289
  Max Schwarz; Jörg Stückler; Sven Behnke
Personal service robots require a comprehensive set of perception, control, and planning skills to perform everyday tasks autonomously. While achieving full autonomy is an ongoing research topic, first real-world applications of personal robots may come into reach if state-of-the-art autonomous capabilities are combined with the intelligence of the users in a complementary way. We report on handheld user interfaces for personal robots that allow for teleoperating the robot on three levels of autonomy: body, skill, and task control. On the higher levels, autonomous behavior of the robot relieves the user of significant workload. If autonomous execution fails, or autonomous functionality is not provided by the robot system, the user can select a lower level of autonomy to solve a task. The benefits of providing adjustable autonomy in teleoperation have been successfully demonstrated at RoboCup@Home competitions.
Development of perception of weight from human or robot lifting observation BIBAFull-Text 290-291
  Alessandra Sciutti; Laura Patanè; Francesco Nori; Giulio Sandini
Human interaction is based, among other factors, on non-verbal and implicit communication. By observing someone else's action we can automatically infer several non-obvious details of what is happening, such as the agent's goal, their mood, and even some features of the object they are using, e.g., whether it is heavy or light. This action reading skill is developed very early in life and constitutes a fundamental basis for the development of collaboration. A similar capacity to implicitly communicate weight would be desirable also in a humanoid robot, to allow for a natural preparation for hand-over between the robot and the human partner. Here we have investigated how the ability to infer weight from human action observation develops during childhood and whether such ability generalizes to the observation of a humanoid robot. In particular, the robot was not programmed to perform human-like lifting actions, but just to replicate a property of human lifting deemed determinant for weight reading: exhibiting a velocity proportional to object weight. Our results suggest that although 6-year-olds can already judge weight from the observation of a lifting action, they cannot generalize this skill to simplified robotic actions such as the ones proposed here.
Kinesthetic human/robot coordination: the coordination of movements for interaction BIBAFull-Text 292-293
  Anders Stengaard Sørensen; Gitte Rasmussen; Dennis Day
Training with a robotic training partner is a physical, powerful, and yet intimate form of robot/human interaction (RHI). In this paper we report on the early stages of a project that aims to study and use the human kinesthetic "language" of co-motion, used in physical human/robot interaction such as training.
Towards using a generic robot as training partner: off-the-shelf robots as a platform for flexible and affordable rehabilitation BIBAFull-Text 294-295
  Anders Stengaard Sørensen; Thiusius Rajeeth Savarimuthu; Jacob Nielsen; Ulrik Pagh Schultz
In this paper, we demonstrate how a generic industrial robot can be used as a training partner for upper limb training. The motion path and human/robot interaction of a non-generic upper-arm training robot are transferred to a generic industrial robot arm, and we demonstrate that the robot arm can implement the same type of interaction, but can expand the training regime to include both upper arm and shoulder training. We compare the generic robot to two affordable but custom-built training robots, and outline interesting directions for future work based on these training robots.
Remote control of quadrotor teams, using hand gestures BIBAFull-Text 296-297
  Adrian Stoica; Federico Salvioli; Caitlin Flowers
This paper addresses the human-centered control of a team (pack) of quadrotors. The control is remote, over the internet. Using simple telepresence tools (video in Skype, showing remote imagery from a ground camera and from the quadrotors' own livestreaming cameras), the operator uses a command language, UAVL (standing for Unmanned Aerial Vehicle Language), to control quadrotors' behaviors. Seventeen hand gestures are used to express the commands; these are decoded by processing EMG signals collected with an array of EMG electrodes embedded in a BioSleeve mounted on the forearm. Extending prior work with an Unmanned Ground Vehicle Language (UGVL) used with ground robots, UAVL encodes commands used to control quadrotors, individually or as a team, in direct teleoperated flight or in preprogrammed flight patterns triggered by the commands.
Predicting human performance during teleoperation BIBAFull-Text 298-299
  Justin Storms; Steven Vozar; Dawn Tilbury
Humans are an integral part of many tasks performed by mobile robots, and human operator ability can vary greatly between users. Understanding operator skill level can allow user interfaces to adapt, giving novice users additional assistance and expert users more freedom. In this paper, a method is proposed for predicting a human teleoperator's performance based on user behavior observed during the first few minutes of teleoperation. Preliminary analysis indicates a trend between operator performance and the proportion of time an operator spends thinking during a particular segment of time.
Perception of humanoid social mediator in two-person dialogs BIBAFull-Text 300-301
  Yasir Tahir; Umer Rasheed; Shoko Dauwels; Justin Dauwels
In this work we present a humanoid robot (Nao) that provides real-time sociofeedback to participants taking part in two-person dialogs. The sociofeedback system quantifies the speech mannerisms and social behavior of participants in an ongoing conversation, determines whether feedback is required, and delivers feedback through Nao. For example, Nao alerts the speaker(s) when their voice is too loud or too quiet, or when the conversation is not proceeding well due to disagreements or numerous interruptions. In this study, participants are asked to engage in two-person conversations while the Nao robot acts as mediator. They then assess the received sociofeedback with respect to various aspects, including its content, appropriateness, and timing. Participants also evaluate their overall perception of Nao as a social mediator via the Godspeed questionnaire.
Modeling of human attention based on analysis of magic BIBAFull-Text 302-303
  Yusuke Tamura; Shiro Yano; Hisashi Osumi
In this study, we developed a human attention model for smooth human-robot interaction. The model consists of a saliency map generation module and a manipulation map generation module. The manipulation map describes top-down factors, such as the human face, hands, and gaze in the input image. To evaluate the proposed model, we applied it to a magic video and measured human gaze points while participants watched the video. Based on the experimental results, the proposed model explains human attention better than the saliency map alone.
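The abstract does not specify how the two maps are combined; a common fusion rule for a bottom-up saliency map and a top-down map is an element-wise weighted sum, with the attended point predicted at the peak. A minimal sketch under that assumption (weights and data hypothetical):

```python
# Hypothetical sketch: fuse a bottom-up saliency map with a top-down
# "manipulation" map (face/hand/gaze cues) and predict the attended point
# as the location of maximum combined activation.

def predict_attention(saliency, manipulation, w_bottom_up=0.5, w_top_down=0.5):
    """Both maps are equal-sized 2-D grids of floats in [0, 1].
    Returns (row, col) of the peak of the weighted combination."""
    best, best_rc = float("-inf"), None
    for r, (s_row, m_row) in enumerate(zip(saliency, manipulation)):
        for c, (s, m) in enumerate(zip(s_row, m_row)):
            v = w_bottom_up * s + w_top_down * m
            if v > best:
                best, best_rc = v, (r, c)
    return best_rc

saliency     = [[0.1, 0.9, 0.2],
                [0.3, 0.4, 0.1]]
manipulation = [[0.0, 0.2, 0.1],   # e.g. high where the performer's hand is
                [0.1, 0.9, 0.8]]
print(predict_attention(saliency, manipulation))  # -> (1, 1)
```

Note how the top-down map shifts the predicted gaze point away from the purely salient location, which is the effect the attention model exploits.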
An emotional model for social robots: late-breaking report BIBAFull-Text 304-305
  Martina Truschzinski; Nicholas Müller
We developed an emotional model that could help robots accommodate humans during a working task inside an industrial setting. The robot would recognize when a human is experiencing increased stress and decide whether it should assist the human or attend to other tasks. We propose the model as a framework developed as part of "The Smart Virtual Worker" project within the context of human-robot interactions. The emotional model is able to estimate a worker's emotional valence throughout a typical work task by applying a hierarchical reinforcement learning algorithm. Since emotions are generated by the human brain based on an individual's interpretation of a stimulus, we linked the genesis of emotions to empirical findings from the sports sciences in order to infer an emotional reaction. Furthermore, the model reproduces sympathetic reactions of the human body and is capable of remembering past actions in order to include possible future time constraints as an initiator for emotional responses in upcoming iterations. This capability is crucial for accommodating long-term experiences, since the emotional reaction is based not only on the present situation but on the whole experimental setting.
Task fidelity for robot-mediated interaction in 3D worlds BIBAFull-Text 306-307
  Michael Vallance; Catherine Naamani; Malcolm Thomas
When designing any task, it is important to consider its difficulty for the intended learners. Task designers such as teachers and Higher Education practitioners need to provide tasks commensurate with the successful outcomes the learners are expected to achieve. In this paper we demonstrate how tasks can be quantified within the particular context of communicating the programming of robots in a 3D virtual world. Circuit Task Complexity and Robot Task Complexity are calculated alongside immersivity to determine a new metric for measuring tasks involving robots, which we have termed Task Fidelity.
First validation of a generic method for emotional body posture generation for social robots BIBAFull-Text 308-309
  Greet Van de Perre; Michaël Van Damme; Dirk Lefeber; Bram Vanderborght
Gestures for social robots are often preprogrammed off-line or generated by mapping motion capture data to the robot. Since these gestures are dependent on the robot's joint configuration, new joint trajectories to reach the desired postures need to be implemented when using a new robot platform with a different morphology. The method proposed here aims to minimize the workload when implementing gestures on a new robot platform and facilitate the sharing of gestures between different robots. The innovative aspect of this method is that it is constructed independently of any robot configuration, and therefore it can be used to generate gestures for different robot platforms. To calculate a posture for a certain configuration, the developed method uses a set of target gestures listed in a database and maps them to that specific configuration. The method was validated on a series of configurations, including those of existing robots.
Learning interactions with and about robotic models BIBAFull-Text 310-311
  Igor M. Verner; Dan Cuperman
This paper considers learning interactions, in which the student inquires into a biological system and develops its representation in the form of a robotic model using the PicoCricket robot construction kit. Our study follows up on the learning interactions and students' perceptions of the modeling experience.
Puzzling exercises for spatial training with robot manipulators BIBAFull-Text 312-313
  Igor M. Verner; Dalia Leibowitz; Sergei Gamer
Robot operation requires spatial reasoning and can be used for spatial training. This paper proposes exercises for training spatial skills through operating robot manipulators in virtual and real environments. The exercises were implemented in a robotics workshop, and the follow-up indicated a significant advance of the participants in performing mental rotation tasks.
Increasing the expressivity of humanoid robots with variable gestural expressions BIBAFull-Text 314-315
  Andre Viergutz; Tamara Flemisch; Raimund Dachselt
This work aims to establish more variable, and thus more engaging, communication with humanoids. For this purpose we developed a method of dynamically expressing emotions and intentions associated with parameterised gestures. First, gestures and the parameters necessary for generating a whole-body gesture set were analyzed. A gesture's inner and outer expressivity are thereby defined, which leads to the differentiation of single gestures and variable gestural expressions. Gestures are subsequently classified into (feedback) categories and related depending on their expressivity. We developed a gesture set consisting of six categories including over 50 variable gestures. As proof of concept, the gesture set has been implemented to allow for an easy and flexible authoring process of gesture-supported communication.
Field study: can androids be a social entity in the real world? BIBAFull-Text 316-317
  Miki Watanabe; Kohei Ogawa; Hiroshi Ishiguro
In this paper, we discuss an autonomous android robot that is recognized as a social entity by observers in the real world. We conducted field experiments to investigate what kind of functions the android needs in order to be recognized as a social and humanlike entity by visitors. In the field experiment, the android conversed with multiple visitors by using several touch displays. The results showed that visitors evaluated the android as humanlike even though this interaction differs from normal human-human interaction. Moreover, the experiment suggested that the android has a social influence -- an advertising effect -- on visitors.
Designing a service robot for public space: an "action and experiences" -- approach BIBAFull-Text 318-319
  Astrid Weiss; Markus Bader; Markus Vincze; Gert Hasenhütl; Stefan Moritsch
When we think of service robots for public spaces, we often consider human-like systems that socially interact with users in short-term and dynamically changing scenarios. Many design assumptions exist for this type of robot, but what about a service robot that needs to interact with people in a very task/role-oriented manner? This paper presents a research-through-design approach, which explores the idea of a "luggage-carrying robot guide" for train stations that supports travellers. In contrast to the classical paper structure of requirements, design, implementation, and evaluation of a robot, this work presents an exploratory design study. An interdisciplinary team composed of two industrial designers, a roboticist, and a social scientist performed a study with users to establish a design space for the aforementioned service robot.
Don't bother me: users' reactions to different robot disturbing behaviors BIBAFull-Text 320-321
  Astrid Weiss; Markus Vincze; Paul Panek; Peter Mayer
When living together in a household with a socially assistive robot, it can happen that the robot disturbs its owner by offering a service. One might argue that a social robot should act according to the social norms people expect of each other, but even then, disturbances of daily routines remain a challenging endeavor to address. We conducted a preliminary user study in which we explored four different disturbing behaviors shown by the socially assistive robot HOBBIT while the user was focusing on a different primary task. We used the BEHAVE measurement set to evaluate the attitudinal and behavioral responses of the users: which disturbance distracted the user the most from his/her primary task, and how the disturbance affected the overall attitudinal response towards the robot. Interestingly, our results showed that the disturbing behavior did not heavily impact the assessment of the robot negatively, and that not all types of disturbance distracted the users with the same intensity.
Involuntary expression of embodied robot adopting goose bumps BIBAFull-Text 322-323
  Tomoko Yonezawa; Xiaoshun Meng; Naoto Yoshida; Yukari Nakatani
In this paper, we propose an involuntary expression for embodied robots by adopting goose bumps. The goose bumps are caused not only by external stimuli such as cold temperature but also by the internal state of the robot, such as fear. For more natural anthropomorphism, the combination of involuntary and voluntary expressions should enable realistic animacy and life-like agency. The bumps on the robot's skin are generated by changing the lengths of thin rods protruding from each hole. The lengths are controlled by a servo motor that pulls nylon strings connected to the base of the thin rods.
Social exploration: Mars rovers BIBAFull-Text 324-325
  Karolina Zawieska; Brian R. Duffy
Mars rovers are an extension of the human body, bringing human presence to another planet. In addition to human-centric senses such as vision and hearing, they also extend human cognition into unknown spaces. This paper argues that Mars rovers should embrace not only the embodied but also the social aspect of a human being, and judiciously embrace anthropomorphism in rover design, in order to augment our exploration of distant locations.

Demonstration session

Spontaneous spoken dialogues with the furhat human-like robot head BIBAFull-Text 326
  Samer Al Moubayed; Jonas Beskow; Gabriel Skantze
Furhat [1] is a robot head that deploys a back-projected animated face that is realistic and human-like in anatomy. Furhat relies on a state-of-the-art facial animation architecture allowing accurate synchronized lip movements with speech, and the control and generation of non-verbal gestures, eye movements and facial expressions.
   Furhat is built to study, implement and validate patterns and models of human-human and human-machine situated and multi-party multimodal communication, a study that demands the co-presence of the talking head in the interaction environment, something that cannot be achieved using virtual avatars displayed on flat screens [2,3]. In Furhat, the animated face is back-projected on a translucent mask that is a printout of the animated model. The mask is then rigged on a 2DOF neck to allow for the control of head movements. Figure 1 shows a snapshot of Furhat in interaction.
   We will show in this demonstrator an advanced multimodal and multiparty spoken conversational system using Furhat, a robot head based on projected facial animation. Furhat is an anthropomorphic robot head that utilizes facial animation for physical robot heads using back-projection. In the system, multimodality is enabled using speech and rich visual input signals such as multi-person real-time face tracking and microphone tracking. The demonstrator will showcase a system that is able to carry out social dialogue with multiple interlocutors simultaneously with rich output signals such as eye and head coordination, lip-synchronized speech synthesis, and non-verbal facial gestures used to regulate fluent and expressive multiparty conversations. The dialogue design is performed using the IrisTK [4] dialogue authoring toolkit developed at KTH. The system will also be able to act as a moderator in a quiz game, showing different strategies for regulating spoken situated interactions.
A new creation environment for learning through interaction with robots BIBAFull-Text 327
  Hye-Kyung Cho; Jae-Sung Ryu; Hyo-Yong Kim; Dong-Hoon Lee; Yong-Gyu Jin; Jung-Yun Sung; Hyun-Sung Jung; Soo-Hee Han; Sang-Hoon Ji
This demonstration introduces SiCi (Smart ideas for Creative interplay), which brings a single-body robot to life by delivering a variety of content and fosters children's creativity and innovation in education.
Easy authoring of variable gestural expressions for a humanoid robot BIBAFull-Text 328
  Tamara Flemisch; André Viergutz; Raimund Dachselt
This demonstration on the Nao robot selects behavior based on a feedback type and an expressivity value to generate and execute appropriate feedback. It aims to ease the authoring process by using variable gestures.
The receptionist robot BIBAFull-Text 329
  Patrick Holthaus; Sven Wachsmuth
In this demonstration, a humanoid robot interacts with an interlocutor through speech and gestures in order to give directions on a map. The interaction is specifically designed to provide an enhanced user experience by being aware of non-verbal social signals. Therefore, we take spatial communicative cues into account and react to them accordingly.
Demonstration of emotion modeling using Flobi BIBAFull-Text 330
  Andreas Kipp; Birte Carlmeyer; Oliver Damm
Social robots are designed to interact in a more human and intuitive manner. In this hands-on demo we present an emotional system that is capable of producing human-like emotional expressions. Input given via touchscreen results in a real-time facial expression on a physical and a simulated robot head.
Reality check!: a physical robot versus its simulation BIBAFull-Text 331
  Florian Lier; Simon Schulz; Sven Wachsmuth
Simulated environments are the most frequently used test environments for robotic systems, often owing to their cost and availability advantages. The crucial question is: how precisely must a simulation match the real world in order to produce realistic results? In this demonstration the humanoid robot head Flobi, as well as its simulation, will react to diverse visual stimuli in order to indicate visual attention, which is an important cue in social interaction. We directly compare the behavior of the virtual robot, triggered by simulated stimuli, to the physical robot, triggered by real-world stimuli. We are thereby able to assess and demonstrate the current limitations and advantages of our simulation compared to real-world interactions.
A new dimension for RoboCup @home: human-robot interaction between virtual and real worlds BIBAFull-Text 332
  Jeffrey Too Chuan Tan; Tetsunari Inamura; Yoshinobu Hagiwara; Komei Sugiura; Takayuki Nagai; Hiroyuki Okada
This work proposes a new approach to realize embodied and multimodal HRI between a virtual robot and a real-world human for HRI challenges in RoboCup @Home.

Keynote

Socially assistive robotics: human-robot interaction methods for creating robots that care BIBAFull-Text 333
  Maja Mataric
Socially assistive robotics (SAR) is a new subfield of robotics that bridges HRI, rehabilitation robotics, social robotics, and service robotics. SAR focuses on developing machines capable of assisting users, typically in health and education contexts, through social rather than physical interaction. The robot's physical embodiment is at the heart of SAR's effectiveness, as it leverages the inherently human tendency to engage with lifelike (but not necessarily humanlike or otherwise biomimetic) social behavior. This talk will describe research into embodiment, modeling and steering social dynamics, and long-term user adaptation for SAR. The research will be grounded in projects involving analysis of multi-modal activity data, modeling personality and engagement, formalizing social use of space and non-verbal communication, and personalizing the interaction with the user over a period of months. The presented methods and algorithms will be validated on implemented SAR systems evaluated by human subject cohorts from a variety of user populations, including stroke patients, children with autism spectrum disorder, and elderly people with Alzheimer's and other forms of dementia.

Social behaviour generation

Meet me where i'm gazing: how shared attention gaze affects human-robot handover timing BIBAFull-Text 334-341
  AJung Moon; Daniel M. Troniak; Brian Gleeson; Matthew K. X. J. Pan; Minhua Zeng; Benjamin A. Blumer; Karon MacLean; Elizabeth A. Croft
In this paper we provide empirical evidence that using humanlike gaze cues during human-robot handovers can improve the timing and perceived quality of the handover event. Handovers serve as the foundation of many human-robot tasks. Fluent, legible handover interactions require appropriate nonverbal cues to signal handover intent, location and timing. Inspired by observations of human-human handovers, we implemented gaze behaviors on a PR2 humanoid robot. The robot handed over water bottles to a total of 102 naïve subjects while varying its gaze behaviour: no gaze, gaze designed to elicit shared attention at the handover location, and the shared attention gaze complemented with a turn-taking cue. We compared subject perception of and reaction time to the robot-initiated handovers across the three gaze conditions. Results indicate that subjects reach for the offered object significantly earlier when a robot provides a shared attention gaze cue during a handover. We also observed a statistical trend of subjects preferring handovers with turn-taking gaze cues over the other conditions. Our work demonstrates that gaze can play a key role in improving user experience of human-robot handovers, and help make handovers fast and fluent.
Robot deictics: how gesture and context shape referential communication BIBAFull-Text 342-349
  Allison Sauppé; Bilge Mutlu
As robots collaborate with humans in increasingly diverse environments, they will need to effectively refer to objects of joint interest and adapt their references to various physical, environmental, and task conditions. Humans use a broad range of deictic gestures -- gestures that direct attention to collocated objects, persons, or spaces -- that include pointing, touching, and exhibiting to help their listeners understand their references. These gestures offer varying levels of support under different conditions, making some gestures more or less suitable for different settings. While these gestures offer a rich space for designing communicative behaviors for robots, a better understanding of how different deictic gestures affect communication under different conditions is critical for achieving effective human-robot interaction. In this paper, we seek to build such an understanding by implementing six deictic gestures on a humanlike robot and evaluating their communicative effectiveness in six diverse settings that represent physical, environmental, and task conditions under which robots are expected to employ deictic communication. Our results show that gestures which come into physical contact with the object offer the highest overall communicative accuracy and that specific settings benefit from the use of particular types of gestures. Our results highlight the rich design space for deictic gestures and inform how robots might adapt their gestures to the specific physical, environmental, and task conditions.
Evaluating directional cost models in navigation BIBAFull-Text 350-357
  Thibault Kruse; Alexandra Kirsch; Harmish Khambhaita; Rachid Alami
A common approach to social distancing in robot navigation is to use spatial cost functions around humans that cause the robot to prefer paths that do not come too close to them. However, in unpredictably dynamic scenarios, following such paths may produce robot behavior that appears confused. The concept of directional costs in cost functions [9] is intended to alleviate this problem without incurring the combinatorial explosion of temporal planning. With directional cost functions, a robot attempts to solve spatial conflicts by adjusting its velocity instead of its path, where possible. To complement results from simulations, in this paper we describe a user study we conducted with a PR2 robot and human participants to evaluate the new cost function type. The study shows that the real robot behavior is similar to the observations in simulation, and that participants rate the robot behavior as less confusing with the adapted cost model. The study also reveals other important behavior cues that can influence motion legibility.
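The paper's actual cost model [9] is not reproduced in the abstract; as a generic illustration of a direction-aware spatial cost, the following sketch inflates a Gaussian cost around a human in the direction the human is heading, so a planner would prefer passing behind rather than cutting across the path (all parameters hypothetical):

```python
# Hypothetical sketch: a social navigation cost around a human that decays
# with distance and is larger in the human's heading direction.
import math

def directional_cost(robot_xy, human_xy, human_heading,
                     amplitude=1.0, sigma=1.5, front_gain=2.0):
    dx = robot_xy[0] - human_xy[0]
    dy = robot_xy[1] - human_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return amplitude * (1.0 + front_gain)
    # Cosine of the angle between the human's heading and the direction to
    # the robot: +1 directly in front of the human, -1 directly behind.
    frontness = (math.cos(human_heading) * dx + math.sin(human_heading) * dy) / dist
    scale = 1.0 + front_gain * max(0.0, frontness)
    return amplitude * scale * math.exp(-dist ** 2 / (2 * sigma ** 2))

h = (0.0, 0.0)
in_front = directional_cost((1.0, 0.0), h, 0.0)   # human heading along +x
behind   = directional_cost((-1.0, 0.0), h, 0.0)
print(in_front > behind)  # -> True
```

A path planner summing such costs over candidate paths would penalize positions ahead of a moving person more than positions behind, which is the qualitative effect directional cost models aim for.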
Communication of intent in assistive free flyers BIBAFull-Text 358-365
  Daniel Szafir; Bilge Mutlu; Terrence Fong
Assistive free-flyers (AFFs) are an emerging robotic platform with unparalleled flight capabilities that appear uniquely suited to exploration, surveillance, inspection, and telepresence tasks. However, unconstrained aerial movements may make it difficult for colocated operators, collaborators, and observers to understand AFF intentions, potentially leading to difficulties understanding whether operator instructions are being executed properly or to safety concerns if future AFF motions are unknown or difficult to predict. To increase AFF usability when working in close proximity to users, we explore the design of natural and intuitive flight motions that may improve AFF abilities to communicate intent while simultaneously accomplishing task goals. We propose a formalism for representing AFF flight paths as a series of motion primitives and present two studies examining the effects of modifying the trajectories and velocities of these flight primitives based on natural motion principles. Our first study found that modified flight motions might allow AFFs to more effectively communicate intent and, in our second study, participants preferred interacting with an AFF that used a manipulated flight path, rated modified flight motions as more natural, and felt safer around an AFF with modified motion. Our proposed formalism and findings highlight the importance of robot motion in achieving effective human-robot interactions.
Familiarization to robot motion BIBAFull-Text 366-373
  Anca Dragan; Siddhartha Srinivasa
We study the effect of familiarization on the predictability of robot motion. Predictable motion is motion that matches the observer's expectation. Because of the difficulty robots have in learning motion from user demonstrations, we explore the idea of having users learn from robot demonstrations -- how accurate do users get at predicting how the robot will move? We find that although familiarization significantly increases predictability, its utility depends on how natural the motion is. Overall, familiarization shows great promise, but needs to be combined with other methods that generate appropriate motion with which to be familiarized.

Living with robots

A mixed-method approach to evoke creative and holistic thinking about robots in a home environment BIBAFull-Text 374-381
  Praminda Caleb-Solly; Sanja Dogramadzi; David Ellender; Tina Fear; Herjan van den Heuvel
Discovering older adults' perceptions and expectations of domestic care service robots is vital for informing the design and development of new technologies to ensure acceptability and usability. This paper identifies issues elicited from older adults using different methods to promote creative thinking about domestic robots at an emotional as well as a pragmatic level. These included exploring people's ideal embodiment preferences and requirements for a domestic care service robot, as well as which embodiments and functional aspects would not be acceptable. We analysed our findings using relevant constructs from the Unified Theory of Acceptance and Use of Technology and the Technology Acceptance Model. In addition to some already well-established findings, we discovered some surprising aspects concerning interaction, behaviour, and appearance, and the robot's ability to fit the relevant context, both physically and conceptually.
Social engagement in public places: a tale of one robot BIBAFull-Text 382-389
  Lilia Moshkina; Susan Trickett; J. Gregory Trafton
In this paper, we describe a large-scale (over 4000 participants) observational field study at a public venue, designed to explore how social a robot needs to be for people to engage with it. In this study we examined a prediction of the Computers Are Social Actors (CASA) framework: the more machines present human-like characteristics in a consistent manner, the more likely they are to invoke a social response. Our humanoid robot's behavior varied in the amount of social cues, from no active social cues, to increasing levels of social cues during story-telling, to human-like game-playing interaction. We found strong support for CASA in several respects: a robot that provides even minimal social cues (speech) is more engaging than a robot that does nothing, and the more human-like the robot behaved during story-telling, the more social engagement was observed. However, contrary to the prediction, the robot's game-playing did not elicit more engagement than other, less social behaviors.

Keynote

Automatic analysis of facial expressions BIBAFull-Text 390
  Maja Pantic
Facial behaviour is our preeminent means of communicating affective and social signals. This talk discusses a number of components of human facial behaviour, how they can be automatically sensed and analysed by computers, the past research in the field conducted by the iBUG group at Imperial College London, and how far we are from enabling computers to sense and recognise human facial expressions and behaviour.

Robot teachers and learners

Spatial and other social engagement cues in a child-robot interaction: effects of a sidekick BIBAFull-Text 391-398
  Marynel Vázquez; Aaron Steinfeld; Scott E. Hudson; Jodi Forlizzi
In this study, we explored the impact of a co-located sidekick on child-robot interaction. We examined child behaviors while interacting with an expressive furniture robot and his robot lamp sidekick. The results showed that the presence of a sidekick did not alter child proximity, but did increase attention to spoken elements of the interaction. This suggests the addition of a co-located sidekick has potential to increase engagement but may not alter subtle physical interactions associated with personal space and group spatial arrangements. The findings also reinforce existing research by the community on proxemics and anthropomorphism.
Telepresence robot helps children in communicating with teachers who speak a different language BIBAFull-Text 399-406
  Fumihide Tanaka; Toshimitsu Takahashi; Shizuko Matsuzoe; Nao Tazawa; Masahiko Morita
This study reports the advantages of using a child-operated telepresence robot system for the purpose of remote education. Video conferencing is already common in educational settings, where a foreign language is taught by a native teacher from a remote location; however, there is a serious issue in that children tend to have difficulties or freeze when facing teachers who speak a different language over a monitor. We hypothesize that a child-operated telepresence robot that offers physical participation and operability will help to address this issue. To investigate this hypothesis, we conduct a field experiment with 52 participants (4-8 years old) in classroom environments, and the use of a telepresence robot system is compared with a baseline Skype condition. The results show the advantages of the telepresence robot system for both children and teachers.
Adaptive emotional expression in robot-child interaction BIBAFull-Text 407-414
  Myrthe Tielman; Mark Neerincx; John-Jules Meyer; Rosemarijn Looije
Expressive behaviour is a vital aspect of human interaction. A model for adaptive emotion expression was developed for the Nao robot. The robot has internal arousal and valence values, which are influenced by the emotional state of its interaction partner and by emotional occurrences such as winning a game. It expresses these emotions through its voice, posture, whole-body poses, eye colour, and gestures. An experiment with 18 children (mean age 9) and two Nao robots was conducted to study the influence of adaptive emotion expression on the interaction behaviour and opinions of children. In a within-subjects design, the children played a quiz with both an affective robot using the model for adaptive emotion expression and a non-affective robot without this model. The affective robot reacted to the emotions of the child using the implementation of the model; the emotions of the child were interpreted by a Wizard-of-Oz operator. The dependent variables, namely the behaviour and opinions of the children, were measured through video analysis and questionnaires. The results show that children react more expressively and more positively to a robot which adaptively expresses itself than to a robot which does not. The feedback of the children in the questionnaires further suggests that showing emotion through movement is considered a very positive trait for a robot. From their positive reactions we can conclude that children enjoy interacting with a robot which adaptively expresses itself through emotion and gesture more than with a robot which does not.
Effects of social presence and social role on help-seeking and learning BIBAFull-Text 415-422
  Iris Howley; Takayuki Kanda; Kotaro Hayashi; Carolyn Rosé
The unique social presence of robots can be leveraged in learning situations to reduce student evaluation anxiety, while still providing instructional guidance on multiple levels of communication. Furthermore, the social role of the instructor can also affect the prevalence of evaluation apprehension. In this study, we examine how human and robot social role affects help-seeking behaviors and learning outcomes in a one-on-one tutoring setting. Our results show that help-seeking is a moderator of the significant relationship between condition and learning, with the "human teacher" condition resulting in significantly less learning (and marginally less help-seeking) than the "human assistant" and both robot conditions.
Personalizing robot tutors to individuals' learning differences BIBAFull-Text 423-430
  Daniel Leyzberg; Samuel Spaulding; Brian Scassellati
In education research, there is a widely-cited result called "Bloom's two sigma" that characterizes the differences in learning outcomes between students who receive one-on-one tutoring and those who receive traditional classroom instruction. Tutored students scored in the 95th percentile, or two sigmas above the mean, on average, compared to students who received traditional classroom instruction. In human-robot interaction research, however, there is relatively little work exploring the potential benefits of personalizing a robot's actions to an individual's strengths and weaknesses. In this study, participants solved grid-based logic puzzles with the help of a personalized or non-personalized robot tutor. Participants' puzzle solving times were compared between two non-personalized control conditions and two personalized conditions (n=80). Although the robot's personalizations were less sophisticated than what a human tutor can do, we still witnessed a "one-sigma" improvement (68th percentile) in post-tests between treatment and control groups. We present these results as evidence that even relatively simple personalizations can yield significant benefits in educational or assistive human-robot interactions.
Teaching people how to teach robots: the effect of instructional materials and dialog design BIBAFull-Text 431-438
  Maya Cakmak; Leila Takayama
Allowing end-users to harness the full capability of general-purpose robots requires giving them powerful tools. As the functionality of these tools increases, learning how to use them becomes more challenging. In this paper we investigate the use of instructional materials to support the learnability of a Programming by Demonstration tool. We develop a system that allows users to program complex manipulation skills on a two-armed robot through a spoken dialog interface and by physically moving the robot's arms. We present a user study (N=30) in which participants are left alone with the robot and a user manual, without any prior instructions on how to program the robot; instead, they are asked to figure it out on their own. We investigate the effect of providing users with an additional written tutorial or an instructional video. We find that videos are most effective in training the user; however, this effect might be superficial, and ultimately trial-and-error plays an important role in learning to program the robot. We also find that tutorials can be problematic when the interaction has uncertainty due to speech recognition errors. Overall, the user study demonstrates the effectiveness and learnability of our system, while providing useful feedback about the dialog design.

Motivation and assistive robotics

Which robot behavior can motivate children to tidy up their toys?: design and evaluation of "ranger" BIBAFull-Text 439-446
  Julia Fink; Séverin Lemaignan; Pierre Dillenbourg; Philippe Rétornaz; Florian Vaussard; Alain Berthoud; Francesco Mondada; Florian Wille; Karmen Franinovic
We present the design approach and evaluation of our prototype called "Ranger". Ranger is a robotic toy box that aims to motivate young children to tidy up their room. We evaluated Ranger in 14 families with 31 children (2-10 years) using the Wizard-of-Oz technique. This case study explores two different robot behaviors (proactive vs. reactive) and their impact on children's interaction with the robot and on their tidying behavior. The analysis of the video-recorded scenarios shows that the proactive robot tended to encourage more playful and explorative behavior in children, whereas the reactive robot triggered more tidying behavior. Our findings hold implications for the design of interactive robots for children, and may also serve as an example of evaluating an early version of a prototype in a real-world setting.
Can two-player games increase motivation in rehabilitation robotics? BIBAFull-Text 447-454
  Domen Novak; Aniket Nagle; Robert Riener
Rehabilitation robots have the potential to greatly improve motor rehabilitation. However, the patient must be properly motivated to actively participate in therapy. Several strategies have been suggested to improve patient motivation, but one element has not yet been explored: playing with other people. We designed a two-player rehabilitation game played by two people using two ARMin III robots. We tested three game modes: single-player (competing against a computer), competitive (competing against a human), and cooperative (cooperating with a human against a computer). All modes were played by 24 healthy subjects who filled out questionnaires about their personality and in-game motivation. Almost all subjects preferred playing the two-player game modes to the single-player one, as they enjoyed being able to talk and interact with another person. However, there were two distinct player groups. One group liked the competitive mode but not the cooperative mode while the other liked the cooperative but not the competitive mode. Subjects who liked the competitive mode also put more effort into it. Finally, subjects' personalities partially predicted what mode they would like. This emphasizes that two-player rehabilitation games have advantages over single-player ones, but that the right game needs to be chosen for each subject. An extended patient study is planned for the near future.
Extending myoelectric prosthesis control with shapable automation: a first assessment BIBAFull-Text 455-462
  Matthew Derry; Brenna Argall
For many users of myoelectric prostheses, a set of functionality remains out of reach with current technology. In this work, we provide a first assessment of an extension to classical myoelectric prosthesis control approaches that introduces simple automation which can be shaped using EMG signals. The idea is not to replace classical techniques, but to introduce automation for tasks for which it is well suited, such as those that require the coordination of multiple degrees of freedom. A prototype system is developed in simulation, and an exploratory user study is performed to evaluate our proposed approach and provide guidance for future development. A comparison is made between different formulations of the shaping controls, as well as with a classical control paradigm. Results from the user study are promising, showing significant performance improvements when using the automated controllers as well as unanimous preference for their use on this task. Additionally, some questions about the optimal user interaction with the system are revealed. All of these results support continued development of the proposed approach, including more extensive user studies.
A remote social robot to motivate and support diabetic children in keeping a diary BIBAFull-Text 463-470
  Esther J. G. van der Drift; Robbert-Jan Beun; Rosemarijn Looije; Olivier A. Blanson Henkemans; Mark A. Neerincx
Children with diabetes can benefit from keeping a diary, but seldom keep one. Within the European ALIZ-E project a robot companion is being developed that, among other things, will be able to support and motivate diabetic children to keep a diary. This paper discusses the study of a robot supporting the use of an online diary. Diabetic children kept an online diary for two weeks, both with and without remote support from the robot via webcam. The effect of the robot was studied on children's use of the diary and their relationship with the robot. Results show that children shared significantly more personal experiences in their diaries when they were interacting with the robot. Furthermore, they greatly enjoyed working with the robot and came to see it as a helpful and supportive friend.

Proxemics

Destination unknown: walking side-by-side without knowing the goal BIBAFull-Text 471-478
  Ryo Murakami; Luis Yoichi Morales Saiki; Satoru Satake; Takayuki Kanda; Hiroshi Ishiguro
Walking side by side is a common situation when we go from one place to another with another person while talking. Our previous study reported a basic mechanism for side-by-side walking, but in the previous model it was crucial that each agent knew where he or she was going, i.e. the route to the destination. However, we have recognized the need to model the situation where one of the agents does not know the destination. The extended model presented in this work has two states: leader-follower state and collaborative state. Depending on whether the follower agent has obtained a reliable estimate of the route to follow, the walking agents transition between the two states. The model is calibrated with trajectories acquired from pairs of people walking side by side, and then it is tested in a human-robot interaction scenario. The results demonstrate that the new extended model achieves better side-by-side performance than a standard method without knowledge of the subgoal.
Let me tell you! investigating the effects of robot communication strategies in advice-giving situations based on robot appearance, interaction modality and distance BIBAFull-Text 479-486
  Megan Strait; Cody Canning; Matthias Scheutz
Recent proposals for how robots should talk to people when giving advice suggest that the same strategies humans employ with other humans are effective for robots as well. However, the evidence is based exclusively on people's observation of robots giving advice to other humans. Hence, it is not clear whether the results still apply when people actually participate in real interactions with robots. We address this shortcoming in a novel, systematic mixed-methods study in which we employ both survey-based subjective measures and brain-based objective measures (using functional near-infrared spectroscopy). The results show that previous results from observation conditions do not transfer automatically to interaction conditions, and that robot appearance and interaction distance are important modulators of human perceptions of robot behavior in advice-giving contexts.
May i talk about other shops here?: modeling territory and invasion in front of shops BIBAFull-Text 487-494
  Satoru Satake; Hajime Iba; Takayuki Kanda; Michita Imai; Yoichi Morales Saiki
This paper models the concept of the "territory" of shops. First, we interviewed three shopkeepers and found that they perceived the space near their shop as their territory and interpreted some types of behaviors as invasive. Second, we confirmed that potential visitors share this notion of territory. We also confirmed that the size of the territory depends on the characteristics of a shop's facade: there is little territory in front of walls, but more in front of shelves and entrances. Our robot traversed two real shopping malls containing 50 shops and took 3-D scans of their environment shapes. Each shop's facade was analyzed and the shop's territory computed. The computation results match people's perception, with recognition accuracy reaching 93.5% for the territory areas. User evaluations in a virtual shop environment confirmed that a robot with a territory model behaves better than one without it.

Workshops

Applications for emotional robots BIBFull-Text 495-496
  Oliver Damm; Christian Becker-Asano; Manja Lohse; Frank Hegel; Britta Wrede
Socially assistive robots for the aging population: are we trapped in stereotypes? BIBAFull-Text 497-498
  Astrid Weiss; Jenay Beer; Takanori Shibata; Markus Vincze
Robots caring for the older population in care facilities and at home is an ongoing theme in HRI research. Research projects on this topic exist all over the globe, in the USA, Europe, and Asia. All of these projects share the ambitious overall goal of increasing the well-being of older adults and enabling them to stay at home as long as possible. In this workshop we want to reflect on whether the HRI community is trapped in stereotypes when it comes to socially assistive robots for older adults. To that end, we want to gather and compare findings from user needs analyses and user evaluation studies, as well as interaction scenarios and functionalities of existing care robots. Do our results suggest similar scenarios? Do older end users in all countries have similar needs and desires when it comes to assistive robots? What are the challenges and opportunities for future assistive robots (perhaps those we develop for ourselves, for when we belong to the older population?), also on an ethical and legal level? In this workshop we want to escape the stereotype trap of what socially assistive robots should do. Can socially assistive robots solve the aging-population problem on a societal and individual level? Are older people in general opposed to technology? Will robotic helpers be accepted in the home as long as they pretend to be social actors?
Workshop on attention models in robotics: visual systems for better HRI BIBAFull-Text 499-500
  Michael Zillich; Simone Frintrop; Fiora Pirri; Ekaterina Potapova; Markus Vincze
Attention is a concept of human perception that enables humans to select the potentially relevant parts of the huge amount of incoming sensory data, and to interact with other humans by sharing attention with each other. These abilities are also of great interest for autonomous robots; therefore, interest in computationally modeling concepts of human attention has grown strongly in the robotics community during the last decade. Especially in human-robot interaction, the abilities to detect what a human partner is attending to, and to act in a similar way to enable intuitive communication, are important skills for a robotic system.
   Still, a gap in knowledge transfer remains between researchers in human attention and robotics researchers with their specific, often task-related, problems. Both communities can benefit from sharing ideas. In the workshop, researchers in visual and multi-modal attention can profit from the rapidly growing field of robotics, which offers new research questions with very concrete applicability to challenging problems. Robotics researchers can learn how to integrate attention to support natural and real-time HRI.
HRI: a bridge between robotics and neuroscience BIBAFull-Text 501-502
  Alessandra Sciutti; Katrin Lohan; Yukie Nagai
A fundamental challenge for robotics is to transfer natural human social skills to interaction with a robot. At the same time, neuroscience and psychology are still investigating the mechanisms behind the development of human-human interaction. HRI therefore becomes an ideal contact point for these different disciplines, as the robot can join the two research streams by serving different roles. From a robotics perspective, the study of interaction is used to implement cognitive architectures and develop cognitive models, which can then be tested in real-world environments. From a neuroscientific perspective, robots can represent an ideal stimulus for establishing an interaction with human partners in a controlled manner, making it possible to quantitatively study the behavioral and neural underpinnings of both cognitive and physical interaction. Ideally, the integration of these two approaches could lead to a positive loop: the implementation of new cognitive architectures may raise new and interesting questions for neuroscientists, and the behavioral and neuroscientific results of human-robot interaction studies could validate, or provide new input for, the work of robotics engineers. However, integrating two different disciplines is always difficult, as even similar goals are often masked by differences in language or methodology across fields. The aim of this workshop is to provide a venue for researchers of different disciplines to discuss and present possible points of contact, to address the issues, and to highlight the advantages of bridging the two disciplines in the context of the study of interaction.
Workshop on algorithmic human-robot interaction BIBAFull-Text 503
  Brenna Argall; Sonia Chernova; Kris Hauser; Chad Jenkins
Intelligent behavior in robots is implemented through algorithms. Historically, much of algorithmic robotics research strives to compute outputs that achieve mathematically rigid conditions, such as minimizing path length. But today's robots are increasingly being used to empower the daily lives of people, and experience shows that traditional algorithmic approaches are poorly suited for the unpredictable, idiosyncratic, and adaptive nature of human-robot interaction. This raises a need for entirely new computational, mathematical, and technical approaches for robots to better understand and react to humans. The human-friendly robots of the future will need new algorithms, informed from the ground up by HRI research, to generate interpretable, ethical, socially-acceptable behavior, ensure safety around humans, and execute tasks of value to society.
Cognitive architectures for human-robot interaction BIBAFull-Text 504-505
  Paul Baxter; J. Gregory Trafton
Developments in autonomous agents for Human-Robot Interaction (HRI), particularly social agents, are gathering pace. The typical approach in such efforts is to start with an application in a specific interaction context (problem, task, or aspect of interaction) and then try to generalise to different contexts. The application of Cognitive Architectures, by contrast, emphasises generality across contexts in the first instance. While not a "silver-bullet" solution, this perspective has a number of advantages, both in the functionality of the resulting systems and in the process of applying these ideas. Centred on invited talks presenting a range of perspectives, this workshop provides a forum to introduce and discuss the application (both existing and potential) of Cognitive Architectures to HRI, particularly in the social domain. Participants will gain insight into how such a consideration of Cognitive Architectures complements the development of autonomous social robots.
Humans and robots in asymmetric interactions BIBAFull-Text 506-507
  Anna-Lisa Vollmer; Lars Schillingmann; Katharina J. Rohlfing; Britta Wrede
Robots are not human. They may in some cases have a similar appearance, but they have different behavioral and cognitive strengths and limitations. In this sense, an interaction with a robot is asymmetric. When interacting with a robot, one is unsure what behavior to expect, as the appearance does not necessarily make the abilities of the robot transparent. Asymmetric interactions also occur between humans. For example, in an interaction with a child, adults have to adapt to the learner's capabilities and understanding. Similarly, in interactions with special populations such as persons with autism spectrum disorders (ASD), asymmetry occurs as specific information seems to be processed differently.
   In this half-day interdisciplinary workshop, we will discuss how partners cope with asymmetry to succeed in interaction. Our discussion will be motivated by linguistic and non-linguistic interaction strategies that are developed online in asymmetric human-human as well as human-robot interaction. People, for example, are often guided by their expectations of their interaction partners' abilities, which can lead to difficulties in asymmetric interactions with robots and even with other humans. Our aim is to determine factors as well as methods that can support communication in asymmetric interactions. By bringing together researchers working on asymmetric interaction in both human-human and human-robot settings, this workshop aims to develop novel views on interaction understanding and modeling, as well as insights into alignment processes.
Culture-aware robotics (CARs) BIBFull-Text 508
  Matthias Rehm; Maja J. Mataric; Bilge Mutlu; Tatsuya Nomura
Timing in human-robot interaction BIBAFull-Text 509-510
  Guy Hoffman; Maya Cakmak; Crystal Chao
Timing plays a role in a range of human-robot interaction scenarios, as humans are highly sensitive to timing and interaction fluency. It is central to spoken dialogue, with turn-taking, interruptions, and hesitation influencing both task efficiency and user affect. Timing is also an important factor in the interpretation and generation of gestures, gaze, facial expressions, and other nonverbal behavior. Beyond communication, temporal synchronization is functionally necessary for sharing resources and physical space, as well as coordinating multi-agent actions. Timing is thus crucial to the success of a broad spectrum of HRI applications, including but not limited to situated dialogue; collaborative manipulation; performance, musical, and entertainment robots; and expressive robot companions. Recent years have seen a growing interest in the HRI community in the various research topics related to human-robot timing. The purpose of this workshop is to explore and discuss theories, computational models, systems, empirical studies, and interdisciplinary insights related to the notion of timing, fluency, and rhythm in human-robot interaction.
Experimenting in HRI for priming real world set-ups, innovations and products BIBAFull-Text 511-512
  Paolo Barattini; Gurvinder Singh Virk; Nicole Mirnig; Maria Elena Giannaccini; Adriana Tapus; Fabio Bonsignorio
Robotics is moving towards real-world applications, beyond the well-structured environment of industrial robotics. In the world of assistive robots and medical robots, human-robot interaction is essential, and emerging industrial scenarios also require the human to be closely included in the loop. Companies are confronted with a lack of guidelines and standards on how the higher-level features of HRI may be safely incorporated. Although the scientific research is burgeoning and worthy of praise, its results are scattered and do not yet provide clear input that can easily be taken up by companies and standardization organizations such as ISO and IEC. The workshop aims at the integration of empirical findings into complex real-world robot systems by focusing on three typical sectors (industrial, service, and medical) to develop systematic approaches to benchmark and evaluate experimental systems, so that normative results can be realized rapidly. The workshop brings together scientists and representatives of robotics companies and standardization working groups to foster discussion on the definition of experimental scenarios and protocols in HRI, so as to be able to prime real-world set-ups and help realize the robotic products of the future.