
Proceedings of the 6th International Conference on Human-Robot Interaction

Fullname: HRI'11 Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction
Editors: Aude Billard; Peter Kahn; Julie A. Adams; Greg Trafton
Location: Lausanne, Switzerland
Dates: 2011-Mar-06 to 2011-Mar-09
Publisher: ACM
Standard No: ISBN 1-4503-0561-X, 978-1-4503-0561-7
Papers: 152
Pages: 506
  1. Tutorials & workshops
  2. Telepresence
  3. People and robots working together
  4. Anthropomorphic
  5. Life-like motion
  6. Panel
  7. Late-breaking reports/poster session
  8. Engagement
  9. Engagement and proxemics
  10. Humans teaching robots
  11. Multi-robot control
  12. Video session
  13. Ontologies
  14. User preferences
  15. Robot touch
  16. Nonverbal interaction

Tutorials & workshops

Tutorial: brain mediated human-robot interaction, pp. 1-2
  Jose del R. Millan; Ricardo Chavarriaga
The use of brain-generated signals for human-robot interaction has gained increasing attention in recent years. Indeed, brain-controlled robots can potentially be employed to substitute for lost motor capabilities (e.g. brain-controlled prosthetics for amputees or patients with spinal cord injuries), to help restore such functions (e.g. as a tool for stroke rehabilitation), and to serve non-clinical applications like telepresence or entertainment. This half-day tutorial gives an introduction to the field of brain-computer interfaces and presents several design principles required to successfully employ them for robot control.
Robots with children: practices for human-robot symbiosis, pp. 3-4
  Naomi Miyake; Hiroshi Ishiguro; Kerstin Dautenhahn; Tatsuya Nomura
When considering the symbiosis of humans and robots, its benefits and risks should be weighed for persons in weaker positions in society, in particular children. At the same time, several robotics applications have been developed for children, including education and welfare. At this stage, it is important that more researchers from interdisciplinary fields, including robotics, computer science, psychology, sociology, and pedagogy, share an opportunity to discuss the potential of "robots with children". This half-day workshop aims to provide a forum where researchers from these fields discuss how the symbiosis of robots and children should and can be realized, from the perspectives of engineering, psychology, education, and welfare.
Social robotic telepresence, pp. 5-6
  Silvia Coradeschi; Amy Loutfi; Annica Kristoffersson; Gabriella Cortellessa; Kerstin Severinson Eklundh
Robotic telepresence, also known as telerobotics, is a subfield of telepresence whose aim is to increase presence via embodiment in a robotic platform. In particular, robotic telepresence can be an effective tool to enhance social interaction for certain groups of users such as the elderly. The aim of this workshop is to address various aspects important for social robotic telepresence, which include, but are not limited to, (1) mechanical design, (2) user interface design, (3) the interaction between the remotely embodied person and the locally present person, and (4) the perception of social robotic telepresence systems. Furthermore, we are interested in discovering the added value of spatial presence in the context of social telepresence, and comparisons between robotic and non-robotic systems are of interest. We welcome contributions concerning results in the above-mentioned areas, user evaluations and methodologies, as well as reports on the deployment of social robotic solutions in real-world contexts.
The role of expectations in intuitive human-robot interaction, pp. 7-8
  Verena Hafner; Manja Lohse; Joachim Meyer; Yukie Nagai; Britta Wrede
Human interaction is highly intuitive: we infer the reactions of our interaction partners mainly from what we have learned in years of experience, and we often assume that other people have the same knowledge about certain situations, abilities, and expectations as we do. In human-robot interaction (HRI) we cannot take this for granted, since HRI is asymmetrical: robots have different abilities, knowledge, and expectations than humans, and they need to react appropriately to human expectations and behaviour. In this respect, scientific advances have been made to date for applications in entertainment and service robotics that largely depend on intuitive interaction. However, HRI today is often still unnatural, slow, and unsatisfactory for the human interlocutor. Both the sensorimotor interaction with the environment and the interlocutor, and the social aspects of the interaction, still need to be researched and improved. Therefore, this full-day workshop aims to bring together researchers from different scientific fields to discuss these crosscutting issues and to exchange views on the preconditions and principles of intuitive interaction.
HRI pioneers workshop 2011, pp. 9-10
  Thomas Kollar; Astrid Weiss; Jason Monast; Anja Austermann; David Lu; Mitesh Patel; Elena Gribovskaya; Chandan Datta; Richard Kelley; Hirotaka Osawa; Lanny Lin
The 2011 HRI Pioneers Workshop will be conducted in conjunction with the 2011 ACM/IEEE International Conference on Human-Robot Interaction (HRI). The 2011 HRI Pioneers Workshop will provide a forum for graduate students and postdocs to learn about the current state of HRI, to present their work and to network with one another and with select senior researchers in a setting that is less formal and more interactive than the main conference. Workshop participants will discuss important issues and open challenges in the field, encouraging the formation of collaborative relationships across disciplines and geographic boundaries.

Telepresence

Exploring use cases for telepresence robots, pp. 11-18
  Katherine M. Tsui; Munjal Desai; Holly A. Yanco; Chris Uhlik
Telepresence robots can be thought of as embodied video conferencing on wheels. Companies producing these robots imagine them being used in a wide variety of situations (e.g., ad-hoc conversations at the office, inspections and troubleshooting at factories, and patient rounds at medical facilities). In July and August 2010, we examined office-related use cases in a series of studies using two prototype robots (Anybots' QB and VGo Communications' VGo). In this paper, we present two studies: conference room meetings (n=6) and moving hallway conversations (n=24). We discuss who might benefit from using telepresence robots, in what scenarios, and the features that telepresence robots must incorporate for use in ad-hoc interactions.
Mobile remote presence systems for older adults: acceptance, benefits, and concerns, pp. 19-26
  Jenay M. Beer; Leila Takayama
While much of human-robot interaction research focuses upon people interacting with autonomous robots, there is also much to be gained from exploring human interpersonal interaction through robots. The current study focuses on mobile remote presence (MRP) systems as used by a population who could potentially benefit from more social connectivity and communication with remote people -- older adults. Communication technologies are important for ensuring safety, independence, and social support for older adults, thereby potentially improving their quality of life and maintaining their independence [24]. However, before such technologies would be accepted and used by older adults, it is critical to understand their perceptions of the benefits, concerns, and adoption criteria for MRP systems. As such, we conducted a needs assessment with twelve volunteer participants (ages 63-88), who were given first-hand experience with both meeting a visitor via the MRP system and driving the MRP system to visit that person. The older adult participants identified benefits such as being able to see and be seen via the MRP system, reducing travel costs and hassles, and reducing social isolation. Among the concerns identified were etiquette of using the MRP, personal privacy, and overuse of the system. Some new use-cases were identified that have not yet been explored in prior work, for example, going to museums, attending live performances, and visiting friends who are hospitalized. The older adults in the current study preferred to operate the MRP themselves, rather than to be visited by others operating the MRP system. More findings are discussed in terms of their implications for design.
Projector robot for augmented children's play, pp. 27-28
  Jong-gil Ahn; Hyeonsuk Yang; Gerard J. Kim; Namgyu Kim; Kyoung Choi; Hyemin Yeon; Eunja Hyun; Miheon Jo; Jeonghye Han
Participating in a play is an integral part of the curriculum for young children at nurseries and kindergartens. At the same time, it is not easy to successfully run and manage a play for young children because of their young age and immaturity: scripts are difficult to memorize, and children's attention span is quite short. We are exploring the use of a robot and augmented reality (AR) technology to assist nursery teachers, in hopes of alleviating the difficult and complicated task of running the play, and also as a way to increase the learning effect by promoting concentration and immersion (through the presence of the robot and the novelty of the augmented display) [1, 2, 3]. For this purpose, we have devised a semi-autonomous, remote-controlled projector robot with the capabilities of background projection and control, generating the synthesized augmented view, camera/movement control, and producing story narration and various special effects. We recently deployed the robot assistant for a play ('Three Little Pigs') at an actual nursery to observe and investigate various aspects of human-robot interaction. For instance, the robot interacts with the actors on stage, leading and guiding them by showing (on a small display on the robot) the synthesized augmented view and script guidance, and by putting up and changing the backdrop projection. It also assumes the role of the "camera man" and may engage in minute interplay with the actors as it zooms in and out on them (by remote control). Our initial observations indicated that the use of the robot and AR shows very high potential for drawing the attention of the children and enhancing the educational effect, but requires the right amount of autonomy and external control and an intuitive interface.

People and robots working together

Improved human-robot team performance using Chaski, a human-inspired plan execution system, pp. 29-36
  Julie Shah; James Wiken; Brian Williams; Cynthia Breazeal
We describe the design and evaluation of Chaski, a robot plan execution system that uses insights from human-human teaming to make human-robot teaming more natural and fluid. Chaski is a task-level executive that enables a robot to collaboratively execute a shared plan with a person. The system chooses and schedules the robot's actions, adapts to the human partner, and acts to minimize the human's idle time.
   We evaluate Chaski in human subject experiments in which a person works with a mobile and dexterous robot to collaboratively assemble structures using building blocks. We measure team performance outcomes for robots controlled by Chaski compared to robots that are verbally commanded step-by-step by the human teammate. We show that Chaski reduces the human's idle time by 85%, a statistically significant difference. This result supports the hypothesis that human-robot team performance is improved when a robot emulates the effective coordination behaviors observed in human teams.
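   Chaski's actual executive is more sophisticated than the abstract can convey; the following is only a minimal sketch of the core idea of scheduling the robot's actions so the human waits as little as possible, with all task names and durations invented for illustration (Python):

      # Minimal sketch (not Chaski itself): a task-level executive that
      # starts robot fetch actions in the order the human will need their
      # results, so the human's idle time is kept low. All task names and
      # durations are hypothetical.
      robot_actions = {"fetch_block_A": 4.0, "fetch_block_B": 6.0, "fetch_block_C": 2.0}
      # Human plan: (action, duration, robot action whose result it needs).
      human_plan = [("place_A", 3.0, "fetch_block_A"),
                    ("place_C", 2.0, "fetch_block_C"),
                    ("place_B", 3.0, "fetch_block_B")]

      robot_time, human_time, idle = 0.0, 0.0, 0.0
      done_at = {}
      for _, _, needed in human_plan:              # schedule fetches demand-first
          if needed not in done_at:
              robot_time += robot_actions[needed]
              done_at[needed] = robot_time

      for action, duration, needed in human_plan:
          start = max(human_time, done_at[needed])  # wait only if part not ready
          idle += start - human_time
          human_time = start + duration
      print(f"total human idle time: {idle:.1f} s")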
A conversational robot in an elderly care center: an ethnographic study, pp. 37-44
  Alessandra Maria Sabelli; Takayuki Kanda; Norihiro Hagita
This paper reports an ethnographic study on the use of a conversational robot. We placed a robot for 3.5 months in an elderly care center. Assuming a real deployment scenario, the robot was managed during the field trial by a single person, a non-programmer, who teleoperated the robot and updated its contents. The robot was designed to engage in daily greetings and chat with elderly people. Through the ethnographic approach, we clarify how the elderly people interacted with this conversational robot, how the deployment process used to introduce the robot was designed, and how the organization's personnel involved themselves in this deployment.
Evaluating the applicability of current models of workload to peer-based human-robot teams, pp. 45-52
  Caroline E. Harriott; Tao Zhang; Julie A. Adams
Human-robot peer-based teams are evolving from a far-off possibility into a reality. Human Performance Moderator Functions (HPMFs) can be used to predict human behavior by incorporating the effects of internal and external influences such as fatigue and workload. The applicability of HPMFs to human-robot teams, however, has not been proven. The presented research focuses on determining the applicability of workload HPMFs to team tasks, comparing a human-human and a human-robot team in a first-response mass-casualty triage incident. A model representing workload for each team was developed using IMPRINT Pro, and the results of an empirical evaluation were compared to the model results. While significant differences between the two conditions were not found in all data, there was a general trend that workload in the human-robot condition was slightly lower than in the human-human condition. This trend was predicted by the IMPRINT Pro models. These results are the first to indicate that existing HPMFs can be applied to human-robot peer-based teams.

Anthropomorphic

Interpersonal variation in understanding robots as social actors, pp. 53-60
  Kerstin Fischer
In this paper, I investigate interpersonal variation in verbal HRI with respect to the computers-as-social-actors hypothesis. The analysis of a corpus of verbal human-robot interactions shows that only a subgroup of the users treat the robot as a social actor. Thus, taking interpersonal variation into account reveals that not all users transfer social behaviors from human interactions into HRI. This casts doubt on the suggestion that the social responses to computers and robots reported previously are due to mindlessness. At the same time, participants' understanding of robots as social or non-social actors can be shown to have a considerable influence on their linguistic behavior throughout the dialogs.
Effects of anticipated human-robot interaction and predictability of robot behavior on perceptions of anthropomorphism, pp. 61-68
  Friederike Eyssel; Dieta Kuchenbrandt; Simon Bobinger
Recent research has shown that anthropomorphism represents a means to facilitate HRI. Under which conditions do people anthropomorphize robots and other nonhuman agents? This research question was investigated in an experiment that manipulated participants' anticipation of a prospective human-robot interaction (HRI) with a robot whose behavior was characterized by either low or high predictability. We examined effects of these factors on perceptions of anthropomorphism and acceptance of the robot. Innovatively, the present research demonstrates that anticipation of HRI with an unpredictable agent increased anthropomorphic inferences and acceptance of the robot. Implications for future research on psychological determinants of anthropomorphism are discussed.
Expressing thought: improving robot readability with animation principles, pp. 69-76
  Leila Takayama; Doug Dooley; Wendy Ju
The animation techniques of anticipation and reaction can help create robot behaviors that are human readable such that people can figure out what the robot is doing, reasonably predict what the robot will do next, and ultimately interact with the robot in an effective way. By showing forethought before action and expressing a reaction to the task outcome (success or failure), we prototyped a set of human-robot interaction behaviors. In a 2 (forethought vs. none: between) x 2 (reaction to outcome vs. none: between) x 2 (success vs. failure task outcome: within) experiment, we tested the influences of forethought and reaction upon people's perceptions of the robot and the robot's readability. In this online video prototype experiment (N=273), we have found support for the hypothesis that perceptions of robots are influenced by robots showing forethought, the task outcome (success or failure), and showing goal-oriented reactions to those task outcomes. Implications for theory and design are discussed.

Life-like motion

Spatiotemporal correspondence as a metric for human-like robot motion, pp. 77-84
  Michael J. Gielniak; Andrea L. Thomaz
Coupled degrees-of-freedom exhibit correspondence, in that their trajectories influence each other. In this paper we add evidence to the hypothesis that spatiotemporal correspondence (STC) of distributed actuators is a component of human-like motion. We demonstrate a method for making robot motion more human-like, by optimizing with respect to a nonlinear STC metric. Quantitative evaluation of STC between coordinated robot motion, human motion capture data, and retargeted human motion capture data projected onto an anthropomorphic robot suggests that coordinating robot motion with respect to the STC metric makes the motion more human-like. A user study based on mimicking shows that STC-optimized motion is (1) more often recognized as a common human motion, (2) more accurately identified as the originally intended motion, and (3) mimicked more accurately than a non-optimized version. We conclude that coordinating robot motion with respect to the STC metric makes the motion more human-like. Finally, we present and discuss data on potential reasons why coordinating motion increases recognition and ability to mimic.
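   The paper's nonlinear STC metric is not reproduced in the abstract; as a rough illustration of the underlying notion, that coupled degrees of freedom co-vary over time, coordination could be scored as the mean pairwise correlation of joint velocities. This is an assumed simplification, not the authors' metric (Python):

      import numpy as np

      def spatiotemporal_coupling(trajectories):
          """Mean absolute pairwise correlation of joint velocities.

          trajectories: (n_dofs, n_timesteps) array of joint angles.
          A crude stand-in for an STC metric: tightly coordinated
          motion scores near 1, independent joint motion near 0.
          """
          vel = np.diff(trajectories, axis=1)            # per-joint velocities
          corr = np.corrcoef(vel)                        # n_dofs x n_dofs
          pairs = corr[np.triu_indices_from(corr, k=1)]  # distinct joint pairs
          return float(np.mean(np.abs(pairs)))

      t = np.linspace(0, 2 * np.pi, 200)
      coordinated = np.vstack([np.sin(t), 0.5 * np.sin(t)])   # coupled joints
      print(spatiotemporal_coupling(coordinated))              # ~1.0
      print(spatiotemporal_coupling(np.random.randn(3, 200)))  # near 0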
An assistive tele-operated anthropomorphic robot hand: Osaka City University hand II, pp. 85-92
  Raafat Mahmoud; Atsushi Ueno; Shoji Tatsumi
This paper presents an anthropomorphic robot hand called Osaka-City-University-Hand II, an improved version of Osaka-City-University-Hand I. The appearance and function of the proposed hand allow it to be used as a prosthetic hand in addition to serving as an open platform for robot-hand research. Distributed tactile and force sensors are appended to the OCU-Hand II as a feedback system in order to grasp objects firmly. A new control strategy is adopted in which a lightweight master glove drives the OCU-Hand II as a slave. In the tele-operation task, arithmetic operations are applied to the outputs of the feedback sensors in order to increase the resolution of the master-slave driving technique and to overcome hardware amplification defects. To use the OCU-Hand II as helper equipment, a novel assistive mode is included within the master-slave control strategy, in which the OCU-Hand II assists its operator, reducing the operator's load and performing routine operations better or even faster than usual. During the assistive mode the operator can perform tasks other than master-slave driving while still wearing the master glove.
Effects related to synchrony and repertoire in perceptions of robot dance, pp. 93-100
  Eleanor Avrunin; Justin Hart; Ashley Douglas; Brian Scassellati
In this work we identify low-level aspects of robot motion that can be exploited to create impressions of agency and lifelikeness. In two experiments, participants view split-screen videos of multiple robots set to music and rate the robots on their dance ability, lifelikeness, and entertainment value. The first experiment tests the impact of the correspondence (or lack thereof) of the robot's motion to the underlying rhythm of the music, and the effect of matching changes in the robot's movement to changes in the music, such as a phrase of vocals or drumming. This motivates a second experiment which more deeply explores the relationships of asynchrony and changes in motion repertoire to participants' perceptions of the lifelikeness of the robot's motion. Findings indicate that perceptions of the lifelikeness of the robot and the quality of the dance can be manipulated by simple changes, such as variation in the repertoire of motions, coordination of changes in behavior with events in the music, and the addition of flaws to the robot's synchrony with the music.
Lighthead robotic face, pp. 101-102
  Frédéric Delaunay; Joachim de Greeff; Tony Belpaeme
This video presents a new kind of robotic head. By back-projecting computer-generated video onto a half-translucent mask, the LightHead robotic head has many advantages over traditional mechatronic robot faces, most notably the versatility and ease of controlling facial expressions and creating new faces, its low weight, and its low cost. By mounting the head on a robotic arm and equipping it with face detection software, the robot can interact with people in a natural manner.

Panel

HRI: the real world, pp. 103-104
  Jennifer L. Burke; Henrik I. Christensen; Roland Menassa; Ralf Koeppe; Joe Dyer; Mario Munich
This year's conference focuses on human-robot interaction in the real world. The panel discussion presents the view from those who are "living it": industry leaders who are relying on current robotic technology to accomplish their work right now. Who better to provide the most compelling information on issues/challenges that are influencing their use? This panel gathers experts from business and industry to discuss their experiences in using robotic technology in their field. Topics include the financial, organizational, and practical challenges faced by professionals using robotic technology in the workplace, and factors influencing the acceptance of robots at work. The implications for design of robotic products and systems are also discussed. The session is a must for professionals and academic researchers interested in solving problems related to using robots in real world settings.

Late-breaking reports/poster session

Towards an online voice-based gender and internal state detection model, pp. 105-106
  Amir Aly; Adriana Tapus
In human-robot interaction, gender and internal state detection play an important role in enabling the robot to react in an appropriate manner. This research focuses on the important features to extract from a voice signal in order to construct successful gender and internal state detection systems, and shows the benefits of combining both systems on the total average recognition score. Moreover, it constitutes the foundation of an ongoing approach to estimating the human's internal state online via unsupervised clustering algorithms.
Policy adaptation with tactile feedback, pp. 107-108
  Brenna D. Argall; Eric L. Sauser; Aude G. Billard
Behavior adaptation with execution experience is a practical feature for any policy learning system. Our work provides performance feedback to a robot learner in the form of tactile corrections from a human teacher, for the purpose of policy refinement as well as policy reuse. Multiple variants of our general approach have been validated on the iCub robot, as building blocks towards a high-DoF humanoid system that integrates tactile sensing on the hands and arms into complex behaviors and sophisticated learning routines.
Perception by proxy: humans helping robots to see in a manipulation task, pp. 109-110
  John Alan Atherton; Michael A. Goodrich
Robots excel at planning and performing tasks in controlled environments, but poor perception often leads to poor performance in unstructured environments. One typical way of improving robot performance is to give more control to a human operator and then design user interfaces that build the operator's situation awareness. As an alternative, humans can support robot perception to add structure to unstructured environments. We claim that when humans support robot perception, robots can spend more time acting autonomously, which can lead to reduced operator workload and increased overall performance. We present a design process, called perception by proxy, and apply it to a simple manipulation task.
A comparison of unsupervised learning algorithms for gesture clustering, pp. 111-112
  Adrian Ball; David Rye; Fabio Ramos; Mari Velonaki
Gesture recognition is an important aspect of interpersonal social interaction, and developing a similar capacity in a robot will improve human-robot interaction. Various unsupervised clustering methods are compared on the task of clustering a set of dynamic human arm gestures. Unsupervised clustering is important in gesture recognition as it imposes no a priori bound on the set of gestures. Results are compared using v-measure, a metric that allows differential weighting between clustering homogeneity and completeness. Experiments show that the best clustering method depends on the desired balance between homogeneity and completeness.
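   V-measure (Rosenberg & Hirschberg, 2007) is the weighted harmonic mean of clustering homogeneity and completeness; scikit-learn implements it directly, including the differential weight beta mentioned above. A small illustration with made-up gesture labels (Python):

      from sklearn.metrics import homogeneity_completeness_v_measure

      # Hypothetical ground-truth gesture classes and cluster assignments.
      true_gestures = [0, 0, 0, 1, 1, 1, 2, 2, 2]
      predicted     = [0, 0, 1, 1, 1, 1, 2, 2, 2]

      # beta < 1 weights homogeneity more heavily, beta > 1 completeness.
      for beta in (0.5, 1.0, 2.0):
          h, c, v = homogeneity_completeness_v_measure(true_gestures, predicted,
                                                       beta=beta)
          print(f"beta={beta}: homogeneity={h:.3f} completeness={c:.3f} v={v:.3f}")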
Perceptions and knowledge about robots in children of 11 years old in México City, pp. 113-114
  Eduardo Benítez Sandoval; Mauricio Reyes Castillo; John Alexander Rey Galindo
This paper describes the knowledge and perceptions of robots held by a group of 19 children under 11 years old in Mexico City who interacted with a teleoperated anthropomorphic robot, compared with a group of 18 children with the same characteristics who had not interacted with the robot. We seek to set a precedent regarding Mexican children in their own context for the future development of social robots.
The crucial role of robot self-awareness in HRI, pp. 115-116
  Manuel Birlo; Adriana Tapus
In this paper, we present the first steps towards a new concept of robot self-awareness that can be implemented into embodied robot systems. Our concept of "the self" is inspired by already existing approaches and aims to provide a cognitive system with meta-cognitive capabilities. We believe that robot self-awareness is a crucial factor in the improvement of HRI.
Development of a context model based on video analysis, pp. 117-118
  Roland Buchner; Astrid Weiss; Manfred Tscheligi
This paper reports on the analysis of video footage of a human-robot study in a public place with regard to context factors that significantly influence the interaction. A coding scheme was developed and used to analyze the video footage of the study. To ensure the validity of the coding, the footage was coded independently by two students, and Cohen's Kappa was then calculated to establish intercoder reliability. This calculation served as the basis for a first context model of the factors that influence the interaction. The approach shows that it is possible to extract valid context factors and to create a context model based on video annotation. These factors should now be tested further in lab-based studies to gain a better understanding of how they affect human-robot interaction.
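   Cohen's Kappa corrects raw inter-coder agreement p_o for the agreement p_e expected by chance, kappa = (p_o - p_e) / (1 - p_e). A minimal computation over two hypothetical coders' annotations of the same video segments (Python):

      from sklearn.metrics import cohen_kappa_score

      # Hypothetical context-factor codes assigned to the same video
      # segments by two independent coders.
      coder1 = ["crowd", "crowd", "noise", "child", "noise", "crowd", "child"]
      coder2 = ["crowd", "noise", "noise", "child", "noise", "crowd", "crowd"]

      kappa = cohen_kappa_score(coder1, coder2)
      print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect, 0 = chance-level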
Using depth information to improve face detection, pp. 119-120
  Walker Burgin; Caroline Pantofaru; William D. Smart
Interactive methods of tele-operating a single unmanned ground vehicle on a small screen interface, pp. 121-122
  Wei Liang Kenny Chua; Chuan Huat Foo; Yong Siang Lee
This study explores 4 different interaction methods of tele-operating a single unmanned ground vehicle (UGV) on a small screen interface. An experiment involving 20 participants carrying out a navigational task was conducted. Performance measures such as task completion time, number of errors made and workload ratings were considered and analyzed.
Child's recognition of emotions in robot's face and body, pp. 123-124
  Iris Cohen; Rosemarijn Looije; Mark A. Neerincx
Social robots can comfort and support children who have to cope with chronic diseases. In previous studies, a "facial robot", the iCat, proved able to show well-recognized emotional expressions that are important in social interactions. The question is whether a mobile robot without a face, the Nao, can express emotions with its body. First, dynamic body postures expressing fear, happiness, anger, sadness and surprise were created and validated. Then, fourteen children had to recognize the emotions expressed by both robots. Recognition rates were relatively high (between 68% and 99% accuracy); only for the emotion "sad" was recognition better for the iCat (95%) than for the Nao (68%). Providing context increased the number of correct recognitions, and in a second session the emotions were recognized significantly better than in the first session for both robots. In sum, we succeeded in designing Nao emotions that are well recognized and learned, and that can be important ingredients of social dialogs with children.
Things that tweet, check-in and are befriended: two explorations on robotics & social media, pp. 125-126
  Henriette Cramer; Sebastian Büttner
This late-breaking report describes two explorations of the effects of using social media in human-robot interaction. The first explores how 'autonomous creatures' can use information shared via social awareness streams, by implementing a Nabaztag that uses information from its 'friends' on the location-sharing service Foursquare. The second is an informal analysis of tweets sent to an existing robot-associated Twitter account. We show parallels to prior research and discuss the questions these simple explorations pose for the future of robots and social media.
A pilot study to understand requirements of a shopping mall robot, pp. 127-128
  Chandan Datta; Anuj Kapuria; Ritukar Vijay
As part of our long-term research interest in the technological requirements for a shopping mall robot, we performed a short pilot study during the Christmas holidays to identify the social interaction dynamics for Neel, a wheeled mobile robot that interacts through both on-site and online presence. During the pilot study we found that a range of the robot's interaction capabilities went mostly unused, because users' interactions with the robot were relatively short, serving informational needs such as movie show-times or apparel deals. Since it is challenging to educate and inform a diverse mass of users about the robot's functionality, we divided the research roadmap into stages: in the first stage, users learn the value of the robot's capabilities through repeated short interactions, and over the long term more users register for the robot's services. Identifying the temporal and episodic characteristics of the interactions is therefore important for matching the expectations and privacy concerns of the users. We also identified that while delivering shopping-related information non-interactively through a web application is relatively easy, delivering it actively through the robot can easily be perceived as advertising by human participants and negate the user experience we try to deliver. We report some key technological advances made during our field trial and set forth our goals.
Managing social constraints on recharge behaviour for robot companions using memory, pp. 129-130
  Amol A. Deshmukh; Mei Yii Lim; Michael Kriegel; Ruth Aylett; Kyron Du Casse; Kheng Lee Koay; Kerstin Dautenhahn
In this paper, we present an approach for monitoring human activities such as the entry, exit and break times of people in a workplace environment. The companion robot learns the users' presence patterns over a period of time through memory generalisation and plans a suitable time for recharging itself, causing less hindrance to human-robot interaction.
Designing interruptive behaviors of a public environmental monitoring robot, pp. 131-132
  Vanessa Evers; Roelof de Vries; Paulo Alvito
This paper reports ongoing research to inform the design of a social robot that monitors levels of pollutant gasses in the air. In addition to licensed environmental agents and immobile chemical sensors, mobile technologies such as robotic agents are needed to collect complaints and smell descriptions from humans in urban industrial areas. These robots will interact with members of the public and must ensure responsiveness and accuracy of responses. For robots to be accepted as representative environmental monitoring agents, and for people to comply with robot instructions in the case of a calamity, social skills will be important. In this paper we describe the intelligent environment the environmental robot is part of and discuss preliminary work on understanding in what ways robot interruptions can be mitigated with the help of social robot behaviors.
Interactional disparities in English and Arabic native speakers with a bi-lingual robot receptionist, pp. 133-134
  Imran Fanaswala; Brett Browning; Majd Sakr
HRI studies in a Middle Eastern environment are subject to nuances and subtleties. This study explores the nature of interactions, in an uncontrolled environment, between a permanently deployed bi-lingual robot-receptionist and interlocutors of varied native tongues. We correlate an interlocutor's native language with their propensity for accepting an invite and the duration of the ensuing conversation. Subsequently, we present results that demonstrate significant disparity in interactional patterns between English and Arabic speakers. We also assess the importance of a transliterated Arabic input mode for encouraging user interaction.
Comparative analysis of human motion trajectory prediction using minimum variance curvature, pp. 135-136
  Gonzalo Ferrer; Alberto Sanfeliu
The prediction of human motion intentionality is a key issue for intelligent human-robot interaction and robot navigation. In this work we present a comparative study of several prediction functions based on the minimum curvature variance from the current position to all potential destination points, that is, the points that are relevant to people's motion intentionality. The proposed predictor computes, at each interval of time, the trajectory from the present position to each destination and predicts the human motion using only the criterion of minimum curvature variation. The method has been validated on the Edinburgh Informatics Forum Pedestrian database.
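   One plausible reading of the criterion, assuming 2-D positions sampled at fixed intervals, is to extend the observed trajectory to each candidate destination and select the destination whose path shows the least variance in discrete curvature. The sketch below illustrates this reading only and is not the authors' exact predictor (Python):

      import numpy as np

      def curvature_variance(path):
          """Variance of discrete curvature along a 2-D path (n_points, 2)."""
          dx, dy = np.gradient(path[:, 0]), np.gradient(path[:, 1])
          ddx, ddy = np.gradient(dx), np.gradient(dy)
          kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
          return np.var(kappa)

      def predict_destination(observed, destinations, n=20):
          """Pick the destination whose continuation of the observed
          trajectory varies least in curvature (illustrative only)."""
          scored = []
          for dest in destinations:
              tail = np.linspace(observed[-1], dest, n)[1:]  # run-out to goal
              scored.append((curvature_variance(np.vstack([observed, tail])), dest))
          return min(scored, key=lambda s: s[0])[1]

      observed = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2], [1.5, 0.3]])
      goals = [np.array([4.0, 0.8]), np.array([2.0, 3.0])]
      print(predict_destination(observed, goals))  # the goal on the smooth heading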
Anthropomorphic design for an interactive urban robot: the right design approach, pp. 137-138
  Florian Förster; Astrid Weiss; Manfred Tscheligi
The paper presents the first step of a user-centered design process for a robot designated to operate in urban public space. A participatory design workshop was conducted to challenge the anthropomorphic design approach assumed by the designers and to elicit user requirements for the design. Contrary to expectations, the results show a tendency toward a preference for a non-anthropomorphic design.
Tactile sensing: a key technology for safe physical human robot interaction, pp. 139-140
  Markus Fritzsche; Norbert Elkmann; Erik Schulenburg
Human-robot interaction in a shared workspace permits and often even requires physical contact between humans and robots. A key technology to ensure that physical human robot interaction is safe is to monitor contact forces by providing the robot with a tactile sensor as an artificial skin.
   This paper introduces a pressure-sensitive skin that can be adapted to complex geometries and offers reliable contact measurements on the entire robot body. Equipped with integrated cushioning elements, the sensitive skin can reduce the risk of dangerous injuries in physical human-robot interaction. Besides safety-related functions, the sensitive skin offers touch-based robot motion control, which simplifies human-robot interaction.
The chanty bear: a new application for HRI research, pp. 141-142
  Kotaro Funakoshi; Tomoya Mizumoto; Ryo Nagata; Mikio Nakano
This paper presents yet another English-teaching robot, while putting emphasis on the merits that second-language education offers to human-robot interaction (HRI) research. The Chanty Bear, our prototype robot based on a rhythmic method of teaching English called Jazz Chants, is introduced.
A case for low-dose robotics in autism therapy, pp. 143-144
  Michael A. Goodrich; Mark A. Colton; Bonnie Brinton; Martin Fujiki
Robots appear to be engaging to many children with autism, and evidence suggests that engagement can facilitate social interaction not only between child and robot but also between child and another human. To date, no objective evidence has established a link between short-term child-robot interactions and long-term child-human interactions. We report on a therapy model that uses a robot in no more than 20% of available therapy time, and describe how a humanoid robot can be used during that limited time to promote generalizable child-human interactions. Preliminary evidence indicates that such low-dose robotics can promote positive child-human interactions.
Learning from failure: extended abstract, pp. 145-146
  Daniel H. Grollman; Aude G. Billard
In the canonical Robot Learning from Demonstration scenario a robot observes performances of a task and then develops an autonomous controller. Current work acknowledges that humans may be suboptimal demonstrators and refines the controller for improved performance. However, there is still an assumption that the demonstrations are successful examples of the task. We here consider the possibility that the human has failed, and propose a model to minimize the possibility of the robot making the same mistakes.
Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults, pp. 147-148
  Marcel Heerink
It is generally recognized that non-perceptual factors like age, gender, education and computer experience can have a moderating effect on how perception of a technology leads to acceptance of it. In our present research we are exploring the influence of these factors on the acceptance of assistive social robots by older adults. In this short paper we discuss the results of a user study in which a movie of an elderly person using an assistive social robot was shown to older adults. The analysis of the responses gives a first indication of whether and how these factors relate to the perceptual processes that lead to acceptance.
A memory game for human-robot interaction, pp. 149-150
  Sergio Hernandez-Mendez; Luis Alberto Morgado-Ramirez; Ana Cristina Ramirez-Hernandez; Luis F. Marin-Urias; Antonio Marin-Hernandez; Fernando Montes-Gonzalez
With the final goal of introducing robots into human environments comes the problem of close interaction between these two beings. Close cooperative activities illustrate this problem well and raise new challenges to solve.
   In this report we show the current state of development of a system for acquiring information directly from humans, centered on the classic interaction game "Simon Says".
Tele-operation between USA and Japan using humanoid robot hand/arm, pp. 151-152
  Makoto Honda; Takanori Miyoshi; Takashi Imamura; Masayuki Okabe; Faisal M. Yazadi; Kazuhiko Terashima
This paper presents a tele-control system consisting of a robot hand/arm and an operator. The angle of the robot hand is controlled by the angle of the operator's finger, and the operator feels the environmental force detected by the touch sensor of the robot hand, constituting so-called bilateral master/slave control. The position of the robot arm is controlled by the position of the operator's arm.
   So far, there has been little research using a multi-fingered humanoid robot hand in a network environment where communication delay exists. The purpose of our study is to achieve tele-operation between an operator's hand/arm and a multi-fingered humanoid robot hand/arm with delayed time. Therefore, this study constructs a system that can grasp and manipulate objects stably, despite the communication delay.
   In the experiments, the operator operates the humanoid robot hand/arm of Toyohashi University of Technology from the USA, grasping and moving an object located in Japan. The staff in Japan give operation instructions by voice to the operator in the USA during the experiment, and the operator operates the robot accordingly. The experimental results show that this system can grasp and manipulate objects stably despite the communication delay. In addition, the operator grasped the object using the multi-fingered humanoid robot hand/arm under master/slave control, feeling the fingertip force through tele-operation between the USA and Japan.
Universal robots as 'solutions' to wicked problems: debunking a robotic myth, pp. 153-154
  Mattias Jacobsson; Henriette Cramer
This work in progress discusses a persistent myth about robots, namely that 'future robots will be universal solutions', or in other words that robots should tackle many complex tasks and situations. In our approach we consider whether this is a case of posing robots as solutions to wicked problems or if robots can be considered wicked design problems in themselves. At the same time we make an argument for adopting a research through design approach. Our stance suggests that by viewing robots as composed of design materials we can sensitively address and in the long run perhaps even avoid wicked problems related to robotics.
Experience centred design for a robotic eating aid, pp. 155-156
  Javier Jiménez Villarreal; Sara Ljungblad
We discuss how an experience centred approach to robotic design might lead to new design spaces and products that are more engaging and better meet users' needs and lifestyles. To support the statement, we present preliminary data from a long-term user study on an eating aid robot.
Upper-limb exercises for stroke patients through the direct engagement of an embodied agent, pp. 157-158
  Hee-Tae Jung; Jennifer Baird; Yu-Kyong Choe; Roderic A. Grupen
In this case study, we examine the functional utility of an embodied agent as an interactive medium in stroke rehab. A set of physical rehab exercises is conducted through the direct engagement of an embodied agent, the uBot-5. Based on the preliminary data, we argue that a general-purpose embodied agent has the potential to functionally complement human therapists in providing rehab to stroke patients.
The new ontological category hypothesis in human-robot interaction, pp. 159-160
  Peter H. Kahn, Jr.; Aimee L. Reichert; Heather E. Gary; Takayuki Kanda; Hiroshi Ishiguro; Solace Shen; Jolina H. Ruckert; Brian Gill
This paper discusses converging evidence to support the hypothesis that personified robots and other embodied personified computational systems may represent a new ontological category, where ontology refers to basic categories of being, and ways of distinguishing them.
RIDE: mixed-mode control for mobile robot teams, pp. 161-162
  Erik Karulf; Marshall Strother; Parker Dunton; William D. Smart
User recognition based on continuous monitoring and tracking, pp. 163-164
  Hye-Jin Kim; Ho Sub Yoon; Jae Hong Kim
This paper presents a user recognition system using face, height, and clothes-color features, under the special assumption that the user is continuously monitored and tracked. In real human-robot interaction situations, all information is not available at the same time, and some frames in a video contain no clues at all. In the proposed system, tracking is an important feature for recognizing a user because data from previous frames can be utilized. We propose an information update method that efficiently updates similarity results. The system is tested on movie clips acquired in an unconstrained environment, including illumination variation, several distances from the camera to the user, and various views of the human body.
Terrain-adaptive and user-friendly remote control of wheel-track hybrid mobile robot platform, pp. 165-166
  Yoon-Gu Kim; Jeong-Hwan Kwak; Jinung An
Various robot platforms have been designed and developed to perform given tasks in hazardous environments for surveillance, reconnaissance, search and rescue, etc. We consider a terrain-adaptive, transformable hybrid robot platform equipped for rapid navigation on flat floors and good performance in overcoming stairs or obstacles. The mode transition is determined and implemented by adaptive driving-mode control of the mobile robot. The terrain-adaptive and user-friendly remote control was verified through navigation performance experiments in real and test-bed environments.
Assisted-care robot dealing with multiple requests in multi-party settings, pp. 167-168
  Yoshinori Kobayashi; Masahiko Gyoda; Tomoya Tabata; Yoshinori Kuno; Keiichi Yamazaki; Momoyo Shibuya; Yukiko Seki
This paper presents our ongoing work developing service robots that provide assisted-care, such as serving tea to the elderly in care facilities. In multi-party settings, a robot is required to be able to deal with requests from multiple individuals simultaneously. In particular, when the service robot is concentrating on taking care of a specific person, other people who want to initiate interaction may feel frustrated with the robot. To a considerable extent this may be caused by the robot's behavior, which does not indicate any response to subsequent requests while preoccupied with the first. Therefore, we developed a robot that can display acknowledgement, in a socially acceptable manner, to each person who wants to initiate interaction. In this paper we focus on the task of tea-serving, and introduce a robot able to bring tea to multiple users while accepting multiple requests. The robot can detect a person's request (raising their hand) and move around people using its localization system. When the robot detects a person's request while serving tea to another person, it displays its acknowledgement by indicating "Please wait" through a nonverbal action. Because it can indicate its acknowledgement of their requests socially, people will likely feel more satisfied with the robot even when it cannot immediately address their needs.
From cartoons to robots part 2: facial regions as cues to recognize emotions, pp. 169-170
  Tomoko Koda; Zsofia Ruttkay; Tomoharu Sano
This paper reports preliminary results of a cross-cultural study on the facial regions used as cues to recognize virtual agents' facial expressions. We believe that providing research results on the perception of cartoonish virtual agents' facial expressions to the HRI research community is meaningful, as it can minimize the effort needed to develop robots' facial expressions. The results imply that Japanese participants weighed facial cues in the eye region more heavily than Hungarians, who weighed cues in the mouth region more heavily than the Japanese.
Gaze motion planning for android robot, pp. 171-172
  Yutaka Kondo; Masato Kawamura; Kentaro Takemura; Jun Takamatsu; Tsukasa Ogasawara
Androids are expected to show human-like behavior because their appearance resembles humans' physical features. We therefore propose a gaze motion planning method in which we control the convergence of the eyes and the ratio of eye angle to head angle, leading to a more precise estimation of gaze direction. We implemented our method on the android Actroid-SIT and conducted experiments to evaluate its effects. Through these experiments, we derived common guidelines for planning more precise android gaze motion.
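   A minimal sketch of the eye/head decomposition described above, with the eye-to-head ratio and the joint limits invented for illustration (the Actroid's true values are not given in the abstract; Python):

      import numpy as np

      EYE_RATIO = 0.3              # assumed share of the gaze shift carried by the eyes
      EYE_LIMIT = np.radians(25)   # hypothetical eye yaw limit
      HEAD_LIMIT = np.radians(70)  # hypothetical head yaw limit

      def plan_gaze(target_yaw):
          """Split a desired gaze yaw between eyes and head.

          Eyes take a fixed share of the shift up to their mechanical
          limit; the head supplies the remainder, so the summed angles
          still point at the target whenever it is reachable.
          """
          eye = np.clip(EYE_RATIO * target_yaw, -EYE_LIMIT, EYE_LIMIT)
          head = np.clip(target_yaw - eye, -HEAD_LIMIT, HEAD_LIMIT)
          return eye, head

      eye, head = plan_gaze(np.radians(40))
      print(f"eye={np.degrees(eye):.1f} deg, head={np.degrees(head):.1f} deg")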
Perception of visual scene and intonation patterns of robot utterances, pp. 173-174
  Ivana Kruijff-Korbayová; Raveesh Meena; Pirita Pyykkönen
Assigning intonation to dialogue system output in a way that reflects relationships between entities in the discourse context can enhance the acceptability of system utterances. Previous research concentrated on the role of linguistic context; dialogue situatedness and the role of visual context in determining accent placement have not been studied. We present an experimental study on the influence of visual context on the perception of nuclear accent placement in synthesized clarification requests. We found that utterances are perceived as appropriate more often when the visual scene licenses the nuclear accent placement than when it does not.
Towards proactive assistant robots for human assembly tasks, pp. 175-176
  Woo Young Kwon; Il Hong Suh
In this paper, we propose a proactive assistant robot for human assembly tasks. In order to predict future events in human activities, such as requests for assembly parts, we use a temporal Bayesian network that can infer both the causal probability and the temporal distribution of a conditional event. Based on the temporal Bayesian network model of a human assembly task, we also show that the proactive assistant robot speeds up human-robot interaction by temporally predicting events.
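   A temporal Bayesian network node pairs a conditional probability with a distribution over when the consequent event occurs; the following toy sketch shows how such a node could drive proactive fetching, with the event, probability, and gamma-distributed delay all assumed rather than taken from the paper (Python):

      from scipy.stats import gamma

      # Toy temporal node: given that step "attach bracket" has started,
      # the human requests the next part with probability 0.9, after a
      # gamma-distributed delay (shape/scale values are made up).
      P_REQUEST = 0.9
      delay = gamma(a=4.0, scale=2.0)   # mean delay: 8 s

      def p_request_within(t_elapsed, horizon):
          """P(request occurs in (t_elapsed, t_elapsed + horizon])."""
          return P_REQUEST * (delay.cdf(t_elapsed + horizon) - delay.cdf(t_elapsed))

      # If the chance of a request in the next 3 s is high enough,
      # the assistant starts fetching the part proactively.
      for t in (0.0, 5.0, 10.0):
          p = p_request_within(t, horizon=3.0)
          print(f"t={t:4.1f}s  P(request in next 3s)={p:.2f}  fetch={p > 0.2}")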
A panorama interface for telepresence robots, pp. 177-178
  Daniel A. Lazewatsky; William D. Smart
Telepresence robots are becoming increasingly popular and are increasingly ready to enter use in the real world as stand-ins for remote humans. It is useful, but currently uncommon, to provide the human operator with an approximation of peripheral vision and the ability to saccade around the scene. We have developed an interface which provides peripheral vision to a remote operator by using a motorized pan-tilt camera to create a panorama, and enables the operator to move the camera's gaze within that panorama.
Predictability or adaptivity?: designing robot handoffs modeled from trained dogs and people, pp. 179-180
  Min Kyung Lee; Jodi Forlizzi; Sara Kiesler; Maya Cakmak; Siddhartha Srinivasa
One goal of assistive robotics is to design interactive robots that can help disabled people with tasks such as fetching objects. When people do this task, they coordinate their movements closely with receivers. We investigated how a robot should fetch and give household objects to a person. To develop a model for the robot, we first studied trained dogs and person-to-person handoffs. Our findings suggest two models of handoff that differ in their predictability and adaptivity.
Understanding users' perception of privacy in human-robot interaction, pp. 181-182
  Min Kyung Lee; Karen P. Tang; Jodi Forlizzi; Sara Kiesler
Previous research has shown that design features that support privacy are essential for new technologies looking to gain widespread adoption. As such, privacy-sensitive design will be important for the adoption of social robots, as they could introduce new types of privacy risks to users. In this paper, we report findings from our preliminary study on users' perceptions and attitudes toward privacy in human-robot interaction, based on interviews that we conducted about a workplace social robot.
Utilitarian vs. hedonic robots: role of parasocial tendency and anthropomorphism in shaping user attitudes, pp. 183-184
  Namseok Lee; Hochul Shin; S. Shyam Sundar
This study examines the differential effects of hedonic vs. utilitarian robots, using a between-subjects experimental design whereby 48 college students in Korea were randomly assigned to interact with either a Pleo (dinosaur robot) or a Roomba (vacuum-cleaning robot). Results revealed that hedonic robot (HR) users perceived more enjoyment than utilitarian robot (UR) users, whereas UR users perceived more usefulness and ease-of-use than HR users. Users with a high tendency for parasocial interaction (PSI) and high anthropomorphism had more positive attitudes toward robots than their counterparts with low levels of these traits, and HR users with high anthropomorphism and PSI had the most positive attitudes of all combinations. These results indicate that individual differences play a significant moderating role in user attitudes toward hedonic and utilitarian robots, and suggest that robot developers and marketers should take seriously the labeling of robots as hedonic or utilitarian, and consider users' individual differences, in order to maximize the benefits of human-robot interaction.
Incremental learning of primitive skills from demonstration of a task, pp. 185-186
  Sang Hyoung Lee; Hyung Kyu Kim; Il Hong Suh
In this work, we propose methods for automatically generating primitive skills from demonstrations of a task, as well as methods for incrementally and automatically improving existing primitive skills and adding new ones. To validate the proposed methods, we present experimental results of a human-like robot performing three gestures and a coffee-making task.
Hitting a robot vs. hitting a human: is it the same?, pp. 187-188
  Sau-lai Lee; Ivy Yee-man Lau
The present project studies how people make moral judgments of human versus robot behaviors. Ten transgression scenarios were presented to participants with either a human or a robot as the perpetrator or the victim. Results showed that most of the transgressions were perceived as less immoral when acted on a robot than on a human. Moral judgments of human behaviors were more intuitive and emotion-based, whereas moral judgments of robot behaviors involved both intuition and cognitive reasoning. Possible psychological causes are discussed.
Recognition and incremental learning of scenario-oriented human behavior patterns by two threshold models, pp. 189-190
  Gi Hyun Lim; Byoungjun Chung; Il Hong Suh
Two HMM-based threshold models are proposed for the recognition and incremental learning of scenario-oriented human behavior patterns. One is the expected-behavior threshold model, which discriminates whether a monitored behavior pattern is normal. The other is the registered-behavior threshold model, which detects whether such a behavior pattern has already been learned. If a behavior pattern is detected as new, an HMM is generated to represent it, and this HMM is used to update the behavior clusters through a hierarchical clustering process.
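   A compressed sketch of the two-threshold idea using the hmmlearn library: score an observed sequence against each registered behavior HMM and compare against two log-likelihood thresholds. The model sizes and threshold values below are invented, and the paper's threshold models are themselves HMM-based rather than the fixed constants used here (Python):

      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      registered = {}   # behavior name -> trained GaussianHMM

      def train_behavior(name, sequences):
          """Fit one HMM per behavior from a list of (T_i, n_features) arrays."""
          X = np.vstack(sequences)
          lengths = [len(s) for s in sequences]
          m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
          m.fit(X, lengths)
          registered[name] = m

      def classify(seq, normal_threshold=-60.0, known_threshold=-40.0):
          """Below the first threshold the pattern is abnormal; between
          the two it is normal but unseen, so a new HMM would be learned;
          above the second it matches a registered behavior."""
          scores = {n: m.score(seq) for n, m in registered.items()}
          best = max(scores, key=scores.get) if scores else None
          if best and scores[best] > known_threshold:
              return f"known behavior: {best}"
          if best and scores[best] > normal_threshold:
              return "normal but new -> learn a new HMM and re-cluster"
          return "abnormal behavior"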
Beyond speculative ethics in HRI?: ethical considerations and the relation to empirical data, pp. 191-192
  Sara Ljungblad; Stina Nylander; Mie Nørgaard
We discuss the difference between understanding robot ethics as grounded in philosophical ideas about potential future designs and understanding robot ethics as grounded in empirical data. We argue that treating "robots" as a relatively homogeneous group of designs for which we can formulate general ethics may lead to foresight about future robot designs that includes ideas and concerns that are not feasible or realistic. Our aim is to exemplify a complementary perspective by shedding light on two different robotic designs, discussing their relation to specific use practices and user experiences, and providing some early ethical reflections and design concerns.
Team-based interactions with heterogeneous robots through a novel HRI software architecture, pp. 193-194
  Meghann Lomas; Vera Zaychik Moffitt; Patrick Craven; Ernest Vincent Cross, II; Jerry L. Franke; James S. Taylor
In this paper, we describe a Human-Robot Interface (HRI) software architecture designed to enable teams of operators to share tasking for multiple unmanned vehicles and systems. Many existing robotic systems are controlled using specially-designed interfaces, which becomes problematic for operator teams controlling multiple heterogeneous systems. We propose a solution that enables teams of operators to control multiple heterogeneous vehicles and systems using a common command and control environment. This environment supports task sharing and handoff and was shown in evaluations to improve team efficiency.
The applicability of Gricean maxims in social robotics polite dialogue, pp. 195-196
  Qin En Looi; Swee Lan See
Human-robot interaction is the distinctive feature of social robotics; with the increasing pervasiveness of social robots in our everyday lives, efforts should be made to improve this interaction. This study explores how Gricean conversational maxims on politeness can be implemented to develop polite dialogue in social robotics, catering to the preferences of the user. Though the results are preliminary, the maxims have been shown to be promising guidelines for the development of dialogue. The study concludes by suggesting how end-user tests can be used to verify the applicability of these maxims.
Polonius: a wizard of oz interface for HRI experiments BIBAFull-Text 197-198
  David V. Lu; William D. Smart
Polonius is a robot control interface designed for running Wizard of Oz style experiments. It is simple enough to be used by roboticists' non-programmer collaborators. The program acts as an intermediary between the robot and a wizard interacting with a GUI based on a pre-defined script. Polonius also eliminates the need to hand-code video after experiments by integrating a robust logging system.
Expressing emotions through robots: a case study using off-the-shelf programming interfaces BIBAFull-Text 199-200
  Vimitha Manohar; Shamma al Marzooqi; Jacob W. Crandall
This paper explores how effectively users can encode emotions in robot behaviors using existing robot programming interfaces. Specifically, we analyze how programming interfaces for Nao and Pleo robots allow end-users to encode behaviors that express anger, sadness, happiness, and surprise. Via a series of user studies, we found that users were able to express emotions through these robots more effectively via verbal expressions than non-verbal expressions.
Recognition of spatial dynamics for predicting social interaction BIBAFull-Text 201-202
  Ross Mead; Amin Atrash; Maja J. Mataric
We present a user study and dataset designed and collected to analyze how humans use space in face-to-face interactions. In a proof-of-concept investigation into human spatial dynamics, a Hidden Markov Model (HMM) was trained over a subset of features to recognize each of three interaction cues -- initiation, acceptance, and termination -- in both dyadic and triadic scenarios; these cues are useful in predicting transitions into, during, and out of multi-party social encounters. It is shown that the HMM approach performed twice as well as a weighted random classifier, supporting the feasibility of recognizing and predicting social behavior based on spatial features.
Make your wishes to 'genie in the lamp': physical push with a socially intelligent robot BIBAFull-Text 203-204
  Hye-Jin Min; Jong C. Park
This paper proposes a robotic agent named 'Genie' that understands a user's wish and offers possible answers on a social network platform. Once a potential wish is detected while monitoring the text updates in the user's micro-blog, the agent initiates a task to help the user using both NLP and metadata analysis. As an interaction scenario, we cast the robot as an agent that identifies wished-for products by searching for and analyzing product information on the web. After analyzing this large amount of data, the agent provides possible answers to the user as a way of granting a wish that might otherwise require additional time and effort to fulfill. To draw the user's attention, the agent makes a physical movement that serves as a user-friendly push notification.
A communication structure for human-robot itinerary requests BIBAFull-Text 205-206
  Nicole Mirnig; Astrid Weiss; Manfred Tscheligi
To analyze the formula for the success of human communication, we examined dialogs between human interactors asking for directions in a public place and extracted the elements responsible for making a dialog succeed or fail. We then rated these elements according to the degree of their influence. Based on this rating and on the Shannon & Weaver model of communication, we created a communication structure for successful human-robot communication on which further research may build to make human-robot communication as effective as possible.
Cognitive objects for human-computer interaction and human-robot interaction BIBAFull-Text 207-208
  Andreas Möller; Luis Roalter; Matthias Kranz
We introduce and define Cognitive Objects for human-robot interaction and human-computer interaction and disambiguate them against existing 'Smart Objects'. Cognitive Objects are physical real-world objects used in manipulation tasks by humans and robots. As such, they incorporate self-awareness, reduce ambiguity and uncertainty in object recognition, and provide services to both humans and robots during their usage in real-world environments.
   We distinguish Cognitive Objects from other 'Smart Objects' and computationally enriched artifacts, outline their characteristics, and describe their potential impact on human-computer and human-robot interaction with real-world objects.
Inferring social gaze from conversational structure and timing BIBAFull-Text 209-210
  Robin Murphy; Jessica Gonzales; Vasant Srinivasan
We have created a preliminary inference engine for generating gaze acts based on extracting the social context from conversational structure and timing in human-robot dialog.
Exploring sketching for robot collaboration BIBAFull-Text 211-212
  Matei Negulescu; Tetsunari Inamura
The collaboration between humans and robots can lessen the burden of automatic learning while performing difficult tasks. In this work, we explore sketching as a method to enable effective collaboration between human and robot. The system allows a human to continuously interact with the robot to perform a task by sketching the environment and specifying the affordances of objects and areas on the map.
Exploring influences of robot anxiety into HRI BIBFull-Text 213-214
  Tatsuya Nomura; Takayuki Kanda; Sachie Yamada; Tomohiro Suzuki
Collaboration with an autonomous humanoid robot: a little gesture goes a long way BIBAFull-Text 215-216
  Kevin O'Brien; Joel Sutherland; Charles Rich; Candace L. Sidner
We report on an experiment in which a human collaborates with a small, autonomous, humanoid robotic toy. The experiment demonstrates that the robot's use of two simple gestures, namely orienting its head toward the addressee when it speaks and raising its arm in the direction of objects it refers to, significantly improves the human's perception of the robot's interaction skills and quality as a collaborator.
User observation & dataset collection for robot training BIBAFull-Text 217-218
  Caroline Pantofaru
Personal robots operate in human environments such as homes and offices, co-habiting with people. To effectively train robot algorithms for such scenarios, a large amount of training data containing both people and the environment is required. Collecting such data involves taking a robot into new environments, observing and interacting with people. So far, best practices for robot data collection have been undefined. Fortunately, the human-robot interaction community has conducted field studies whose methodology can serve as a model. In this paper, we draw parallels between field study observation and the data collection process, suggesting that best practices may be transferable. As a use case, we present a robot sensor dataset for training and testing algorithms for person detection in indoor environments.
The effect of robot's behavior vs. appearance on communication with humans BIBAFull-Text 219-220
  Eunil Park; Hwayeon Kong; Hyeong-taek Lim; Jongsik Lee; Sangseok You; Angel Pasqual del Pobil
This study explores the effect of the robot's appearance vs. behavior (voice and gestures) on the way it is perceived as a machine-like instead of a human-like robot. A between-subjects experiment with four conditions was conducted. Results suggest that both the robot's behavior and appearance are important but, if they are contradictory, the robot's behavior is more powerful than the robot's appearance in the perception of the robot as more machine-like or human-like.
Activity recognition from the interactions between an assistive robotic walker and human users BIBAFull-Text 221-222
  Mitesh Patel; Jaime Valls Miro; Gamini Dissanayake
Detecting an individual's intention from a sequence of actions is an open and complex problem. In this paper we present a smart walker, a mobility aid that can interpret users' behaviour patterns to recognize their intentions and consequently act as an intelligent assistant. The results of the experiments performed in this paper demonstrate the potential of dynamic Bayesian networks (DBNs), owing to their dynamic and unsupervised nature, for realistic human-robot interaction modelling.
Web-based object category learning using human-robot interaction cues BIBAFull-Text 223-224
  Christian I. Penaloza; Yasushi Mae; Tatsuo Arai; Kenichi Ohara; Tomohito Takubo
We present our method for learning object categories from the internet using cues obtained through human-robot interaction. Such cues include an object model acquired by observation and the name of the object. Our learning approach emulates the natural learning process of children when they observe their environment, encounter unknown objects, and ask adults the name of the object. Using this learning approach, our robot is able to discover objects in a domestic environment by observing when humans naturally move objects as part of their daily activities. Using a speech interface, the robot directly asks humans the name of the object by showing an example of the acquired model. The name in text format and the previously learnt model serve as input parameters to retrieve object category images from a search engine, select similar object images, and build a classifier. Preliminary results demonstrate the effectiveness of our learning approach.
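   For illustration, a hedged Python sketch of the retrieval-and-filter step; search_images() and extract_features() are hypothetical placeholders, and the one-class classifier merely stands in for whatever classifier the authors actually build:

      import numpy as np
      from sklearn.svm import OneClassSVM

      def learn_category(object_name, reference_feat, search_images, extract_features):
          candidates = search_images(object_name)   # web image search for the spoken name
          feats = [extract_features(img) for img in candidates]
          # keep retrieved images whose features resemble the observed object model
          keep = [f for f in feats
                  if np.dot(f, reference_feat) /
                     (np.linalg.norm(f) * np.linalg.norm(reference_feat)) > 0.7]
          # train a one-class category model from the selected images
          return OneClassSVM(gamma="scale").fit(np.vstack(keep))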
Mission specialist interfaces in unmanned aerial systems BIBFull-Text 225-226
  Joshua M. Peschel; Robin R. Murphy
Attitude of German museum visitors towards an interactive art guide robot BIBAFull-Text 227-228
  Karola Pitsch; Sebastian Wrede; Jens-Christian Seele; Luise Süssenbach
As a testbed for real-world experimentation on HRI and dynamic interaction models, this paper presents an autonomous robot system acting as a guide in a German arts museum. The visitors' evaluation of this system is analyzed using a questionnaire and reveals issues for subsequent analysis of the real-time interaction.
Integration of a low-cost RGB-D sensor in a social robot for gesture recognition BIBAFull-Text 229-230
  Arnaud Ramey; Víctor González-Pacheco; Miguel A. Salichs
An objective of natural Human-Robot Interaction (HRI) is to enable humans to communicate with robots in the same manner humans do between themselves. This includes the use of natural gestures to support and expand the information exchanged in spoken language. To achieve that, robots need robust gesture recognition systems to detect the non-verbal information sent to them by human gestures. Traditional gesture recognition systems depend heavily on lighting conditions and often require a training process before they can be used. We have integrated a low-cost commercial RGB-D (Red Green Blue -- Depth) sensor in a social robot to allow it to recognise dynamic gestures by tracking a skeleton model of the subject and encoding the temporal signature of the gestures in an FSM (Finite State Machine). The vision system is insensitive to low-light conditions and does not require a training process.
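   A minimal sketch of the FSM idea, assuming skeleton frames with 3-D joint positions (joint names and thresholds are illustrative assumptions, not from the paper):

      def detect_wave(skeleton_frames):
          # spot a 'wave': right hand raised above the head, then swung sideways
          state, crossings = "IDLE", 0
          for f in skeleton_frames:                  # one dict of 3-D joints per frame
              hand, head = f["right_hand"], f["head"]
              if state == "IDLE" and hand[1] > head[1]:
                  state, last_x = "RAISED", hand[0]
              elif state == "RAISED":
                  if hand[1] < head[1]:
                      state, crossings = "IDLE", 0   # hand dropped: reset
                  elif abs(hand[0] - last_x) > 0.15: # lateral swing (~15 cm)
                      crossings, last_x = crossings + 1, hand[0]
                      if crossings >= 3:
                          return True                # temporal signature completed
          return False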
Tangible interfaces for robot teleoperation BIBAFull-Text 231-232
  Gabriele Randelli; Matteo Venanzi; Daniele Nardi
In this paper we present results obtained through an experimental evaluation of tangible user interfaces (TUIs), comparing their novel interaction paradigms with more conventional interfaces, such as a joypad and a keyboard. Our main goal is to make a formal assessment of TUIs in robotics through a rigorous and extensive experimental evaluation. First, we identify the main benefits of TUIs for robot teleoperation in an urban search and rescue task. Second, we provide an evaluation framework to allow for an effective comparison of tangible interfaces with other input devices.
Generalizing behavior obtained from sparse demonstration BIBAFull-Text 233-234
  Marcia Riley; Gordon Cheng
Here we describe a parameter-driven solution for generating novel yet similar movements from a sparse example set obtained through observation. In our experiments, a humanoid learns to represent movement trajectories demonstrated by a person with intuitive parameters describing the start and end points of different motion trajectory segments. These segments are automatically produced based on changes in curvature. After rebinning to equate similar segments across the samples, we use a linear approximation framework to build a representation based on relevant task features (segment start and end points) where radial basis functions (RBFs) are used to approximate the unknown non-linear characteristics describing a trajectory. The solution is accomplished on-line and requires no interaction. With this approach a humanoid can learn from only a few examples, and quickly produce new movements.
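   As a sketch of the underlying regression (illustrative, not the authors' code): radial basis functions over the task parameters (segment start and end points) are fitted with linear least squares and then queried for new parameter values:

      import numpy as np

      def rbf_fit(P, Y, centers, sigma=1.0):
          # P: (N, d) task parameters; Y: (N, k) trajectory descriptors
          Phi = np.exp(-np.linalg.norm(P[:, None, :] - centers[None, :, :], axis=2)**2
                       / (2 * sigma**2))
          W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # linear weights on RBF features
          return W

      def rbf_predict(p, centers, W, sigma=1.0):
          phi = np.exp(-np.linalg.norm(p - centers, axis=1)**2 / (2 * sigma**2))
          return phi @ W                                 # descriptor of the new movement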
Adapting robot behavior to user's capabilities: a dance instruction study BIBAFull-Text 235-236
  Raquel Ros; Ilaria Baroni; Marco Nalin; Yiannis Demiris
The ALIZ-E project's goal is to design a robot companion able to maintain affective interactions with young users over a period of time. One of these interactions consists of teaching a dance to hospitalized children according to their capabilities. We propose a methodology for adapting both the dance movements, based on the user's cognitive and physical capabilities as captured by a set of metrics, and the robot's interaction style, based on the user's personality traits.
Unity in multiplicity: searching for complexity of persona in HRI BIBAFull-Text 237-238
  Jolina H. Ruckert
This conceptual paper broaches possibilities and limits of establishing robot persona in HRI.
Designing a robot through prototyping in the wild BIBAFull-Text 239-240
Selma Sabanovic; Sarah Reeder; Bobak Kechavarzi; Zachary Zimmerman
This paper describes the design and initial evaluation of Dewey, a do-it-yourself (DIY) robot prototype aimed to help users manage break-taking in the workplace. We describe the application domain, prototyping and technical implementation, and evaluation of Dewey in a real office environment to show how research using simple prototypes can provide valuable insights into user needs and practices at the early stages of socially assistive robot design.
Are specialist robots better than generalist robots? BIBAFull-Text 241-242
  Young June Sah; Bomee Yoo; S. Shyam Sundar
When a robot is said to be a specialist in a particular domain, does it alter the nature and quality of human-robot interaction? This study examines the effects of specialization in robot functions, along with individual differences in immersive tendencies, on users' trust, perception, activity, and memory. In a controlled experiment, 38 participants were taught a physical exercise lesson by either a specialist or a generalist humanoid robot for six minutes. Results showed that specialization affected participants' affective trust, while immersive tendency predicted active participation in the interaction and led to better memory. Immersive tendency also moderated the effect of specialization: users with higher immersive tendency were more likely to make human attributions of specialization and to rate a specialist robot as more intelligent than a generalist robot. These results have theoretical implications for media equation as well as design implications for human-robot interaction professionals.
Generation of meaningful robot expressions with active learning BIBAFull-Text 243-244
  Giovanni Saponaro; Alexandre Bernardino
We propose a mechanism to communicate emotions to humans by using head, torso and arm movements of a humanoid robot, without exploiting its facial features. To this end, we build a library of pre-programmed robot movements and we ask people to attribute emotional scores to these initial movements. The answers are then used to fine-tune motion parameters with an active learning approach.
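   A toy sketch of such an active-learning loop (the names and the uncertainty criterion are assumptions, not the authors' method): query the movement whose emotional scores are most uncertain, then record the human's answer:

      import numpy as np

      def active_learning_round(motions, ratings, ask_human):
          # ratings: {motion index: list of emotional scores collected so far}
          def uncertainty(i):
              r = ratings.get(i, [])
              return np.inf if len(r) < 2 else np.var(r)   # unrated motions go first
          query = max(range(len(motions)), key=uncertainty)
          ratings.setdefault(query, []).append(ask_human(motions[query]))
          return query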
Random movement strategies in self-exploration for a humanoid robot BIBAFull-Text 245-246
  Guido Schillaci; Verena Vanessa Hafner
Motor babbling has been identified as a self-exploring behaviour adopted by infants and is fundamental for the development of more complex behaviours, self-awareness, and social interaction skills. Here, we adopt this paradigm for the learning strategies of a humanoid robot that maps its random arm movements to its head movements, as determined by the perception of its own body. Finally, we analyse three random movement strategies and experimentally test on a humanoid robot how they affect the learning speed.
Who is more expressive during child-robot interaction: Pakistani or Dutch children? BIBAFull-Text 247-248
  Suleman Shahid; Emiel Krahmer; Marc Swerts; Omar Mubin
In this study we tried to determine whether the cultural background of children influences how they interact with robots. Children of different age groups and cultures played a card guessing game with a robot (iCat). Perception tests evaluating the children's emotional responses revealed that South Asian (Pakistani) children were much more expressive than European (Dutch) children, and that younger children were more expressive than older ones in the context of child-robot interaction.
The curious case of human-robot morality BIBAFull-Text 249-250
  Solace Shen
This conceptual paper draws upon moral philosophy to broach the question: Are robots moral agents?
A comparison of machine learning techniques for modeling human-robot interaction with children with autism BIBAFull-Text 251-252
  Elaine Short; David Feil-Seifer; Maja Mataric
Several machine learning techniques are used to model the behavior of children with autism interacting with a humanoid robot, comparing a static model to a dynamic model using hand-coded features. Good accuracy (over 80%) is achieved in predicting child vocalizations; directions for future approaches to modeling the behavior of children with autism are suggested.
A survey of social gaze BIBAFull-Text 253-254
  Vasant Srinivasan; Robin Murphy
Based on a synthesis of eight major studies using six robots involving social gaze in robotics, this research proposes a novel behavioral definition as a mapping G = E(C) from the perception of a social context C to a set of head, eye, and body patterns called gaze acts G that expresses the engagement E. This definition places social gaze within the behavior-based programming framework for robots and agents, providing a guide for principled future implementations. The research also identifies five social contexts, or functions, of social gaze (Establishing agency, Communicating social attention, Regulating the interaction process, Manifesting interaction content and Projecting mental state) along with six discrete gaze acts for social gaze functions (Fixation, Short glance, Aversion, Concurrence, Confusion, and Scan) that have been employed by various robots or in simulation for these contexts. The research contributes to a computational understanding of social gaze that bridges psychological, cognitive, and robotics communities.
A toolkit for exploring the role of voice in human-robot interaction BIBAFull-Text 255-256
  Vasant Srinivasan; Robin Murphy; Zachary Henkel; Victoria Groom; Clifford Nass
This paper describes an open source speech translator toolkit created as part of the "Survivor Buddy" project which allows written or spoken word from multiple independent controllers to be translated into either a single synthetic voice, synthetic voices for each controller, or unchanged natural voice of each controller. The human controllers can work over the internet or be physically co-located with the Survivor Buddy. The toolkit is expected to be of use for exploring voice in general human-robot interaction.
An information-theoretic approach to modeling and quantifying assistive robotics HRI BIBFull-Text 257-258
  Martin F. Stoelen; Alberto Jardón Huete; Virginia Fernández; Carlos Balaguer; Fabio Bonsignorio
Information provision-timing control for informational assistance robot BIBAFull-Text 259-260
  Hiroaki Sugiyama; Yasuhiro Minami
This paper proposes an HMM-based model of users' information demand that enables autonomous informational assistance robots to avoid providing information prematurely. The model estimates the user's implicit information demand by predicting the user's next information request from head movements. In a word-association quiz-dialog experiment, our model demonstrated prediction performance superior to a standard HMM-based classifier.
Future robotic computer: a new type of computing device with robotic functions BIBAFull-Text 261-262
  Young-Ho Suh; Hyun Kim; Joo-Haeng Lee; Joonmyun Cho; Moohun Lee; Jeongnam Yeom; Eun-Sun Cho
With the advance of information technology, a new type of computing device will be introduced into our daily life in the near future. In this paper, we outline our ongoing development of a robotic computer that naturally interacts with users, understands the current situation of users and their environment, and proactively provides users with services. We describe the system architecture and the implementation of a proof-of-concept prototype of the proposed robotic computer.
StyROC: stylus robot overlay control & styRAC: stylus robot arm control BIBAFull-Text 263-264
  Teo Chee Hong; Keng Kiang Tan; Wei Liang Kenny Chua; Kok Tiong John Soo
This paper describes two methods of controlling unmanned ground vehicles (UGVs) using a stylus. The first method, StyROC (Stylus Robot Overlay Control), involves tele-operation of the robot with the controls superimposed on a UGV local map, a vehicle-centric map that shows the obstacles around the robot. The second method, StyRAC (Stylus Robot Arm Control), allows the operator to control the robot's arm with the stylus as well. A purely stylus-based Graphical User Interface (GUI) was designed to allow the control of robots in semi-autonomous or tele-operation mode, including control of the robotic arm on the unmanned robot. A two-phase User Centred Design (UCD) process was adopted in developing the interface. In the first phase, a Cognitive Task Analysis (CTA) was conducted to elicit information on the challenges faced by operators of the robots. In the second phase, the design was validated and refined through experiments: two usability studies, one paper prototype evaluation, and one heuristic evaluation were conducted. Results indicated that participants found the interface intuitive and easy to learn. Workload was significantly higher when operators were controlling two robots, but there were no significant differences in time taken to complete tasks. Issues such as the level of technology and trust in automation were also raised in the study.
The implementation of care-receiving robot at an English learning school for children BIBAFull-Text 265-266
  Fumihide Tanaka; Madhumita Ghosh
A Care-Receiving Robot (CRR) is a robot designed to be taken care of by humans. The original concept of the CRR and its application to reinforce children's learning by teaching was proposed in [4]. In contrast to the conventional use of 'childcare robots' that play the role of care-givers (taking care of children), here we introduce a reverse scenario in which the robot is a care-receiver (being taken care of by children). The framework is promising not only because it could accelerate children's spontaneous learning by teaching, but also because it may be ethically safer and acceptable to a wider range of societies. This paper reports our pilot trials, whose goal is to implement a CRR at an English learning school for children. In the trials we have already observed that our robot induced children's care-taking behaviors.
Linking children by telerobotics: experimental field and the first target BIBAFull-Text 267-268
  Fumihide Tanaka; Toshimitsu Takahashi
The paper describes our project, whose final goal is to link remote classrooms by telerobotics. The first target is to offer children in Japan an opportunity to remote-control a robot placed in a US classroom and participate in the classroom activities. By conducting field trials at nursery schools and language schools for children, we aim to identify and solve critical problems which might prevent this potentially popular technology from becoming a reality. Here, we report the development of a remote-hand device, which appears to be the most essential element of the proposed system.
A theoretical Heider's based model for opinion-changes analysis during a robotization social process BIBAFull-Text 269-270
  Bertrand Tondu
Heider's balance theory is applied to develop a method for analyzing opinion changes during the robotization of a social activity. The main idea of the approach is to distinguish the mental representation of the function to be robotized from the mental representation of the robot dedicated to the task.
Understanding spatial concepts from user actions BIBAFull-Text 271-272
  Elin Anna Topp
This paper summarizes the findings from a user study regarding particular observable "interaction patterns" that occur during a "guided tour".
A model of the user's proximity for Bayesian inference BIBAFull-Text 273-274
  Elena Torta; Raymond H. Cuijpers; James F. Juola
Embodied nonverbal cues are fundamental for regulating human-human social interactions. The physical embodiment of robots makes it likely that they will have to exhibit appropriate nonverbal interactive behaviors. In this paper we propose a model of the user's proximity based on a superposition of quasi-Gaussian probability distributions, which makes it possible to express findings from HRI trials regarding distances and directions of approach in a human-robot interaction scenario. The model is formulated so as to suit well-established Bayesian filtering techniques, and thus inferring the preferred distance and direction of approach can be regarded as a state estimation problem. Results derived from simulations show the effectiveness of the inference process.
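   A minimal sketch of the state estimation view (grid resolution and spreads are illustrative assumptions): maintain a belief grid over candidate preferred (distance, angle) pairs and update it with a quasi-Gaussian likelihood for each observed approach:

      import numpy as np

      dists = np.linspace(0.5, 2.5, 21)                  # candidate distances (m)
      angles = np.deg2rad(np.linspace(-90, 90, 19))      # candidate approach angles
      D, A = np.meshgrid(dists, angles, indexing="ij")
      belief = np.ones_like(D) / D.size                  # uniform prior

      def update(belief, obs_d, obs_a, sd=0.2, sa=np.deg2rad(15)):
          lik = (np.exp(-0.5 * ((D - obs_d) / sd)**2) *
                 np.exp(-0.5 * ((A - obs_a) / sa)**2))
          post = belief * lik                            # Bayes rule on the grid
          return post / post.sum()

      belief = update(belief, obs_d=1.2, obs_a=np.deg2rad(20))
      i, j = np.unravel_index(belief.argmax(), belief.shape)   # MAP preference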
Look where I'm going and go where I'm looking: camera-up map for unmanned aerial vehicles BIBAFull-Text 275-276
  R. Brian Valimont; Sheryl L. Chappell
To optimize UAV reconnaissance operations, direction of viewing and direction of travel must be allowed to diverge. Our challenge was to design a control and display strategy to allow the operator to easily look where they're going, go where they're looking, and look and go in different directions. Two methods of control were devised to align traveling forward, viewing forward and commanding forward. The operator can command the unmanned aerial vehicle (UAV) to turn to the camera direction or command the camera to point in line with the direction of travel (eyes forward). We have also introduced a new camera-up map orientation. The operator can easily cycle through North-up, track-up, and camera-up to provide the best link between the exo-centric and ego-centric frames of reference. Ego-centric and exo-centric perspectives allow the operator to combine or separate the vehicle's movement and the camera's view to optimize the search task while maintaining situation awareness of flight hazards.
Head pose estimation for a domestic robot BIBAFull-Text 277-278
  David van der Pol; Raymond H. Cuijpers; James F. Juola
Gaze direction is an important communicative cue. In order to use this cue for human-robot interaction, software needs to be developed that enables the estimation of head pose. We designed an application that makes a good estimate of head pose and, contrary to earlier head pose estimation approaches, works under non-optimal lighting conditions. Initial results show that our approach, which uses multiple networks trained on differing datasets, gives a good estimate of head pose and works well in poor lighting conditions and with low-resolution images. We validated our head pose estimation method using a custom-built database of images of human heads, with the actual head poses measured using a trakStar (Ascension Technologies) six-degrees-of-freedom sensor. The head pose estimation algorithm allows us to assess a person's focus of attention, which allows robots to react in a timely fashion to dynamic human communicative cues.
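   One plausible way to combine several networks trained on differing datasets (a sketch under an assumed interface, not the paper's implementation) is a confidence-weighted average of their predictions:

      import numpy as np

      def fused_head_pose(nets, image):
          # each net is assumed to expose predict(image) -> (pan, tilt, confidence)
          preds = np.array([net.predict(image) for net in nets])   # shape (N, 3)
          w = preds[:, 2] / preds[:, 2].sum()
          return w @ preds[:, :2]                                  # weighted mean (pan, tilt)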
DOMER: a wizard of oz interface for using interactive robots to scaffold social skills for children with autism spectrum disorders BIBAFull-Text 279-280
  Michael Villano; Charles R. Crowell; Kristin Wier; Karen Tang; Brynn Thomas; Nicole Shea; Lauren M. Schmitt; Joshua J. Diehl
This report describes the development of a prototypical Wizard of Oz, graphical user interface to wirelessly control a small, humanoid robot (Aldebaran Nao) during a therapy session for children with Autism Spectrum Disorders (ASD). The Dynamically Operated Manually Executed Robot interface (DOMER) enables an operator to initiate pre-developed behavior sequences for the robot as well as access the text-to-speech capability of the robot in real-time interactions between children with ASD and their therapist. Preliminary results from a pilot study suggest that the interface enables the operator to control the robot with sufficient fidelity such that the robot can provide positive feedback, practice social dialogue, and play the game, "Simon Says" in a convincing and engaging manner.
Between real-world and virtual agents: the disembodied robot BIBAFull-Text 281-282
  Thibault Voisin; Hirotaka Osawa; Seiji Yamada; Michita Imai
In this study, we propose a disembodied real-world agent and examine the influence of this disembodiment on the social separation between the user and the agent. To give the user a cue about the robot's presence and to make visual feedback possible, we use independent robotic body parts that mimic human hands and eyes. The robot is also able to share real-world space with the user and react to the user's presence through 3D detection and oral communication. We can thus obtain an agent with a strong presence while keeping good space efficiency, and as a result remove existing social barriers.
An android in the field BIBAFull-Text 283-284
  Astrid M. von der Pütten; Nicole C. Krämer; Christian Becker-Asano; Hiroshi Ishiguro
Since most robots are not easily deployable in real-life scenarios, only a few studies have investigated users' behavior towards humanoids or androids in a natural environment. We present an observational field study with data on unscripted interactions between humans and the android robot "Geminoid HI-1". First results show that almost half of the subjects mistook Geminoid HI-1 for a human. Even those who recognized the android as a robot showed interest rather than negative emotions and explored the robot's capabilities.
A human detection system for proxemics interaction BIBAFull-Text 285-286
  Xiao Wang; Xavier Clady; Consuelo Granata
In this paper, we present a human detection system for a domestic robot. A leg detector based on a 2D laser scanner and a vision-based body detector are combined using a grid fusion strategy. This approach has been evaluated on a domestic robot. Furthermore, we propose a methodology to evaluate it with respect to proxemics that could be generalized to other robot perceptive functions.
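   A sketch of one common grid fusion scheme (independent-evidence odds fusion; the paper's exact rule may differ): each detector contributes a per-cell probability of a human, and cells are combined multiplicatively in odds space:

      import numpy as np

      def fuse(leg_grid, body_grid, eps=1e-6):
          # both grids hold P(human present) per cell, in [0, 1]
          odds = ((leg_grid / np.clip(1 - leg_grid, eps, None)) *
                  (body_grid / np.clip(1 - body_grid, eps, None)))
          return odds / (1 + odds)      # back to probabilities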
Human visual augmentation using wearable glasses with multiple cameras and information fusion of human eye tracking and scene understanding BIBAFull-Text 287-288
  Seung-Ho Yang; Hyun-Woo Kim; Min Young Kim
A smart wearable robot glasses system is proposed to assist human visual augmentation in daily life, providing users with refined visual recognition results derived from the system's multiple input images. It consists of a glasses-type wearable device with a front-view camera, an eye-view camera, a mounted display, an earphone, and a computing unit for signal processing. Scene understanding on the front-view camera image is computationally accelerated with the support of the eye-view camera, which monitors the user's eye position: the user's visual intention and attention in a given situation are inferred from the eye viewing direction estimated from this monitoring. The proposed device can be used to augment human visual capability in various daily life applications.
Rhythmic reference of a human while a rope turning task BIBAFull-Text 289-290
  Kenta Yonekura; Chyon Hae Kim; Kazuhiro Nakadai; Hiroshi Tsujino; Shigeki Sugano
This paper addresses rhythmic reference in physical human-robot interaction. Humans refer to a rhythm drawn from multiple sensing modalities when turning a rope synchronously with another human. This study verifies the hypothesis that some humans mix the rhythms of several modalities into a single rhythm (rhythmic reference). Six participants, four males and two females, 21-23 years old, took part in eight experiments examining the hypothesis. In each experiment, we masked each participant's perception using one of eight combinations of three kinds of masks: an eye-mask, headphones, and a force mask. Each participant interacted with an operator who turned a rope at a constant frequency. As a result, participants' control error increased as the number of masks increased, regardless of which modalities were masked. The result strongly supports our hypothesis.
A relation between young children's computer utilization and their use of education robots BIBAFull-Text 291-292
  Hyunmin Yoon
This study sought to examine the relationship between young children's computer use and their use of robots. To that end, in July 2009, a survey was conducted targeting 36 parents and 2 teachers of 3- and 5-year-old children at G Kindergarten in Gyeonggi Province, South Korea, which uses education robots for young children. Questionnaires on the children's computer use were completed by their parents, and surveys of the children's robot use in kindergarten were based on teachers' evaluations. The results indicated that age and gender did not affect the young children's use of robots (utilization frequency and utilization capability), and that the children's computer use (use or nonuse, time of use, main functions used, and utilization capability) was not related to their use of robots (utilization frequency and utilization capability). In conclusion, young children used robots regardless of their age or gender, and their computer habits did not carry over to their use of robots, even though computers share characteristics with robots.
MAWARI: an interactive social interface BIBAFull-Text 293-294
  Yuta Yoshiike; Ravindra S. De Silva; Michio Okada
In this paper, we propose a MAWARI-based social interface as an interactive social medium for broadcasting information to users. The interface consists of three creatures (MAWARIs) and is designed according to minimalist design concepts. MAWARI is a small-scale robot that expresses attractive social cues solely through body gestures. The participant's role is reduced to that of a bystander when the interface is in a passive social state. In this context, the user does not need to participate in the conversation but still gains information without exerting effort (i.e., with less conversational workload).
When the robot criticizes you...: self-serving bias in human-robot interaction BIBAFull-Text 295-296
  Sangseok You; Jiaqi Nie; Kiseul Suh; S. Shyam Sundar
This study explores how human users respond to feedback and evaluation from a robot. A between-subjects experiment was conducted using the Wizard of Oz method, with 63 participants randomly assigned to one of three evaluations (good vs. neutral vs. bad) following a training session. When participants attempted to reproduce the physical motion taught by the robot, the robot gave them a verbal evaluation of their performance. They showed a strong negative response to the robot when it gave a bad evaluation, while showing positive attraction when it gave a good or neutral evaluation. Participants tended to dismiss criticism from the robot and attribute blame to the robot, while claiming credit for themselves when their performance was rated positively. These results have theoretical implications for the psychology of self-serving bias and practical implications for designing and deploying trainer robots as well as conducting user studies of such robots.

Engagement

Vision-based contingency detection BIBAFull-Text 297-304
  Jinhan Lee; Jeffrey F. Kiser; Aaron F. Bobick; Andrea L. Thomaz
We present a novel method for the visual detection of a contingent response by a human to the stimulus of a robot action. Contingency is defined as a change in an agent's behavior within a specific time window in direct response to a signal from another agent; detection of such responses is essential to assess the willingness and interest of a human in interacting with the robot. Using motion-based features to describe the possible contingent action, our approach assesses the visual self-similarity of video subsequences captured before the robot exhibits its signaling behavior and statistically models the typical graph-partitioning cost of separating an arbitrary subsequence of frames from the others. After the behavioral signal, the video is similarly analyzed and the cost of separating the after-signal frames from the before-signal sequences is computed; a lower-than-typical cost indicates a likely contingent reaction. We present a preliminary study in which data were captured and analyzed for algorithmic performance.
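   A rough sketch of the statistical test (simplified; the paper works with graph partitioning over motion features, and the similarity measure here is an assumption): compare the cost of cutting the after-signal frames out of the sequence against the typical cost of cutting out an arbitrary before-signal window:

      import numpy as np

      def partition_cost(features, window):
          # features: (T, D) per-frame motion descriptors; window: frames to cut out
          inside = features[window]
          outside = np.delete(features, np.arange(window.start, window.stop), axis=0)
          sim = np.exp(-np.linalg.norm(inside[:, None] - outside[None, :], axis=2))
          return sim.sum()              # cut cost ~ cross-similarity of the two sides

      def is_contingent(before, after, n_ref=20):
          T, w = len(before), len(after)
          refs = [partition_cost(before, slice(s, s + w))
                  for s in np.random.randint(0, T - w, n_ref)]
          cost = partition_cost(np.vstack([before, after]), slice(T, T + w))
          return cost < np.mean(refs) - 2 * np.std(refs)   # unusually cheap cut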
Automatic analysis of affective postures and body motion to detect engagement with a game companion BIBAFull-Text 305-312
  Jyotirmay Sanghvi; Ginevra Castellano; Iolanda Leite; André Pereira; Peter W. McOwan; Ana Paiva
The design of an affect recognition system for socially perceptive robots relies on representative data: human-robot interaction in naturalistic settings requires an affect recognition system to be trained and validated with contextualised affective expressions, that is, expressions that emerge in the same interaction scenario as the target application. In this paper we propose an initial computational model to automatically analyse human postures and body motion to detect the engagement of children playing chess with an iCat robot that acts as a game companion. Our approach is based on vision-based automatic extraction of expressive postural features from videos capturing the behaviour of the children from a lateral view. An initial evaluation, conducted by training several recognition models with contextualised affective postural expressions, suggests that patterns of postural behaviour can be used to accurately predict the engagement of the children with the robot, making our approach suitable for integration into an affect recognition system for a game companion in a real-world scenario.
A robotic game to evaluate interfaces used to show and teach visual objects to a robot in real world condition BIBAFull-Text 313-320
  Pierre Rouanet; Fabien Danieau; Pierre-Yves Oudeyer
In this paper, we present a real-world user study of four interfaces designed to teach new visual objects to a social robot. The study was designed as a robotic game in order to maintain the user's motivation throughout the experiment. Of the four interfaces, three were based on mediator objects: an iPhone, a Wiimote, and a laser pointer. These also provided the users with different kinds of feedback about what the robot perceives. The fourth was a gesture-based interface with a Wizard-of-Oz recognition system, added to compare our mediator interfaces with a more natural interaction. We studied in particular the impact of the interfaces on the quality of the learning examples and on usability. We showed that providing non-expert users with feedback about what the robot perceives is needed if one is interested in robust interaction. In particular, the iPhone interface allowed non-expert users to provide better learning examples thanks to its complete visual feedback. Furthermore, we also studied the users' gaming experience and found that despite its lower usability, the gesture interface was rated as entertaining as the other interfaces and increased the users' feeling of cooperating with the robot. Thus, we argue that this kind of interface could be well suited to robotic games.
Sociable spotlights: a flock of interactive artifacts BIBAFull-Text 321-322
  Naoki Ohshima; Yuta Yamaguchi; P. Ravindra S. De Silva; Michio Okada
We investigate the potential of sociable spotlights (sociable artifacts) to change participants' roles by providing turn-yielding signals at the Transition Relevance Place (TRP) through changes in the artifacts' behaviors (the combined effect of colors and movements) that signal the state of the conversation.

Engagement and proxemics

Automated detection and classification of positive vs. negative robot interactions with children with autism using distance-based features BIBAFull-Text 323-330
  David Feil-Seifer; Maja Mataric
Recent feasibility studies involving children with autism spectrum disorders (ASD) interacting with socially assistive robots have shown that some children have positive reactions to robots, while others may have negative reactions. It is unlikely that children with ASD will enjoy any robot 100% of the time. It is therefore important to develop methods for detecting negative child behaviors in order to minimize distress and facilitate effective human-robot interaction. Our past work has shown that negative reactions can be readily identified and classified by a human observer from overhead video data alone, and that an automated position tracker combined with human-determined heuristics can differentiate between the two classes of reactions. This paper describes and validates an improved, non-heuristic method for determining if a child is interacting positively or negatively with a robot, based on Gaussian mixture models (GMM) and a naive-Bayes classifier of overhead camera observations. The approach achieves a 91.4% accuracy rate in classifying robot interaction, parent interaction, avoidance, and hiding against the wall behaviors and demonstrates that these classes are sufficient for distinguishing between positive and negative reactions of the child to the robot.
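   The classifier structure can be sketched as follows (feature names and component counts are assumptions, not the paper's configuration): fit one Gaussian mixture per behavior class over the distance-based features, then pick the maximum-likelihood class:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def train(features_by_class, n_components=3):
          # features_by_class: {class label: (N, d) array of distance features}
          return {c: GaussianMixture(n_components).fit(X)
                  for c, X in features_by_class.items()}

      def classify(models, x):
          # x: e.g., child-robot distance, child-parent distance, distance to wall
          scores = {c: m.score_samples(np.asarray(x).reshape(1, -1))[0]
                    for c, m in models.items()}
          return max(scores, key=scores.get)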
Human-robot proxemics: physical and psychological distancing in human-robot interaction BIBAFull-Text 331-338
  Jonathan Mumm; Bilge Mutlu
To seamlessly integrate into the human physical and social environment, robots must display appropriate proxemic behavior -- that is, follow societal norms in establishing their physical and psychological distancing with people. Social-scientific theories suggest competing models of human proxemic behavior, but all conclude that individuals' proxemic behavior is shaped by the proxemic behavior of others and the individual's psychological closeness to them. The present study explores whether these models can also explain how people physically and psychologically distance themselves from robots and suggest guidelines for future design of proxemic behaviors for robots. In a controlled laboratory experiment, participants interacted with Wakamaru to perform two tasks that examined physical and psychological distancing of the participants. We manipulated the likeability (likeable/dislikeable) and gaze behavior (mutual gaze/averted gaze) of the robot. Our results on physical distancing showed that participants who disliked the robot compensated for the increase in the robot's gaze by maintaining a greater physical distance from the robot, while participants who liked the robot did not differ in their distancing from the robot across gaze conditions. The results on psychological distancing suggest that those who disliked the robot also disclosed less to the robot. Our results offer guidelines for the design of appropriate proxemic behaviors for robots so as to facilitate effective human-robot interaction.

Humans teaching robots

Human and robot perception in large-scale learning from demonstration BIBAFull-Text 339-346
  Christopher Crick; Sarah Osentoski; Graylin Jay; Odest Chadwicke Jenkins
We present a study of a robotic learning-from-demonstration system capable of collecting large amounts of human-robot interaction data through a web-based interface. We examine the effect of different perceptual mappings between the human teacher and the robot on learning from demonstration. We show that humans are significantly more effective at teaching a robot to navigate a maze when presented with information limited to the robot's perception of the world, even though their task performance measurably suffers compared with users provided with a natural and detailed raw video feed. Robots trained on such demonstrations learn more quickly, perform more accurately, and generalize better. We also demonstrate a set of software tools for enabling internet-mediated human-robot interaction and gathering the large datasets that such crowdsourcing makes possible.
Robots that express emotion elicit better human teaching BIBAFull-Text 347-354
  Dan Leyzberg; Eleanor Avrunin; Jenny Liu; Brian Scassellati
Does the emotional content of a robot's speech affect how people teach it? In this experiment, participants were asked to demonstrate several "dances" for a robot to learn. Participants moved their bodies in response to instructions displayed on a screen behind the robot. Meanwhile, the robot faced the participant and appeared to emulate the participant's movements. After each demonstration, the robot received an accuracy score and the participant chose whether or not to demonstrate that dance again. Regardless of the participant's input, however, the robot's dancing and the scores it received were arranged in advance and constant across all participants. The only variation between groups in this study was what the robot said in response to its scores. Participants saw one of three conditions: appropriate emotional responses, often-inappropriate emotional responses, or apathetic responses. Participants that taught the robot with appropriate emotional responses demonstrated the dances, on average, significantly more frequently and significantly more accurately than participants in the other two conditions.
Usability of force-based controllers in physical human-robot interaction BIBAFull-Text 355-362
  Marta Lopez Infante; Ville Kyrki
Learning from demonstration is an invaluable skill for a robot acting in a human-populated natural environment, allowing the teaching of new skills without tedious and complex manual programming. Physical human-robot interaction, where the human is in physical contact with the robot, is a promising approach for teaching manipulation skills in particular. This paper studies the human side of physical human-robot interaction, in the context of a human physically guiding a robot through a desired set of motions. The paper addresses the question of which kind of robot response is preferable for the human user. In addition, different approaches for the guidance are described and the relevant technical challenges are discussed. The main finding of the user study is that a trade-off is needed between the conflicting goals of naturalness of motion and positioning accuracy.

Multi-robot control

Scalable target detection for large robot teams BIBAFull-Text 363-370
  Huadong Wang; Andreas Kolling; Nathan Brooks; Sean Owens; Shafiq Abedin; Paul Scerri; Pei-ju Lee; Shih-Yi Chien; Michael Lewis; Katia Sycara
In this paper, we present an asynchronous display method, coined the image queue, which allows operators to search through large amounts of data gathered by autonomous robot teams. We discuss and investigate the advantages of an asynchronous display for foraging tasks, with emphasis on urban search and rescue. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment in order to identify targets of interest such as injured victims, filling the gap for comprehensive and scalable displays that provide a network-centric perspective on UGVs. We compared the image queue to a traditional synchronous display with live video feeds and found that the image queue reduces errors and operator workload. Furthermore, it disentangles target detection from concurrent system operations and enables a call-center approach to target detection. With such an approach we can scale up to very large multi-robot systems gathering huge amounts of data that are then distributed to multiple operators.
Effects of unreliable automation and individual differences on supervisory control of multiple ground robots BIBAFull-Text 371-378
  Jessie Y. C. Chen; Michael J. Barnes; Caitlin Kenny
A military multitasking environment was simulated to examine the effects of unreliable automation on the performance of robotics operators. The main task was to manage a team of four ground robots with the assistance of RoboLeader, an intelligent agent capable of coordinating the robots and changing their routes based upon developments in the mission environment. RoboLeader's recommendations were manipulated to be either false-alarm prone or miss prone, with a reliability level of either 60% or 90%. The visual density of the targeting environment was manipulated by the presence or absence of friendly soldiers. Results showed that the type of RoboLeader unreliability (false-alarm vs. miss prone) affected operator's performance of tasks involving visual scanning (target detection, route editing, and situation awareness). There was a consistent effect of visual density for multiple performance measures. Participants with higher spatial ability performed better on the two tasks that required the most visual scanning (target detection and route editing). Participants' attentional control impacted their overall multitasking performance, especially during their execution of the secondary tasks (communication and gauge monitoring).
How many social robots can one operator control? BIBAFull-Text 379-386
  Kuanhao Zheng; Dylan F. Glas; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
This study explores the nature of the multi-robot control problem for social robots. It begins by modeling the overall structure of a human-robot team for social interactions, and implements the model for dialog-based interactions. Operator activity during control of a social robot is studied, and customer satisfaction is proposed as an important metric for evaluating the performance of a human-robot team interacting socially with customers. Based on the model, the fan-out of a social robot team can be calculated, and the performance of the team estimated by simulation. A field trial in a shopping mall demonstrated a successful deployment of social robots for a real-world application, with performance ensured prior to installation using our modeling and simulation approach.
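   For intuition, the classic fan-out estimate from the supervisory-control literature (a simpler formula than the paper's customer-satisfaction-based model) divides a robot's total duty cycle by the attention it demands:

      def fan_out(interaction_time, neglect_time):
          # each robot needs the operator for interaction_time seconds out of
          # every interaction_time + neglect_time seconds of operation
          return (interaction_time + neglect_time) / interaction_time

      print(fan_out(interaction_time=20.0, neglect_time=100.0))   # -> 6.0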
Survivor Buddy: a social medium robot BIBAFull-Text 387-388
  Zachary Henkel; Negar Rashidi; Aaron Rice; Robin Murphy
This video describes the Survivor Buddy social medium robot.

Video session

COLUMN: dynamic of interpersonal coordination BIBAFull-Text 389-390
  Yasutaka Takeda; Yuta Yoshiike; Ravindra S. De Silva; Michio Okada
In this study we develop the Core-Less Unformed Machine (COLUMN), a novel transformable robotic platform, to explore how visually mediated information helps three participants coordinate with one another (interpersonal coordination) and establish the connectedness needed to produce COLUMN's behaviors (transformable rolling motions).
The life of iCub, a little humanoid robot learning from humans through tactile sensing BIBAFull-Text 393-394
  Eric Sauser; Brenna Argall; Aude Billard
Nowadays, programming by demonstration (PbD) has become an important paradigm for policy learning in robotics [3]. The idea of having robots capable of learning from humans through natural communication means is indeed fascinating. As an extension of the traditional PbD learning scheme, where robots only learn by observing a human teacher, our work follows the recently suggested principle of policy refinement and reuse through interactive corrective feedback [1].
   However, to be responsive to such feedback, robots must be capable of sensing the world, especially human contact. Our work focuses on the sense of touch. Its integration in robotic applications has many advantages such as: a) safer and more natural interactions with objects and humans, b) improvement and simplification of the control mechanisms for human-robot interaction and object manipulation [2].
   Our video reports on two experimental studies conducted with the iCub, a 53-degree-of-freedom humanoid robot endowed with tactile sensing on its forearms and fingertips. a) In a hand-positioning task, the robot is shown how to bring its hand to the location where an object should be grasped; a wrong placement or a wrong approach to the target is corrected by the teacher through a tactile interface [1]. b) In a reactive grasping task, the robot is taught how to use its fingertip sensors to adapt and maintain its grasp in the face of external perturbations on the grasped object.
   The results of both our experiments show how tactile sensing can be utilized effectively to learn robust control policies through human coaching, by enabling a) online policy refinement and reuse, and b) rapid adaptation to external perturbations.
The floating head experiment BIBAFull-Text 395-396
David St-Onge; Nicolas Reeves; Christian Kroos; Maher Hanafi; Damith Herath; Stelarc
On October 26th, 2010, a unique HRI-artistic public experiment took place at the Usine C theater in Montreal. It was the result of a months-long collaboration between the Montreal-based lab hosting the [ VOILES | SAILS ] research-creation platform (Self-Assembling Intelligent Lighter-than-air Structures) and the well-known Australian artist Stelarc and his team, who work on artificial agents' embodiment and robotic behaviour modeling.
   The [ VOILES | SAILS ] project, which develops autonomous flying robots with geometric shapes, was born from architect and artist Nicolas Reeves' desire to evoke the age-old myth of an architecture freed from the law of gravity. These aerobots are meant to be used in artistic installations and performances.
   An aerobot was combined with Stelarc's artwork "The Prosthetic Head", which consists of a 5-meter-high projection of a 3D avatar linked to a chatbot-like discussion engine that interacts with visitors. The development of Stelarc's artworks is supported by several university labs, among which the MARCS Auditory Laboratory at the University of Western Sydney plays a major role. Its team has transposed this artwork into "The Articulated Head", a new version embodied via an LCD screen attached to the end of a 6-DoF industrial robotic arm. Thanks to various sensing devices (stereo-camera for people tracking, sound location, proprioception, proximity sensors...), they developed an attention model to control the robot's behaviour. This model, called THAMBS (Thinking Head Attention Model and Behavioral System), adopts a modular approach that allows its adaptation to different sensing abilities and to future robot embodiments.
   Through intensive international collaboration between Stelarc, the THAMBS team, and the [ VOILES | SAILS ] team, we realized a performance during which Stelarc's synthetic head was projected onto a large floating cube, whose movements and displacements in the air conveyed the head's emotions and impressions to the audience. This ambitious collaboration between the two research programs on the creation of a unique new embodiment of "The Prosthetic Head" led to relevant observations that will be the subject of a future paper.
Chief cook and Keepon in the bot's funk BIBAFull-Text 397-398
  Eric Sauser; Marek Michalowski; Aude Billard; Hideki Kozima
Over the years, robots have been developed to help humans in their everyday life, from preparing food to autism therapy [2]. To accomplish their tasks, in addition to their engineered skills, today's robots are now learning by observing humans and interacting with them [1]. Therefore, one may expect that one day robots will develop a form of consciousness and a desire for freedom. Hopefully, this desire will come with a wish to become an integral part of our human society.
   Until we can test this hypothesis, we present a fictional adventure of our robot friends: during an official human-robot interaction challenge, Keepon [2] and Chief Cook (a.k.a. Hoap-3) [1] decide to escape their original duties and join forces to draw humans into an entertaining and interactive activity that they often forget to practice: dancing. Indeed, is there any better way for robots to establish a solid communication channel with humans, so that the traditional master-slave relation may turn into friendship?
Caregiving intervention for children with autism spectrum disorders using an animal robot BIBAFull-Text 399-400
  Kwangsu Cho; Christine Shin
In this paper, we explore the possibility of using an animal robot to teach social behaviors, especially caregiving behavior, to children with Autism Spectrum Disorders (ASD) and Pervasive Developmental Disorder (PDD).
Humanoid robot control using depth camera BIBAFull-Text 401-402
  Halit Bener Suay; Sonia Chernova
Most human interactions with the environment depend on our ability to navigate freely and to use our hands and arms to manipulate objects. Developing natural means of controlling these abilities in humanoid robots can significantly broaden the usability of such platforms. An ideal interface for humanoid robot teleoperation would be inexpensive and person-independent, require no wearable equipment, and be easy to use with little or no user training.
   This work presents a new humanoid robot control and interaction interface that uses depth images and skeletal tracking software to control the navigation, gaze and arm gestures of a humanoid robot. To control the robot, the user stands in front of a depth camera and assumes a specific pose to initiate skeletal tracking. The initial location of the user automatically becomes the origin of the control coordinate system. The user can then use leg and arm gestures to turn the robot's motors on and off, to switch operation modes and to control the behavior of the robot. We present two control modes. The body control mode enables the user to control the arms and navigation direction of the robot using the person's own arms and location, respectively. The gaze direction control mode enables the user to control the focus of attention of the robot by pointing with one hand, while giving commands through gestures of the other hand. We present a demonstration of this interface, in which a combination of these two control modes is used to successfully enable an Aldebaran Nao robot to carry an object from one location to another. Our work makes use of the Microsoft Kinect depth sensor.
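   The two control modes can be pictured with the sketch below. The joint names and command format are assumptions made for illustration, not the authors' code: displacement from the calibration origin drives navigation, a pointing hand steers gaze, and a clearly raised foot toggles modes.

     def update(skeleton, origin, mode):
         """skeleton: joint name -> (x, y, z) in meters; names are illustrative."""
         command = {}
         if mode == "body":
             # Body control: displacement from the calibration origin drives walking.
             command["walk"] = (skeleton["torso"][2] - origin[2],   # forward
                                skeleton["torso"][0] - origin[0])   # sideways
         else:
             # Gaze control: one hand points where the robot should look.
             command["look_at"] = skeleton["right_hand"]
         # A clearly raised left foot toggles between the two modes.
         if skeleton["left_foot"][1] > skeleton["right_foot"][1] + 0.2:
             mode = "gaze" if mode == "body" else "body"
         return command, mode

     pose = {"torso": (0.1, 0.9, 1.5), "right_hand": (0.4, 1.2, 1.3),
             "left_foot": (0.0, 0.0, 1.5), "right_foot": (0.2, 0.0, 1.5)}
     print(update(pose, origin=(0.0, 0.9, 2.0), mode="body"))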
ShakeTime!: a deceptive robot referee BIBAFull-Text 403-404
  Marynel Vázquez; Alexander May; Aaron Steinfeld; Wei-Hsuan Chen
We explore deception in the context of a multi-player robotic game. The robot does not participate as a competitor, but is in charge of declaring who wins or loses every round. The robot was designed to deceive game players by imperceptibly balancing how much they won, with the hope that this behavior would make them play longer and with more interest. Inducing false beliefs about who wins the game was accomplished by leveraging paradigms about robot behavior and robots' superior perceptual abilities. Results include the finding that participants were more accepting of lying by our robot than of lying by robots in general. Some participants found the balancing strategy favorable after being debriefed, while others showed less interest due to a perceived unfairness.
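   One plausible reading of such a balancing strategy is sketched below. The logic is invented for illustration; the paper does not publish its algorithm: the referee declares the trailing player the winner with a probability that grows with the score gap, keeping outcomes near parity without being obviously rigged.

     import random

     def declare_winner(scores):
         """scores: dict mapping player -> rounds won so far."""
         trailing = min(scores, key=scores.get)
         gap = max(scores.values()) - scores[trailing]
         bias = min(0.5 + 0.1 * gap, 0.9)  # larger gap -> stronger correction
         if random.random() < bias:
             return trailing
         return random.choice(list(scores))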
Tots on bots BIBAFull-Text 405-406
  Madeline E. Smith; Sharon Stansfield; Carole W. Dennis
Tots on Bots is a research project developing robot-based mobility platforms for children with motor impairments. Independent mobility is crucial in the development of typical infants and is missed by children with physical disabilities. We aim to provide mobility to children as young as six months old using robot-powered devices. Children use the system by sitting on top of a Wii Fit Balance Board mounted on a Pioneer 3 robot. Our software allows infants to "drive" by leaning to one side or reaching, with sonar sensors and a remote control for added safety. This video explains the need for such a system and how it is built, shows clips of children during our pilot testing phase, and discusses future work.
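   A rough sketch of the lean-to-drive mapping follows; the thresholds, gains, and sonar guard distance are illustrative values, not the project's code.

     DEADZONE = 0.05    # ignore small postural sway
     GAIN = 0.5         # velocity per unit of normalized lean
     MIN_SONAR_M = 0.3  # sonar guard distance, meters

     def lean_to_command(cop_x, cop_y, sonar_min_m):
         """cop_x, cop_y: the child's center of pressure on the board, each in [-1, 1]."""
         forward = GAIN * cop_y if abs(cop_y) > DEADZONE else 0.0
         turn = GAIN * cop_x if abs(cop_x) > DEADZONE else 0.0
         if sonar_min_m < MIN_SONAR_M:    # obstacle ahead: block forward motion
             forward = min(forward, 0.0)
         return forward, turn

     print(lean_to_command(0.02, 0.4, 1.5))  # gentle forward lean -> (0.2, 0.0)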
A wheelchair which can automatically move alongside a caregiver BIBAFull-Text 407-408
  Yoshinori Kobayashi; Yuki Kinpara; Erii Takano; Yoshinori Kuno; Keiichi Yamazaki; Akiko Yamazaki
This video presents our ongoing work developing a robotic wheelchair that can move automatically alongside a caregiver. Recently, several robotic/intelligent wheelchairs possessing autonomous functions for reaching a goal and/or user-friendly interfaces have been proposed. Although ideally wheelchair users may wish to go out alone, they are often accompanied by caregivers. Therefore, it is important to consider how to reduce caregivers' load, support their activities, and facilitate communication between the wheelchair user and caregiver. Moreover, a sociologist pointed out that when a wheelchair user is accompanied by a companion, the latter is inevitably seen as a caregiver [1]. In other words, the equality of the relationship is publicly undermined when the wheelchair is pushed by a companion. Hence, we propose a robotic wheelchair which can move alongside a caregiver or companion and facilitate easy communication between them and the wheelchair user. However, it is not always desirable for a caregiver to be alongside a wheelchair. For instance, a caregiver may step in front of the wheelchair to open a door, and pedestrians may be encumbered by the wheelchair and companion if they move side-by-side in a narrow corridor. To cope with these problems, our robotic wheelchair can move alongside a caregiver collaboratively depending on the circumstances. A laser range sensor is employed to track the caregiver and observe the environment around the wheelchair [2]. When obstacles are detected in the wheelchair's path of motion, it adjusts its position accordingly. In the video we demonstrate these functions of our robotic wheelchair. We are now conducting experiments to confirm the effectiveness of our wheelchair at an elderly care center in Japan.
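   The formation-switching idea can be sketched as follows. The offsets, width threshold, and inputs are hypothetical, not the authors' implementation: the wheelchair plans a goal beside the tracked caregiver when space allows and drops behind in narrow passages.

     import math

     SIDE_OFFSET = 0.8    # meters to the caregiver's right
     BEHIND_OFFSET = 1.0  # meters behind, for narrow corridors

     def goal_position(caregiver_xy, caregiver_heading, corridor_width_m):
         cx, cy = caregiver_xy
         if corridor_width_m > 2.0:
             # Side-by-side formation: offset perpendicular to the heading.
             angle = caregiver_heading - math.pi / 2
             return cx + SIDE_OFFSET * math.cos(angle), cy + SIDE_OFFSET * math.sin(angle)
         # Line formation when pedestrians would otherwise be encumbered.
         return (cx - BEHIND_OFFSET * math.cos(caregiver_heading),
                 cy - BEHIND_OFFSET * math.sin(caregiver_heading))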
Who explains it?: avoiding the feeling of third-person helpers in auditory instruction for older people BIBAFull-Text 409-410
  Hirotaka Osawa; Jarrod Orszulak; Kathryn M. Godfrey; Seiji Yamada; Joseph F. Coughlin
Auditory instruction is a widely used method for people of all ages because it is easy to understand. However, the added voice can disturb the user's learning during instruction because it strongly implies the support of a third-person helper. This risk increases with older people, whose confidence in their own ability may decline compared to younger people. The authors propose a method that anthropomorphizes the instructed target (a vacuum cleaner) to decrease the feeling of a third person during instruction. The authors conducted an experiment using this method to explain the features of a household appliance and evaluated the relationship between recalled features and older people's internal scales. The results show that older people remembered more features using our method and that, for female participants, internal scales increased during training. This demonstrates that our method can decrease the third-person feeling in female participants and increase the amount learned. Our findings suggest that auditory instructions may be an effective learning method for older adults.
Snappy: snapshot-based robot interaction for arranging objects BIBAFull-Text 411-412
  Sunao Hashimoto; Andrei Ostanin; Masahiko Inami; Takeo Igarashi
A photograph is a very useful tool for describing configurations of real-world objects to others. People immediately understand information such as "what is the target object" and "where is the target position" by looking at a photograph, even without verbal descriptions. Our goal was to leverage these features of photographs to enrich human-robot interaction. We propose using photographs as a front-end between a human and a home robot system, a method we call "Snappy". The user takes a photo to record a target real-world configuration involving a task and later shows it to the system to make it physically execute the task. We developed a prototype system in which the user took a photo of a dish layout on a table and later showed it to the system to have robots deliver and arrange the dishes in the same way.
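   In spirit, the photo acts as a declarative goal state. A toy sketch with assumed data structures (not Snappy's code): dishes recognized in the photo are matched against dishes on the table, and each out-of-place dish yields a move command.

     def plan_arrangement(photo_layout, table_layout):
         """Both arguments map a recognized dish id to its (x, y) position.

         Returns (dish_id, target_xy) commands for dishes out of place.
         """
         return [(dish, target)
                 for dish, target in photo_layout.items()
                 if table_layout.get(dish) not in (None, target)]

     photo = {"plate": (0.2, 0.3), "cup": (0.5, 0.3)}
     table = {"plate": (0.7, 0.1), "cup": (0.5, 0.3)}
     print(plan_arrangement(photo, table))  # [('plate', (0.2, 0.3))]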
Robot games for elderly BIBAFull-Text 413-414
  Søren Tranberg Hansen
This video presents a study on how a physical game based on a mobile robot can be used as a persuasive tool for promoting physical activity among the elderly. The goal of the game is to take a ball from a robot and then try to hand it back while the robot moves. The robot records the behavior patterns of each individual player and gradually adapts the challenge of the game to the player's skill. The game was investigated in two independent field studies. The primary goal was to observe how the robot adapts to players with different mobility problems; the secondary goal was to learn about different play patterns and gather ideas for future improvements of the game. The video shows examples of how the elderly played with the robot and illustrates the variety of play styles.
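   The adaptation loop might look like the following sketch; the target rate and step size are invented, and the paper's exact scheme may differ. The robot tunes its speed so each player succeeds at roughly a target rate.

     TARGET_RATE = 0.6  # desired fraction of successful ball hand-backs
     STEP = 0.05        # speed adjustment per round, m/s

     def adapt_speed(speed, successes, attempts):
         rate = successes / attempts if attempts else TARGET_RATE
         if rate > TARGET_RATE:
             return speed + STEP            # player copes well: speed up
         return max(0.1, speed - STEP)      # player struggles: slow down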
Selecting and commanding groups in a multi-robot vision based system BIBAFull-Text 415-416
  Brian Milligan; Greg Mori; Richard T. Vaughan
We present a novel method for a human user to select groups of robots without using any external instruments. We use computer vision techniques to read hand gestures from a user and use the hand gesture information to select single or multiple robots from a population and assign them to a task. To select robots the user simply draws a circle in the air around the robots that the user wants to command. Once the user selects the group of robots, he or she can send them to a location by pointing to a target location.
   To achieve this, we use cameras mounted on mobile robots to find the user's face and then track his or her hand. Our method exploits an observation from human-robot interaction research on pointing: a human's pointing target is best inferred using the line from the person's eyes to their extended hand [1]. When the user circles robots, the projected eye-to-hand lines form a cone-like shape that envelops the selected robots. From a 2D camera mounted on a robot, this cone is seen with the user's face as the vertex and the hand movement as a circular slice of the cone. We show in the video how each robot can tell whether it has been selected by testing whether the face lies within the circle made by the hand: if the face is within the circle, the robot was selected; if it is outside, it was not.
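   The selection test itself reduces to a point-in-polygon check in the image plane, roughly as sketched here (illustrative code, not the authors' implementation):

     def face_inside_hand_circle(face_xy, hand_trajectory):
         """Ray-casting point-in-polygon test on the hand's image trajectory.

         face_xy: (x, y) pixel position of the tracked face center.
         hand_trajectory: list of (x, y) hand positions forming a closed curve.
         """
         x, y = face_xy
         inside = False
         j = len(hand_trajectory) - 1
         for i, (xi, yi) in enumerate(hand_trajectory):
             xj, yj = hand_trajectory[j]
             if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                 inside = not inside
             j = i
         return inside

     square = [(0, 0), (100, 0), (100, 100), (0, 100)]
     print(face_inside_hand_circle((50, 50), square))   # True  -> selected
     print(face_inside_hand_circle((150, 50), square))  # False -> not selected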
   Following selection, the robots read a command by looking for a pointing gesture, detected as an outstretched hand. From the pointing gesture, the robots collectively infer which target is being pointed at by calculating the distance and direction of the hand relative to the face. The selected robots then travel to the target, and unselected robots can be selected and commanded as desired.
   The robots communicate their state to the user through LEDs on their chassis. When a robot is searching for the user's face, the LEDs flash to get the user's attention (as frontal faces are easiest to find). When a robot finds the user's face, the lights turn solid yellow to indicate that it is ready to be selected. When selected, the robot's LEDs turn blue to indicate that it can now be commanded. Once robots are sent off to a location, the remaining robots can be selected and assigned another task.
   We demonstrate this method working on low-powered Atom netbooks and off-the-shelf USB web cameras. This is the first working implementation of a system that allows a human to select and command groups of robots without using any external instruments.

Ontologies

Intelligent humanoid robot with Japanese Wikipedia ontology and robot action ontology BIBAFull-Text 417-424
  Shotaro Kobayashi; Susumu Tamagawa; Takeshi Morita; Takahira Yamaguchi
We propose WioNA (Wikipedia Ontology NAo) to improve HRI by integrating four elements: a Japanese speech interface, semantic interpretation, a Japanese Wikipedia Ontology, and a Robot Action Ontology. WioNA is implemented on the humanoid robot "Nao". For WioNA, we developed two ontologies: the Japanese Wikipedia Ontology and the Robot Action Ontology. The Japanese Wikipedia Ontology provides a large concept hierarchy and an instance network with many properties, built (semi-)automatically from the Japanese Wikipedia. With the Japanese Wikipedia Ontology as its knowledge base, Nao can hold dialogues with users on topics from many fields. The Robot Action Ontology, in contrast, is built by organizing Nao's performable actions to control and generate robot behavior. Aligning the Robot Action Ontology with the Japanese Wikipedia Ontology enables Nao to perform actions related to dialogue topics. To show the validity of WioNA, we describe human-robot conversation logs from two case studies whose dialogue topics are sports and a rock singer. These case studies show how well HRI proceeds in WioNA on these topics.
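   The alignment can be pictured as a lookup that climbs the concept hierarchy until a performable action is found, as in this toy sketch (invented data, not WioNA's actual ontologies):

     CONCEPT_PARENT = {"soccer": "sport", "rock_singer": "musician"}
     CONCEPT_TO_ACTION = {"sport": "kick_motion", "musician": "air_guitar_motion"}

     def action_for_topic(topic):
         concept = topic
         while concept is not None:
             if concept in CONCEPT_TO_ACTION:
                 return CONCEPT_TO_ACTION[concept]
             concept = CONCEPT_PARENT.get(concept)  # climb the hierarchy
         return "idle_gesture"

     print(action_for_topic("soccer"))  # -> kick_motion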
Using semantic technologies to describe robotic embodiments BIBAFull-Text 425-432
  Alex Juarez; Christoph Bartneck; Loe Feijs
This paper presents our approach to using semantic technologies to describe robot embodiments. We introduce a prototype implementation of RoboDB, a robot database based on semantic web technologies with the functionality necessary to store meaningful information about a robot's body structure. We present a heuristic evaluation of the system's user interface and discuss the possibilities of using the semantic information gathered in the database for applications such as building a robot ontology and developing robot middleware systems.

User preferences

Robot self-initiative and personalization by learning through repeated interactions BIBAFull-Text 433-440
  Martin Mason; Manuel C. Lopes
We have developed a robotic system that interacts with the user and, through repeated interactions, adapts to the user so that the system becomes semi-autonomous and acts proactively. In this work we show how to design a system that meets a user's preferences, show how robot proactivity can be learned, and provide an integrated system that uses verbal instructions. All these behaviors are implemented on a real platform, which is evaluated in terms of user acceptability and efficiency of interaction.
Modeling environments from a route perspective BIBAFull-Text 441-448
  Luis Yoichi Morales Saiki; Satoru Satake; Takayuki Kanda; Norihiro Hagita
Environment attributes are perceived and remembered differently according to the perspective used. In this study, two different perspectives, a survey perspective and a route perspective, are explained and discussed. This paper proposes an approach for modeling human environments from a route perspective, the perspective used when a human navigates through the environment. The process of semi-autonomous route-perspective data extraction and modeling by a mobile robot equipped with a laser sensor and a camera is detailed. Finally, as an example of a route-perspective application, a route direction robot was developed and tested in a real mall environment. Experimental results show the advantages of the proposed route-perspective model over a survey-perspective approach. Moreover, the route model's performance is comparable to that of an expert giving route guidance in the mall.
Do elderly people prefer a conversational humanoid as a shopping assistant partner in supermarkets? BIBAFull-Text 449-456
  Yamato Iwamura; Masahiro Shiomi; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
Assistive robots can be perceived in two main ways: as tools or as partners. In past research, assistive robots that offer physical assistance to the elderly have often been designed around a tool metaphor. This paper investigates the effect of two design considerations for assistive robots under a partner metaphor: conversation and robot type. The former concerns whether robots should converse with people even if the conversation is not germane to completing the task. The latter concerns whether people prefer a communication-oriented or function-oriented design for assistive robots. To test these design considerations, we selected a shopping-assistance situation in which a robot carries a shopping basket for elderly people, one typical scenario for assistive robots. A field experiment was conducted in a real supermarket in Japan where 24 elderly participants shopped with robots. The experimental results revealed that they prefer a conversational humanoid as a shopping assistant partner.

Robot touch

Touched by a robot: an investigation of subjective responses to robot-initiated touch BIBAFull-Text 457-464
  Tiffany L. Chen; Chih-Hung King; Andrea L. Thomaz; Charles C. Kemp
By initiating physical contact with people, robots can be more useful. For example, a robotic caregiver might make contact to provide physical assistance or facilitate communication. So as to better understand how people respond to robot-initiated touch, we conducted a 2x2 between-subjects experiment with 56 people in which a robotic nurse autonomously touched and wiped the subject's forearm. Our independent variables were whether or not the robot verbally warned the person before contact, and whether the robot verbally indicated that the touch was intended to clean the person's skin (instrumental touch) or to provide comfort (affective touch). On average, regardless of the treatment, participants had a generally positive subjective response. However, with instrumental touch people responded significantly more favorably. Since the physical behavior of the robot was the same for all trials, our results demonstrate that the perceived intent of the robot can significantly influence a person's subjective response to robot-initiated touch. Our results suggest that roboticists should consider this factor in addition to the mechanics of physical interaction. Unexpectedly, we found that participants tended to respond more favorably without a verbal warning. Although inconclusive, our results suggest that verbal warnings prior to contact should be carefully designed, if used at all.
Effect of robot's active touch on people's motivation BIBAFull-Text 465-472
  Kayako Nakagawa; Masahiro Shiomi; Kazuhiko Shinozawa; Reo Matsumura; Hiroshi Ishiguro; Norihiro Hagita
This paper presents the effect of a robot's active touch on improving people's motivation. For services in the education and healthcare fields, a robot may be useful for improving motivation to perform repetitive and monotonous tasks such as exercising or taking medicine. Previous research demonstrated that a user's touching of a robot improves impressions of it, but it did not clarify whether a robot's touch, especially an active touch, has enough influence on people's motivation. We implemented an active touch behavior and experimentally investigated its effect on motivation. In the experiment, a robot requested that participants perform a monotonous task, accompanied by the robot's active touch, a passive touch, or no touch. The results showed that an active touch by the robot increased the number of working actions and the amount of working time on the task. This suggests that a robot's active touch can help improve people's motivation. We believe that a robot's active touch behavior is useful for robot services such as education and healthcare.
Design and assessment of the haptic creature's affect display BIBAFull-Text 473-480
  Steve Yohanan; Karon E. MacLean
The Haptic Creature is a small, animal-like robot we have developed to investigate the role of touch in communicating emotions between humans and robots. This paper presents a study examining how successful our robot is at communicating its emotional state through touch. Results show that, regardless of the human's gender or background with animals, the robot is effective in communicating its state of arousal but less so for valence. Also included are descriptions of the design of the Haptic Creature's emotion model and suggested improvements based on results of the study.

Nonverbal interaction

Learning to interpret pointing gestures with a time-of-flight camera BIBAFull-Text 481-488
  David Droeschel; Jörg Stückler; Sven Behnke
Pointing gestures are a common and intuitive way to draw somebody's attention to a certain object. While humans can easily interpret robot gestures, the perception of human behavior using robot sensors is more difficult.
   In this work, we propose a method for perceiving pointing gestures using a Time-of-Flight (ToF) camera. To determine the intended pointing target, frequently the line between a person's eyes and hand is assumed to be the pointing direction. However, since people tend to keep the line-of-sight free while they are pointing, this simple approximation is inadequate. Moreover, depending on the distance and angle to the pointing target, the line between shoulder and hand or elbow and hand may yield better interpretations of the pointing direction. In order to achieve a better estimate, we extract a set of body features from depth and amplitude images of a ToF camera and train a model of pointing directions using Gaussian Process Regression.
   We evaluate the accuracy of the estimated pointing direction in a quantitative study. The results show that our learned model achieves far better accuracy than simple criteria like the head-hand, shoulder-hand, or elbow-hand lines.
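   A minimal version of the learning setup, using scikit-learn's Gaussian Process Regression; the feature layout, kernel, and placeholder data are assumptions, and the paper's choices may differ.

     import numpy as np
     from sklearn.gaussian_process import GaussianProcessRegressor
     from sklearn.gaussian_process.kernels import RBF

     # Rows of X: body features from the ToF camera (e.g. head, shoulder,
     # elbow, and hand positions flattened into one vector).
     # Rows of y: pointing angles (yaw, pitch) in radians.
     X_train = np.random.rand(50, 12)  # placeholder feature vectors
     y_train = np.random.rand(50, 2)   # placeholder angle targets

     gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
     gpr.fit(X_train, y_train)

     # Predict a pointing direction (with uncertainty) for a new observation.
     angles, std = gpr.predict(np.random.rand(1, 12), return_std=True)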
Using spatial and temporal contrast for fluent robot-human hand-overs BIBAFull-Text 489-496
  Maya Cakmak; Siddhartha S. Srinivasa; Min Kyung Lee; Sara Kiesler; Jodi Forlizzi
For robots to become integrated into daily tasks assisting humans, robot-human interactions will need to reach a level of fluency close to that of human-human interactions. In this paper we address the fluency of robot-human hand-overs. From an observational study with our robot HERB, we identify the key problems with a baseline hand-over action. We find that the failure to convey the intention of handing over causes delays in the transfer, while the lack of an intuitive signal to indicate the timing of the hand-over causes early, unsuccessful attempts to take the object. We propose to address these problems with the use of spatial contrast, in the form of distinct hand-over poses, and temporal contrast, in the form of unambiguous transitions to the hand-over pose. We conduct a survey to identify distinct hand-over poses and determine the pose variables with the most communicative potential for the intent of handing over. We present an experiment that analyzes the effect of the two types of contrast on the fluency of hand-overs. We find that temporal contrast is particularly useful in improving fluency by eliminating the human's early attempts.
Nonverbal robot-group interaction using an imitated gaze cue BIBAFull-Text 497-504
  Nathan Kirchner; Alen Alempijevic; Gamini Dissanayake
Ensuring that a particular, unsuspecting member of a group is the recipient of a salient-item hand-over is a complicated interaction. The robot must effectively, expediently, and reliably communicate its intentions to avert any tendency within the group towards antinormative behaviour. In this paper, we study how a robot can establish the participant roles of such an interaction using imitated social and contextual cues. We designed two gaze cues: the first was designed to discourage antinormative behaviour by individualising a particular member of the group, and the other to do the contrary. We designed and conducted a field experiment (456 participants in 64 trials) in which small groups of people (between 3 and 20) assembled in front of the robot, which then attempted to pass a salient object to a particular group member by presenting a physical cue followed by one of two variations of a gaze cue. Our results showed that presenting the individualising cue had a significant (z=3.733, p=0.0002) effect on the robot's ability to ensure that an arbitrary group member did not take the salient object and that the selected participant did.