
Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction

Fullname: HRI'09 Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction
Editors: Matthias Scheutz; François Michaud; Pamela Hinds; Brian Scassellati
Location: La Jolla, California, USA
Dates: 2009-Mar-09 to 2009-Mar-13
Publisher: ACM
Standard No: ISBN 1-60558-404-5, 978-1-60558-404-1; hcibib: HRI09
Papers: 96
Pages: 332
  1. Designing robots based on human behavior
  2. Robots as intermediaries
  3. Non-verbal communication in HRI
  4. New methods for studying HRI
  5. Modeling social interaction
  6. Situation awareness, interface design and usability
  7. Responding to autonomy
  8. HRI video abstracts
  9. HRI late-breaking abstracts
Bringing physical characters to life (pp. 1-2)
  Akhil J. Madhani
At Disney, we are storytellers, and all good stories are filled with compelling characters. One way to present these characters to audiences in immersive, 3D environments is through the use of entertainment robots, or Audio Animatronics Figures, as they have traditionally been known at Disney in attractions such as Pirates of the Caribbean.
   In this talk, I hope to give insight into the design and development of entertainment robots at Disney. In particular, I share -- from the point of view of a robot builder -- some of the guidelines distilled from Disney's tradition of hand-drawn animation as they are applied to these systems.
   As examples of characters which partake in two-way interactions with audiences via teleoperation, I discuss two newer characters. The first, Lucky the Dinosaur, was designed to roam freely through the Disney theme park environment while interacting with guests. The second, Wall-E, was developed in conjunction with Pixar Animation Studios to represent the character from the film, and has made appearances and given interviews at red carpet premieres, press events, and in television studios around the world.
   Ultimately, we hope that further scientific study of the principles of animation and character development would be useful to anyone designing robots, autonomous or teleoperated, that must interact with humans.
Keywords: interaction, robotics
Interacting with robots on Mars: operation of the Mars exploration rovers (pp. 3-4)
  Steve Squyres
The rovers Spirit and Opportunity have been operating on the surface of Mars since January of 2004. Interaction with these robotic vehicles involves overcoming a number of operational challenges. The challenges include the distance between Mars and Earth (the one-way travel time for commands and data can be as long as 20 minutes), environmental factors (e.g., extreme temperatures, dust storms), and the need to respond quickly and effectively to unexpected events and scientific discoveries. In the five years since the rovers landed, the Mars Exploration Rover project team has developed operational procedures for interacting with the rovers that are both scientifically productive and sustainable for what has become a long-duration mission.
Keywords: keynote talk
Robots with emotional intelligence (pp. 5-6)
  Rosalind W. Picard
This keynote talk will illustrate a basic set of skills of emotional intelligence, how they are important for robots and agents that interact with people, and how our research at MIT addresses part of the problem of giving robots such skills. One of the most important skills is the ability to perceive and understand expressions of emotion, which I will highlight by demonstrating our latest technologies developed to read joint facial-head movements in real-time and associate these with complex affective-cognitive states, and technologies to read paralinguistic vocal cues from speech. The latter have been made open-source and are available for free. I will also show some non-traditional ways robots might sense and learn about human emotion, and ways they can respond to what they sense that can help or hurt people. I will discuss social and ethical issues these technologies raise. Finally, I will present some new possibilities for robots to both learn from people and help teach skills of emotional intelligence to people, especially to those with nonverbal learning impairments who may want to learn these skills, including many people with diagnoses of autism spectrum disorders such as Asperger's syndrome.
Keywords: affective computing, autism, deception detection, emotion recognition, emotional intelligence, empathic technology, facial expression recognition, physiological sensing, prosody analysis, system

Designing robots based on human behavior

The snackbot: documenting the design of a robot for long-term human-robot interaction (pp. 7-14)
  Min Kyung Lee; Jodi Forlizzi; Paul E. Rybski; Frederick Crabbe; Wayne Chung; Josh Finkle; Eric Glaser; Sara Kiesler
We present the design of the Snackbot, a robot that will deliver snacks in our university buildings. The robot is intended to provide a useful, continuing service and to serve as a research platform for long-term Human-Robot Interaction. Our design process, which occurred over 24 months, is documented as a contribution for others in HRI who may be developing social robots that offer services. We describe the phases of the design project, and the design decisions and tradeoffs that led to the current version of the robot.
Keywords: design process, holistic design, interaction design, social robot
Learning about objects with human teachers (pp. 15-22)
  Andrea L. Thomaz; Maya Cakmak
A general learning task for a robot in a new environment is to learn about objects and what actions/effects they afford. To approach this, we look at ways that a human partner can intuitively help the robot learn, an approach we call Socially Guided Machine Learning. We present experiments conducted with our robot, Junior, and make six observations characterizing how people approached teaching about objects. We show that Junior successfully used transparency to mitigate errors. Finally, we present the impact of "social" versus "non-social" data sets when training SVM classifiers.
Keywords: interactive machine learning, social robot learning
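The "social" versus "non-social" comparison above can be sketched in code. Since the paper trains SVM classifiers, and to keep this sketch dependency-free, we substitute a nearest-centroid classifier as a stand-in; the labels and feature vectors below are invented for illustration, not taken from the paper.

```python
# Illustration only: a dependency-free nearest-centroid classifier standing in
# for the SVMs the paper trains on "social" vs. "non-social" data sets.

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(examples):
    """examples: {label: [feature_vector, ...]} -> {label: centroid}"""
    return {label: centroid(vecs) for label, vecs in examples.items()}

def predict(model, x):
    """Return the label whose centroid is nearest (squared distance) to x."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# Toy "socially taught" training set: two made-up object-affordance labels.
social_data = {"rollable":  [(0.9, 0.1), (0.8, 0.2)],
               "stackable": [(0.1, 0.9), (0.2, 0.8)]}
model = train(social_data)
print(predict(model, (0.85, 0.15)))  # rollable
```

The same harness could be trained on a "non-social" data set (e.g. randomly sampled object interactions) and the two models compared on held-out examples, which is the shape of the comparison the abstract describes.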
How people talk when teaching a robot (pp. 23-30)
  Elizabeth S. Kim; Dan Leyzberg; Katherine M. Tsui; Brian Scassellati
We examine affective vocalizations provided by human teachers to robotic learners. In unscripted one-on-one interactions, participants provided vocal input to a robotic dinosaur as the robot selected toy buildings to knock down. We find that (1) people vary their vocal input depending on the learner's performance history, (2) people do not wait until a robotic learner completes an action before they provide input and (3) people naively and spontaneously use intensely affective prosody. Our findings suggest modifications may be needed to traditional machine learning models to better fit observed human tendencies. Our observations of human behavior contradict the popular assumptions made by machine learning algorithms (in particular, reinforcement learning) that the reward function is stationary and path-independent for social learning interactions.
   We also propose an interaction taxonomy that describes three phases of a human-teacher's vocalizations: direction, spoken before an action is taken; guidance, spoken as the learner communicates an intended action; and feedback, spoken in response to a completed action.
Keywords: affective input, affective vocalization, human-robot interaction, naive teaching, reinforcement learning, social learning
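The contrast the abstract draws between standard reinforcement-learning assumptions and observed human teaching can be made concrete with a toy sketch. This is our own illustration, not the authors' model: the history-dependent teacher below amplifies praise and softens criticism after a losing streak, which violates the stationary, path-independent reward assumed by classical RL.

```python
# Toy contrast (hypothetical, for illustration): a stationary reward vs. a
# human-teacher-like reward that depends on the learner's recent history.

def stationary_reward(action, correct_action):
    """Classical RL assumption: reward depends only on the current action."""
    return 1.0 if action == correct_action else -1.0

def history_dependent_reward(action, correct_action, history):
    """Teacher-like reward: after recent failures, praise is amplified and
    criticism is softened, so the same action can earn different rewards."""
    recent_errors = sum(1 for a, c in history[-3:] if a != c)
    if action == correct_action:
        return 1.0 + 0.5 * recent_errors   # amplified praise after failures
    return -1.0 + 0.25 * recent_errors     # softened criticism after failures

history = [("push", "pull"), ("push", "pull"), ("push", "pull")]
print(stationary_reward("pull", "pull"))                  # 1.0, always
print(history_dependent_reward("pull", "pull", history))  # 2.5 after 3 errors
```

A learner that assumes the first reward function while receiving the second will misestimate action values, which is the mismatch the abstract says machine learning models may need modification to handle.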

Robots as intermediaries

I am my robot: the impact of robot-building and robot form on operators (pp. 31-36)
  Victoria Groom; Leila Takayama; Paloma Ochi; Clifford Nass
As robots become more pervasive, operators will develop richer relationships with them. In a 2 (robot form: humanoid vs. car) x 2 (assembler: self vs. other) between-participants experiment (N=56), participants assembled either a humanoid or car robot. Participants then used, in the context of a game, either the robot they built or a different robot. Participants showed greater extension of their self-concept into the car robot and preferred the personality of the car robot over the humanoid robot. People showed greater self extension into a robot and preferred the personality of the robot they assembled over a robot they believed to be assembled by another. Implications for the theory and design of robots and human-robot interaction are discussed.
Keywords: anthropomorphism, human-robot interaction, humanoid robots, robot form, robot personality, robots, self, self extension
Egocentric and exocentric teleoperation interface using real-time, 3D video projection (pp. 37-44)
  François Ferland; François Pomerleau; Chon Tam Le Dinh; François Michaud
The user interface is the central element of a telepresence robotic system, and its visualization modalities greatly affect the operator's situation awareness and thus their performance. Depending on the task at hand and the operator's preferences, switching between ego- and exocentric viewpoints and improving the depth representation can provide better perspectives of the operation environment. Our system, which combines a 3D reconstruction of the environment using laser range finder readings with two video projection methods, allows the operator to easily switch from ego- to exocentric viewpoints. This paper presents the interface developed and demonstrates its capabilities by having 13 operators teleoperate a mobile robot in a navigation task.
Keywords: enabling technologies, feedback modalities, human-robot interaction (HRI), interface design and usability, robot intermediaries (e.g., telepresence, proxies, avatars), situation awareness (SA)
Robots in the wild: understanding long-term use (pp. 45-52)
  JaYoung Sung; Henrik I. Christensen; Rebecca E. Grinter
It has long been recognized that novelty effects exist in the interaction with technologies. Despite this recognition, we still know little about the novelty effects associated with domestic robotic appliances and, more importantly, what occurs after the novelty wears off. To address this gap, we undertook a longitudinal field study with 30 households to which we gave Roomba vacuuming robots and then observed use over six months. During this study, which spanned 149 home visits, we encountered methodological challenges in understanding households' usage patterns. In this paper we report on our longitudinal research, focusing particularly on the methods that we used 1) to understand human-robot interaction over time despite the constraints of privacy and temporality in the home, and 2) to uncover information when routines became less conscious to the participants themselves.
Keywords: domestic robot, longitudinal field research, user study

Non-verbal communication in HRI

Providing route directions: design of robot's utterance, gesture, and timing (pp. 53-60)
  Yusuke Okuno; Takayuki Kanda; Michita Imai; Hiroshi Ishiguro; Norihiro Hagita
Providing route directions is a complicated interaction. Utterances are combined with gestures and pronounced with appropriate timing. This study proposes a model for a robot that generates route directions by integrating three crucial elements: utterances, gestures, and timing. Two research questions must be answered in this modeling process. First, is it useful for the robot to perform a gesture even though the information conveyed by the gesture is also given by the utterance? Second, is it useful to implement the timing with which humans speak? Many previous studies about the natural behavior of computers and robots have learned from human speakers, for example their gestures and speech timing. Our approach, however, differs from these previous studies in that we emphasize the listener's perspective. Gestures were designed based on their usefulness, although we were influenced by the basic structure of human gestures. Timing was modeled not on how humans speak but on how they listen. The experimental results demonstrated the effectiveness of our approach, not only for task efficiency but also for perceived naturalness.
Keywords: gesture, route directions, timing
Footing in human-robot conversations: how robots might shape participant roles using gaze cues (pp. 61-68)
  Bilge Mutlu; Toshiyuki Shiwa; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
During conversations, speakers establish their and others' participant roles (who participates in the conversation and in what capacity), or "footing" as termed by Goffman, using gaze cues. In this paper, we study how a robot can establish the participant roles of its conversational partners using these cues. We designed a set of gaze behaviors for Robovie to signal three kinds of participant roles: addressee, bystander, and overhearer. We evaluated our design in a controlled laboratory experiment with 72 subjects in 36 trials. In three conditions, the robot signaled to two subjects, only by means of gaze, the roles of (1) two addressees, (2) an addressee and a bystander, or (3) an addressee and an overhearer. Behavioral measures showed that subjects' participation behavior conformed to the roles that the robot communicated to them. In subjective evaluations, significant differences were observed in feelings of groupness between addressees and others, and in liking between overhearers and others. Participation in the conversation did not affect task performance (measured by recall of information presented by the robot) but affected subjects' ratings of how much they attended to the task.
Keywords: conversational participation, footing, gaze, participant roles, participation structure, robovie
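The three experimental conditions can be sketched as a simple gaze-allocation table. The proportions below are our own illustrative values, not the ones used in the study: the idea is only that splitting gaze evenly casts both partners as addressees, while directing all gaze at one partner casts the other as an overhearer.

```python
# Hypothetical sketch: signaling participant roles purely through how a
# robot splits its gaze between two partners over one speaking turn.
# The gaze-share fractions are illustrative, not taken from the paper.

GAZE_SHARE_TOWARD_A = {
    # roles of (partner_A, partner_B) -> fraction of gaze time toward A
    ("addressee", "addressee"): 0.5,   # condition 1: gaze split evenly
    ("addressee", "bystander"): 0.8,   # condition 2: acknowledging glances at B
    ("addressee", "overhearer"): 1.0,  # condition 3: B receives no gaze
}

def gaze_schedule(roles, turn_seconds):
    """Split one speaking turn into seconds of gaze toward each partner."""
    toward_a = GAZE_SHARE_TOWARD_A[roles] * turn_seconds
    return {"A": toward_a, "B": turn_seconds - toward_a}

print(gaze_schedule(("addressee", "bystander"), 10.0))  # {'A': 8.0, 'B': 2.0}
```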
Nonverbal leakage in robots: communication of intentions through seemingly unintentional behavior (pp. 69-76)
  Bilge Mutlu; Fumitaka Yamaoka; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
Human communication involves a number of nonverbal cues that are seemingly unintentional, unconscious, and automatic, both in their production and perception, and convey rich information on the emotional state and intentions of an individual. One family of such cues is called "nonverbal leakage." In this paper, we explore whether people can read nonverbal leakage cues, particularly gaze cues, in humanlike robots and make inferences on robots' intentions, and whether the physical design of the robot affects these inferences. We designed a gaze cue for Geminoid (a highly humanlike android) and Robovie (a robot with stylized, abstract humanlike features) that allowed the robots to "leak" information on what they might have in mind. In a controlled laboratory experiment, we asked participants to play a game of guessing with either of the robots and evaluated how the gaze cue affected participants' task performance. We found that the gaze cue did, in fact, lead to better performance, from which we infer that the cue led to attributions of mental states and intentionality. Our results have implications for robot design, particularly for designing expression of intentionality, and for our understanding of how people respond to human social cues when they are enacted by robots.
Keywords: gaze, geminoid, humanlikeness, nonverbal behavior, nonverbal leakage, robovie
Visual attention in spoken human-robot interaction (pp. 77-84)
  Maria Staudte; Matthew W. Crocker
Psycholinguistic studies of situated language processing have revealed that gaze in the visual environment is tightly coupled with both spoken language comprehension and production. It has also been established that interlocutors monitor the gaze of their partners, a phenomenon called "joint attention", as a further means for facilitating mutual understanding. We hypothesise that human-robot interaction will benefit when the robot's language-related gaze behaviour is similar to that of people, potentially providing the user with valuable non-verbal information concerning the robot's intended message or the robot's successful understanding. We report findings from two eye-tracking experiments demonstrating (1) that human gaze is modulated by both the robot speech and gaze, and (2) that human comprehension of robot speech is improved when the robot's real-time gaze behaviour is similar to that of humans.
Keywords: experimental methods, gaze, user study, visual attention

New methods for studying HRI

An information pipeline model of human-robot interaction (pp. 85-92)
  Kevin Gold
This paper investigates the potential usefulness of viewing the system of human, robot, and environment as an "information pipeline" from environment to user and back again. Information theory provides tools for analyzing and maximizing the information rate of each stage of this pipeline, and could thus encompass several common HRI goals: "situational awareness," which can be seen as maximizing the information content of the human's model of the situation; efficient robotic control, which can be seen as finding a good codebook and high throughput for the human-robot channel; and artificial intelligence, which can be assessed by how much it reduces the traffic on all four channels. Analysis of the information content of the four channels indicates that human-to-robot communication tends to be the bottleneck, pointing to the need for greater onboard intelligence and a command interface that can adapt to the situation.
Keywords: conceptual/foundational, human-robot interaction, information theory
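The pipeline view invites a back-of-envelope throughput calculation. The sketch below is our own illustration (the command counts and rates are made up, not figures from the paper): treating the human-to-robot channel as a discrete source with N equally likely commands issued at r commands per second gives a throughput of r * log2(N) bits per second, which makes it easy to see how that channel can become the bottleneck.

```python
import math

def channel_bits_per_second(n_commands, commands_per_second):
    """Throughput of a discrete channel with n equally likely symbols:
    rate (symbols/s) times entropy per symbol, log2(n) bits."""
    return commands_per_second * math.log2(n_commands)

# Hypothetical joystick-style interface: 8 distinct commands, ~2 issued per second.
print(channel_bits_per_second(8, 2.0))    # 6.0 bits/s
# Hypothetical speech interface: richer vocabulary (256 commands) but slower rate.
print(channel_bits_per_second(256, 0.5))  # 4.0 bits/s
```

Either figure is tiny compared with the robot-to-human video channel, which is the kind of asymmetry the pipeline analysis exposes.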
Systemic interaction analysis (SInA) in HRI (pp. 93-100)
  Manja Lohse; Marc Hanheide; Katharina J. Rohlfing; Gerhard Sagerer
Recent developments in robotics enable advanced human-robot interaction. Interactions of novice users with robots, especially, are often unpredictable and therefore demand novel methods for analyzing the interaction in systemic ways. We propose Systemic Interaction Analysis (SInA) as a method to jointly analyze the system level and the interaction level in an integrated manner using one tool. The approach allows us to trace patterns that deviate from prototypical interaction sequences back to the distinct system components of our autonomous robot. In this paper, we apply the method, as an example, to the analysis of the follow behavior of our domestic robot BIRON. This analysis is the basis for achieving our goal of improving human-robot interaction iteratively.
Keywords: analysis tools, autonomous system, domestic robot, follow behavior, user studies
The Oz of Wizard: simulating the human for interaction research (pp. 101-108)
  Aaron Steinfeld; Odest Chadwicke Jenkins; Brian Scassellati
The Wizard of Oz experimental method has a long tradition of acceptance and use within the field of human-robot interaction. The community has traditionally downplayed the importance of interaction evaluations run with the inverse model: the human simulated to evaluate robot behavior, or Oz of Wizard. We argue that such studies play an important role in the field of human-robot interaction. We differentiate between methodologically rigorous human modeling and placeholder simulations using simplified human models. Guidelines are proposed for when Oz of Wizard results should be considered acceptable. This paper also proposes a framework for describing the various permutations of Wizard and Oz states.
Keywords: evaluation, human-robot interaction, interaction, wizard of oz

Modeling social interaction

How to approach humans?: strategies for social robots to initiate interaction (pp. 109-116)
  Satoru Satake; Takayuki Kanda; Dylan F. Glas; Michita Imai; Hiroshi Ishiguro; Norihiro Hagita
This paper proposes a model of approach behavior with which a robot can initiate conversation with people who are walking. We developed the model by learning from the failures in a simplistic approach behavior used in a real shopping mall. Sometimes people were unaware of the robot's presence, even when it spoke to them. Sometimes, people were not sure whether the robot was really trying to start a conversation, and they did not start talking with it even though they displayed interest. To prevent such failures, our model includes the following functions: predicting the walking behavior of people, choosing a target person, planning its approaching path, and nonverbally indicating its intention to initiate a conversation. The approach model was implemented and used in a real shopping mall. The field trial demonstrated that our model significantly improves the robot's performance in initiating conversations.
Keywords: anticipation, approaching behavior, position-based interaction
ShadowPlay: a generative model for nonverbal human-robot interaction (pp. 117-124)
  Eric M. Meisner; Selma Šabanović; Volkan Isler; Linnda R. Caporael; Jeff Trinkle
Humans rely on a finely tuned ability to recognize and adapt to socially relevant patterns in their everyday face-to-face interactions. This allows them to anticipate the actions of others, coordinate their behaviors, and create shared meaning to communicate. Social robots must likewise be able to recognize and perform relevant social patterns, including interactional synchrony, imitation, and particular sequences of behaviors. We use existing empirical work in the social sciences and observations of human interaction to develop nonverbal interactive capabilities for a robot in the context of shadow puppet play, where people interact through shadows of hands cast against a wall. We show how information theoretic quantities can be used to model interaction between humans and to generate interactive controllers for a robot. Finally, we evaluate the resulting model in an embodied human-robot interaction study. We show the benefit of modeling interaction as a joint process rather than modeling individual agents.
Keywords: control architecture, gesture recognition, interaction synchrony, modeling social situations, nonverbal interaction
Creating and using matrix representations of social interaction (pp. 125-132)
  Alan R. Wagner
This paper explores the use of an outcome matrix as a computational representation of social interaction suitable for implementation on a robot. An outcome matrix expresses the reward afforded to each interacting individual with respect to pairs of potential behaviors. We detail the use of the outcome matrix as a representation of interaction in social psychology and game theory, discuss the need for modeling the robot's interactive partner, and contribute an algorithm for creating outcome matrices from perceptual information. Experimental results explore the use of the algorithm with different types of partners and in different environments.
Keywords: interaction, interdependence theory, mental model
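The outcome matrix described in the abstract above can be sketched as a small data structure. The behaviors and payoffs here are our own illustrative choices, not Wagner's: the matrix maps each pair of potential behaviors to the reward each interacting individual receives, and a partner model lets the robot pick its best action against a predicted partner behavior.

```python
# Illustrative sketch only: an outcome matrix for a two-individual
# interaction, mapping (robot_action, partner_action) pairs to the
# (robot_reward, partner_reward) each receives. Actions and payoffs invented.

outcome_matrix = {
    ("guide", "follow"): (3, 3),
    ("guide", "ignore"): (0, 1),
    ("wait",  "follow"): (1, 0),
    ("wait",  "ignore"): (1, 1),
}

def best_action(matrix, predicted_partner_action):
    """Pick the robot action maximizing its own reward, given a model
    (prediction) of the partner's behavior."""
    candidates = {r: payoff[0] for (r, p), payoff in matrix.items()
                  if p == predicted_partner_action}
    return max(candidates, key=candidates.get)

print(best_action(outcome_matrix, "follow"))  # guide
print(best_action(outcome_matrix, "ignore"))  # wait
```

In the paper's framing, the interesting step is constructing such matrices from perceptual information and from a model of the partner; this sketch only shows how the finished representation supports action selection.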
Developing a model of robot behavior to identify and appropriately respond to implicit attention-shifting (pp. 133-140)
  Fumitaka Yamaoka; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
In this paper, we present our current research on developing a model of robot behavior that leads to feelings of being together using the robot's body position and orientation. Creating feelings of "being together" will be an essential skill for robots that live with humans and adapt to daily human activities such as walking together or establishing joint attention to information in the environment. We observe people's proxemic behavior in joint attention situations and develop a model of behavior for robots to detect a partner's attention shift and appropriately adjust their body position and orientation in establishing joint attention with the partner. We experimentally evaluate our model, and the results demonstrate its effectiveness.
Keywords: position-based interaction, proxemics, transition of attention

Situation awareness, interface design and usability

How search and its subtasks scale in N robots (pp. 141-148)
  Huadong Wang; Michael Lewis; Prasanna Velagapudi; Paul Scerri; Katia Sycara
The present study investigates the effect of the number of controlled robots on performance of an urban search and rescue (USAR) task using a realistic simulation. Participants controlled either 4, 8, or 12 robots. In the full-task control condition, participants both dictated the robots' paths and controlled their cameras to search for victims. In the exploration condition, participants directed the team of robots in order to explore as wide an area as possible. In the perceptual search condition, participants searched for victims by controlling cameras mounted on robots following predetermined paths selected to match characteristics of paths generated under the other two conditions. By decomposing the search and rescue task into exploration and perceptual search subtasks, the experiment allows the determination of their scaling characteristics in order to provide a basis for tentative task allocations among humans and automation for controlling larger robot teams. In the full-task control condition, task performance increased in going from four to eight controlled robots but deteriorated in moving from eight to twelve. Workload increased monotonically with number of robots. Performance per robot decreased with increases in team size. Results are consistent with earlier studies suggesting a limit of between 8 and 12 robots for direct human control.
Keywords: evaluation, human-robot interaction, metrics, multi-robot system
Field trial for simultaneous teleoperation of mobile social robots (pp. 149-156)
  Dylan F. Glas; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
Simultaneous teleoperation of mobile, social robots presents unique challenges, combining the real-time demands of conversation with the prioritized scheduling of navigational tasks. We have developed a system in which a single operator can effectively control four mobile robots performing both conversation and navigation. We compare the teleoperation requirements for mobile, social robots with those of traditional robot systems, and we identify metrics for evaluating task difficulty and operator performance for teleoperation of mobile social robots. As a proof of concept, we present an integrated priority model combining real-time conversational demands and non-real-time navigational demands for operator attention, and in a pioneering study, we apply the model and metrics in a demonstration of our multi-robot system through real-world field trials in a shopping arcade.
Keywords: adjustable autonomy, prioritized control, simultaneous teleoperation, teleoperation of social robots
Mobile human-robot teaming with environmental tolerance (pp. 157-164)
  Matthew M. Loper; Nathan P. Koenig; Sonia H. Chernova; Chris V. Jones; Odest C. Jenkins
We demonstrate that structured light-based depth sensing with standard perception algorithms can enable mobile peer-to-peer interaction between humans and robots. We posit that the use of recent emerging devices for depth-based imaging can enable robot perception of non-verbal cues in human movement in the face of lighting and minor terrain variations. Toward this end, we have developed an integrated robotic system capable of person following and responding to verbal and non-verbal commands under varying lighting conditions and uneven terrain. The feasibility of our system for peer-to-peer HRI is demonstrated through two trials in indoor and outdoor environments.
Keywords: gesture recognition, human-robot interaction, person following

Responding to autonomy

On using mixed-initiative control: a perspective for managing large-scale robotic teams (pp. 165-172)
  Benjamin Hardin; Michael A. Goodrich
Prior work suggests that the potential benefits of mixed initiative management of multiple robots are mitigated by situational factors, including workload and operator expertise. In this paper, we present an experiment where allowing a supervisor and group of searchers to jointly decide the correct level of autonomy for a given situation ("mixed initiative") results in better overall performance than giving an agent exclusive control over their level of autonomy ("adaptive autonomy") or giving a supervisor exclusive control over the agent's level of autonomy ("adjustable autonomy"), regardless of the supervisor's expertise or workload. In light of prior work, we identify two elements of our experiment that appear to be requirements for effective mixed initiative control of large-scale robotic teams: (a) Agents must be capable of making progress toward a goal without having to wait for human input in most circumstances. (b) The operator control interface must help the human to rapidly understand and modify the progress and intent of several agents.
Keywords: adaptive autonomy, adjustable autonomy, human-robot interaction, mixed initiative, unmanned vehicles, user study
An affective guide robot in a shopping mall (pp. 173-180)
  Takayuki Kanda; Masahiro Shiomi; Zenta Miyashita; Hiroshi Ishiguro; Norihiro Hagita
To explore possible robot tasks in daily life, we developed a guide robot for a shopping mall and conducted a field trial with it. The robot was designed to interact naturally with customers and to affectively provide shopping information. It was also designed to repeatedly interact with people to build a rapport; since a shopping mall is a place people repeatedly visit, it provides the chance to explicitly design a robot for multiple interactions. For this capability, we used RFID tags for person identification. The robot was semi-autonomous, partially controlled by a human operator, to cope with the difficulty of speech recognition in a real environment and to handle unexpected situations.
   A field trial was conducted at a shopping mall for 25 days to observe how the robot performed this task and how people interacted with it. The robot interacted with approximately 100 groups of customers each day. We invited customers to sign up for RFID tags and those who participated answered questionnaires. The results revealed that 63 out of 235 people in fact went shopping based on the information provided by the robot. The experimental results suggest promising potential for robots working in shopping malls.
Keywords: communication robots, field trial, service robots
Concurrent performance of military tasks and robotics tasks: effects of automation unreliability and individual differences (pp. 181-188)
  Jessie Y. C. Chen
This study investigated the performance and workload of the combined position of gunner and robotics operator in a simulated military multitasking environment. Specifically, we investigated how aided target recognition (AiTR) capabilities for the gunnery task with imperfect reliability (false-alarm-prone vs. miss-prone) might affect the concurrent robotics and communication tasks. Additionally, we examined whether performance was affected by individual differences in spatial ability and attentional control. Results showed that when the robotics task was simply monitoring the video, participants had the best performance in their gunnery and communication tasks and the lowest perceived workload, compared with the other robotics tasking conditions. There was a strong interaction between the type of AiTR unreliability and participants' perceived attentional control. Overall, for participants with higher perceived attentional control, false-alarm-prone alerts were more detrimental; for low attentional control participants, conversely, miss-prone automation was more harmful. Low spatial ability participants preferred visual cueing, and high spatial ability participants favored tactile cueing. Potential applications of the findings include personnel selection for robotics operation, robotics user interface designs, and training development.
Keywords: cueing, human robot interaction, imperfect automation, individual differences, military, reconnaissance, simulation, tactile display

HRI video abstracts

Non-facial and non-verbal affective expression in appearance-constrained robots for use in victim management: robots to the rescue! BIBAKFull-Text 191-192
  Cindy L. Bethel; Christine Bringes; Robin R. Murphy
This video presents a visual summary of a large-scale, complex human study in Human-Robot Interaction (HRI) designed to evaluate whether humans would view interactions with two non-anthropomorphic robots as more positive and calming when the robots were operated in an emotive mode versus a standard, non-emotive mode. The video presents actual participants' reactions, the study design, and images from search and rescue operations.
Keywords: affective robotics, experimental design, human-robot interaction, urban search and rescue, victim management
A native iPhone packbot OCU BIBAKFull-Text 193-194
  Rodrigo Gutierrez; Jeff Craighead
This video abstract discusses the details of the implementation of a Packbot operator control unit (OCU) using Apple's official iPhone SDK.
Keywords: iPhone, robot, touch screen
FaceBots: social robots utilizing facebook BIBAKFull-Text 195-196
  Nikolaos Mavridis; Chandan Datta; Shervin Emami; Chiraz BenAbdelkader; Andry Tanoto; Tamer Rabie
Although existing robotic systems are interesting to interact with in the short term, it has been shown that after some weeks of quasi-regular encounters, humans gradually lose their interest, and meaningful longer-term human-robot relationships are not established. An underlying hypothesis driving the proposed project is that such relationships can be significantly enhanced if the human and the robot are gradually creating a pool of shared episodic memories that they can co-refer to, and if they are both embedded in a social web of other humans and robots they both know and encounter frequently. Thus, here we propose to use Facebook, a highly successful online networking resource for humans, towards enhancing longer-term human-robot relationships, by helping to address the above two prerequisites. As a starting point, we utilize social information in order to personalize human-robot dialogues, and to include references to past encounters and to encounters with friends within dialogues. A robot equipped with a modular software architecture (with IPC-intercommunicating modules for face recognition, a simple dialog system, a navigation subsystem, and a real-time Facebook connection/local social database) has been deployed, and is encountering humans in the environment of our lab. An early demonstration of a basic form of such encounters is shown in the submitted video. The system is expected to achieve two significant novelties: arguably being one of the first robots to be embedded in a social web, and being the first robot that can purposefully exploit and create social information that is available online. Furthermore, it is expected to provide empirical support for our main driving hypothesis, that the formation of shared episodic memories within a social web can lead to more meaningful long-term human-robot relationships.
Keywords: conversational robots, human-robot interaction, social robots
Keepon goes Seoul-searching BIBAKFull-Text 197-198
  Marek P. Michalowski; Jaewook Kim; Bomi Kim; Sonya S. Kwak; Hideki Kozima
Keepon is a robot designed for social interaction with children for the purposes of social development research and autism therapy [1]. Keepon's capacity for rhythmic synchrony in the form of dance has resulted in the popularity of several fictional music videos on the internet [2,3]. During a research collaboration visit at the KAIST PES Design Lab in Korea, Keepon's creators added this new chapter to the story of Keepon's travels. Upon watching a video of traditional Korean "Pungmulnori" dancing, which features distinctive spinning hats, Keepon becomes enamored. The robot has many adventures as he travels around Korea in search of a dance group that finally welcomes him into their cultural performance.
   Additional credits: Music ("Superfantastic" by Peppertones/Cavare Sound); Videography (Uyoung Chang and Minwoo Kang); Pungmulnori Team "Ghil" (Junhyung Park, Seongbok Chae, Sangmi Lee, Mikyeong Kim, and Sohyun Park).
   This video is available at http://beatbots.org and at http://www.youtube.com/watch?v=XwqfWR2KPd0.
Keywords: human-robot interaction, rhythmic synchrony, social robotics
AURAL: evolutionary sonification with robots BIBAKFull-Text 199-200
  Artemis M. F. S. Moroni; Jônatas Manzolli
This study aims to provide a platform for exploring robotic navigation in combination with evolutionary computation of sound control data. Real-world devices, namely two mobile robots and an omnidirectional vision system, are integrated to sonify the trajectories of the robots in real time.
Keywords: algorithmic composition, computer art, evolutionary computation, omnidirectional vision system, robotics
Preliminary observation of HRI in robot-assisted medical response BIBAKFull-Text 201-202
  Robin Murphy; Masashi Konyo; Pedro Davalas; Gabe Knezek; Satoshi Tadokoro; Kazuna Sawata; Maarten Van Zomeren
This video captures human-robot interaction that occurred during an evaluation of a novel, snake-like search and rescue robot assisting with victim management. Most of the observations confirmed previous findings: that a 2:1 human-robot ratio is appropriate, that team coordination is enhanced by shared visual perception, and that poor interfaces continue to lead to incomplete coverage. However, the victims responded to the robot in two surprising ways: grabbing the robot and being concerned about its appearance.
Keywords: HRI, rescue robotics, search and rescue
Blog robot: a new style for accessing location-based contents BIBAKFull-Text 203-204
  Masato Noda; Toshihiro Osumi; Kenta Fujimoto; Yuki Kuwayama; Hirotaka Osawa; Michita Imai; Kazuhiko Shinozawa
We propose a portable robot named "Blog Robot" that presents blog contents using verbal and non-verbal expression. Blog Robot is a robotized smartphone with a head and arms for making hand gestures, eye contact, and joint attention. Blogs are widely used to express personal views or to record daily occurrences, and much of the information posted on blogs relates to a particular place, such as a tourist site or a shop. Typically, people sit in front of their PC and read blogs as text and images displayed in a Web browser, but this style of reading makes it difficult to appreciate the authentic situations that blog writers describe. The user carries Blog Robot like a cellular phone and can browse blogs related to his or her current location; this browsing method lets the user access a blog at the real scene its contents refer to. Blog Robot delivers the content of the blog by reading it aloud with synthesized speech. In particular, the nonverbal information generated by Blog Robot enhances the spoken content, as if the blog writer were standing next to the user while telling the story. This browsing method is expected to give the user more realistic information than a Web browser on a PC.
   Moreover, it enables the user to share the information with the blog writer. Since browsing through Blog Robot takes place at a location the blog writer once visited, the blog writer receives meaningful feedback from the user; it is difficult to obtain the same feedback from a user who sits in front of a PC, away from the scene. We have also designed tags specifically for generating Blog Robot's nonverbal expressions, which are embedded within the text of the blog. The tags can be used not only by Blog Robot but also on a PC: if the user reads a blog containing the tags, they are displayed as icons in the Web browser.
Keywords: agent presentation, human robot interaction, information revitalization, web contents
Human-robot physical interaction with dynamically stable mobile robots BIBAKFull-Text 205-206
  Umashankar Nagarajan; George Kantor; Ralph L. Hollis
Developed by Prof. Ralph Hollis in the Microdynamic Systems Laboratory at Carnegie Mellon University, Ballbot is a dynamically stable mobile robot that moves on a single spherical wheel, providing omnidirectional motion. Unlike statically stable mobile robots, dynamically stable mobile robots can be tall and skinny, with a high center of gravity and a small base. The ball drive mechanism is a four-motor inverse mouse-ball setup. An Inertial Measurement Unit (IMU) and encoders on the motors provide all the information needed for full-state feedback. Ballbot has three legs that provide static stability when powered down, and it is capable of transitioning automatically from the statically stable state to the dynamically stable state and vice versa. It is also capable of yaw rotation about its vertical axis. An absolute encoder provides the relative angle between the IMU and the ball drive unit.
   We wish to demonstrate human-robot physical interaction with dynamically stable mobile robots using Ballbot as an example. The balancing controller on Ballbot is extremely robust to disturbances such as shoves, kicks, and collisions with furniture and walls. Due to its dynamic stability, Ballbot can be moved around with very little effort. Physically directing a heavy statically stable mobile robot can be a difficult task, whereas Ballbot can be moved around with just a single finger. Similarly, while moving, Ballbot can be stopped with very little effort. We have developed some basic behaviors that enable Ballbot to infer human intentions during physical interaction using just the encoder and IMU data. For example, given a soft push, Ballbot tries to hold its position on the floor, whereas, given a hard push, it moves away from its current location and station-keeps at a different point on the floor. We also present our initial results in developing a Learn-Repeat behavior for Ballbot, in which, during the Learn mode, the user drives Ballbot around and it remembers the path traveled, and during the Repeat mode, Ballbot attempts to repeat the learned path. We are in the process of adding stereo cameras and laser range finders to the robot, which will help us explore further areas of Human-Robot Interaction.
Keywords: dynamically stable mobile robots, human-robot physical interaction
Anthropomorphization method using attachable humanoid parts BIBAKFull-Text 207-208
  Hirotaka Osawa; Ren Ohmura; Michita Imai
With this video, we propose a new form of human-robot interaction that anthropomorphizes a common target object, transforming it into a communicative agent using attachable humanoid parts. Through the attached body parts, the user perceives the target as having its own intentions and body image. The video shows examples of the anthropomorphization method as follows.
   First, the video shows the setup process of our method, in which a demonstrator attaches each part, such as eye-like parts, arm-like parts, and a camera, to a common electric oven. The oven becomes a communicative robot once these parts are attached.
   Second, the video explains three applications -- self advertisement, self presentation, and interactive manual -- that are achieved by anthropomorphized objects.
   In the self advertisement situation, the anthropomorphized oven attracts customers and explains its functions by itself; this situation assumes that such devices will be used in shops in the future. In the self presentation situation, an anthropomorphized poster explains its contents by itself, with no other explainer present; this situation assumes that such devices are used at a poster presentation. In the interactive manual situation, an anthropomorphized printer explains its functions interactively; this explanation is intuitive and understandable for children and elderly people. After the third situation, the anthropomorphized printer is compared, through gaze direction analysis, with an explanation given by the humanoid robot Robovie. In the Robovie condition, the guidance fails because the robot distracts attention from the target itself; in the anthropomorphized printer condition, however, users can concentrate on the interaction.
   Last, we present an anthropomorphized shredder using eye-like parts, arm-like parts, and a skin sensor. The shredder presents its interactive manual as in the printer situation, but in this interaction the shredder detects the user's touch and proceeds with the interaction instead of waiting to detect voice.
Keywords: anthropomorphization, human interface, human robot interaction
Roball interacting with children BIBAKFull-Text 209-210
  Tamie Salter; Francois Michaud; Dominic Letourneau
This video presents a lighthearted view of a rolling autonomous robot named Roball. Roball is shown interacting with various children, ranging in age from 10 months old to teenagers at a high school. The clips show the different ways children interact with Roball and the different kinds of reactions children can have to it. Each clip was taken from a trial conducted to investigate Child-Robot Interaction (CRI).
   First we see the varied reactions of young children aged 2 to 4 years: laughing, chasing Roball, dancing to music that Roball is playing, and running away from Roball. It is possible to see how active this age group can be in their interaction. Next we see a 10-month-old toddler enjoying hitting Roball and then apparently showing some form of affection toward it; after watching Roball bang into a door, the toddler copies the behaviour by also banging into the door. Then comes the very different style of interaction of teenagers: although Roball attracts a great deal of interest, actual physical contact or interaction is much lower. We see the robot being purposefully kicked, and a teenager pretending to kick Roball in a show of bravado. Finally we see the dangers of being a rolling robot in the presence of a group of teenagers, as Roball, through its random wandering, rolls into the path of a group of teenagers walking at high speed.
   Although this is a lighthearted view, many important CRI insights can still be gained from watching the footage.
Keywords: child-robot interaction (CRI), experimentation in the wild, human-robot interaction (HRI)
The SantaBot experiment: a pilot study of human-robot interaction BIBAKFull-Text 211-212
  Søren Tranberg Hansen; Mikael Svenstrup; Hans Jørgen Andersen; Thomas Bak; Ole B. Jensen
The video shows how an autonomous mobile robot dressed as Santa Claus interacts with people in a shopping mall. The underlying hypothesis is that it is possible to create interesting new living spaces and induce value in terms of experiences, information, or economics by putting socially interactive mobile agents into public urban transit areas. To investigate the hypothesis, an experiment was carried out using a robot capable of navigating autonomously based on the input of an onboard laser scanner. The robot would detect and follow random people, who were afterwards asked to fill out a questionnaire for quantitative analysis of the experiment. The presented video is the documentation of the experiment used in the evaluation. The results showed that people were generally positive towards having mobile robots in this type of environment, where shopping is combined with transit. However, it also proved harder than expected to initiate interaction with commuters, due to their determination and speed toward their goals. Furthermore, it was demonstrated that it was possible to track and follow people who had not been informed about the experiment beforehand. The evaluation indicated that the distance at which interaction was initiated was shorter than initially expected, but consistent with the distance for normal human-to-human interaction.
Keywords: human-robot interaction, mobile robotics, pilot study, transit space
Emotion induction during human-robot interaction BIBAKFull-Text 213-214
  Cornelia Wendt; Michael Popp; Berthold Faerber
The aim of the presented study was to measure physiological correlates of emotions that are of particular interest in the field of human-robot interaction (HRI). Therefore, we did not focus on self-induced basic emotions but rather evoked states that might occur naturally in this context. Our video shows how such states (namely stress, boredom, surprise, and perplexity) were elicited during a joint construction task with an industrial robot (see figure 1). Participants were asked to build different LEGO objects, while the robot arm was passing the bricks with predetermined velocity. States of stress and boredom were generated by varying the handover interval from 3 seconds (stress) to 5 seconds (normal working condition) up to 35 seconds (boredom). Surprise was induced by passing an unexpected component. At the end of the experiment, we additionally wanted to know how people react if the robot seems to tease them by repeatedly changing the handover position.
   This experiment was realized by the support of researchers from the MMK and the IWB of the Technical University Munich who provided the technical facilities and know-how. The underlying project is supported within the DFG excellence initiative research cluster Cognition for Technical Systems -- CoTeSys, see also www.cotesys.org.
Keywords: emotion recognition, human-robot interaction, joint construction, stress induction
Human-robot interaction for 3D telemanipulated fracture reduction BIBAKFull-Text 215-216
  Ralf Westphal; Simon Winkelbach; Thomas Goesling; Markus Oszwald; Tobias Huefner; Christian Krettek; Friedrich M. Wahl
Today, femoral shaft fractures are usually stabilized by means of intramedullary nails. This video presents the development of a telemanipulator system, which, by supporting the fracture reduction process, aims at achieving reliable operation results with high reduction accuracies. First, we present a system using 2D X-ray images as base information for the surgeon to guide the reduction. We show the advantages but also the limitations of this approach, which finally led to the development of a telemanipulator system that is based on 3D imaging data instead.
Keywords: force feedback, fracture reduction, haptics, surgical navigation, surgical robotics, telemanipulation

HRI late-breaking abstracts

Can users react toward an on-screen agent as if they are reacting toward a robotic agent? BIBAKFull-Text 217-218
  Takanori Komatsu; Nozomi Kuki
Our former study showed that users tended not to react to an on-screen agent's invitation to a Shiritori game (a Japanese word-chain game), but did react to a robotic agent's. Thus, the purpose of this study was to investigate the contributing factors that could make users react toward an on-screen agent as if they were reacting toward a robotic agent. The results showed that participants who first accepted the invitation of a robotic agent assigned an attractive character went on to react toward the on-screen agents as if they were reacting to the robotic one.
Keywords: on-screen agent, robotic agent, shiritori game
Individualization of voxel-based hand model BIBAKFull-Text 219-220
  Albert J. Causo; Mai Matsuo; Etsuko Ueda; Yoshio Matsumoto; Tsukasa Ogasawara
Improvements in hand pose estimation, made possible by refining the model matching step, are necessary for creating a more natural human-robot interface. Individualizing the 3D hand model of the user can result in better hand pose estimation. This paper presents a way to accomplish this individualization by estimating the lengths of the finger links (bones), which are unique to every user. The 3D model of the hand is made up of voxel data derived from silhouette images obtained from multiple cameras, and the finger links are estimated by searching a set of models generated from a calibration motion of the fingers. Initial pose estimation results using the model show the feasibility of the system.
Keywords: hand model, hand pose estimation, multi camera system, voxel
Robots with projectors: an alternative to anthropomorphic HRI BIBAKFull-Text 221-222
  Jongkyeong Park; Gerard Jounghyun Kim
Current forms of Human Robot Interaction (HRI) mostly pursue anthropomorphism and direct interaction. That is, the interaction paradigm is based on imitating how people interact with one another (e.g. using spoken language, gestures, facial expressions, etc.). However, direct interaction or contact with the robot often causes significant inconvenience and usability problems. In this paper, we present an alternative to the anthropomorphic interface using a projected display and indirect, yet already familiar, GUI-based interaction. That is, the projector (on the moving robot) projects information onto a nearby surface and provides a relatively large area through which indirect GUI-based interaction can occur. As an instance of such an HRI paradigm, we present a moving robot kiosk that projects displays around itself and serves and interacts with multiple people at once. We report our ongoing development efforts and a pilot experimental study that compares it to a typical touch-screen-based direct HRI.
Keywords: anthropomorphism, human robot interaction, indirect/direct interaction, large display, projector
comforTABLE: a robotic environment for aging in place BIBAKFull-Text 223-224
  Keith Evan Green; Ian D. Walker; Johnell O. Brooks; Tarek Mohktar; Linnea Smolentzov
While high-technology has become pervasive in hospitals, domestic environments remain essentially low-tech and conventional, despite the care needs of an aging population wishing to age in place. In response, an interdisciplinary team -- robotics engineer, architect, human factors psychologist, and gerontologist -- is designing, constructing, field testing, and evaluating comforTABLE, an intelligent environment for aging in place. comforTABLE is designed to increase the quality of life of both healthy individuals and persons with impaired mobility by intelligently supporting the physical organization of their immediate environment. While comforTABLE features intelligent behavior and robotic elements, it aims to help people do things for themselves. This paper introduces the motivations for comforTABLE, presents its three intelligent, networked components, and describes scenarios of how the system might operate in domestic situations.
Keywords: aging in place, architectural design, gerontology, human factors, intelligent environments, robotics
In-home telehealth clinical interaction using a robot BIBAKFull-Text 225-226
  Simon Brière; Patrick Boissy; Francois Michaud
Providing healthcare in remote locations can be costly, and a static videoconference system in the patient's home has its limitations. A remotely operated mobile robot platform could provide better interaction with a patient located at home. This paper presents Telerobot, a teleoperated mobile robotic platform equipped with videoconferencing capabilities. Developed by a team of roboticists and clinical experts, the system is designed specifically for the provision of in-home telerehabilitation services. A usability study was conducted to evaluate the robot's user control scheme and the clinician-patient interaction.
Keywords: clinical interaction, in home telerehabilitation, mobile robotics, mobile telepresence, usability
A leader-follower turn-taking model incorporating beat detection in musical human-robot interaction BIBAKFull-Text 227-228
  Gil Weinberg; Brian Blosser
This paper describes the implementation of a leader-follower model in a musical HRI based on beat detection analysis and a novel turn taking scheme. The project enables Haile, a robotic percussionist, to fluidly interact with humans in the context of an improvisatory jam session. The long-term goal of this work is to facilitate dynamic interactions between humans and machines that will lead to novel and inspiring musical outcomes.
Keywords: beat-detection, human-robot interaction, leader-follower paradigm, machine listening, robotic musicianship
Planning as an architectural control mechanism BIBAKFull-Text 229-230
  Nick Hawes; Michael Brenner; Kristoffer Sjöö
We describe recent work on PECAS, an architecture for intelligent robotics that supports multi-modal interaction.
Keywords: architecture, integration, planning, robotics
Influences of concerns toward emotional interaction into social acceptability of robots BIBKFull-Text 231-232
  Tatsuya Nomura; Takayuki Kanda; Tomohiro Suzuki; Sachie Yamada; Kensuke Kato
Keywords: human-robot interaction, negative attitudes, social acceptance
Interactive jamming with Shimon: a social robotic musician BIBAKFull-Text 233-234
  Gil Weinberg; Aparna Raman; Trishul Mallikarjuna
The paper introduces Shimon: a socially interactive and improvisational robotic marimba player. It presents the interaction schemes used by Shimon in the realization of an interactive musical jam session among human and robotic musicians.
Keywords: beat, haile, improvisation, interaction, jam, marimba, markov, melody, music, rhythm, robot, shimon, social
Human-robot interaction observations from a proto-study using SUAVs for structural inspection BIBKFull-Text 235-236
  Maarten van Zomeren; Joshua M. Peschel; Timothy Mann; Gabe Knezek; James Doebbler; Jeremy Davis; Tracy A. Hammond; Augustinus H. J. Oomes; Robin R. Murphy
Keywords: (s)UAV, human robot interaction, interface, rescue robotics
What are the benefits of adaptation when applied in the domain of child-robot interaction? BIBAKFull-Text 237-238
  Tamie Salter; Francois Michaud; Dominic Létourneau
Robotic devices have great potential in applications with children, ranging from play to assistive uses. We develop robotic devices for a diverse range of children differing in age, gender, and ability, including children diagnosed with cognitive difficulties such as autism. Every child is an individual, and children vary in their personalities and styles of interaction. Therefore, we believed it essential for the robot to be able to adjust its behaviour to the type of interaction it receives. In this abstract we examine a series of trials that investigated how adaptation (through changes in motion and sound) on board a fully autonomous rolling robot could help gain and sustain the interest of five different children. We discovered surprising benefits to having adaptation on board Roball.
Keywords: adaptive mobile robots, child-robot interaction (CRI), experimentation in the wild, human-robot interaction (HRI)
Making sense of agentic objects and teleoperation: in-the-moment and reflective perspectives BIBAKFull-Text 239-240
  Leila Takayama
Agentic objects are those entities that are perceived and responded to in-the-moment as if they were agentic despite the likely reflective perception that they are not agentic at all. They include autonomous robots, but also simpler systems like automatic doors, trashcans, and staplers -- anything that seems to possess agency. It is well known that low-level spatiotemporal information elicits in-the-moment responses that are interpreted as perceiving mentalism [8, 17], but people reflectively believe that there is a distinction between human and non-human agents. How are we to make sense of these agentic objects?
Keywords: agentic object, human-robot interaction, in-the-moment, perceived agency, teleoperation
Cognitive architecture for perception-reaction intelligent computer agents (CAPRICA) BIBAKFull-Text 241-242
  Dustin B. Chertoff; Sandy Vanderbleek; Stephen M. Fiore; Shaun Gallagher
In this paper, we introduce a cognitive agent architecture that can be used in the study of Human-Robot Interaction. The Cognitive Architecture for Perception-Reaction Intelligent Computer Agents (CAPRICA) is an extensible agent library built around the ideas of theory of mind, episodic memory, and embodied cognition. Existing agent research in each of these areas was used to formulate design requirements. We provide an overview of the library's design and discuss future work in progress.
Keywords: cognitive agents, embodied cognition, episodic memory, theory of mind
System design of group communication activator: an entertainment task for elderly care BIBAKFull-Text 243-244
  Yoichi Matsuyama; Hikaru Taniyama; Shinya Fujie; Tetsunori Kobayashi
Our society is aging seriously, especially in Japan.
   We investigated one of the daycare centers that provide care for the elderly. As a result, we realized that in these facilities communication is needed for its own sake, and that active communication can even alleviate depression and dementia. We therefore propose to address these problems by using a robot as a communication activator to improve group communication, which we define as a type of communication formed by several persons. Here, we focus on a recreation game named "Nandoku", a quiz that can be described as group communication with a master of ceremonies (MC).
   The system always selects its behavior and target (a participant in the game) so as to maximize "communication activeness," defined as the amount of participation of the panelists (ordinarily three: A, B, and C), which is calculated from the panelists' face directions using camera information.
   For instance, if participant A is not fully participating, as indicated by a lack of eye contact, the system is expected to select a behavior such as saying "Can you answer, Mr. A?" to encourage A to participate in the game.
   We tested the system in a daycare center and observed an obvious increase in participation. This offers evidence that the robot can serve a practical role as a communication activator, improving group communication, especially for entertainment use.
Keywords: group communication
How anthropomorphism affects empathy toward robots BIBAKFull-Text 245-246
  Laurel D. Riek; Tal-Chen Rabinowitch; Bhismadev Chakrabarti; Peter Robinson
A long-standing question within the robotics community is about the degree of human-likeness robots ought to have when interacting with humans. We explore an unexamined aspect of this problem: how people empathize with robots along the anthropomorphic spectrum. We conducted an experiment that measured how people empathized with robots shown to be experiencing mistreatment by humans. Our results indicate that people empathize more strongly with more human-looking robots and less with mechanical-looking robots.
Keywords: anthropomorphism, emotion, empathy, human-robot interaction, simulation theory, theory of mind
Responsiveness to robots: effects of ingroup orientation & communication style on hri in china BIBAKFull-Text 247-248
  Lin Wang; Pei-Luen Patrick Rau; Vanessa Evers; Benjamin Robinson; Pamela Hinds
This study investigates the effects of group orientation and communication style on Chinese subjects' responsiveness to robots. A 2x2 experiment was conducted with group orientation (ingroup vs. outgroup) and communication style (implicit vs. explicit) as dimensions. The results confirm expectations that subjects with a Chinese cultural background are more responsive to robots that use implicit communication styles. We also found some evidence that subjects were more responsive when they thought of the robot as an ingroup member. These findings inform the design of robots for use in China and countries with similar cultural values and reinforce the importance of culturally sensitive design in HRI.
Keywords: communication style, human robot interaction, relationship
Tele-operators' judgments of their ability to drive through apertures BIBAKFull-Text 249-250
  Keith S. Jones; Elizabeth A. Schmidlin; Brian R. Johnson
It has been suggested that operators should base decisions to enter apertures on their ability to control the robot, rather than its static dimensions. Doing so, however, assumes that operators know whether they can drive a robot through the aperture. The present study tested that assumption. Results indicated that judgments about control of the robot were not accurate. In contrast, judgments of static dimensions were accurate. Thus, operators will require support if they must base decisions to enter apertures on their ability to control that robot.
Keywords: affordance, aperture, human-robot interaction, tele-operation
Distinguishing defaults and second-line conceptualization in reasoning about humans, robots, and computers BIBAKFull-Text 251-252
  Daniel T. Levin; Megan M. Saylor
In previous research, we demonstrated that people distinguish between human and nonhuman intelligence by assuming that humans are more likely to engage in intentional goal-directed behaviors than computers or robots. In the present study, we tested whether participants who respond relatively quickly when making predictions about an entity are more or less likely to distinguish between human and nonhuman agents on the dimension of intentionality. Participants responded to a series of five scenarios in which they chose between intentional and nonintentional actions for a human, a computer, and a robot. Results indicated that participants who chose quickly were more likely to distinguish human and nonhuman agents than participants who deliberated more over their responses. We suggest that the short-RT participants were employing a first-line default to distinguish between human intentionality and more mechanical nonhuman behavior, and that the slower, more deliberative participants engaged in deeper second-line reasoning that led them to change their predictions for the behavior of a human agent.
Keywords: HRI, theory of mind
Probo: a testbed for human robot interaction BIBAKFull-Text 253-254
  Kristof Goris; Jelle Saldien; Dirk Lefeber
The concept of the huggable robot Probo is the result of a desire to improve the living conditions of children in a hospital environment. These children need distraction and plenty of information. In this paper the concept of a new social robot is presented. The robot can be used in hospitals as a tele-interface for entertainment, communication and medical assistance.
   Besides the prototype of the real robot, a virtual model has been developed. With user-friendly software, these models can be used as an interface between an operator and a child. In that way, Probo becomes a platform for experiments on human-robot interaction, with great opportunities in different disciplines.
Keywords: human robot interaction, robotics, user interface
r-Learning services for elementary school students with a teaching assistant robot BIBAKFull-Text 255-256
  Jeonghye Han; Dongho Kim
The r-Learning paradigm with educational robots is emerging as a part of e-Learning, which means using technology for learning. This study on using robots as teaching assistants opened the possibility of r-Learning for English in the classroom. We found that children like robot services that build personal relationships in class, while teachers prefer services that make lessons more convenient to manage. Robot services such as praising, cheering up, or calling the roll are effective ways of motivating children to learn and of enhancing the relationship between TIRO and the children. We are conducting further field trials of new scenarios and services that motivate children and help them concentrate in class, together with teachers, pre-service teachers, children, parents, robotics researchers, social scientists, and others.
Keywords: e-learning, personal relationship, r-learning, robot service, teaching assistant robot
Autonomous vs. tele-operated: how people perceive human-robot collaboration with HRP-2 BIBAKFull-Text 257-258
  Astrid Weiss; Daniela Wurhofer; Michael Lankes; Manfred Tscheligi
Effective collaboration between robots and humans is not only a question of interface design and usability, but also of user experience and social acceptance. To investigate these aspects for human-robot collaboration with the HRP-2 robot, two video-based focus groups enhanced with creative stimuli were conducted. The following research question was addressed: Is the HRP-2 robot perceived differently in an autonomous collaboration condition than in a tele-operated collaboration condition, in terms of social acceptance and user experience? The results show that participants are generally open to a humanoid robot as a working partner as long as there is a clear distinction between human and robot in terms of tasks and working procedures. Furthermore, participants expressed a positive attitude toward the remotely controlled HRP-2 robot.
Keywords: autonomous, evaluation, focus group, humanoid, social acceptance, tele-operated, user experience
I would choose the other card: humanoid robot gives an advice BIBAKFull-Text 259-260
  Astrid Weiss; Roland Buchner; Thomas Scherndl; Manfred Tscheligi
This article reports on a user study conducted to assess the credibility of a humanoid robot. The study set-up was based on the "Monty Hall Problem." Overall, 13 people between the ages of 19 and 84 took part in the study (7 male and 6 female). The experiment was set up as a card game in which the participant had to guess which of three cards showed a prize. At one point in the experiment the robot advised the participant to change his/her mind and choose another card. During the user study the participants filled in a questionnaire on their level of certainty about their choice and the credibility of the robot. The results showed a significant correlation between the believability of the robot and the certainty in the decision made. Furthermore, the outcomes showed differences between participants who followed the robot's advice and participants who did not, regarding credibility, certainty of the decision made, and the estimation of whether the robot was helpful or not.
Keywords: credibility, monty hall problem, uncertain knowledge, user study
Evaluating the ICRA 2008 HRI challenge BIBAKFull-Text 261-262
  Astrid Weiss; Thomas Scherndl; Manfred Tscheligi; Aude Billard
This paper reports on the evaluation of the ICRA 2008 Human-Robot Interaction (HRI) Challenge. Five research groups demonstrated state-of-the-art work on HRI with a special focus on social and learning abilities. The demonstrations were rated by expert evaluators, in charge of awarding the prize, and by 269 participants, i.e. 20 percent of the conference attendees, through a standardized questionnaire (semantic differential). The data was analyzed with respect to six independent variables: expert evaluators vs. attendees, nationality of participants, region of origin of the demo, and age, gender and knowledge level of the attendees. Conference attendees tended to give higher scores for Social Skills, General Impression, and Overall Score than the expert evaluators. Irrespective of level of knowledge, age, and gender, conference attendees rated all demos relatively homogeneously. However, a comparative analysis of the conference attendees' ratings by nationality showed that demonstrations were rated differently depending on the region of origin: attendees from the USA and Asian countries tended to rate demos from their own region more frequently and more positively.
Keywords: evaluation, expert evaluation, human-robot interaction challenge, user-based evaluation
Using bio-electrical signals to influence the social behaviours of domesticated robots BIBAKFull-Text 263-264
  Paul Saulnier; Ehud Sharlin; Saul Greenberg
Several emerging computer devices read bio-electrical signals (e.g., electro-corticographic signals, skin biopotential or facial muscle tension) and translate them into computer-understandable input. We investigated how one low-cost commercially-available device could be used to control a domestic robot. First, we used the device to issue direct motion commands; while we could control the device somewhat, it proved difficult to do reliably. Second, we interpreted one class of signals as suggestive of emotional stress, and used that as an emotional parameter to influence (but not directly control) robot behaviour. In this case, the robot would react to human stress by staying out of the person's way. Our work suggests that affecting behaviour may be a reasonable way to leverage such devices.
Keywords: bio-electric signals, emotional control, emotional instrument, iRobot Roomba, OCZ NIA, robots
A robot that says bad!: using negative and positive social feedback from a robotic agent to save energy BIBAKFull-Text 265-266
  Jaap Ham; Cees Midden
Two experiments explored the persuasive effects of social feedback, as provided by a robotic agent, on behavioral change. Results indicate stronger persuasive effects of social feedback than of factual feedback (Experiment 1) or factual evaluative feedback (Experiment 2), and of negative feedback (especially social but also factual) than of positive feedback.
Keywords: embodied agents, persuasion, social feedback, social robotics
Formal verification of human-robot teamwork BIBAKFull-Text 267-268
  Rafael H. Bordini; Michael Fisher; Maarten Sierhuis
We here address the modelling and analysis of human-agent teamwork, specifically in the context of proposed astronaut-robot collaboration in future space missions. We are particularly interested in modelling such systems at a level that allows formal verification techniques to be applied, and hence carry out sophisticated analysis of the reliability and effectiveness of the teams before the system is deployed in real scenarios. In this paper we describe our ongoing research in this area.
Keywords: agent-based modelling, teamwork, verification
Transactive memory systems: a perspective on coordination in human-robot incident response teams BIBAKFull-Text 269-270
  Lei Liu; Pamela J. Hinds
This paper introduces Transactive Memory System (TMS) theory to the study of human-robot interaction in a complex work setting comprised of people with complementary domains of expertise. New insights regarding the development of TMS in human-robot incident response teams are presented.
Keywords: human-robot interaction, transactive memory systems (TMS)
Robot-directed speech as a means of exploring conceptualizations of robots BIBAKFull-Text 271-272
  Sarah Kriz; Gregory Anderson; Magdalena Bugajska; J. Gregory Trafton
Decades of research have shown that speakers adapt the way in which they speak to meet the needs of listeners, and that speech modifications can illuminate speakers' conceptualizations of their listeners' cognitive and communicative abilities. The present study extends this line of research into human-robot communication by analyzing the linguistic features of commands given to a robotic dog. The results indicate that males and females differed in the way in which they spoke to the robot, suggesting that there was not a uniform expectation of the robot's communicative capacities.
Keywords: HRI communication
FaceBots: robots utilizing and publishing social information in Facebook BIBAKFull-Text 273-274
  Nikolaos Mavridis; Chandan Datta; Shervin Emami; Andry Tanoto; Chiraz BenAbdelkader; Tamer Rabie
Our project aims at supporting the creation of sustainable and meaningful longer-term human-robot relationships through the creation of embodied robots with face recognition and natural language dialogue capabilities, which exploit and publish social information available on the web (Facebook). Our main underlying experimental hypothesis is that such relationships can be significantly enhanced if the human and the robot gradually create a pool of shared episodic memories that they can co-refer to (shared memories), and if they are both embedded in a social web of other humans and robots they both know and encounter (shared friends). In this paper we present such a robot, which, as we will show, achieves two significant novelties.
Keywords: conversational robots, human-robot interaction, social robots
The effects of robot touch and proactive behaviour on perceptions of human-robot interactions BIBAKFull-Text 275-276
  Henriette S. M. Cramer; Nicander A. Kemper; Alia Amin; Vanessa Evers
Despite robots' embodiment, the effect of physical contact or touch and its interaction with robots' autonomous behaviour has been a mostly overlooked aspect of human-robot interaction. This video-based, 2x2 between-subject survey experiment (N=119) found that touch and proactiveness interacted in their effects on perceived machine-likeness and dependability. Attitude towards robots in general also interacted with the effects of touch. Results show the value of further exploring the combination of physical aspects of human-robot interaction and proactiveness.
Keywords: autonomy, human-robot interaction, proactiveness, touch
Towards a design method for expressive robots BIBAKFull-Text 277-278
  Bernt Meerbeek; Martin Saerbeck; Christoph Bartneck
Autonomous robots tend to induce the perception of a personality through their behavior and appearance. It has been suggested that the personality of a robot can be used as a design guideline and as a mental model of the robot. We propose a method to design and evaluate personality and expressions for domestic robots.
Keywords: animacy, anthropomorphism, design method, expression, human-robot interaction, product personality, robot behavior
Are we living in a robot cargo cult? BIBAKFull-Text 279-280
  Ylva Fernaeus; Mattias Jacobsson; Sara Ljungblad; Lars Erik Holmquist
We use the Cargo Cult metaphor to discuss visions, methods and communication in robot research. Essentially, a cargo cult involves the performance of imitative rituals conducted without understanding the underlying cause of a phenomenon. We discuss how this is an ongoing challenge within the field of HRI, and what researchers can do to avoid contributing to a robotic cargo cult.
Keywords: human-robot interaction, robotics research, robots in popular culture
Human-robot physical interaction with dynamically stable mobile robots BIBAKFull-Text 281-282
  Umashankar Nagarajan; George Kantor; Ralph L. Hollis
Human-Robot Physical Interaction is an important attribute for robots operating in human environments. The authors illustrate some basic physically interactive behaviors with dynamically stable mobile robots using the ballbot as an example. The ballbot is a dynamically stable mobile robot moving on a single spherical wheel. The dynamic stability and robust controllers enable the ballbot to be physically moved with ease. The authors also demonstrate other behaviors like human intent detection and learn-repeat behavior on the real robot.
Keywords: dynamically stable mobile robots, human-robot physical interaction
Hardware-assisted multiple object tracking for human-robot-interaction BIBKFull-Text 283-284
  Claus Lenz; Giorgio Panin; Thorsten Röder; Martin Wojtczyk; Alois Knoll
Keywords: GPU, HRI, joint-action, model-based tracking
Human-in-the-loop control of an assistive robotic arm in unstructured environments for spinal cord injured users BIBAKFull-Text 285-286
  Dae-Jin Kim; Aman Behal
We describe progress in implementing a vision-based robotic assist device to facilitate Activities of Daily Living (ADL) tasks for a class of users with motor disabilities. The goal of the research is to reduce time to task completion and cognitive burden for users interacting with an unstructured environment via a Wheelchair Mounted Robotic Arm (WMRA). The developed robot system was tested with five able-bodied subjects to assess its usefulness.
Keywords: assistive robots, control architecture, experimental
Bandwidth allocation in a military teleoperation task BIBAKFull-Text 287-288
  Alia Fisher; Patricia L. McDermott; Shane Fagan
The implications of bandwidth allocation are described for teleoperation in a military task that involved navigation, target detection, and target identification. Color versus grayscale imagery was manipulated. Participants themselves traded off resolution and frame rate settings. Participants minimized switching between resolution/frame rate settings and tended to use settings with high resolution/low frame rate. Courses completed with the highest resolution (and lowest frame rate) had the fastest target identification times, but no other differences were observed between settings. Color imagery offered advantages for overall course time and the time to identify a tank as friendly or enemy.
Keywords: color, frame rate, greyscale, human-robot interaction, resolution, teleoperation
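The tradeoff the operators negotiated follows directly from a fixed link budget: at constant bandwidth, bits spent on resolution are unavailable for frame rate. A minimal sketch of that constraint (all numeric values below are illustrative assumptions, not the study's settings):

```python
def max_frame_rate(bandwidth_bps, width, height, bits_per_pixel, compression=1.0):
    """Highest frame rate a link can sustain at a given resolution.

    Illustrates the resolution/frame-rate tradeoff; the bandwidth,
    resolutions, and compression ratio are hypothetical.
    """
    bits_per_frame = width * height * bits_per_pixel / compression
    return bandwidth_bps / bits_per_frame

# Halving both image dimensions quadruples the achievable frame rate:
low = max_frame_rate(2_000_000, 320, 240, 24, compression=20)
high = max_frame_rate(2_000_000, 640, 480, 24, compression=20)
```

This is why the settings form a spectrum: operators in the study gravitated to the high-resolution/low-frame-rate end of it.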
General visualization abstraction algorithm for geographic map-based human-robot interfaces BIBAKFull-Text 289-290
  Curtis M. Humphrey; Julie A. Adams
This paper presents a novel visualization technique that provides integration, abstraction, and sharing of the information generated by remotely deployed robots or sensors. The General Visualization Abstraction (GVA) algorithm is designed to display the most useful information items at any moment by determining an importance value for each information item with a focus on two classes of information: historically relevant and currently relevant information, and novel and emerging information.
Keywords: GIS, human-robot interfaces (HRI), information abstraction
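The two information classes named in the abstract (currently/historically relevant vs. novel and emerging) suggest an importance value that blends recency with novelty. The following is a hypothetical sketch of such a score, not the GVA algorithm itself; the weights, half-life, and item fields are invented for illustration:

```python
import time

def importance(item, now, w_recent=0.5, w_novel=0.5, half_life=60.0):
    """Toy importance score: decayed recency relevance plus a novelty bonus.

    `item` is a dict with 'timestamp' (seconds) and 'seen_before' (bool);
    this weighting scheme is an assumption for illustration.
    """
    age = now - item["timestamp"]
    recency = 0.5 ** (age / half_life)          # exponential decay with age
    novelty = 0.0 if item["seen_before"] else 1.0
    return w_recent * recency + w_novel * novelty

def select_items(items, k, now=None):
    """Display the k most important information items at this moment."""
    now = time.time() if now is None else now
    return sorted(items, key=lambda it: importance(it, now), reverse=True)[:k]
```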
Preliminary results: humans find emotive non-anthropomorphic robots more calming BIBAKFull-Text 291-292
  Cindy L. Bethel; Kristen Salomon; Robin R. Murphy
This paper describes preliminary results of a large-scale, complex human study in HRI in which results show that participants were calmer interacting with non-anthropomorphic robots operated in an emotive mode versus a standard, non-emotive mode.
Keywords: affective computing, affective robotics, doubly multivariate analysis, experimental design, human-robot interaction, urban search and rescue robotics, victim management
Where third wave HCI meets HRI: report from a workshop on user-centred design of robots BIBAKFull-Text 293-294
  Ylva Fernaeus; Sara Ljungblad; Mattias Jacobsson; Alex Taylor
In this report we discuss some of the challenges when applying a user-centred design approach in the field of human-robot interaction (HRI). The discussion is based on a one-day workshop at the NordiCHI'08 conference, investigating how methods, techniques and perspectives from the field of Human Computer Interaction (HCI) could contribute to and learn from recent developments in the area of HRI. Emphasis was put on topics that are infrequent in mainstream HCI such as machine movement, autonomy, anthropomorphism, physical interaction, environmental issues and issues concerned more generally with cultural notions of robots.
Keywords: 3rd wave human-computer interaction, human-robot interaction
Robot motivator: improving user performance on a physical/mental task BIBAKFull-Text 295-296
  Juan Fasola; Maja J. Mataric
We describe the design and implementation of a socially assistive robot that is able to monitor the performance of a user during a combined mental and physical task, with the purpose of motivating the user to complete the task and to improve performance. A three-condition experimental study was constructed for evaluation of the robot and preliminary results of the robot's interaction with human participants are presented.
Keywords: human-robot interaction, motivation, socially assistive robotics
Music therapist robot for individuals with cognitive impairments BIBAKFull-Text 297-298
  Adriana Tapus; Cristian Tapus; Maja J. Mataric
Currently the 2 percent growth rate of the world's older population exceeds the 1.2 percent rate of the world's population as a whole. This difference is expected to increase rather than diminish, so that by 2050 the number of individuals over the age of 85 is projected to be three times what it is today. Most of these individuals will need physical, emotional, and cognitive assistance. In this paper, we present a new system based on socially assistive robotics (SAR) technology that plays the role of a music therapist and aims to provide a customized help protocol, through motivation, encouragement, and companionship, to users suffering from cognitive changes related to aging and/or Alzheimer's disease.
Keywords: adaptive systems, assistive robotics, machine learning
A preliminary system for recognizing boredom BIBAKFull-Text 299-300
  Allison M. Jacobs; Benjamin Fransen; J. Malcolm McCurry; Frederick W. P. Heckel; Alan R. Wagner; J. Gregory Trafton
A 3D optical flow tracking system was used to track participants as they watched a series of boring videos. The video stream of the participants was rated for boredom events. Ratings and head position data were combined to predict boredom events.
Keywords: human-robot interaction
Situated messages for asynchronous human-robot interaction BIBAKFull-Text 301-302
  Nicolai Marquardt; James Young; Ehud Sharlin; Saul Greenberg
An ongoing issue in human-robot interaction (HRI) is how people and robots communicate with one another. While there is considerable work in real-time human-robot communication, fairly little has been done in the asynchronous realm. Our approach, which we call situated messages, lets humans and robots asynchronously exchange information by placing physical tokens -- each representing a simple message -- in meaningful physical locations of their shared environment. Using knowledge of the robot's routines, a person can place a message token at a location where it is relevant, typically to redirect the robot's behavior at that location. When the robot passes near that location, it detects the message and reacts accordingly. Similarly, robots can themselves place tokens at specific locations for people to read. Situated messages thus leverage embodied interaction, where token placement exploits the everyday practices and routines of both people and robots. We describe our working prototype, introduce application scenarios, explore message categories and usage patterns, and suggest future directions.
Keywords: RFID, asynchronous interaction, human-robot interaction, situated messages
Multi-sensor fusion for human daily activity recognition in robot-assisted living BIBAKFull-Text 303-304
  Chun Zhu; Weihua Sheng
In this paper, we propose a human activity recognition method that fuses data from two wearable inertial sensors attached to one foot and the waist of a human subject, respectively. Our multi-sensor fusion based method combines neural networks and hidden Markov models (HMMs), and can reduce the computation load. We conducted experiments using a prototype wearable sensor system, and the obtained results demonstrate the effectiveness and accuracy of our algorithm.
Keywords: activity recognition, assisted living, sensor fusion, wearable sensor
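Combining a frame-level classifier with an HMM, as this abstract describes, typically means smoothing the classifier's per-frame outputs with state-transition constraints. Below is a minimal log-space Viterbi sketch of that idea; the states, transition values, and the use of classifier posteriors in place of emission likelihoods are assumptions for illustration, not the authors' model:

```python
import math

def viterbi(posteriors, trans, prior):
    """Smooth per-frame classifier posteriors with an HMM (log-space Viterbi).

    posteriors: list of dicts {state: P(state | frame)}, e.g. from a neural net.
    trans[a][b]: transition probability a -> b; prior[s]: initial probability.
    """
    states = list(prior)
    # best log-probability of any path ending in each state at the first frame
    logp = {s: math.log(prior[s]) + math.log(posteriors[0][s]) for s in states}
    back = []                                   # backpointers per frame
    for obs in posteriors[1:]:
        step, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: logp[p] + math.log(trans[p][s]))
            step[s] = logp[prev] + math.log(trans[prev][s]) + math.log(obs[s])
            ptr[s] = prev
        logp = step
        back.append(ptr)
    # trace the best path backwards from the most likely final state
    best = max(states, key=logp.get)
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

With "sticky" transitions, a single noisy frame is overridden by its neighbors, which is the computational payoff of the fusion.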
Focus group interview for designing a growing robot BIBAKFull-Text 305-306
  Ryoung Kim; Sona S. Kwak; Youn-kyung Lim; Myung-suk Kim
This study describes preliminary research for designing a growing robot. To explore the interaction between a human and an object that changes physically through its growth, focus group interviews were conducted with participants who kept pets, plants, and a plant-like product. An appropriate target model for designing a growing robot, the value of raising living things, and the features of interaction that induce affinity were examined.
Keywords: focus group interview, growing robot, social relationship
Sociable robot improves toddler vocabulary skills BIBAKFull-Text 307-308
  Javier Movellan; Micah Eckhardt; Marjo Virnes; Angelica Rodriguez
We report results of a study in which a low-cost sociable robot was immersed in an Early Childhood Education Center for a period of 2 weeks. The study was designed to investigate whether the robot, which operated fully autonomously during the intervention period, could improve target vocabulary skills of 18- to 24-month-old toddlers. The results showed a 27% improvement in knowledge of the target words taught by the robot when compared to a matched set of control words. The results suggest that sociable robots may be an effective and low-cost technology for enriching Early Childhood Education environments.
Keywords: algorithms, human factors, robotics, ubiquitous computing
A vision based human robot interface for robotic walkthroughs in a biotech laboratory BIBAKFull-Text 309-310
  Martin Wojtczyk; Giorgio Panin; Claus Lenz; Thorsten Röder; Suraj Nair; Erwin Roth; Alois Knoll; Rüdiger Heidemann; Klaus Joeris; Chun Zhang; Mark Burnett; Tom Monica
Both robots and personal computers established new markets about 30 years ago and were enabling factors in automation and information technology. However, while personal computers can be found in almost every home nowadays, the domain of robots is still mostly restricted to industrial automation. Because of the physical impact of robots, a safe design is essential; most robots still lack one, which prevents their application for personal use, although a slow change can be noticed with the introduction of dedicated robots for specific tasks, which can be classified as service robots. Moreover, as more and more robots are designed as service robots, their developers face the challenge of reducing the machines' complexity and providing smart user interface methods. Ideally, the robot would be able to cooperate with a human just like another human would.
Keywords: HRI, lab automation, life sciences, model based tracking
On Line -- affective state reporting device: a tool for evaluating affective state inference systems BIBAKFull-Text 311-312
  Susana Zoghbi; Dana Kulić; Elizabeth Croft; Machiel Van der Loos
The monitoring of human affective state is a key part of developing responsive and naturally behaving human-robot interaction systems. However, evaluation and calibration of physiologically monitored affective state data is typically done using offline questionnaires and user reports. In this paper we investigate the use of an online device for collecting real-time user reports of affective state during interaction with a robot. These reports are compared both to survey reports taken after the interaction and to the affective states estimated by an inference system. The aim is to evaluate and characterize the physiological signal-based inference system and determine which factors significantly influence its performance. This analysis will be used in future work to fine-tune the affective estimations by identifying what kinds of variations in physiological signals precede or accompany variations in reported affective states.
Keywords: affective responses, affective state estimation, human responses to robots, human-robot interaction, physiological signal monitoring
An uncanny game of trust: social trustworthiness of robots inferred from subtle anthropomorphic facial cues BIBAKFull-Text 313-314
  Maya B. Mathur; David B. Reichling
Modern android robots have begun to penetrate the social realm of humans. This study quantitatively probed the impact of anthropomorphic robot appearance on human social interpretation of robot facial expression. The "Uncanny Valley" theory describing the disturbing effect of imperfect human likenesses has been a dominant influence in discussions of human-robot social interaction, but measuring its effect on human social interactions with robots has been problematic. The present study addresses this issue by examining social responses of human participants to a series of digitally composed pictures of realistic robot faces that span a range from mechanical to human in appearance. Our first experiment provides evidence that an Uncanny Valley effect on social attractiveness is indeed a practical concern in the design of robots meant to interact socially with the lay public. In the second experiment, we employed game-theory research methods to measure the effect of subtle facial expressions in robots on human judgments of their trustworthiness as social counterparts. Our application of game-theory research methods to the study of human-robot interactions provides a model for such empirical measurement of humans' social responses to android robots.
Keywords: android, anthropomorphism, facial expression, game theory, humanoid, social interaction, uncanny valley
Incorporating active vision into the body schema BIBKFull-Text 315-316
  Justin W. Hart; Eleanor R. Avrunin; David Golub; Brian Scassellati; Steven W. Zucker
Keywords: robotic self modeling
The power of suggestion: teaching sequences through assistive robot motions BIBAKFull-Text 317-318
  Ross Mead; Maja J. Mataric
We present a preliminary implementation of a robot within the context of social skills intervention. The robot engages a human user in an interactive and adaptive game-playing session that emphasizes a specific sequence of movements over time. Such games highlight joint attention and encourage forms of interaction that are useful within various assistive domains. Noteworthy robot activities include those that could be used to promote social cues in children with autism, sequences that maintain or improve memory in Alzheimer's patients, and movements that encourage exercises to increase range of motion in post-stroke rehabilitation.
Keywords: human-robot interaction, socially assistive robotics
CALLY: the cell-phone robot with affective expressions BIBAKFull-Text 319-320
  Ji-Dong Yim; Christopher D. Shaw
This poster describes a robotic cell phone named CALLY, with which we are exploring the roles of facial and gestural expressions of robotic products in human-computer interaction. We discuss non-verbal anthropomorphic affect features as media for building emotional relationships between a user and a product, and introduce new types of robotic products on the market that may be capable of establishing intimacy by applying such features. A couple of social robot application ideas generated in the early phase of our project are also presented, with their usage scenarios and implementations. CALLY was used in our initial participatory design workshop and helped participants generate new application ideas.
Keywords: mobile phone, non-verbal anthropomorphic affect features, robot
Relating initial turns of human-robot dialogues to discourse BIBAKFull-Text 321-322
  Maxim Makatchev; Min Kyung Lee; Reid Simmons
User models can be useful for improving dialogue management. In this paper we analyze human-robot dialogues that occur during uncontrolled interactions and estimate relations between the initial dialogue turns and patterns of discourse that are indicative of user traits such as persistence and politeness. The significant effects shown in this preliminary study suggest that initial dialogue turns may be useful in modeling a user's interaction style.
Keywords: dialogue, human-robot interaction
HomeWindow: an augmented reality domestic monitor BIBAKFull-Text 323-324
  Paul Lapides; Ehud Sharlin; Saul Greenberg
Computation is increasingly prevalent in the home: it serves as a way to control the home itself, or it is part of the many digital appliances within it. The question is: how can home inhabitants effectively understand and control the digital home? Our solution lets a person examine and control their home surroundings through a mobile display that serves as a 'magic lens', where the detail shown varies with proximity. In particular, HomeWindow is an augmented reality system that superimposes an interactive graphical interface atop the physical, digitally augmented artifacts in the home. One can get an overview of a room's computational state by looking through the display: the basic state of all digital hot spots is shown atop their physical counterparts. As one approaches a particular digital spot, more detailed information as well as a control interface is shown using a semantic zoom. Our current implementation works with two home devices. First, people can examine and remotely control the status of mobile domestic robots. Second, people can discover the power consumption of household appliances, where each appliance is surrounded by a colorful aura that reflects its current and historical energy use.
Keywords: augmented reality, domestic computing, ubiquitous computing, energy awareness, human-robot interaction
Tea table, come closer to me BIBAKFull-Text 325-326
  Vikas Reddy Enti; Rajesh Arumugam; Krishnamoorthy Baskaran; Bingbing Liu; Foo Kong Foong; Appadorai Senthil Kumar; Dee Meng Kang; Xiaojun Wu; Wai Kit Goh
We present a new concept (named DA vinCi) of distributed agents, sensor networks, and an intelligent server tailored to the home environment. Instead of a single multi-tasking human-like robot, we propose a team of networked task-specific robotic agents that interface with each other and the environment through a spatial map built by the server. We also highlight how our server will be a proxy for all human-robot interactions (HRI) in the system and discuss the challenges involved. The paper's title captures the gist of our system, where even a tea table can be inexpensively mobilized and interacted with via the DA vinCi architecture.
Keywords: multi-agent system, robot intermediaries
Self introducing poster using attachable humanoid parts BIBAKFull-Text 327-328
  Hirotaka Osawa; Ren Ohmura; Michita Imai
In this paper, we propose a new robotic presentation method, called the Self Introducing Poster, that uses attachable humanoid parts and explains its contents through a self-introduction style. Presentation by a conventional robot sometimes fails because the robot presenter is often too attractive and distracts from the presentation itself. In our method, the poster is anthropomorphized and explains its own contents. Because of this self-presentation, users can more easily understand its meaning, since the information's contents and the information provider are strongly related. We designed and implemented our system and evaluated it in the field. The results suggest that the self-introducing system is useful for gaining users' attention and effectively presenting information.
Keywords: anthropomorphization, human interface, human robot interaction
Evaluation of the effects of the shape of the artificial hand on the quality of the interaction: natural appearance vs. symbolic appearance BIBAKFull-Text 329-330
  Massimiliano Zecca; Fumiya Iida; Nobutsuna Endo; Yu Mizoguchi; Keita Endo; Yousuke Kawabata; Kazuko Itoh; Atsuo Takanishi
Personal robots and robot technology (RT)-based assistive devices are expected to play a major role in our elderly-dominated society by interacting with surrounding people both physically and psychologically. A fundamental role during the interaction is of course played by the hand. In this paper we present an evaluation of the effect of hand shape on the quality of the interaction, in particular during a handshake.
Keywords: emotion expression, handshake, humanoid robotics, soft hand