
Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction

Fullname: HRI'08 Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction
Editors: Terry Fong; Kerstin Dautenhahn; Matthias Scheutz; Yiannis Demiris
Location: Amsterdam, The Netherlands
Dates: 2008-Mar-12 to 2008-Mar-15
Publisher: ACM
Standard No: ISBN 1-60558-017-1, 978-1-60558-017-3; hcibib: HRI08
Papers: 52
Pages: 394
  1. Technical papers
  2. Videos
  3. Invited keynote talks

Technical papers

Achieving fluency through perceptual-symbol practice in human-robot collaboration (pp. 1-8)
  Guy Hoffman; Cynthia Breazeal
We have developed a cognitive architecture for robotic teammates based on the neuro-psychological principles of perceptual symbols and simulation, with the aim of attaining increased fluency in human-robot teams. An instantiation of this architecture was implemented on a robotic desk lamp, performing in a human-robot collaborative task. This paper describes initial results from a human-subject study measuring team efficiency and team fluency, in which the robot works on a joint task with untrained subjects. We find significant differences in a number of efficiency and fluency metrics, when comparing our architecture to a purely reactive robot with similar capabilities.
Keywords: anticipation, cognitive architectures, fluency, human subject studies, human-robot interaction, perceptual symbols, robotics, teamwork
Assessing cooperation in human control of heterogeneous robots (pp. 9-16)
  Jijun Wang; Michael Lewis
Human control of multiple robots has been characterized by the average demand of single robots on human attention. While this matches situations in which independent robots are controlled sequentially it does not capture aspects of demand associated with coordinating dependent actions among robots. This paper presents an extension of Crandall's neglect tolerance model intended to accommodate both coordination demands (CD) and heterogeneity among robots. The reported experiment attempts to manipulate coordination demand by varying the proximity needed to perform a joint task in two conditions and by automating coordination within subteams in a third. Team performance and the process measure CD were assessed for each condition. Automating cooperation reduced CD and improved performance. We discuss the utility of process measures such as CD to analyze and improve control performance.
Keywords: evaluation, human-robot interaction, metrics, multi-robot system
Behaviour delay and robot expressiveness in child-robot interactions: a user study on interaction kinesics (pp. 17-24)
  Ben Robins; Kerstin Dautenhahn; Rene te Boekhorst; Chrystopher L. Nehaniv
This paper presents results of a novel study on interaction kinesics in which 18 children interacted with a humanoid child-sized robot called KASPAR. Based on findings in psychology and social sciences we propose the temporal behaviour matching hypothesis, which predicts that children will adapt to and match the robot's temporal behaviour. Each child took part in six experimental trials involving two games in which the dynamics of interaction played a key part: a body expression imitation game, where the robot imitated expressions demonstrated by the children, and a drumming game, where the robot mirrored the children's drumming. In both games KASPAR responded either with or without a delay. Additionally, in the drumming game, KASPAR responded with or without exhibiting facial/gestural expressions. Individual case studies as well as statistical analysis of the complete sample are presented. Results show that a delay in the robot's drumming response led to larger pauses (with and without robot nonverbal gestural expressions) and longer drumming durations (with nonverbal gestural expressions only). In the imitation game, the robot's delay led to longer imitation-eliciting behaviour with longer pauses for the children, but systematic individual differences are observed with regard to the effects on the children's pauses. Results are generally consistent with the temporal behaviour matching hypothesis, i.e. children adapted the timing of their behaviour, e.g. by mirroring the robot's temporal behaviour.
Keywords: human-robot interaction, humanoid, interaction kinesics
Beyond dirty, dangerous and dull: what everyday people think robots should do (pp. 25-32)
  Leila Takayama; Wendy Ju; Clifford Nass
We present a study of people's attitudes toward robot workers, identifying the characteristics of occupations for which people believe robots are qualified and desired. We deployed a web-based public-opinion survey that asked respondents (n=250) about their attitudes regarding robots' suitability for a variety of jobs (n=812) from the U.S. Department of Labor's O*NET occupational information database. We found that public opinion favors robots for jobs that require memorization, keen perceptual abilities, and service-orientation. People are preferred for occupations that require artistry, evaluation, judgment and diplomacy. In addition, we found that people will feel more positively toward robots doing jobs with people rather than in place of people.
Keywords: human-robot interaction, jobs, occupations, survey
Combining dynamical systems control and programming by demonstration for teaching discrete bimanual coordination tasks to a humanoid robot (pp. 33-40)
  Elena Gribovskaya; Aude Billard
We present a generic framework that combines Dynamical Systems movement control with Programming by Demonstration (PbD) to teach a robot bimanual coordination task. The model consists of two systems: a learning system that processes data collected during the demonstration of the task to extract coordination constraints and a motor system that reproduces the movements dynamically, while satisfying the coordination constraints learned by the first system. We validate the model through a series of experiments in which a robot is taught bimanual manipulatory tasks with the help of a human.
Keywords: bimanual coordination, dynamical systems, human-robot interaction (hri), humanoid robot, learning by imitation, programming by demonstration (pbd)
A comparative psychophysical and EEG study of different feedback modalities for HRI (pp. 41-48)
  Xavier Perrin; Ricardo Chavarriaga; Céline Ray; Roland Siegwart; José del R. Millán
This paper presents a comparison between six different ways to convey navigational information provided by a robot to a human. Visual, auditory, and tactile feedback modalities were selected and designed to suggest a direction of travel to a human user, who can then decide whether or not he or she agrees with the robot's proposition. This work builds upon previous research on a novel semi-autonomous navigation system in which the human supervises an autonomous system, providing corrective monitoring signals whenever necessary.
   We recorded both qualitative (user impressions based on selected criteria and ranking of their feelings) and quantitative (response time and accuracy) information regarding different types of feedback. In addition, a preliminary analysis of the influence of the different types of feedback on brain activity is also shown. The result of this study may provide guidelines for the design of such a human-robot interaction system, depending on both the task and the human user.
Keywords: auditory feedback, brain-computer interface, multimodal interaction, robot navigation, vibro-tactile feedback, visual feedback
Compass visualizations for human-robotic interaction (pp. 49-56)
  Curtis M. Humphrey; Julie A. Adams
Compasses have been used for centuries to express directions and are commonplace in many user interfaces; however, there has not been work in human-robotic interaction (HRI) to ascertain how different compass visualizations affect the interaction. This paper presents an HRI evaluation comparing two representative compass visualizations: top-down and in-world world-aligned. The compass visualizations were evaluated to ascertain which one provides better metric judgment accuracy, lowers workload, provides better situational awareness, is perceived as easier to use, and is preferred. Twenty-four participants completed a within-subject repeated measures experiment. The results agreed with the existing principles relating to 2D and 3D views, or projections of a three-dimensional scene, in that a top-down (2D view) compass visualization is easier to use for metric judgment tasks and a world-aligned (3D view) compass visualization yields faster performance for general navigation tasks. The implication for HRI is that the choice of compass visualization has a definite and non-trivial impact on operator performance (world-aligned was faster), situational awareness (top-down was better), and perceived ease of use (top-down was easier).
Keywords: compass visualization, human-robotic interaction (hri)
Concepts about the capabilities of computers and robots: a test of the scope of adults' theory of mind (pp. 57-64)
  Daniel T. Levin; Stephen S. Killingsworth; Megan M. Saylor
We have previously demonstrated that people apply fundamentally different concepts to mechanical agents and human agents, assuming that mechanical agents engage in more location-based, and feature-based behaviors whereas humans engage in more goal-based, and category-based behavior. We also found that attributions about anthropomorphic agents such as robots are very similar to those about computers, unless subjects are asked to attend closely to specific intentional-appearing behaviors. In the present studies, we ask whether subjects initially do not attribute intentionality to robots because they believe that temporary limits in current technology preclude real intelligent behavior. In addition, we ask whether a basic categorization as an artifact affords lessened attributions of intentionality. We find that subjects assume that robots created with future technology may become more intentional, but will not be fully equivalent to humans, and that even a fully human-controlled robot will not be as intentional as a human. These results suggest that subjects strongly distinguish intelligent agents based on intentionality, and that the basic living/mechanical distinction is powerful enough, even in adults, to make it difficult for adults to assent to the possibility that mechanical things can be fully intentional.
Keywords: cognitive modeling/science, philosophical foundations of hri, user modeling and awareness
Construction and evaluation of a model of natural human motion based on motion diversity (pp. 65-72)
  Takashi Minato; Hiroshi Ishiguro
Natural human-robot communication is supported by a person's interpersonal behavior toward a robot. The conditions that elicit interpersonal behavior are thought to be related to the mechanisms that support natural communication. In the present study, we hypothesize that motion diversity produced independently of a subject's intention contributes to the human-like nature of the motions of an android that closely resembles a human being. In order to verify this hypothesis, we construct a model of motion diversity through the observation of human motion, specifically, a touching motion. Psychological experiments have shown that the presence of motion diversity in android motion influences the impression toward the android.
Keywords: android, human motion model, natural motion
Crew roles and operational protocols for rotary-wing micro-UAVs in close urban environments (pp. 73-80)
  Robin R. Murphy; Kevin S. Pratt; Jennifer L. Burke
A crew organization and four-step operational protocol are recommended based on a cumulative descriptive field study of teleoperated rotary-wing micro air vehicles (MAVs) used for structural inspection during the response and recovery phases of Hurricanes Katrina and Wilma. The use of MAVs for real civilian missions in real operating environments provides a unique opportunity to consider human-robot interaction. The analysis of the human-robot interaction during 8 days, 14 missions, and 38 flights finds that a three-person crew is currently needed to perform distinct roles: Pilot, Mission Specialist, and Flight Director. The general operations procedure is driven by the need for safety of bystanders, other aircraft, the tactical team, and the MAV itself, which leads to missions being executed as a series of short, line-of-sight flights rather than a single flight. Safety concerns may limit the utility of autonomy in reducing the crew size or enabling beyond line-of-sight operations, but autonomy could lead to an increase in flights per mission and reduced Pilot training demands. This paper is expected to help set a foundation for future research in HRI and MAV autonomy and to help establish regulations and acquisition guidelines for civilian operations. Additional research in autonomy, interfaces, attention, and out-of-the-loop (OOTL) control is warranted.
Keywords: human-robot interaction, robot, unmanned aerial vehicle
Crossmodal content binding in information-processing architectures (pp. 81-88)
  Henrik Jacobsson; Nick Hawes; Geert-Jan Kruijff; Jeremy Wyatt
Operating in a physical context, an intelligent robot faces two fundamental problems. First, it needs to combine information from its different sensors to form a representation of the environment that is more complete than any representation a single sensor could provide. Second, it needs to combine high-level representations (such as those for planning and dialogue) with sensory information, to ensure that the interpretations of these symbolic representations are grounded in the situated context. Previous approaches to this problem have used techniques such as (low-level) information fusion, ontological reasoning, and (high-level) concept learning. This paper presents a framework in which these, and related approaches, can be used to form a shared representation of the current state of the robot in relation to its environment and other agents. Preliminary results from an implemented system are presented to illustrate how the framework supports behaviours commonly required of an intelligent robot.
Keywords: symbol grounding
Decision-theoretic human-robot communication (pp. 89-96)
  Tobias Kaupp; Alexei Makarenko
Humans and robots need to exchange information if the objective is to achieve a task cooperatively. Two questions are considered in this paper: what type of information to communicate, and how to cope with the limited resources of human operators. Decision-theoretic human-robot communication can provide answers to both questions: the type of information is determined by the underlying probabilistic representation, and value-of-information theory helps decide when it is appropriate to query operators for information. A robot navigation task is used to evaluate the system by comparing it to conventional teleoperation. The results of a user study show that the developed system is superior with respect to performance, operator workload, and usability.
Keywords: decision making, human-robot communication, human-robot information fusion
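The value-of-information idea in this abstract can be illustrated with a toy calculation: query the operator only when the expected gain from their answer exceeds the cost of interrupting them. The scenario, action names, utilities, beliefs, and cost values below are all invented for illustration; this is a sketch of the general decision-theoretic principle, not the authors' system.

```python
# Toy value-of-information (VOI) sketch: ask the human operator only
# when the expected benefit of their answer exceeds the query cost.

def expected_value_of_information(belief, utilities):
    """belief: P(state); utilities: action -> {state: utility}.
    VOI = E[utility of acting after the state is revealed]
          - utility of the best action under the current belief."""
    # Utility of committing to the best action right now:
    best_now = max(
        sum(belief[s] * u[s] for s in belief)
        for u in utilities.values()
    )
    # Expected utility if the operator reveals the true state first:
    informed = sum(
        belief[s] * max(u[s] for u in utilities.values())
        for s in belief
    )
    return informed - best_now

def should_query_operator(belief, utilities, query_cost):
    """Interrupt the human only when the information is worth it."""
    return expected_value_of_information(belief, utilities) > query_cost

# A robot unsure whether a doorway is passable (illustrative numbers):
belief = {"door_open": 0.5, "door_closed": 0.5}
utilities = {
    "drive_through": {"door_open": 1.0, "door_closed": -2.0},
    "go_around":     {"door_open": 0.2, "door_closed": 0.2},
}
```

With these numbers the safe detour is best under uncertainty (expected utility 0.2), but knowing the door's state is worth 0.4 in expectation, so a cheap query pays off while an expensive one does not.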
Design patterns for sociality in human-robot interaction (pp. 97-104)
  Peter H. Kahn; Nathan G. Freier; Takayuki Kanda; Hiroshi Ishiguro; Jolina H. Ruckert; Rachel L. Severson; Shaun K. Kane
We propose that Christopher Alexander's idea of design patterns can benefit the emerging field of HRI. We first discuss four features of design patterns that appear particularly useful. For example, a pattern should be specified abstractly enough such that many different instantiations of the pattern can be uniquely realized in the solution to specific problems in context. Then, after describing our method for generating patterns, we offer and describe eight possible design patterns for sociality in human robot interaction: initial introduction, didactic communication, in motion together, personal interests and history, recovering from mistakes, reciprocal turn-taking in game context, physical intimacy, and claiming unfair treatment or wrongful harms. We also discuss the issue of validation of design patterns. If a design pattern program proves successful, it will provide HRI researchers with basic knowledge about human robot interaction, and save time through the reuse of patterns to achieve high levels of sociality.
Keywords: design patterns, human-robot interaction, sociality
Development and evaluation of a flexible interface for a wheelchair mounted robotic arm (pp. 105-112)
  Katherine Tsui; Holly Yanco; David Kontak; Linda Beliveau
Accessibility is a challenge for people with disabilities. Differences in cognitive ability, sensory impairments, motor dexterity, behavioral skills, and social skills must be taken into account when designing interfaces for assistive devices. Flexible interfaces tuned for individuals, instead of custom-built solutions, may benefit a larger number of people. The development and evaluation of a flexible interface for controlling a wheelchair mounted robotic arm is described in this paper. There are four versions of the interface based on input device (touch screen or joystick) and a moving or stationary shoulder camera. We describe results from an eight week experiment conducted with representative end users who range in physical and cognitive ability.
Keywords: assistive technology, human-robot interaction, robotic arm
Enjoyment, intention to use and actual use of a conversational robot by elderly people (pp. 113-120)
  Marcel Heerink; Ben Kröse; Bob Wielinga; Vanessa Evers
In this paper we explore the concept of enjoyment as a possible factor influencing acceptance of robotic technology by elderly people. We describe an experiment with a conversational robot and elderly users (n=30) that incorporates both a test session and a long term user observation. The experiment did confirm the hypothesis that perceived enjoyment has an effect on the intention to use a robotic system. Furthermore, findings show that the general assumption in technology acceptance models that intention to use predicts actual use is also applicable to this specific technology used by elderly people.
Keywords: assistive technology, eldercare, human-robot interaction, technology acceptance models
Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture (pp. 121-128)
  Ronald C. Arkin
This paper provides the motivation and philosophy underlying the design of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system, so that its behavior will fall within the bounds prescribed by the Laws of War and Rules of Engagement. This research, funded by the U.S. Army Research Office, is intended to ensure that robots do not behave illegally or unethically on the battlefield. Reasons are provided for the necessity of developing such a system at this time, as well as arguments for and against its creation.
Keywords: ethics, robotics
Housewives or technophiles?: understanding domestic robot owners (pp. 129-136)
  Ja-Young Sung; Rebecca E. Grinter; Henrik I. Christensen; Lan Guo
Despite the growing body of Human-Robot Interaction (HRI) research focused on domestic robots, surprisingly little is known about the demographic profile of robot owners and their influence on usage patterns. In this paper, we present the results of a survey of 379 owners of iRobot's Roomba that identified their demographic and usage trends. The outcome of the survey suggests that Roomba users are equally likely to be men or women, and they tend to be younger with high levels of education and technical backgrounds. Their adoption and use patterns illustrate the important role that gift exchange plays in adoption, and how the robot changes cleaning routines and creates non-cleaning activities. More generally, we argue that domestic robot adoption is growing, and suggest some of the factors that lead to a positive experience.
Keywords: domestic robot, user study
How close?: model of proximity control for information-presenting robots (pp. 137-144)
  Fumitaka Yamaoka; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
This paper describes a model for a robot to appropriately control its position when it presents information to a user. This capability is indispensable since in the future many robots will be functioning in daily situations as shopkeepers presenting products to customers or museum guides presenting information to visitors. Psychology research suggests that people adjust their positions to establish a joint view toward a target object. Similarly, when a robot presents an object, it should stand at an appropriate position that considers the positions of both the listener and the object to optimize the listener's field of view and to establish a joint view. We observed human-human interaction situations where people presented objects and developed a model for an information-presenting robot to appropriately adjust its position. Our model consists of four constraints for establishing O-space: (1) proximity to listener, (2) proximity to object, (3) listener's field of view, and (4) presenter's field of view. We also present an experimental evaluation of the effectiveness of our model.
Keywords: communication robot, human-robot interaction
How people anthropomorphize robots (pp. 145-152)
  Susan R. Fussell; Sara Kiesler; Leslie D. Setlock; Victoria Yew
We explored anthropomorphism in people's reactions to a robot in social context vs. their more considered judgments of robots in the abstract. Participants saw a photo and read transcripts from a health interview by a robot or human interviewer. For half of the participants, the interviewer was polite and for the other half, the interviewer was impolite. Participants then summarized the interactions in their own words and responded true or false to adjectives describing the interviewer. They later completed a post-task survey about whether a robot interviewer would possess moods, attitudes, and feelings. The results showed substantial anthropomorphism in participants' interview summaries and true-false responses, but minimal anthropomorphism in the abstract robot survey. Those who interacted with the robot interviewer tended to anthropomorphize more in the post-task survey, suggesting that as people interact more with robots, their abstract conceptions of them will become more anthropomorphic.
Keywords: human-robot interaction, social robots
How quickly should communication robots respond? (pp. 153-160)
  Toshiyuki Shiwa; Takayuki Kanda; Michita Imai; Hiroshi Ishiguro; Norihiro Hagita
This paper reports a study about system response time (SRT) in communication robots that utilize human-like social features, such as anthropomorphic appearance and conversation in natural language. The purpose of our research is to establish a design guideline for SRT in communication robots. The first experiment observed user preferences toward different SRTs in interaction with a robot. In other existing user interfaces, faster response is usually preferred. In contrast, our experimental result indicated that user preference for SRT in a communication robot is highest at one second, and user preference ratings level off at two seconds.
   However, a robot cannot always respond in such a short time as one or two seconds. Thus, the important question is "What should a robot do if it cannot respond quickly enough?" The second experiment tested the effectiveness of a conversational filler: behavior to notify listeners that the robot is going to respond. In Japanese, "etto" is used to buy time to think and resembles "well..." and "uh..." in English. We used the same strategy in a communication robot to shadow system response time. Our results indicated that the robot's use of a conversational filler moderated the user's impression of a long SRT.
Keywords: communication robots, conversational filler, system response time
How training and experience affect the benefits of autonomy in a dirty-bomb experiment (pp. 161-168)
  David J. Bruemmer; Curtis W. Nielsen; David I. Gertman
A dirty-bomb experiment conducted at the INL is used to evaluate the effectiveness and suitability of three different modes of robot control. The experiment uses three distinct user groups to understand how participants' background and training affect the way in which they use and benefit from autonomy. The results show that the target mode, which involves automated mapping and plume tracing together with a point-and-click tasking tool, provides the best performance for each group. This is true for objective performance such as source detection and localization accuracy as well as subjective measures such as perceived workload, frustration and preference. The best overall performance is achieved by the Explosive Ordnance Disposal group, which has experience in both robot teleoperation and dirty bomb response. The user group that benefits least from autonomy is the Nuclear Engineers, who have no experience with either robot operation or dirty bomb response. The group that benefits most from autonomy is the Weapons of Mass Destruction Civil Support Team, which has extensive experience related to the task but no robot training.
Keywords: expert user, human-robot interaction, map-building, seamless autonomy
Human emotion and the uncanny valley: a GLM, MDS, and Isomap analysis of robot video ratings (pp. 169-176)
  Chin-Chang Ho; Karl F. MacDorman; Z. A. D. Dwi Pramono
The eerie feeling attributed to human-looking robots and animated characters may be a key factor in our perceptual and cognitive discrimination of the human and humanlike. This study applies regression, the generalized linear model (GLM), factor analysis, multidimensional scaling (MDS), and kernel isometric mapping (Isomap) to analyze ratings of 27 emotions of 18 moving figures whose appearance varies along a human likeness continuum. The results indicate (1) Attributions of eerie and creepy better capture our visceral reaction to an uncanny robot than strange. (2) Eerie and creepy are mainly associated with fear but also shocked, disgusted, and nervous. Strange is less strongly associated with emotion. (3) Thus, strange may be more cognitive, while eerie and creepy are more perceptual/emotional. (4) Human features increase ratings of human likeness. (5) Women are slightly more sensitive to eerie and creepy than men; and older people may be more willing to attribute human likeness to a robot despite its eeriness.
Keywords: android science, data visualization, emotion, uncanny valley
Human to robot demonstrations of routine home tasks: exploring the role of the robot's feedback (pp. 177-184)
  Nuno Otero; Aris Alissandrakis; Kerstin Dautenhahn; Chrystopher Nehaniv; Dag Sverre Syrdal; Kheng Lee Koay
In this paper, we explore some conceptual issues relevant for the design of robotic systems aimed at interacting with humans in domestic environments. More specifically, we study the role of the robot's feedback (positive or negative acknowledgment of understanding) on a human teacher's demonstration of a routine home task (laying a table). Both the human and the system's perspectives are considered in the analysis and discussion of results from a human-robot user study, highlighting some important conceptual and practical issues. These include the lack of explicitness and consistency in people's demonstration strategies. Furthermore, we discuss the need to investigate design strategies to elicit people's knowledge about the task and also to successfully advertise the robot's abilities in order to promote people's ability to provide appropriate demonstrations.
Keywords: correspondence problem, effect metrics, gestures, human-robot interaction, routine home tasks, social learning, spatial configuration tasks
A hybrid algorithm for tracking and following people using a robotic dog (pp. 185-192)
  Martijn Liem; Arnoud Visser; Frans Groen
The capability to follow a person in a domestic environment is an important prerequisite for a robot companion. In this paper, a tracking algorithm is presented that makes it possible to follow a person using a small robot. This algorithm can track a person while moving around, regardless of the sometimes erratic movements of the legged robot. Robust performance is obtained by fusion of two algorithms, one based on salient features and one on color histograms. Re-initializing object histograms enables the system to track a person even when the illumination in the environment changes. By being able to re-initialize the system at run time using background subtraction, the system gains an extra level of robustness.
Keywords: awareness and monitoring of humans, robot companion
Hybrid tracking of human operators using IMU/UWB data fusion by a Kalman filter (pp. 193-200)
  J. A. Corrales; F. A. Candelas; F. Torres
The precise localization of human operators in robotic workplaces is an important requirement to be satisfied in order to develop human-robot interaction tasks. Human tracking provides not only safety for human operators, but also context information for intelligent human-robot collaboration. This paper evaluates an inertial motion capture system which registers full-body movements of a user in a robotic manipulator workplace. However, the presence of errors in the global translational measurements returned by this system has led to the need to use another localization system, based on Ultra-WideBand (UWB) technology. A Kalman filter fusion algorithm which combines the measurements of these systems is developed. This algorithm unifies the advantages of both technologies: high data rates from the motion capture system and global translational precision from the UWB localization system. The developed hybrid system not only tracks the movements of all limbs of the user as previous motion capture systems do, but is also able to precisely position the user in the environment.
Keywords: data fusion, human tracking and monitoring, indoor location, inertial sensors, kalman filter, motion capture, uwb
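The fusion idea in this abstract can be sketched in one dimension: high-rate inertial measurements drive the Kalman predict step (and accumulate drift), while occasional absolute UWB fixes drive the correct step. This is a minimal illustration of the predict/correct structure, not the authors' implementation; all rates and noise parameters below are invented.

```python
# Minimal 1-D sketch of IMU/UWB fusion with a Kalman filter.
# The IMU supplies a high-rate velocity used for dead reckoning;
# UWB supplies sparse absolute-position corrections.

def kalman_fuse(imu_velocities, uwb_positions, dt=0.01, q=0.05, r=0.3):
    """imu_velocities: one velocity sample per time step.
    uwb_positions: dict mapping step index -> UWB position fix.
    q: process noise per step, r: UWB measurement noise (illustrative)."""
    x, p = 0.0, 1.0              # state estimate and its variance
    track = []
    for k, v in enumerate(imu_velocities):
        # Predict: dead-reckon with the inertial velocity.
        x += v * dt
        p += q                   # uncertainty grows (inertial drift)
        # Correct: when a UWB fix arrives, fuse it in.
        if k in uwb_positions:
            z = uwb_positions[k]
            kgain = p / (p + r)          # Kalman gain
            x += kgain * (z - x)         # pull estimate toward the fix
            p *= (1.0 - kgain)           # uncertainty shrinks
        track.append(x)
    return track
```

For example, fusing a steady 1.0 m/s inertial velocity over 100 steps with a single consistent UWB fix halfway through yields a trajectory that ends near 1.0 m while the fix has reset the accumulated variance.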
Integrating vision and audition within a cognitive architecture to track conversations (pp. 201-208)
  J. Gregory Trafton; Magda D. Bugajska; Benjamin R. Fransen; Raj M. Ratwani
We describe a computational cognitive architecture for robots which we call ACT-R/E (ACT-R/Embodied). ACT-R/E is based on ACT-R [1, 2] but uses different visual, auditory, and movement modules. We describe a model that uses ACT-R/E to integrate visual and auditory information to perform conversation tracking in a dynamic environment. We also performed an empirical evaluation study which shows that people see our conversational tracking system as extremely natural.
Keywords: act-r, cognitive modeling, conversation following, human-robot interaction
Learning polite behavior with situation models (pp. 209-216)
  Rémi Barraquand; James L. Crowley
In this paper, we describe experiments with methods for learning the appropriateness of behaviors based on a model of the current social situation. We first review different approaches for social robotics, and present a new approach based on situation modeling. We then review algorithms for social learning and propose three modifications to the classical Q-Learning algorithm. We describe five experiments with progressively complex algorithms for learning the appropriateness of behaviors. The first three experiments illustrate how social factors can be used to improve learning by controlling learning rate. In the fourth experiment we demonstrate that proper credit assignment improves the effectiveness of reinforcement learning for social interaction. In our fifth experiment we show that analogy can be used to accelerate learning rates in contexts composed of many situations.
Keywords: credit assignment, learning by analogy, q-learning, social interaction, social learning, social robotics
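One of the modifications described above, controlling the learning rate with social factors, can be sketched as a Q-learning backup whose step size is scaled by a social salience signal. The names and the exact modulation rule below are invented for illustration; they are not the paper's algorithm.

```python
def social_q_update(Q, state, action, reward, next_state, actions,
                    salience=1.0, base_alpha=0.1, gamma=0.9):
    """One Q-learning backup whose learning rate is modulated by a social
    salience signal (e.g. how engaged the tutor currently is).
    Illustrative sketch: when salience is 0, no learning occurs."""
    alpha = base_alpha * salience       # social signal scales the step size
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

The point of the modulation is that reward arriving while the tutor is attending to the robot updates the value estimates strongly, while reward in socially irrelevant moments is discounted.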
Loudness measurement of human utterance to a robot in noisy environment BIBAKFull-Text 217-224
  Satoshi Kagami; Yoko Sasaki; Simon Thompson; Tomoaki Fujihara; Tadashi Enomoto; Hiroshi Mizoguchi
In order to understand utterance-based human-robot interaction, and to develop such a system, this paper first analyzes how loudly humans speak in a noisy environment. Experiments were conducted to measure how loudly humans speak with 1) different noise levels, 2) different numbers of sound sources, 3) different sound sources, and 4) different distances to a robot. Synchronized sound sources add noise to the auditory scene, and the resulting utterances are recorded and compared to a previously recorded noiseless utterance. From these experiments, we find that humans generate essentially the same sound pressure level at their location irrespective of distance and background noise. More precisely, the level falls within a band that depends on the distance and on the sound sources, including speech.
   Based on this understanding, we developed an online spoken command recognition system for a mobile robot. The system consists of two key components: 1) a low side-lobe microphone array that works as an omni-directional telescopic microphone, and 2) DSBF combined with the FBS method for sound source localization and segmentation. The caller's location and a segmented sound stream are calculated, and the segmented sound stream is then sent to a voice recognition system. The system works with at most five sound sources at the same time, with sound pressure differences of up to about 18 dB. Experimental results with the mobile robot are also shown.
Keywords: human utterance, sound localization, sound pressure level
Multi-thresholded approach to demonstration selection for interactive robot learning BIBAKFull-Text 225-232
  Sonia Chernova; Manuela Veloso
Effective learning from demonstration techniques enable complex robot behaviors to be taught from a small number of demonstrations. A number of recent works have explored interactive approaches to demonstration, in which both the robot and the teacher are able to select training examples. In this paper, we focus on a demonstration selection algorithm used by the robot to identify informative states for demonstration. Existing automated approaches for demonstration selection typically rely on a single threshold value, which is applied to a measure of action confidence. We highlight the limitations of using a single fixed threshold for a specific subset of algorithms, and contribute a method for automatically setting multiple confidence thresholds designed to target domain states with the greatest uncertainty. We present a comparison of our multi-threshold selection method to confidence-based selection using a single fixed threshold, and to manual data selection by a human teacher. Our results indicate that the automated multi-threshold approach significantly reduces the number of demonstrations required to learn the task.
Keywords: human-robot interaction, learning from demonstration
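The multi-threshold idea in this abstract can be sketched as one confidence threshold per region of the state space, each fitted from the confidences observed in that region. The fitting rule (mean minus one standard deviation) and all names below are invented for illustration; the paper's actual method differs in detail.

```python
from statistics import mean, stdev

def fit_thresholds(history, k=1.0):
    """Set one confidence threshold per state cluster from the confidences
    seen so far in that cluster (mean minus k standard deviations), so
    regions where the policy is reliably confident get a higher bar.
    A rough illustration, not the paper's exact rule."""
    thresholds = {}
    for cluster, confs in history.items():
        if len(confs) >= 2:
            thresholds[cluster] = mean(confs) - k * stdev(confs)
        else:
            thresholds[cluster] = 0.5      # fallback before enough data
    return thresholds

def request_demo(cluster, confidence, thresholds, default=0.5):
    """Ask the teacher for a demonstration when action confidence in this
    cluster falls below that cluster's own threshold, rather than a
    single global cutoff."""
    return confidence < thresholds.get(cluster, default)
```

The contrast with single-threshold selection is that an unusually low confidence in a normally confident region triggers a demonstration request, while the same absolute confidence in an inherently ambiguous region does not.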
Object schemas for responsive robotic language use BIBAKFull-Text 233-240
  Kai-yuh Hsiao; Soroush Vosoughi; Stefanie Tellex; Rony Kubat; Deb Roy
Natural language capabilities should be added to a robot system without sacrificing responsiveness to the environment. In this paper, we present a robot that manipulates objects on a tabletop in response to verbal interaction. Reactivity is maintained by using concurrent interaction processes, such as visual trackers and collision detection processes. The interaction processes and their associated data are organized into object schemas, each representing a physical object in the environment, based on the target of each process. The object schemas then serve as discrete structures of coordination between reactivity, planning, and language use, permitting rapid integration of information from multiple sources.
Keywords: affordances, behavior-based, language grounding, object schema, robot
A point-and-click interface for the real world: laser designation of objects for mobile manipulation BIBAKFull-Text 241-248
  Charles C. Kemp; Cressel D. Anderson; Hai Nguyen; Alexander J. Trevor; Zhe Xu
We present a novel interface for human-robot interaction that enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to a mobile robot. The human points at a location of interest and illuminates it ("clicks it") with an unaltered, off-the-shelf, green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location's 3D position with respect to the robot's frame of reference.
   Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, makes use of a non-instrumented everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for use in real-world applications.
   We demonstrate that this human-robot interface enables a person to designate a wide variety of everyday objects placed throughout a room. In 99.4% of these tests, the robot successfully looked at the designated object and estimated its 3D position with low average error. We also show that this interface can support object acquisition by a mobile manipulator. For this application, the user selects an object to be picked up from the floor by "clicking" on it with the laser pointer interface. In 90% of these trials, the robot successfully moved to the designated object and picked it up off of the floor.
Keywords: 3d point estimation, assistive robotics, autonomous manipulation, catadioptric camera, deictic, human-robot interaction, laser pointer interface, mobile manipulation, object fetching, robot
Reasoning for a multi-modal service robot considering uncertainty in human-robot interaction BIBAKFull-Text 249-254
  Sven R. Schmidt-Rohr; Steffen Knoop; Martin Lösch; Rüdiger Dillmann
This paper presents a reasoning system for a multi-modal service robot with human-robot interaction. The reasoning system uses partially observable Markov decision processes (POMDPs) for decision making and an intermediate level for bridging the gap of abstraction between multi-modal real world sensors and actuators on the one hand and POMDP reasoning on the other. A filter system handles the abstraction of multi-modal perception while preserving uncertainty and model-soundness. A command sequencer is utilized to control the execution of symbolic POMDP decisions on multiple actuator components. By using POMDP reasoning, the robot is able to deal with uncertainty in both observation and prediction of human behavior and can balance risk and opportunity. The system has been implemented on a multi-modal service robot and is able to let the robot act autonomously in modeled human-robot interaction scenarios. Experiments evaluate the characteristics of the proposed algorithms and architecture.
Keywords: hri, pomdp, robot decision making
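The core of POMDP-based reasoning under uncertainty, as described in this abstract, is the Bayesian belief update: push the belief through the transition model, weight each state by the likelihood of the new observation, and renormalize. Below is a generic textbook sketch of that update; the state and observation names are invented and not from the paper.

```python
def belief_update(belief, action, obs, trans, obs_model, states):
    """Standard POMDP belief update (generic sketch).
    trans[s][action][s2]: P(s2 | s, action)
    obs_model[s2][obs]:   P(obs | s2)"""
    new = {}
    for s2 in states:
        # predict: probability mass flowing into s2 under the action
        predicted = sum(trans[s][action][s2] * belief[s] for s in states)
        # correct: weight by how likely the observation is in s2
        new[s2] = obs_model[s2][obs] * predicted
    z = sum(new.values())
    return {s: v / z for s, v in new.items()} if z > 0 else dict(belief)
```

Acting on the full belief rather than a single most-likely state is what lets a POMDP controller balance risk and opportunity when, for example, it is unsure whether a human is busy or available.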
Relational vs. group self-construal: untangling the role of national culture in HRI BIBAKFull-Text 255-262
  Vanessa Evers; Heidy C. Maldonado; Talia L. Brodecki; Pamela J. Hinds
As robots (and other technologies) increasingly make decisions on behalf of people, it is important to understand how people from diverse cultures respond to this capability. Thus far, much design of autonomous systems takes a Western view valuing individual preferences and choice. We challenge the assumption that Western values are universally optimal for robots. In this study, we sought to clarify the effects of users' cultural background on human-robot collaboration by investigating their attitudes toward, and the extent to which they accepted, choices made by a robot or human assistant. A 2x2x2 experiment was conducted with nationality (US vs. Chinese), ingroup strength (weak vs. strong), and human vs. robot assistant as dimensions. US participants reported higher trust of and compliance with the assistants (human and robot), although Chinese participants were more comfortable than US participants when the assistant was characterized as a strong ingroup member. Chinese participants also reported a stronger sense of control with both assistants and were more likely to anthropomorphize the robot than were US participants. This pattern of findings confirms that people from different national cultures may respond differently to robots, but also suggests that predictions from human-human interaction do not hold universally.
Keywords: cross-cultural design, human-robot collaboration
Robot social presence and gender: do females view robots differently than males? BIBAKFull-Text 263-270
  Paul Schermerhorn; Matthias Scheutz; Charles R. Crowell
Social-psychological processes in humans will play an important role in long-term human-robot interactions. This study investigates people's perceptions of social presence in robots during (relatively) short interactions. Findings indicate that males tend to think of the robot as more human-like and accordingly show some evidence of "social facilitation" on an arithmetic task as well as more socially desirable responding on a survey administered by a robot. In contrast, females saw the robot as more machine-like, exhibited less socially desirable responding to the robot's survey, and were not socially facilitated by the robot while engaged in the arithmetic tasks. Various alternative accounts of these findings are explored and the implications of these results for future work are discussed.
Keywords: human-robot interaction, robot social presence
Robotic animals might aid in the social development of children with autism BIBAKFull-Text 271-278
  Cady M. Stanton; Peter H. Kahn, Jr.; Rachel L. Severson; Jolina H. Ruckert; Brian T. Gill
This study investigated whether a robotic dog might aid in the social development of children with autism. Eleven children diagnosed with autism (ages 5-8) interacted with the robotic dog AIBO and, during a different period within the same experimental session, a simple mechanical toy dog (Kasha), which had no ability to detect or respond to its physical or social environment. Results showed that, in comparison to Kasha, the children spoke more words to AIBO, and more often engaged in three types of behavior with AIBO typical of children without autism: verbal engagement, reciprocal interaction, and authentic interaction. In addition, we found suggestive evidence (with p values ranging from .07 to .09) that the children interacted more with AIBO, and, while in the AIBO session, engaged in fewer autistic behaviors. Discussion focuses on why robotic animals might benefit children with autism.
Keywords: aibo, animals, autism, reciprocity, robots, social development
Robotics operator performance in a military multi-tasking environment BIBAKFull-Text 279-286
  Jessie Y. C. Chen; Michael J. Barnes
We simulated a military mounted environment and examined the performance of the combined position of gunner and robotics operator, and how aided target recognition (AiTR) capabilities (delivered either through tactile or tactile + visual cueing) for the gunnery task might benefit the concurrent robotics and communication tasks. Results showed that participants' teleoperation task performance improved significantly when the AiTR was available to assist them with their gunnery task. However, the same improvement was not found for the semi-autonomous robotics task. Additionally, when teleoperating, participants with higher spatial ability outperformed those with lower spatial ability. However, the performance gap between those with higher and lower spatial ability appeared to be narrower when the AiTR was available to assist the gunnery task. Participants' communication task performance also improved significantly when the gunnery task was aided by AiTR. Finally, participants' perceived workload was significantly higher when they teleoperated a robotic asset and when their gunnery task was unassisted.
Keywords: human robot interaction, individual differences, reconnaissance, simulation, tactile display, ugv
Robots in organizations: the role of workflow, social, and environmental factors in human-robot interaction BIBAKFull-Text 287-294
  Bilge Mutlu; Jodi Forlizzi
Robots are becoming increasingly integrated into the workplace, impacting organizational structures and processes, and affecting products and services created by these organizations. While robots promise significant benefits to organizations, their introduction poses a variety of design challenges. In this paper, we use ethnographic data collected at a hospital using an autonomous delivery robot to examine how organizational factors affect the way its members respond to robots and the changes engendered by their use. Our analysis uncovered dramatic differences between the medical and post-partum units in how people integrated the robot into their workflow and their perceptions of and interactions with it. Different patient profiles in these units led to differences in workflow, goals, social dynamics, and the use of the physical environment. In medical units, low tolerance for interruptions, a discrepancy between the perceived costs and benefits of using the robot, and breakdowns due to high traffic and clutter in the robot's path led to the robot having a negative impact on workflow, and to staff resistance. In contrast, post-partum units integrated the robot into their workflow and social context. Based on our findings, we provide design guidelines for the development of robots for organizations.
Keywords: autonomous robots, ethnography, groupware, organizational interfaces, organizational technology, robots in organizations
The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue BIBAKFull-Text 295-302
  Mary Ellen Foster; Ellen Gurman Bard; Markus Guhe; Robin L. Hill; Jon Oberlander; Alois Knoll
Generating referring expressions is a task that has received a great deal of attention in the natural-language generation community, with an increasing amount of recent effort targeted at the generation of multimodal referring expressions. However, most implemented systems tend to assume very little shared knowledge between the speaker and the hearer, and therefore must generate fully-elaborated linguistic references. Some systems do include a representation of the physical context or the dialogue context; however, other sources of contextual information are not normally used. Also, the generated references normally consist only of language and, possibly, deictic pointing gestures.
   When referring to objects in the context of a task-based interaction involving jointly manipulating objects, a much richer notion of context is available, which permits a wider range of referring options. In particular, when conversational partners cooperate on a mutual task in a shared environment, objects can be made accessible simply by manipulating them as part of the task. We demonstrate that such expressions are common in a corpus of human-human dialogues based on constructing virtual objects, and then describe how this type of reference can be incorporated into the output of a humanoid robot that engages in similar joint construction dialogues with a human partner.
Keywords: multimodal dialogue, referring expressions
A semi-autonomous communication robot: a field trial at a train station BIBAKFull-Text 303-310
  Masahiro Shiomi; Daisuke Sakamoto; Takayuki Kanda; Carlos Toshinori Ishi; Hiroshi Ishiguro; Norihiro Hagita
This paper reports an initial field trial with a prototype of a semi-autonomous communication robot at a train station. We developed an operator-requesting mechanism to achieve semi-autonomous operation for a communication robot functioning in real environments. The operator-requesting mechanism autonomously detects situations that the robot cannot handle by itself; a human operator then helps by assuming control of the robot.
   This approach gives semi-autonomous robots the ability to function naturally with minimal human effort. Our system consists of a humanoid robot and ubiquitous sensors. The robot has such basic communicative behaviors as greeting and route guidance. The experimental results revealed that the operator-requesting mechanism correctly requested the operator's help in 85% of the necessary situations; in the semi-autonomous mode, the operator only had to control the robot for 25% of the experiment time, and the system successfully guided 68% of the passengers. At the same time, this trial provided the opportunity to gather user data for the further development of natural behaviors for such robots operating in real environments.
Keywords: field trial, human-robot interaction, operator-requesting mechanism, semi-autonomous communication robot
Simultaneous teleoperation of multiple social robots BIBAKFull-Text 311-318
  Dylan F. Glas; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
Teleoperation of multiple robots has been studied extensively for applications such as robot navigation; however, this concept has never been applied to the field of social robots. To explore the unique challenges posed by the remote operation of multiple social robots, we have implemented a system in which a single operator simultaneously controls up to four robots, all engaging in communication interactions with users. We present a user interface designed for operating a single robot while monitoring several others in the background, then we propose methods for characterizing task difficulty and introduce a technique for improving multiple-robot performance by reducing the number of conflicts between robots demanding the operator's attention. Finally, we demonstrate the success of our system in laboratory trials based on real-world interactions.
Keywords: adjustable autonomy, communication robots, human-robot interaction, multiple robots, social robots, supervisory control
Spiral response-cascade hypothesis: intrapersonal responding-cascade in gaze interaction BIBAKFull-Text 319-326
  Yuichiro Yoshikawa; Shunsuke Yamamoto; Hidenobu Sumioka; Hiroshi Ishiguro; Minoru Asada
A spiral response-cascade hypothesis is proposed to model the mechanism that enables human communication to emerge or be maintained among agents. In this hypothesis, we propose the existence of three cascades each of which indicates intrapersonal or interpersonal mutual facilitation in the formation of someone's feelings about one's communication partners and the exhibition of behaviors in communicating with them, i.e., responding. In this paper, we discuss our examination of an important part of the hypothesis, i.e., what we call an intrapersonal responding cascade, through an experiment where the gaze interactions between a participant and a communication robot were controlled not only by controlling the robot's gaze but also by signaling participants when to shift their gaze. We report that the participants' experiences in responding to the robot enable them to regard the robot as a communicative being, which partially supports the hypothesis of the intrapersonal responding cascade.
Keywords: cascade effect, feelings about communication, psychological experiment with robots, responsive behavior
Supervision and motion planning for a mobile manipulator interacting with humans BIBAKFull-Text 327-334
  Emrah Akin Sisbot; Aurelie Clodic; Rachid Alami; Maxime Ransan
Human-robot collaborative task achievement requires adapted tools and algorithms for both decision making and motion computation. The human's presence and behavior must be considered and actively monitored at the decisional level for the robot to produce synchronized and adapted behavior. Additionally, having a human within the robot's range of action introduces safety constraints as well as comfort considerations which must be taken into account at the motion planning and control level. This paper presents a robotic architecture adapted to human-robot interaction and focuses on two tools: a human-aware manipulation planner and a supervision system dedicated to collaborative task achievement.
Keywords: hri, motion planning, robot architecture, supervision
Theory of mind (ToM) on robots: a functional neuroimaging study BIBAKFull-Text 335-342
  Frank Hegel; Soeren Krach; Tilo Kircher; Britta Wrede; Gerhard Sagerer
Theory of Mind (ToM) is not only a key capability for cognitive development but also for successful social interaction. In order for a robot to interact successfully with a human, both interaction partners need to have an adequate representation of the other's actions. In this paper we address the question of how a robot's actions are perceived and represented in a human subject interacting with the robot, and how this perception is influenced by the appearance of the robot. We present the preliminary results of an fMRI study in which participants had to play a version of the classical Prisoner's Dilemma Game (PDG) against four opponents: a human partner (HP), an anthropomorphic robot (AR), a functional robot (FR), and a computer (CP). The PDG scenario makes it possible to implicitly measure mentalizing or Theory of Mind (ToM) abilities, a technique commonly applied in functional imaging. As the responses of each game partner were randomized without the participants' knowledge, the attribution of intention or will to an opponent (i.e. HP, AR, FR or CP) was based purely on differences in the perception of shape and embodiment.
   The present study is the first to apply functional neuroimaging methods to study human-robot interaction on a higher cognitive level such as ToM. We hypothesize that the degree of anthropomorphism and embodiment of the game partner will modulate cortical activity in previously identified ToM networks such as the medial prefrontal cortex and anterior cingulate cortex.
Keywords: fmri, anthropomorphism, social robots, theory of mind
Three dimensional tangible user interface for controlling a robotic team BIBAKFull-Text 343-350
  Paul Lapides; Ehud Sharlin; Mario Costa Sousa
We describe a new method for controlling a group of robots in three-dimensional (3D) space using a tangible user interface called the 3D Tractus. Our interface maps the task space into an interactive 3D space, allowing a single user to intuitively monitor and control a group of robots. We present the use of the interface in controlling a group of virtual software bots and a physical Sony AIBO robot dog in a simulated Explosive Ordnance Disposal (EOD) environment involving a bomb hidden inside a building. We also describe a comparative user study in which participants were asked to use both the 3D physical interface and a traditional 2D graphical user interface, in order to demonstrate the benefits and drawbacks of each approach for HRI tasks.
Keywords: evaluation, explosive ordnance disposal (eod), human-robot interaction, interaction techniques, physical interfaces, robotic team control, tangible user interfaces
Towards combining UAV and sensor operator roles in UAV-enabled visual search BIBAKFull-Text 351-358
  Joseph Cooper; Michael A. Goodrich
Wilderness search and rescue (WiSAR) is a challenging problem because of the large areas and often rough terrain that must be searched. Using mini-UAVs to deliver aerial video to searchers has potential to support WiSAR efforts, but a number of technology and human factors problems must be overcome to make this practical. At the source of many of these problems is a desire to manage the UAV using as few people as possible, so that more people can be used in ground-based search efforts. This paper uses observations from two informal studies and one formal experiment to identify what human operators may be unaware of as a function of autonomy and information display. Results suggest that progress is being made on designing autonomy and information displays that may make it possible for a single human to simultaneously manage the UAV and its camera in WiSAR, but that adaptable displays that support systematic navigation are probably needed.
Keywords: ecological interface, human-robot interaction, unmanned aerial vehicle, user study
Towards multimodal human-robot interaction in large scale virtual environment BIBAKFull-Text 359-366
  Pierre Boudoin; Christophe Domingues; Samir Otmane; Nassima Ouramdane; Malik Mallem
Human operators (HO) of telerobotics systems may be able to achieve complex operations with robots. Designing usable and effective Human-Robot Interaction (HRI) is very challenging for system developers and human factors specialists. The search for new metaphors and techniques for HRI adapted to telerobotics systems has led to the conception of Multimodal HRI (MHRI). MHRI allows users to interact naturally and easily with robots thanks to the combination of many devices and an efficient Multimodal Management System (MMS). Such a system should bring a new user experience in terms of natural interaction, usability, efficiency and flexibility to HRI systems. Good management of multimodality is therefore very important. Moreover, the MMS must be transparent to the user in order to be efficient and natural.
   Empirical evaluation is necessary to assess the quality of our MMS. We use an Empirical Evaluation Assistant (EEA) designed in the IBISC laboratory. The EEA makes it possible to rapidly gather significant feedback about the usability of interaction during the development lifecycle, whereas HRI would classically be evaluated by ergonomics experts only at the end of the development lifecycle.
   Results from a preliminary evaluation of robot teleoperation tasks, using the ARITI software framework for assisting the user in piloting the robot and the IBISC semi-immersive VR/AR platform EVR@, are given. They compare the use of a Flystick and Data Gloves for 3D interaction with the robot. They show that our MMS is functional, although the multimodality used in our experiments is not sufficient to provide efficient Human-Robot Interaction. The EVR@ SPIDAR force feedback will be integrated into our MMS to improve the user's efficiency.
Keywords: empirical evaluation, human-robot interaction, multimodal interaction, usability, virtual environment
Understanding human intentions via hidden markov models in autonomous mobile robots BIBAKFull-Text 367-374
  Richard Kelley; Alireza Tavakkoli; Christopher King; Monica Nicolescu; Mircea Nicolescu; George Bebis
Understanding intent is an important aspect of communication among people and is an essential component of the human cognitive system. This capability is particularly relevant for situations that involve collaboration among agents or detection of situations that can pose a threat. In this paper, we propose an approach that allows a robot to detect intentions of others based on experience acquired through its own sensory-motor capabilities, then using this experience while taking the perspective of the agent whose intent should be recognized. Our method uses a novel formulation of Hidden Markov Models designed to model a robot's experience and interaction with the world. The robot's capability to observe and analyze the current scene employs a novel vision-based technique for target detection and tracking, using a non-parametric recursive modeling approach. We validate this architecture with a physically embedded robot, detecting the intent of several people performing various activities.
Keywords: hidden markov models, human-robot interaction, intention modeling, theory of mind, vision-based methods
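Although the paper introduces its own HMM formulation, the underlying machinery can be illustrated with the standard forward algorithm: each candidate intention is modeled as an HMM, and the intent whose model assigns the highest likelihood to the observed activity stream is selected. The sketch below is textbook material; the states and observations are invented, not the paper's.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: likelihood of an observation sequence under an
    HMM.  Competing intention models can be scored by running each one's
    HMM over the same observed activity stream and comparing likelihoods.
    Textbook sketch with invented state/observation names."""
    # initialize with the start distribution weighted by the first emission
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # propagate through the transition model, weight by the emission
        alpha = {s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())
```

In practice one would work in log space to avoid underflow on long observation sequences; the direct form above keeps the recursion easy to read.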
Using a robot proxy to create common ground in exploration tasks BIBAKFull-Text 375-382
  Kristen Stubbs; David Wettergreen; Illah Nourbakhsh
In this paper, we present a user study of a new collaborative communication method between a user and a remotely-located robot performing an exploration task. In the studied scenario, our user possesses scientific expertise but not necessarily detailed knowledge of the robot's capabilities, resulting in very little common ground between the user and robot. Because the robot is not available during mission planning, we introduce a robot proxy to build common ground with the user. Our robot proxy has the ability to provide feedback to the user about the user's plans before the plans are executed. Our study demonstrated that the use of the robot proxy resulted in improved performance and efficiency on an exploration task, more accurate mental models of the robot's capabilities, a stronger perception of effectiveness at the task, and stronger feelings of collaboration with the robotic system.
Keywords: common ground, exploration robotics, human-robot interaction, robot proxy

Videos

HRI caught on film 2 BIBAKFull-Text 383-388
  Christoph Bartneck
Following the great success of the first video session at the HRI2007 conference (Bartneck & Kanda, 2007), the Human Robot Interaction 2008 conference hosted the second video session, in which movies of interesting, important, illustrative, or humorous HRI research moments were shown. Robots and humans do not always behave as expected and the results can be entertaining and even enlightening -- therefore instances of failures have also been considered in the video session. Besides the importance of the lessons learned and the novelty of the situations, the videos also have entertainment value. The video session had the character of a design competition.
Keywords: film, human, interaction, robot

Invited-keynote talks

Joint action in man and autonomous systems BIBAKFull-Text 389-390
  Harold Bekkering; Estela Bicho; Ruud G. J. Meulenbroek; Wolfram Erlhagen
This talk presents recent functional insights derived from behavioural and neuroimaging studies into the cognitive mechanisms underlying human joint action. The main question is how the cognitive system of one actor can organize the perceptual consequences of the movements of another actor such that effective joint action between the two actors can take place. In particular, the issue of complementing each other's actions, in contrast to merely imitating the actions that one observes, will be discussed. Other issues that have been investigated are motivational states (cooperative or competitive), error-monitoring, and the interaction between actors at the level of motor control. The talk concludes by presenting recent attempts to implement the functional insights from these experiments in autonomous systems capable of collaborating intelligently on shared motor tasks.
Keywords: miscellaneous
Toward cognitive robot companions BIBAKFull-Text 391-392
  Raja Chatila
If robots are one day to be part of our environment and to assist humans in their daily lives, they will have to be endowed not only with the necessary functions for sensing, moving and acting, but also, and inevitably, with more advanced cognitive capacities. Indeed, a robot that will interact with people will need to be able to understand the spatial and dynamic structure of its environment, to exhibit social behavior, to communicate with humans at the appropriate level of abstraction, to focus its attention and to take decisions for achieving tasks, to learn new knowledge, and to evolve new capacities in an open-ended fashion.
   The COGNIRON (The Cognitive Robot Companion) project studies the development of robots whose ultimate task would be to serve and assist humans. The talk will overview the achievements of the project and its results, which are demonstrated in three main experimental settings designed to exhibit cognitive capacities: the robot home tour, the curious and proactive robot, and the robot learner.
Keywords: cognitive robots, human robot interaction
Talking as if BIBAKFull-Text 393-394
  Herbert H. Clark
If ordinary people are to work with humanoid robots, they will need to communicate with them. But how will they do that? For many in the field, the goal is to design robots that people can talk to just as they talk to actual people. But a close look at actual communication suggests that this goal isn't realistic. It may even be untenable in principle.
   An alternative goal is to design robots that people can talk to just as they talk to dynamic depictions of other people, what I will call characters. As it happens, ordinary people have a great deal of experience in interpreting the speech, gestures, and other actions of characters, and even in interacting with them. But talking to characters is different from talking to actual people. So once we view robots as characters, we will need a model of communication based on different principles. That, in turn, may change our ideas of what people can and cannot do with robots.
Keywords: robotics