| Effects of anticipatory action on human-robot teamwork efficiency, fluency, and perception of team | | BIBAK | Full-Text | 1-8 | |
| Guy Hoffman; Cynthia Breazeal | |||
| A crucial skill for fluent action meshing in human team activity is a
learned and calculated selection of anticipatory actions. We believe that the
same holds for robotic teammates, if they are to perform in a similarly fluent
manner with their human counterparts.
In this work, we propose an adaptive action selection mechanism for a robotic teammate, making anticipatory decisions based on the confidence of their validity and their relative risk. We predict an improvement in task efficiency and fluency compared to a purely reactive process. We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team's fluency and success. By way of explanation, we propose a number of fluency metrics that differ significantly between the two study groups. Keywords: anticipatory action selection, fluency, human-robot interaction, teamwork | |||
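The abstract describes choosing anticipatory actions from the confidence in their validity and their relative risk, but does not give the rule itself. Below is a minimal Python sketch of one plausible expected-time-saving formulation; the option names, numbers, and the fall-back-to-reactive comparison are illustrative assumptions, not the authors' mechanism.

```python
from dataclasses import dataclass

@dataclass
class AnticipatoryOption:
    """A candidate action the robot could start before the human's request arrives."""
    name: str
    confidence: float   # estimated probability the anticipation is correct
    benefit: float      # time (s) saved if it is correct
    risk: float         # time (s) lost undoing it if it is wrong

def select_action(options, reactive_cost=0.0):
    """Pick the option with the best expected time saving, falling back to
    purely reactive behavior when nothing beats waiting."""
    best, best_value = None, reactive_cost
    for opt in options:
        expected = opt.confidence * opt.benefit - (1 - opt.confidence) * opt.risk
        if expected > best_value:
            best, best_value = opt, expected
    return best  # None means: wait and react

# Example: fetching the likely next tool is worthwhile despite some risk.
options = [
    AnticipatoryOption("fetch_screwdriver", confidence=0.8, benefit=4.0, risk=6.0),
    AnticipatoryOption("fetch_hammer", confidence=0.3, benefit=4.0, risk=6.0),
]
print(select_action(options))  # -> fetch_screwdriver (expected saving 2.0 s)
```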
| Human control for cooperating robot teams | | BIBAK | Full-Text | 9-16 | |
| Jijun Wang; Michael Lewis | |||
| Human control of multiple robots has been characterized by the average
demand of single robots on human attention or the distribution of demands from
multiple robots. When robots are allowed to cooperate autonomously, however,
demands on the operator should be reduced by the amount previously required to
coordinate their actions. The present experiment compares control of small
robot teams in which cooperating robots explored autonomously, were controlled
independently by an operator, or were managed through mixed initiative as a cooperating team.
Mixed initiative teams found more victims and searched wider areas than either
fully autonomous or manually controlled teams. Operators who switched attention
between robots more frequently were found to perform better in both manual and
mixed initiative conditions. Keywords: evaluation, human-robot interaction, metrics, multi-robot system | |||
| Natural person-following behavior for social robots | | BIBAK | Full-Text | 17-24 | |
| Rachel Gockley; Jodi Forlizzi; Reid Simmons | |||
| We are developing robots with socially appropriate spatial skills not only
to travel around or near people, but also to accompany people side-by-side. As
a step toward this goal, we are investigating the social perceptions of a
robot's movement as it follows behind a person. This paper discusses our
laser-based person-tracking method and two different approaches to
person-following: direction-following and path-following. While both algorithms
have similar characteristics in terms of tracking performance and following
distances, participants in a pilot study rated the direction-following behavior
as significantly more human-like and natural than the path-following behavior.
We argue that the path-following method may still be more appropriate in some
situations, and we propose that the ideal person-following behavior may be a
hybrid approach, with the robot automatically selecting which method to use. Keywords: human-robot interaction, person following, person tracking, social robots | |||
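The two following strategies contrasted above can be sketched schematically. The controller details below (heading toward the person's current position versus retracing recorded breadcrumbs) are assumptions for illustration, not the paper's laser-tracking implementation.

```python
import math
from collections import deque

FOLLOW_DIST = 1.2  # desired following distance in meters (assumed value)

def direction_following_target(person_xy, robot_xy):
    """Drive straight toward the person's *current* position, stopping
    FOLLOW_DIST short of them (cuts corners on curved paths)."""
    d = math.dist(person_xy, robot_xy)
    if d <= FOLLOW_DIST:
        return robot_xy  # close enough; hold position
    frac = (d - FOLLOW_DIST) / d
    return (robot_xy[0] + frac * (person_xy[0] - robot_xy[0]),
            robot_xy[1] + frac * (person_xy[1] - robot_xy[1]))

class PathFollower:
    """Retrace the person's *recorded trajectory* breadcrumb by breadcrumb."""
    def __init__(self):
        self.trail = deque()

    def target(self, person_xy, robot_xy):
        self.trail.append(person_xy)
        # Discard breadcrumbs the robot has effectively reached.
        while self.trail and math.dist(self.trail[0], robot_xy) < 0.1:
            self.trail.popleft()
        # Head for the oldest remaining breadcrumb, but never close the gap
        # to the person below FOLLOW_DIST.
        if self.trail and math.dist(person_xy, robot_xy) > FOLLOW_DIST:
            return self.trail[0]
        return robot_xy  # caught up; hold position
```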
| Managing autonomy in robot teams: observations from four experiments | | BIBAK | Full-Text | 25-32 | |
| Michael A. Goodrich; Timothy W. McLain; Jeffrey D. Anderson; Jisang Sun; Jacob W. Crandall | |||
| It is often desirable for a human to manage multiple robots. Autonomy is
required to keep workload within tolerable ranges, and dynamically adapting the
type of autonomy may be useful for responding to environment and workload
changes. We identify two management styles for managing multiple robots and
present results from four experiments that have relevance to dynamic autonomy
within these two management styles. These experiments, which involved 80
subjects, suggest that individual and team autonomy benefit from attention
management aids, adaptive autonomy, and proper information abstraction. Keywords: adjustable autonomy, dynamic autonomy, human-robot interaction, teams | |||
| Developing performance metrics for the supervisory control of multiple robots | | BIBAK | Full-Text | 33-40 | |
| Jacob W. Crandall; M. L. Cummings | |||
| Efforts are underway to make it possible for a single operator to
effectively control multiple robots. In these high workload situations, many
questions arise, including how many robots should be in the team (Fan-out), what
level of autonomy the robots should have, and when this level of autonomy
should change (i.e., dynamic autonomy). We propose that a set of metric
classes should be identified that can adequately answer these questions. Toward
this end, we present a potential set of metric classes for human-robot teams
consisting of a single human operator and multiple robots. To test the
usefulness and appropriateness of this set of metric classes, we conducted a
user study with simulated robots. Using the data obtained from this study, we
explore the ability of this set of metric classes to answer these questions. Keywords: fan-out, multi-robot teams, supervisory control | |||
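The abstract invokes Fan-out. One common formulation in the fan-out literature (Olsen and Goodrich) estimates it from interaction time and neglect time; the sketch below illustrates that formulation only, and is not necessarily among the metric classes the paper proposes.

```python
def fan_out(interaction_time, neglect_time):
    """Estimate how many robots one operator can service: each robot needs
    IT seconds of attention per cycle and can then be neglected for NT
    seconds before its performance degrades."""
    return (interaction_time + neglect_time) / interaction_time

# Example: 10 s of servicing buys 40 s of autonomous operation per robot.
print(fan_out(interaction_time=10.0, neglect_time=40.0))  # -> 5.0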
| Adapting GOMS to model human-robot interaction | | BIBAK | Full-Text | 41-48 | |
| Jill L. Drury; Jean Scholtz; David Kieras | |||
| A formal interaction modeling technique known as Goals, Operators, Methods,
and Selection rules (GOMS) is well-established in human-computer interaction as
a cost-effective way of evaluating designs without the participation of end
users. This paper explores the use of GOMS for evaluating human-robot
interaction. We provide a case study in the urban search-and-rescue domain and
raise issues for developing GOMS models that have not been previously
addressed. Further, we provide rationale for selecting different types of GOMS
modeling techniques to help the analyst model human-robot interfaces. Keywords: GOMS, evaluation, human-robot interaction, interface design | |||
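GOMS's keystroke-level variant (KLM) predicts execution time by summing primitive operator times. The sketch below applies textbook operator estimates (Card, Moran, and Newell) to an invented rescue-console subtask; it illustrates the modeling style only and is not the paper's case-study model.

```python
# Standard KLM operator time estimates in seconds (Card, Moran & Newell).
OPERATOR_TIME = {
    "K": 0.28,  # keystroke / button press (average typist)
    "P": 1.10,  # point with a mouse or joystick
    "H": 0.40,  # home hands between devices
    "M": 1.35,  # mental preparation
}

def predict_time(method):
    """Sum operator times for a sequence of KLM operators."""
    return sum(OPERATOR_TIME[op] for op in method)

# Hypothetical method: "pan camera toward victim" on a rescue-robot console.
pan_camera = ["M", "H", "P", "K", "K"]  # think, grab joystick, point, 2 presses
print(f"predicted execution time: {predict_time(pan_camera):.2f} s")
```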
| Interactive robot task training through dialog and demonstration | | BIBAK | Full-Text | 49-56 | |
| Paul E. Rybski; Kevin Yoon; Jeremy Stolarz; Manuela M. Veloso | |||
| Effective human/robot interfaces which mimic how humans interact with one
another could ultimately lead to robots being accepted in a wider domain of
applications. We present a framework for interactive task training of a mobile
robot where the robot learns how to do various tasks while observing a human.
In addition to observation, the robot listens to the human's speech and
interprets the speech as behaviors that are required to be executed. This is
especially important where individual steps of a given task may have
contingencies that have to be dealt with depending on the situation. Finally,
the context of the location where the task takes place and the people present
factor heavily into the robot's interpretation of how to execute the task. In
this paper, we describe the task training framework, describe how environmental
context and communicative dialog with the human help the robot learn the task,
and illustrate the utility of this approach with several experimental case
studies. Keywords: human-robot interaction, learning by demonstration | |||
| Learning by demonstration with critique from a human teacher | | BIBA | Full-Text | 57-64 | |
| Brenna Argall; Brett Browning; Manuela Veloso | |||
| Learning by demonstration can be a powerful and natural tool for developing robot control policies. That is, instead of tedious hand-coding, a robot may learn a control policy by interacting with a teacher. In this work we present an algorithm for learning by demonstration in which the teacher operates in two phases. The teacher first demonstrates the task to the learner. The teacher next critiques the learner's performance of the task. This critique is used by the learner to update its control policy. In our implementation we utilize a 1-Nearest Neighbor technique that incorporates both the training dataset and the teacher critique. Since the teacher critiques performance only, they do not need to guess at an effective critique for the underlying algorithm. We argue that this method is particularly well-suited to human teachers, who are generally better at assigning credit to performances than to algorithms. We have applied this algorithm to the simulated task of a robot intercepting a ball. Our results demonstrate improved performance with teacher critiquing, where performance is measured by both execution success and efficiency. | |||
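A minimal sketch of the two-phase scheme described above, assuming the critique is stored as a per-demonstration credit weight that scales the 1-Nearest Neighbor distance. The weighting scheme and update factors are illustrative, not the authors' algorithm.

```python
import math

class CritiquedNearestNeighbor:
    """1-Nearest Neighbor control policy whose stored demonstrations carry
    a credit weight that the teacher's critique adjusts after each run."""

    def __init__(self):
        self.points = []      # [state, action, weight] triples
        self.last_used = None

    def demonstrate(self, state, action):
        self.points.append([state, action, 1.0])

    def act(self, state):
        # Weighted 1-NN: poorly rated demonstrations look "farther away".
        self.last_used = min(
            self.points,
            key=lambda p: math.dist(state, p[0]) / max(p[2], 1e-6))
        return self.last_used[1]

    def critique(self, good):
        """The teacher rates the last performance only; credit is assigned
        internally to the demonstration point that produced it."""
        if self.last_used is not None:
            self.last_used[2] *= 1.5 if good else 0.5

policy = CritiquedNearestNeighbor()
policy.demonstrate((0.0, 0.0), "move_left")
policy.demonstrate((1.0, 1.0), "move_right")
print(policy.act((0.2, 0.1)))   # -> move_left
policy.critique(good=False)     # that point now counts for less next time
```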
| Efficient model learning for dialog management | | BIBAK | Full-Text | 65-72 | |
| Finale Doshi; Nicholas Roy | |||
| Intelligent planning algorithms such as the Partially Observable Markov
Decision Process (POMDP) have succeeded in dialog management applications [10,
11, 12] because they are robust to the inherent uncertainty of human
interaction. Like all dialog planning systems, however, POMDPs require an
accurate model of the user (e.g., what the user might say or want). POMDPs are
generally specified using a large probabilistic model with many parameters.
These parameters are difficult to specify from domain knowledge, and gathering
enough data to estimate the parameters accurately a priori is expensive.
In this paper, we take a Bayesian approach to learning the user model simultaneously with the dialog manager's policy. At the heart of our approach is an efficient incremental update algorithm that allows the dialog manager to replan just long enough to improve the current dialog policy given data from recent interactions. The update process has a relatively small computational cost, preventing long delays in the interaction. We are able to demonstrate a robust dialog manager that learns from interaction data, outperforming a hand-coded model in simulation and in a robotic wheelchair application. Keywords: decision-making under uncertainty, human-robot interaction, model learning | |||
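A toy illustration of the Bayesian idea: keep Dirichlet counts over what the user says given each intent and fold recent interactions back in, so the model sharpens without full offline re-estimation. The structure below is an assumption for exposition and is far simpler than the paper's POMDP machinery.

```python
import numpy as np

class BayesianUserModel:
    """Dirichlet-count model of P(observation | user intent).
    Starts from a weak prior and sharpens as dialogs accumulate."""

    def __init__(self, n_intents, n_observations, prior=1.0):
        self.counts = np.full((n_intents, n_observations), prior)

    def observation_probs(self):
        """Posterior-mean estimate of the observation model."""
        return self.counts / self.counts.sum(axis=1, keepdims=True)

    def update(self, intent, observation):
        """After an interaction whose true intent was later confirmed,
        fold the evidence back into the model."""
        self.counts[intent, observation] += 1.0

model = BayesianUserModel(n_intents=3, n_observations=4)
model.update(intent=0, observation=2)   # user meant 0, we heard 2
print(model.observation_probs()[0])     # row shifts toward observation 2
```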
| Using vision, acoustics, and natural language for disambiguation | | BIBAK | Full-Text | 73-80 | |
| Benjamin Fransen; Vlad Morariu; Eric Martinson; Samuel Blisard; Matthew Marge; Scott Thomas; Alan Schultz; Dennis Perzanowski | |||
| Creating a human-robot interface is a daunting experience. Capabilities and
functionalities of the interface are dependent on the robustness of many
different sensor and input modalities. For example, object recognition poses
problems for state-of-the-art vision systems. Speech recognition in noisy
environments remains problematic for acoustic systems. Natural language
understanding and dialog are often limited to specific domains and baffled by
ambiguous or novel utterances. Plans based on domain-specific tasks limit the
applicability of dialog managers. The types of sensors used limit spatial
knowledge and understanding, and constrain cognitive issues, such as
perspective-taking.
In this research, we are integrating several modalities, such as vision, audition, and natural language understanding to leverage the existing strengths of each modality and overcome individual weaknesses. We are using visual, acoustic, and linguistic inputs in various combinations to solve such problems as the disambiguation of referents (objects in the environment), localization of human speakers, and determination of the source of utterances and appropriateness of responses when humans and robots interact. For this research, we limit our consideration to the interaction of two humans and one robot in a retrieval scenario. This paper will describe the system and integration of the various modules prior to future testing. Keywords: acoustics, artificial intelligence, auditory perspective-taking, dialog,
human-robot interaction, natural language understanding, spatial reasoning,
vision | |||
| To kill a mockingbird robot | | BIBAK | Full-Text | 81-87 | |
| Christoph Bartneck; Marcel Verbunt; Omar Mubin; Abdullah Al Mahmud | |||
| Robots are being introduced in our society but their social status is still
unclear. A critical issue is whether the robot's exhibition of intelligent life-like
behavior leads to the users' perception of animacy. The ultimate test for the
life-likeness of a robot is to kill it. We therefore conducted an experiment in
which the robot's intelligence and the participants' gender were the
independent variables and the users' destructive behavior toward the robot the
dependent variable. Several practical and methodological problems compromised
the acquired data, but we can conclude that the robot's intelligence had a
significant influence on the users' destructive behavior. We discuss the
encountered problems and the possible application of this animacy measuring
method. Keywords: animacy, destruction, human, intelligence, interaction, robot | |||
| A dancing robot for rhythmic social interaction | | BIBAK | Full-Text | 89-96 | |
| Marek P. Michalowski; Selma Sabanovic; Hideki Kozima | |||
| This paper describes a robotic system that uses dance as a form of social
interaction to explore the properties and importance of rhythmic movement in
general social interaction. The system consists of a small creature-like robot
whose movement is controlled by a rhythm-based software system. Environmental
rhythms can be extracted from auditory or visual sensory stimuli, and the robot
synchronizes its movement to a dominant rhythm. The system was demonstrated,
and an exploratory study conducted, with children interacting with the robot in
a generalized dance task. Through a behavioral analysis of videotaped
interactions, we found that the robot's synchronization with the background
music had an effect on children's interactive involvement with the robot.
Furthermore, we observed a number of expected and unexpected styles and
modalities of interactive exploration and play that inform our discussion on
the next steps in the design of a socially rhythmic robotic system. Keywords: children, dance, human-robot interaction, social robotics | |||
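One standard way to extract a dominant rhythm from a sensory stream is autocorrelation of an energy envelope. The sketch below, including the synthetic beat signal and the phase-locked motion command, is an illustrative stand-in and not the system's actual rhythm software.

```python
import numpy as np

def dominant_period(envelope, rate, lo=0.25, hi=2.0):
    """Estimate the dominant rhythmic period (s) of a 1-D energy envelope
    sampled at `rate` Hz, searching lags between lo and hi seconds."""
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lags = np.arange(int(lo * rate), int(hi * rate))
    return lags[np.argmax(ac[lags])] / rate

# Synthetic 2 Hz "beat" envelope (period 0.5 s) sampled at 100 Hz.
rate = 100
t = np.arange(0, 10, 1 / rate)
envelope = np.maximum(0, np.sin(2 * np.pi * 2.0 * t)) ** 4
period = dominant_period(envelope, rate)
print(f"estimated period: {period:.2f} s")  # ~0.50

# Drive a periodic bobbing motion phase-locked to the detected rhythm.
phase = (t % period) / period        # 0..1 within each beat
bob = np.sin(2 * np.pi * phase)      # robot's synchronized motion command
```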
| The interactive robotic percussionist: new developments in form, mechanics, perception and interaction design | | BIBAK | Full-Text | 97-104 | |
| Gil Weinberg; Scott Driscoll | |||
| We present new developments in the improvisational robotic percussionist
project, aimed at improving human-robot interaction through design, mechanics,
and perceptual modeling. Our robot, named Haile, listens to live human players,
analyzes perceptual aspects in their playing in real-time, and uses the product
of this analysis to play along in a collaborative and improvisatory manner. It
is designed to combine the benefits of computational power in algorithmic music
with the expression and visual interactivity of acoustic playing. Haile's new
features include an anthropomorphic form, a linear-motor based robotic arm, a
novel perceptual modeling implementation, and a number of new interaction
schemes. The paper begins with an overview of related work and a presentation
of goals and challenges based on Haile's original design. We then describe new
developments in physical design, mechanics, perceptual implementation, and
interaction design, aimed at improving human-robot interactions with Haile. The
paper concludes with a description of a user study, conducted in an effort to
evaluate the new functionalities and their effectiveness in facilitating
expressive musical human-robot interaction. The results of the study show
a correlation between humans' and Haile's rhythmic perception, as well as user
satisfaction regarding Haile's perceptual and mechanical abilities. The study
also indicates areas for improvement such as the need for better timbre and
loudness control and more advanced and responsive interaction schemes. Keywords: music, perception, percussion, robotics | |||
| Using proprioceptive sensors for categorizing human-robot interactions | | BIBAK | Full-Text | 105-112 | |
| T. Salter; F. Michaud; D. Létourneau; D. C. Lee; I. P. Werry | |||
| Increasingly researchers are looking outside of normal communication
channels (such as video and audio) to provide additional forms of communication
or interaction between a human and a robot, or a robot and its environment.
Amongst the new channels being investigated is the detection of touch using
infrared, proprioceptive and temperature sensors. Our work aims at developing a
system that can detect natural touch or interaction coming from children
playing with a robot, and adapt to this interaction. This paper reports trials
carried out using Roball, a spherical mobile robot, demonstrating how sensory
data patterns can be identified in human-robot interaction, and exploited for
achieving behavioral adaptation. The experimental methodology used for these
trials is reported, which validated the hypothesis that human interaction can
not only be perceived through proprioceptive sensors on board a robotic platform,
but that this perception can also drive behavioral adaptation. Keywords: adaptive mobile robots, categorizing interaction, human-robot interaction
(HRI), sensor evaluation | |||
| Improving human-robot interaction through adaptation to the auditory scene | | BIBAK | Full-Text | 113-120 | |
| Eric Martinson; Derek Brock | |||
| Effective communication with a mobile robot using speech is a difficult
problem even when you can control the auditory scene. Robot ego-noise, echoes,
and human interference are all common sources of decreased intelligibility. In
real-world environments, however, these common problems are supplemented with
many different types of background noise sources. For instance, military
scenarios might be punctuated by high decibel plane noise and bursts from
weaponry that mask parts of the speech output from the robot. Even in
non-military settings, however, fans, computers, alarms, and transportation
noise can cause enough interference that they might render a traditional speech
interface unintelligible. In this work, we seek to overcome these problems by
applying robotic advantages of sensing and mobility to a text-to-speech
interface. Using perspective taking skills to predict how the human user is
being affected by new sound sources, a robot can adjust its speaking patterns
and/or reposition itself within the environment to limit the negative impact on
intelligibility, making a speech interface easier to use. Keywords: acoustics, auditory perspective taking, auditory scene, human-robot
interaction | |||
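A schematic of the adapt-or-move decision the abstract describes: predict the signal-to-noise ratio at the listener, then raise speech volume or reposition when intelligibility would suffer. The propagation model, reference distance, and thresholds below are all illustrative assumptions.

```python
import math

def level_at(source_db, source_xy, listener_xy):
    """Free-field estimate: about 6 dB drop per doubling of distance
    (reference level taken at 0.25 m; both numbers are assumptions)."""
    d = max(math.dist(source_xy, listener_xy), 0.25)
    return source_db - 20 * math.log10(d / 0.25)

def plan_speech(robot_xy, user_xy, noise_sources, speech_db=65.0, min_snr=10.0):
    """Return ('speak', level) if raising the voice restores intelligibility,
    otherwise ('move', None) to trigger repositioning."""
    # Loudest source approximates the noise floor at the user (simplification).
    noise = max(level_at(db, xy, user_xy) for db, xy in noise_sources)
    for level in (speech_db, speech_db + 5, speech_db + 10):  # limited vocal range
        if level_at(level, robot_xy, user_xy) - noise >= min_snr:
            return "speak", level
    return "move", None

# A loud fan right next to the user: no usable volume, so reposition instead.
print(plan_speech(robot_xy=(3, 0), user_xy=(0, 0),
                  noise_sources=[(80.0, (0.5, 0.5))]))  # -> ('move', None)
```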
| Group attention control for communication robots with wizard of OZ approach | | BIBAK | Full-Text | 121-128 | |
| Masahiro Shiomi; Takayuki Kanda; Satoshi Koizumi; Hiroshi Ishiguro; Norihiro Hagita | |||
| This paper describes a group attention control (GAC) system that enables a
communication robot to simultaneously interact with many people. GAC is based
on controlling social situations and indicating explicit control to unify all
purposes of attention. We implemented a semi-autonomous GAC system into a
communication robot that guides visitors to exhibits in a science museum and
engages in free-play interactions with them. The GAC system's effectiveness was
demonstrated in a two-week experiment in the museum. We believe these results
will allow us to develop interactive humanoid robots that can interact
effectively with groups of people. Keywords: communication robot, field trial, group attention control, human-robot
interaction, science museum robot | |||
| How robotic products become social products: an ethnographic study of cleaning in the home | | BIBAK | Full-Text | 129-136 | |
| Jodi Forlizzi | |||
| Robots that work with people foster social relationships between people and
systems. The home is an interesting place to study the adoption and use of
these systems. The home provides challenges from both technical and interaction
perspectives. In addition, the home is a seat for many specialized human
behaviors and needs, and has a long history of what is collected and used to
functionally, aesthetically, and symbolically fit the home. To understand the
social impact of robotic technologies, this paper presents an ethnographic
study of consumer robots in the home. Six families' experience of floor
cleaning after receiving a new vacuum (a Roomba robotic vacuum or the Flair, a
handheld upright) was studied. While the Flair had little impact, the Roomba
changed people, cleaning activities, and other product use. In addition, people
described the Roomba in aesthetic and social terms. The results of this study,
while initial, generate implications for how robots should be designed for the
home. Keywords: ethnography, interaction design, robotic products, robots | |||
| Humanoid robots as a passive-social medium: a field experiment at a train station | | BIBAK | Full-Text | 137-144 | |
| Kotaro Hayashi; Daisuke Sakamoto; Takayuki Kanda; Masahiro Shiomi; Satoshi Koizumi; Hiroshi Ishiguro; Tsukasa Ogasawara; Norihiro Hagita | |||
| This paper reports a method that uses humanoid robots as a communication
medium. There are many interactive robots under development, but due to their
limited perception, their interactivity is still far poorer than that of
humans. Our approach in this paper is to limit the robots' role to that of a
non-interactive medium and to look for a way to attract people's interest in
the information that robots convey. We propose using robots as a passive-social
medium, in which multiple robots converse with each other. We conducted a field
experiment at a train station for eight days to investigate the effects of a
passive-social medium. Keywords: communication robot, human-robot interaction, passive-social medium,
robot-robot communication | |||
| Comparing a computer agent with a humanoid robot | | BIBAK | Full-Text | 145-152 | |
| Aaron Powers; Sara Kiesler; Susan Fussell; Cristen Torrey | |||
| HRI researchers interested in social robots have made large investments in
humanoid robots. There is still sparse evidence that people's responses to
robots differ from their responses to computer agents, suggesting that agent
studies might serve to test HRI hypotheses. To help us understand the
difference between people's social interactions with an agent and a robot, we
experimentally compared people's responses in a health interview with (a) a
computer agent projected either on a computer monitor or life-size on a screen,
(b) a remote robot projected life-size on a screen, or (c) a collocated robot
in the same room. We found a few behavioral and large attitude differences
across these conditions. Participants forgot more and disclosed least with the
collocated robot, next with the projected remote robot, and then with the
agent. They spent more time with the collocated robot and their attitudes were
most positive toward that robot. We discuss tradeoffs for HRI research of using
collocated robots, remote robots, and computer agents as proxies of robots. Keywords: embodiment, human-robot interaction, social robots | |||
| Experiments with a robotic computer: body, affect and cognition interactions | | BIBAK | Full-Text | 153-160 | |
| Cynthia Breazeal; Andrew Wang; Rosalind Picard | |||
| We present RoCo, the first robotic computer designed with the ability to
move its monitor in subtly expressive ways that respond to and encourage its
user's own postural movement. We use RoCo in a novel user study to explore
whether a computer's "posture" can influence its user's subsequent posture, and
if the interaction of the user's body state with their affective state during a
task leads to improved task measures such as persistence in problem solving. We
believe this is possible in light of new theories that link physical posture
and its influence on affect and cognition. Initial results with 71 subjects
support the hypothesis that RoCo's posture not only manipulates the user's
posture, but also is associated with hypothesized posture-affect interactions.
Specifically, we found effects on increased persistence on a subsequent
cognitive task, and effects on perceived level of comfort. Keywords: affect, embodiment, human-robot interaction, user studies | |||
| RSVP: an investigation of remote shared visual presence as common ground for human-robot teams | | BIBAK | Full-Text | 161-168 | |
| Jenny Burke; Robin Murphy | |||
| This study presents mobile robots as a way of augmenting communication in
distributed teams through a remote shared visual presence (RSVP) consisting of
the robot's view. By giving all team members access to the shared visual
display provided by a robot situated in a remote workspace, the robot can serve
as a source of common ground for the distributed team. In a field study
examining the effects of remote shared visual presence on team performance in
collocated and distributed Urban Search & Rescue technical search teams, data
were collected from 25 dyadic teams comprised of US&R task force personnel
drawn from high-fidelity training exercises held in California (2004) and New
Jersey (2005). They performed a 2 x 2 repeated measures search task entailing
robot-assisted search in a confined space rubble pile. Multilevel regression
analyses were used to predict team performance based upon use of RSVP (RSVP or
no-RSVP) and whether or not team members had visual access to other team
members. Results indicated that the use of RSVP technology predicted team
performance (β = -1.24, p < .05). No significant differences emerged in
performance between teams with and without visual access to their team members.
Findings suggest RSVP may enable distributed teams to perform as effectively as
collocated teams. However, differences detected between sites suggest
efficiency of RSVP may depend on the user's domain experience and team
cohesion. Keywords: distributed teams, field robotics, remote presence | |||
| A field experiment of autonomous mobility: operator workload for one and two robots | | BIBAK | Full-Text | 169-176 | |
| Susan G. Hill; Barry Bodt | |||
| An experiment was conducted on aspects of human-robot interaction in a field
environment using the U.S. Army's Experimental Unmanned Vehicle (XUV). Goals of
this experiment were to examine the use of scalable interfaces and to examine
operator span of control when controlling one versus two autonomous unmanned
ground vehicles. We collected workload ratings from two Soldiers after they had
performed missions that included monitoring, downloading and reporting on
simulated reconnaissance, surveillance, and target acquisition (RSTA) images,
and responding to unplanned operator intervention requests from the XUV.
Several observations are made based on workload data, experimenter notes, and
informal interviews with operators. Keywords: UGV, XUV, human-robot interaction, operator interface, operator workload,
scalable interfaces, span of control, unmanned ground vehicle | |||
| HRI caught on film | | BIBAK | Full-Text | 177-183 | |
| Christoph Bartneck; Takayuki Kanda | |||
| The Human Robot Interaction 2007 conference hosted a video session, in which
movies of interesting, important, illustrative, or humorous HRI research
moments were shown. This paper summarizes the abstracts of the presented videos.
Robots and humans do not always behave as expected and the results can be
entertaining and even enlightening -- therefore instances of failures have also
been considered in the video session. Besides the importance of the lessons
learned and the novelty of the situation, the videos also have entertainment
value. Keywords: film, human, interaction, robot | |||
| A cognitive robotics approach to comprehending human language and behaviors | | BIBAK | Full-Text | 185-192 | |
| D. Paul Benjamin; Deryle Lonsdale; Damian Lyons | |||
| The ADAPT project is a collaboration of researchers in linguistics, robotics
and artificial intelligence at three universities. We are building a complete
robotic cognitive architecture for a mobile robot designed to interact with
humans in a range of environments, and which uses natural language and models
human behavior. This paper concentrates on the HRI aspects of ADAPT, and
especially on how ADAPT models and interacts with humans. Keywords: cognitive robotics, learning, natural language | |||
| Android as a telecommunication medium with a human-like presence | | BIBAK | Full-Text | 193-200 | |
| Daisuke Sakamoto; Takayuki Kanda; Tetsuo Ono; Hiroshi Ishiguro; Norihiro Hagita | |||
| In this research, we realize human telepresence by developing a
remote-controlled android system called Geminoid HI-1. Experimental results
confirm that participants felt a stronger presence of the operator when he talked
through the android than when he appeared on a video monitor in a video
conference system. In addition, participants talked with the robot naturally
and evaluated its human likeness as equal to a man on a video monitor. At this
paper's conclusion, we will discuss a remote-control system for telepresence
that uses a human-like android robot as a new telecommunication medium. Keywords: android science, humanoid robot, telecommunication, telepresence | |||
| Autonomous behavior design for robotic appliance | | BIBAK | Full-Text | 201-208 | |
| Hyunjeong Lee; Hyun Jin Kim; Changsu Kim | |||
| We consider robotic appliances, mobile and intelligent robots that perform
household tasks, and study a design method for the autonomous behaviors of robotic
appliances, a key component in human-robot and robot-environment interactions.
Specifically, we develop the robot behavior diagram, a visualized, robot-centric,
chart-based behavior design method that is useful in describing robotic
behaviors and transitions among them. This tool enables systematic construction
of an exhaustive set of autonomous behaviors within the context of a given set of
robotic functionalities. First, the set of functionalities for a given robot is
defined. Then, employing the defined functionalities, a diagram is constructed
with behavioral states and events that can trigger transitions to other
behaviors. For any given behavioral state, all plausible events are considered,
and this is repeated until the diagram is complete with all the possible
behaviors and events. We demonstrate that, via this method, a complete and
robust behavioral scenario design is indeed attainable by applying the robot
behavior diagram approach to an example household cleaning robot platform. Keywords: behavior design, domestic robots, human-robot interaction, interaction
design, robot appliance, service robots | |||
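The behavior-diagram idea maps naturally onto a state-event transition table, which also makes the "repeat until the diagram is complete" step mechanically checkable. The toy fragment below, with invented states and events for a hypothetical cleaning robot, is an illustration of the design method rather than the paper's diagram notation.

```python
# (state, event) -> next state; a textual behavior-diagram fragment.
TRANSITIONS = {
    ("docked",    "schedule_due"): "cleaning",
    ("cleaning",  "battery_low"):  "returning",
    ("cleaning",  "bin_full"):     "returning",
    ("cleaning",  "obstacle"):     "avoiding",
    ("avoiding",  "path_clear"):   "cleaning",
    ("returning", "docked_ok"):    "docked",
}
STATES = {s for s, _ in TRANSITIONS} | set(TRANSITIONS.values())
EVENTS = {e for _, e in TRANSITIONS}

def check_exhaustive():
    """List (state, event) pairs the diagram does not yet cover,
    i.e., the remaining design work in the 'repeat until complete' step."""
    return [(s, e) for s in STATES for e in EVENTS
            if (s, e) not in TRANSITIONS]

print(check_exhaustive()[:5])  # uncovered pairs still to be designed
```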
| Combining ubiquitous and on-board audio sensing for human-robot interaction | | BIBA | Full-Text | 209-216 | |
| Simon Thompson; Satoshi Kagami; Yoko Sasaki; Yoshifumi Nishida; Tadashi Enomoto; Hiroshi Mizoguchi | |||
| This paper reports on the development of a mobile robot system for operation
within a house equipped with a ubiquitous sensor network.
Human robot interaction is achieved through the combination of on-robot audio and laser range sensing and additional audio sensors mounted in the ceiling of the ubiquitous environment. The ceiling mounted microphone arrays can be used to summon a mobile robot from a location outside the robot's range of hearing. After the robot autonomously navigates to the desired location, the on-board microphone array can be used to locate the sound source and to recognise a series of greetings and commands. | |||
| "Daisy, Daisy, give me your answer do!": switching off a robot | | BIBAK | Full-Text | 217-222 | |
| Christoph Bartneck; Michel van der Hoek; Omar Mubin; Abdullah Al Mahmud | |||
Robots can exhibit life-like behavior but, according to traditional
definitions, are not alive. Current robot users are confronted with an ambiguous
entity, and it is important to understand the users' perception of these robots.
This study analyses whether a robot's intelligence and its agreeableness influence
its perceived animacy. The robot's animacy was measured, amongst other
measurements, by the users' hesitation to switch it off. The results show that
participants hesitated three times as long to switch off an agreeable and
intelligent robot as compared to a non-agreeable and unintelligent robot. The
robot's intelligence had a significant influence on its perceived animacy. Our
results suggest that interactive robots should be intelligent and exhibit an
agreeable attitude to maximize their perceived animacy. Keywords: animacy, human, intelligence, interaction, robot, switching off | |||
| Directed stigmergy-based control for multi-robot systems | | BIBAK | Full-Text | 223-230 | |
| Fitzgerald Steele, Jr.; Geb Thomas | |||
| Multi-robot systems are particularly useful in tasks that require searching
large areas such as planetary science exploration, urban search and rescue, or
landmine remediation. In order to overcome the inherent complexity of
controlling multiple robots, the user must be able to give high-level,
goal-driven direction to the robot team. Since human-robot interaction is a
relatively new discipline, it is helpful to look to existing systems for
concepts, analogies, or metaphors that might be utilized in building useful
systems. Inspiration from natural decentralized systems guides the development
of a computer simulation for stigmergy-based control of a multi-robot system, and
of the interface with which an operator can interact with and control mobile robots.
An in-depth description of the design process includes a description of a basic
stigmergy-based control system and an innovative Directed Stigmergy control
system that facilitates operator control of the robot team in an interesting
and surprisingly effective way. Keywords: human robot interaction, multi-robot, robotics, stigmergy, supervisory
control, swarm, user interface | |||
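A minimal sketch of stigmergy-based coverage: robots deposit a mark wherever they go and greedily move toward the least-marked neighboring cell. Directed control is approximated here by letting the operator pre-seed marks; the grid, update rule, and seeding scheme are all assumptions for illustration.

```python
import random

class StigmergyGrid:
    """Robots coordinate only through marks left in the environment."""
    def __init__(self, w, h):
        self.w, self.h = w, h
        self.marks = [[0.0] * w for _ in range(h)]

    def seed(self, x, y, amount):
        """Operator 'direction': negative marks attract the swarm toward a
        region; positive marks make it look already explored."""
        self.marks[y][x] += amount

    def step(self, x, y):
        """Deposit a mark at the current cell, then move to the
        least-marked neighboring cell (ties broken randomly)."""
        self.marks[y][x] += 1.0
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < self.w and 0 <= y + dy < self.h]
        return min(neighbors, key=lambda p: (self.marks[p[1]][p[0]], random.random()))

grid = StigmergyGrid(10, 10)
grid.seed(9, 9, -5.0)  # operator biases exploration toward the far corner
pos = (0, 0)
for _ in range(50):
    pos = grid.step(*pos)
print(pos)  # the robot has wandered toward unmarked or seeded territory
```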
| Elements of a spoken language programming interface for robots | | BIBAK | Full-Text | 231-237 | |
| Tim Miller; Andy Exley; William Schuler | |||
| In many settings, such as home care or mobile environments, demands on
users' attention, or users' anticipated level of formal training, or other
on-site conditions will make standard keyboard- and monitor-based robot
programming interfaces impractical. In such cases, a spoken language interface
may be preferable. However, the open-ended task of programming a machine is
very different from the sort of closed-vocabulary, data-rich applications (e.g.
call routing) for which most speaker-independent spoken language interfaces are
designed. This paper will describe some of the challenges of designing a spoken
language programming interface for robots, and will present an approach that
uses semantic-level resources as extensively as possible in order to
address these challenges. Keywords: human-robot interaction, language modeling, natural language processing,
spoken language interfaces | |||
| Assessing the scalability of a multiple robot interface | | BIBAK | Full-Text | 239-246 | |
| Curtis M. Humphrey; Christopher Henk; George Sewell; Brian W. Williams; Julie A. Adams | |||
| As multiple robot systems become more common, it is necessary to develop
scalable human-robot interfaces that permit the inclusion of additional robots
without reducing the overall system performance. Workload and situational
awareness play key roles in determining the ratio of m operators to n robots. A
scalable interface, where m is much smaller than n, will have to manage the
operator's workload and promote a high level of situation awareness. This work
focused on the development of a scalable interface for a single human-multiple
robot system. This interface introduces a relational "halo" display that
augments a camera view to promote situational awareness and the management of
multiple robots by providing information regarding the robots' relative
locations with respect to a selected robot. An evaluation was conducted to
determine the scalability of the interface focusing on the effects of
increasing the number of robots on workload, situation awareness, and robot
usage. Twenty participants completed two bomb defusing tasks: one employing six
robots, the other nine. The results indicated that increasing the number of
robots increased overall workload and the operator's situation awareness. Keywords: human robotic interaction, multiple robots, situational awareness, workload | |||
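The halo display augments a selected robot's camera view with teammates' relative locations, which presumably reduces to range and bearing in the selected robot's frame. A sketch of that geometry follows; the frame conventions and sign of the bearing are assumptions.

```python
import math

def halo_entries(selected_pose, teammates):
    """For each teammate, compute (range_m, bearing_deg) relative to the
    selected robot's heading (radians); bearing 0 means straight ahead."""
    x0, y0, heading = selected_pose
    entries = {}
    for name, (x, y) in teammates.items():
        dx, dy = x - x0, y - y0
        bearing = math.degrees(math.atan2(dy, dx) - heading)
        bearing = (bearing + 180) % 360 - 180   # normalize to [-180, 180)
        entries[name] = (math.hypot(dx, dy), bearing)
    return entries

print(halo_entries((0.0, 0.0, 0.0), {"r2": (2.0, 2.0), "r3": (-1.0, 0.0)}))
# r2 -> ~(2.83, 45.0); r3 -> (1.0, -180.0), i.e. directly behind
```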
| Exploring adaptive dialogue based on a robot's awareness of human gaze and task progress | | BIBAK | Full-Text | 247-254 | |
| Cristen Torrey; Aaron Powers; Susan R. Fussell; Sara Kiesler | |||
| When a robot provides direction -- as a guide, an assistant, or as an
instructor -- the robot may have to interact with people of different
backgrounds and skill sets. Different people require information adapted to
their level of understanding. In this paper, we explore the use of two simple
forms of awareness that a robot might use to infer that a person needs further
verbal elaboration during a tool selection task. First, the robot could use an
eye tracker for inferring whether the person is looking at the robot and thus
in need of further elaboration. Second, the robot could monitor delays in the
individual's task progress, indicating that he or she could use further
elaboration. We investigated the effects of these two types of awareness on
performance time, selection mistakes, and the number of questions people asked
the robot. We did not observe any obvious benefits of our gaze awareness
manipulation. Awareness of task delays did reduce the number of questions
participants asked compared to our control condition but did not significantly
reduce the number of selection mistakes. The mixed results of our investigation
suggest that more research is necessary before we can understand how awareness
of gaze and awareness of task delay can be successfully implemented in
human-robot dialogue. Keywords: adaptive dialogue, human-robot dialogue, human-robot interaction, social
robots | |||
| Incremental learning of gestures by imitation in a humanoid robot | | BIBAK | Full-Text | 255-262 | |
| Sylvain Calinon; Aude Billard | |||
We present an approach to incrementally teach human gestures to a humanoid
robot. By using active teaching methods that put the human teacher "in the
loop" of the robot's learning, we show that the essential characteristics of a
gesture can be efficiently transferred by interacting socially with the robot.
In a first phase, the robot observes the user demonstrating the skill while
wearing motion sensors. The motions of his/her two arms and head are recorded by
the robot, projected into a latent space of motion and encoded probabilistically
in a Gaussian Mixture Model (GMM). In a second phase, the user helps the robot
refine its gesture by kinesthetic teaching, i.e. by grabbing and moving its
arms throughout the movement to provide the appropriate scaffolds. To update
the model of the gesture, we compare the performance of two incremental
training procedures against a batch training procedure. We present experiments
to show that different modalities can be combined efficiently to incrementally
teach basketball officials' signals to a HOAP-3 humanoid robot. Keywords: gaussian mixture model, imitation learning, incremental learning,
programming by demonstration | |||
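A sketch of the batch-versus-incremental contrast the abstract evaluates, using scikit-learn's warm_start to resume EM from the current parameters when refinement data arrive. This is a crude stand-in: it forgets the original demonstrations rather than updating sufficient statistics, so treat it as illustration only, not the paper's incremental procedures.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# First phase: an observed demonstration, already projected to a
# low-dimensional latent space (2-D here for simplicity).
demo = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))
gmm = GaussianMixture(n_components=3, warm_start=True, random_state=0).fit(demo)

# Second phase: a short kinesthetic refinement arrives. With warm_start,
# fit() resumes EM from the current parameters instead of reinitializing;
# a principled incremental scheme would also retain the old statistics.
refinement = rng.normal(loc=[0.5, 0.1], scale=0.1, size=(40, 2))
gmm.fit(refinement)

print(np.round(gmm.means_, 2))  # components drift toward the refined motion
```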
| Incremental natural language processing for HRI | | BIBAK | Full-Text | 263-270 | |
| Timothy Brick; Matthias Scheutz | |||
| Robots that interact with humans face-to-face using natural language need to
be responsive to the way humans use language in those situations. We propose a
psychologically-inspired natural language processing system for robots which
performs incremental semantic interpretation of spoken utterances, integrating
tightly with the robot's perceptual and motor systems. Keywords: HRI, embodied NLP, incremental processing | |||
| Influence of perspective-taking and mental rotation abilities in space teleoperation | | BIBAK | Full-Text | 271-278 | |
| M. Alejandra Menchaca-Brandan; Andrew M. Liu; Charles M. Oman; Alan Natapoff | |||
| Operator performance during Space Shuttle and International Space Station
robotic arm training can differ dramatically among astronauts. The difficulty
of making appropriate camera selections and of using the hand controllers
accurately, two of the more important aspects of performance, may be rooted in problems
mentally relating the various reference frames used by the displays, hand
controllers and robot arm. In this paper, we examine whether the origin of such
individual differences can be found in certain components of spatial ability.
We have developed a virtual reality simulation of the Space Station Robotic
Workstation to investigate whether performance differences can be correlated
with subjects' perspective-taking and mental rotation abilities. Spatial test
scores were measured and correlated with performance in a robotic docking
task. The preliminary results show that both mental rotation strategies and
perspective-taking strategies are used by the operator to move the robot arm
around the workspace. Further studies must be performed to confirm such
findings. If important correlations between performance and spatial abilities
are found, astronaut training could be designed in order to fulfill each
operator's needs, reducing both training time and cost. Keywords: mental rotations, perspective-taking, robotic arm, space teleoperation,
spatial ability | |||
| LASSOing HRI: analyzing situation awareness in map-centric and video-centric interfaces | | BIBAK | Full-Text | 279-286 | |
| Jill L. Drury; Brenden Keyes; Holly A. Yanco | |||
| Good situation awareness (SA) is especially necessary when robots and their
operators are not collocated, such as in urban search and rescue (USAR). This
paper compares how SA is attained in two systems: one that has an emphasis on
video and another that has an emphasis on a three-dimensional map. We performed
a within-subjects study with eight USAR domain experts. To analyze the
utterances made by the participants, we developed a SA analysis technique,
called LASSO, which includes five awareness categories: location, activities,
surroundings, status, and overall mission. Using our analysis technique, we
show that a map-centric interface is more effective in providing good location
and status awareness while a video-centric interface is more effective in
providing good surroundings and activities awareness. Keywords: human-robot interaction (HRI), situation awareness (SA), urban search and
rescue (USAR) | |||
| Non-facial/non-verbal methods of affective expression as applied to robot-assisted victim assessment | | BIBAK | Full-Text | 287-294 | |
| Cindy L. Bethel; Robin R. Murphy | |||
| This work applies a previously developed set of heuristics for determining
when to use non-facial/non-verbal methods of affective expression to the domain
of a robot being used for victim assessment in the aftermath of a disaster.
Robot-assisted victim assessment places a robot approximately three meters or
less from a victim, and the path of the robot traverses three proximity zones
(intimate (contact-0.46 m), personal (0.46-1.22 m), and social (1.22-3.66 m)).
Robot- and victim-eye views of an Inuktun robot were collected as it followed a
path around the victim. The path was derived from observations of a prior
robot-assisted medical reachback study. The victim's-eye views of the robot
from seven points of interest on the path illustrate the appropriateness of
each of the five primary non-facial/non-verbal methods of affective expression
(body movement, posture, orientation, illuminated color, and sound), offering
support for the heuristics as a design aid. In addition to supporting the
heuristics, the investigation identified three open research questions on
acceptable motions and impact of the surroundings on robot affect. Keywords: affective computing, human-robot interaction, non-verbal communication,
proxemics, robotic design guidelines | |||
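The zone boundaries quoted in the abstract translate directly into a lookup of the kind a victim-assessment controller might use to switch expressive behavior per zone; the switching itself, and the "public" catch-all, are assumptions.

```python
def proximity_zone(distance_m):
    """Proxemic zones with the boundaries given in the abstract (meters)."""
    if distance_m <= 0.46:
        return "intimate"
    if distance_m <= 1.22:
        return "personal"
    if distance_m <= 3.66:
        return "social"
    return "public"

# e.g., slower body movement and calmer illuminated color in closer zones
for d in (0.3, 1.0, 2.5):
    print(d, proximity_zone(d))
```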
| On-line behaviour classification and adaptation to human-robot interaction styles | | BIBAK | Full-Text | 295-302 | |
| Dorothée François; Daniel Polani; Kerstin Dautenhahn | |||
This paper presents a proof-of-concept of a robot that adapts its
behaviour on-line during interactions with a human, according to detected play
styles. The study is part of the AuRoRa project which investigates how robots
may be used to help children with autism overcome some of their impairments in
social interactions. The paper motivates why adaptation is a very desirable
feature of autonomous robots in human-robot interaction scenarios in general,
and in autism therapy in particular. Two different play styles, namely 'strong'
and 'gentle', which refer to the user's play style, are investigated experimentally.
model relies on Self-Organizing Maps, used as a classifier, and on Fast Fourier
Transform to preprocess the sensor data. First experiments were carried out,
and the performance of the model is discussed. Related work on adaptation in
socially assistive and therapeutic settings is surveyed. In future work, with
typically developing and autistic children, the concrete choice of the robot's
behaviours will be tailored towards the children's interests and abilities. Keywords: adaptation in interaction, behaviour classification, interaction styles | |||
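A minimal end-to-end sketch of the pipeline named above: FFT magnitude features feeding a small self-organizing map used as a classifier. The synthetic 'strong'/'gentle' windows, map size, and training schedule stand in for Roball's sensor data and the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

def fft_features(window):
    """Preprocess a raw sensor window into a normalized low-frequency
    spectral signature (the FFT step of the model)."""
    mag = np.abs(np.fft.rfft(window - window.mean()))[:8]
    return mag / (np.linalg.norm(mag) + 1e-9)

def make_window(style, n=64):
    """Synthetic stand-ins for sensor windows: 'strong' play oscillates
    fast, 'gentle' play slowly (frequencies are invented)."""
    t = np.arange(n)
    freq = 0.1 if style == "strong" else 0.02
    return np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.1, n)

# Train a tiny 1-D self-organizing map (the classifier) on unlabeled windows.
data = np.array([fft_features(make_window(s))
                 for s in rng.choice(["strong", "gentle"], 200)])
nodes = rng.normal(0, 0.1, (6, data.shape[1]))
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))
        for j in range(len(nodes)):
            h = np.exp(-((j - bmu) ** 2) / 2.0)   # neighborhood kernel
            nodes[j] += lr * h * (x - nodes[j])

# Calibrate: label each map node by the style it responds to most often.
votes = {}
for style in ("strong", "gentle"):
    for _ in range(50):
        f = fft_features(make_window(style))
        bmu = int(np.argmin(np.linalg.norm(nodes - f, axis=1)))
        votes.setdefault(bmu, []).append(style)
labels = {k: max(set(v), key=v.count) for k, v in votes.items()}

# On-line classification of a new interaction window:
x = fft_features(make_window("gentle"))
print(labels.get(int(np.argmin(np.linalg.norm(nodes - x, axis=1))), "unknown"))
```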
| Realizing Hinokio: candidate requirements for physical avatar systems | | BIBAK | Full-Text | 303-308 | |
| Laurel D. Riek | |||
| This paper presents a set of candidate requirements and survey questions for
physical avatar systems as derived from the literature. These requirements will
be applied to analyze a fictional, yet well-envisioned, physical avatar system
depicted in the film Hinokio. It is hoped that these requirements and survey
questions can be used by other researchers as a guide when performing formal
engineering tradeoff analysis during the design phase of new physical avatar
systems, or during evaluation of existing systems. Keywords: collaboration, human-robot interaction, physical avatars, requirements,
tele-embodiment | |||
| Robot expressionism through cartooning | | BIBAK | Full-Text | 309-316 | |
| James E. Young; Min Xin; Ehud Sharlin | |||
| We present a new technique for human-robot interaction called robot
expressionism through cartooning. We suggest that robots utilise cartoon-art
techniques such as simplified and exaggerated facial expressions, stylised
text, and icons for intuitive social interaction with humans. We discuss
practical mixed reality solutions that allow robots to augment themselves or
their surroundings with cartoon art content. Our effort is part of what we call
robot expressionism, a conceptual approach to the design and analysis of
robotic interfaces that focuses on providing intuitive insight into robotic
states as well as the artistic quality of interaction. Our paper discusses a
variety of ways that allow robots to use cartoon art and details a test bed
design, implementation, and exploratory evaluation. We describe our test bed,
Jeeves, which combines a Roomba, an iRobot vacuum-cleaning robot, with a
mixed-reality system as a platform for rapid prototyping of cartoon-art
interfaces. Finally, we present a set of interaction content scenarios which
use the Jeeves prototype: trash Roomba, the recycle police, and clean tracks,
as well as initial exploratory evaluation of our approach. Keywords: cartoon art, human-robot interaction, mixed reality, sociable robotics | |||
| Robotic etiquette: results from user studies involving a fetch and carry task | | BIBAK | Full-Text | 317-324 | |
| Michael L. Walters; Kerstin Dautenhahn; Sarah N. Woods; Kheng Lee Koay | |||
| This paper presents results, outcomes and conclusions from a series of Human
Robot Interaction (HRI) trials which investigated how a robot should approach a
human in a fetch and carry task. Two pilot trials were carried out, aiding the
development of a main HRI trial with four different approach contexts under
controlled experimental conditions. The findings from the pilot trials were
confirmed and expanded upon. Most subjects disliked a frontal approach when
seated. In general, seated humans do not like to be approached by a robot
directly from the front even when seated behind a table. A frontal approach is
more acceptable when a human is standing in an open area. Most subjects
preferred to be approached from either the left or right side, with a small
overall preference for a right approach by the robot. However, this is not a
strong preference and it may be disregarded if it is more physically convenient
to approach from a left front direction. Handedness and occupation were not
related to these preferences. Subjects do not usually like the robot to move or
approach from directly behind them, preferring the robot to be in view even if
this means the robot taking a physically non-optimum path. The subjects for the
main HRI trials had no previous experience of interacting with robots. Future
research aims are outlined and include the necessity of carrying out
longitudinal trials to see if these findings hold over a longer period of
exposure to robots. Keywords: human-robot interaction, live interactions, personal spaces, social robot,
social spaces, user trials | |||
| Robots as interfaces to haptic and locomotor spaces | | BIBAK | Full-Text | 325-331 | |
| Vladimir Kulyukin; Chaitanya Gharpure; Cassidy Pentico | |||
| Research on spatial cognition and navigation of the visually impaired
suggests that vision may be a primary sensory modality that enables humans to
align the egocentric (self to object) and allocentric (object to object) frames
of reference in space. In the absence of vision, the frames align best in the
haptic space. In the locomotor space, as the haptic space translates with the
body, lack of vision causes the frames to misalign, which negatively affects
action reliability. In this paper, we argue that robots can function as
interfaces to the haptic and locomotor spaces in supermarkets. In the locomotor
space, the robot eliminates the necessity of frame alignment and, in or near
the haptic space, it cues the shopper to the salient features of the
environment sufficient for product retrieval. We present a trichotomous
ontology of spaces in a supermarket induced by the presence of a robotic
shopping assistant and analyze the results of robot-assisted shopping
experiments with ten visually impaired participants conducted in a real
supermarket. Keywords: assistive robotics, haptic and locomotor interfaces, spatial cognition | |||
| The RUBI project: a progress report | | BIBAK | Full-Text | 333-339 | |
| Javier R. Movellan; Fumihide Tanaka; Ian R. Fasel; Cynthia Taylor; Paul Ruvolo; Micah Eckhardt | |||
| The goal of the RUBI project is to accelerate progress in the development of
social robots by addressing the problem at multiple levels, including the
development of a scientific agenda, research methods, formal approaches,
software, and hardware. The project is based on the idea that progress will go
hand-in-hand with the emergence of a new scientific discipline that focuses on
understanding the organization of adaptive behavior in real-time within the
environments in which organisms operate. As such, the RUBI project emphasizes
the process of design by immersion, i.e., embedding scientists, engineers and
robots in everyday life environments so as to have these environments shape the
hardware, software, and scientific questions as early as possible in the
development process. The focus of the project so far has been on social robots
that interact with 18 to 24 month old toddlers as part of their daily
activities at the Early Childhood Education Center at the University of
California, San Diego. In this document we present an overall assessment of the
lessons and progress through year two of the project. Keywords: architectures for social interaction, design by immersion, field studies,
social robots | |||
| Spatial dialog for space system autonomy | | BIBAK | Full-Text | 341-348 | |
| Scott Green; Scott Richardson; Vadim Slavin; Randy Stiles | |||
| Future space operations will increasingly demand cooperation between humans
and autonomous space systems such as robots, observer satellites, and
distributed components. Human team members use a combination of gestures, gaze,
posture, deictic references and speech to communicate effectively. When a human
team collaborates on a given task, they discuss the task, create a plan and
then review this plan prior to execution to ensure success. This is exactly the
process we envision for effective human-autonomous agent collaborative
teamwork. Visual spatial information is a common reference for increased shared
situation awareness between humans and autonomous systems. We use a spatial
dialog approach to enable multiple humans to naturally and effectively
communicate and work with multiple autonomous space systems. With this in mind,
we have created a prototype spatial dialog system to support teamwork between
humans and autonomous robotic agents. We combine augmented reality gesture
interaction and visualization with speech to realize a spatial dialog
capability. This paper describes our prototype, general approach, and related
issues for team-oriented spatial dialog interaction with autonomous space
systems. Keywords: adjustable autonomy, augmented reality, collaboration, human-robotic
interaction, multi-modal interfaces, spoken dialog system, virtual
tele-presence | |||
| Speed adaptation for a robot walking with a human | | BIBA | Full-Text | 349-356 | |
| Emma Sviestins; Noriaki Mitsunaga; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita | |||
| We have taken steps towards developing a method that enables an interactive humanoid robot to adapt its speed to a walking human with whom it is moving. This is difficult because the human is simultaneously adapting to the robot. From a case study in human-human walking interaction we established a hypothesis about how to read a human's speed preference based on a relationship between humans' walking speed and their relative position in the direction of walking. We conducted two experiments to verify this hypothesis: one with two humans walking together, and one with a human subject walking with a humanoid robot, Robovie-IV. For 11 out of 15 subjects who walked with the robot, the results were consistent with the speed-position relationship of the hypothesis. We also conducted a preferred speed estimation experiment for six of the subjects. All of them were satisfied with one or more of the speeds that our algorithm estimated, and four of them chose one of the speeds as the best when the algorithm was allowed to offer three options. In the paper, we also discuss the difficulties and possibilities that we learned from this preliminary trial. | |||
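A sketch of the position-based speed update suggested by the hypothesis: read the human's preference from whether they drift ahead of or behind the robot along the walking direction, and nudge the robot's speed accordingly. The gain and dead zone are invented; the paper's estimation algorithm is not specified here.

```python
def adapt_speed(robot_speed, relative_position, gain=0.1, dead_zone=0.05):
    """relative_position (m): how far the human is ahead (+) of the robot
    along the walking direction. Per the hypothesized speed-position
    relationship, a human walking ahead prefers a faster pace, and vice versa."""
    if abs(relative_position) < dead_zone:
        return robot_speed                   # comfortable; hold speed
    return max(0.0, robot_speed + gain * relative_position)

speed = 1.0  # m/s
for offset in (0.3, 0.25, 0.1, 0.02):        # human gradually falls in step
    speed = adapt_speed(speed, offset)
    print(round(speed, 3))
```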
| Young researchers' views on the current and future state of HRI | | BIBAK | Full-Text | 357-364 | |
| Kevin Gold; Ian Fasel; Nathan G. Freier; Cristen Torrey | |||
| This paper presents the results of a panel discussion titled "The Future of
HRI," held during an NSF workshop for graduate students on human-robot
interaction in August 2006. The panel divided the workshop into groups tasked
with inventing models of the field, and then asked these groups their opinions
on the future of the field. In general, the workshop participants shared the
belief that HRI can and should be seen as a single scientific discipline,
despite the fact that it encompasses a variety of beliefs, methods, and
philosophies drawn from several "core" disciplines in traditional areas of
study. HRI researchers share many interrelated goals, participants felt, and
enhancing the lines of communication between different areas would help speed
up progress in the field. Common concerns included the unavailability of common
robust platforms, the emphasis on human perception over robot perception, and
the paucity of longitudinal real-world studies. The authors point to the
current lack of consensus on research paradigms and platforms to argue that the
field is not yet in the phase that philosopher Thomas Kuhn would call "normal
science," but believe the field shows signs of approaching that phase. Keywords: HRI, Kuhn, future, history of science, human robot interaction, paradigms,
robotic platforms, student perspectives, workshop panel | |||
| Tracking human motion and actions for interactive robots | | BIBAK | Full-Text | 365-372 | |
| Odest Chadwicke Jenkins; German González; Matthew Maverick Loper | |||
| A method is presented for kinematic pose estimation and action recognition
from monocular robot vision through the use of dynamical human motion
vocabularies. We propose the utilization of dynamical motion vocabularies
towards bridging the decision making of observed humans and information from
robot sensing. Our motion vocabulary is comprised of learned primitives that
structure the action space for decision making and describe human movement
dynamics. Given image observations over time, each primitive infers on pose
independently using its prediction density on movement dynamics in the context
of a particle filter. Pose estimates from a set of primitives inferencing in
parallel are arbitrated to estimate the action being performed. The efficacy of
our approach is demonstrated through tracking and action recognition over
extended motion trials. Results evidence the robustness of the algorithm with
respect to unsegmented multi-action movement, movement speed, and camera
viewpoint. Keywords: action recognition, human tracking, human-robot interaction, markerless
motion capture | |||
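A compact sketch of the arbitration idea: run one particle filter per motion primitive, let each predict with that primitive's dynamics, and report the action whose filter explains the observations best. The 1-D state, linear "primitives," and Gaussian likelihood are toy assumptions standing in for full-body pose.

```python
import numpy as np

rng = np.random.default_rng(2)

class PrimitiveFilter:
    """Particle filter whose prediction step uses one primitive's dynamics."""
    def __init__(self, name, velocity, n=200):
        self.name, self.velocity = name, velocity
        self.particles = rng.normal(0, 1, n)
        self.log_evidence = 0.0   # how well this primitive explains the data

    def step(self, observation, noise=0.3):
        # Predict with the primitive's dynamics, then weight by likelihood.
        self.particles += self.velocity + rng.normal(0, 0.05, len(self.particles))
        w = np.exp(-0.5 * ((observation - self.particles) / noise) ** 2)
        self.log_evidence += np.log(w.mean() + 1e-300)
        w /= w.sum()
        idx = rng.choice(len(self.particles), len(self.particles), p=w)
        self.particles = self.particles[idx]   # resample

# Two learned "primitives": waving raises the hand, pointing holds it level.
filters = [PrimitiveFilter("wave", velocity=0.2),
           PrimitiveFilter("point", velocity=0.0)]
observations = 0.2 * np.arange(20) + rng.normal(0, 0.1, 20)  # hand rising
for z in observations:
    for f in filters:
        f.step(z)
print(max(filters, key=lambda f: f.log_evidence).name)  # -> wave
```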
| User-centered approach to path planning of cleaning robots: analyzing user's cleaning behavior | | BIBAK | Full-Text | 373-380 | |
| Hyunjin Kim; Hyunjeong Lee; Stanley Chung; Changsu Kim | |||
| Current research on robot navigation is focused on clear recognition of the
map and optimal path planning. The human cleaning path, however, is not optimal
with respect to time but optimal for the purpose of cleaning. We have analyzed in this
paper the cleaning behaviors in home environments and understood the user's
path planning behaviors through usage tests of various vacuuming robots. We
discovered that the actual user cleans with methods unique to specific areas of
the house rather than following an optimal cleaning path. We not only suggest a
path planning method for the vacuuming robot by using a layered map, but also a
cleaning area designating method reflecting each area's characteristics. Based
on these, we have designed a vacuuming robot's actions. Keywords: cleaning robots, design research, human-robot interaction design, path
planning | |||
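A sketch of the layered-map idea: an area layer tags each region of the home with a cleaning method matched to its characteristics, and the planner consults it instead of computing one globally time-optimal coverage path. The layer structure, area names, and method names are assumptions for illustration.

```python
# Area layer of a layered map: per-area cleaning preferences learned from
# observing how users actually clean each part of the home.
AREA_LAYER = {
    "kitchen":     {"method": "edge_then_spiral", "frequency": "daily"},
    "living_room": {"method": "parallel_sweeps",  "frequency": "daily"},
    "entryway":    {"method": "spot_clean",       "frequency": "on_demand"},
}

def plan_area(area):
    prefs = AREA_LAYER.get(area, {"method": "parallel_sweeps"})
    # Dispatch to a motion pattern suited to this area's characteristics.
    return f"{area}: run {prefs['method']}"

for area in AREA_LAYER:
    print(plan_area(area))
```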