[1]
Multiple robotic wheelchair system able to move with a companion using map
information
HRI2014 late breaking reports poster
/
Sato, Yoshihisa
/
Suzuki, Ryota
/
Arai, Masaya
/
Kobayashi, Yoshinori
/
Kuno, Yoshinori
/
Fukushima, Mihoko
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot
Interaction
2014-03-03
p.286-287
© Copyright 2014 ACM
Summary: In order to reduce the burden of caregivers facing an increased demand for
care, particularly for the elderly, we developed a system whereby multiple
robotic wheelchairs can automatically move alongside a companion. This enables
a small number of people to assist a substantially larger number of wheelchair
users effectively. This system utilizes an environmental map and position
estimation to accurately identify the positional relations between the caregiver
(or companion) and each wheelchair. The wheelchairs are consequently able to
follow along even if the caregiver cannot be directly recognized. Moreover, the
system is able to establish and maintain appropriate positional relations.
[2]
Robotic wheelchair easy to move and communicate with companions
Interactivity: research
/
Kobayashi, Yoshinori
/
Suzuki, Ryota
/
Sato, Yoshihisa
/
Arai, Masaya
/
Kuno, Yoshinori
/
Yamazaki, Akiko
/
Yamazaki, Keiichi
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.3079-3082
© Copyright 2013 ACM
Summary: Although it is desirable for wheelchair users to go out alone by operating
wheelchairs on their own, they are often accompanied by caregivers or
companions. In designing robotic wheelchairs, therefore, it is important to
consider not only how to assist the wheelchair user but also how to reduce
companions' load and support their activities. We especially focus on
communication between wheelchair users and companions, because face-to-face
communication is known to be effective in improving the mental health of the
elderly. Hence, we proposed a robotic wheelchair able to move alongside a companion. We
demonstrate our robotic wheelchair. All attendees can try to ride and control
our robotic wheelchair.
[3]
Question strategy and interculturality in human-robot interaction
HRI 2013 late breaking results and poster session
/
Fukushima, Mihoko
/
Fujita, Rio
/
Kurihara, Miyuki
/
Suzuki, Tomoyuki
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
/
Ikeda, Keiko
/
Kuno, Yoshinori
/
Kobayashi, Yoshinori
/
Ohyama, Takaya
/
Yoshida, Eri
Proceedings of the 2013 ACM/IEEE International Conference on Human-Robot
Interaction
2013-03-03
p.125-126
© Copyright 2013 ACM
Summary: This paper demonstrates the ways in which multi-party human participants in
two language groups, Japanese and English, engage with a quiz robot when they are
asked a question. We focus on both speech and bodily conduct, in which we
discovered both universalities and differences.
[4]
Care robot able to show the order of service provision through bodily
actions in multi-party settings
Work-in-progress
/
Kobayashi, Yoshinori
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
/
Gyoda, Masahiko
/
Tabata, Tomoya
/
Kuno, Yoshinori
/
Seki, Yukiko
Extended Abstracts of ACM CHI'12 Conference on Human Factors in Computing
Systems
2012-05-05
v.2
p.1889-1894
© Copyright 2012 ACM
Summary: Service robots, such as tea-serving robots, should be designed to show the
order of service provision in multi-party settings. An ethnographic study we
conducted at an elderly care center revealed that the gaze and bodily actions
of care workers can serve this function. To test this, we developed a robot
system able to utilize its gaze and other gestures in this way. Experimental
results demonstrated that the robot could effectively display the order of
service provision using this method, and highlighted the benefits of employing
the gaze for robots working in multi-party settings.
[5]
Establishment of spatial formation by a mobile guide robot
LBR highlights
/
Yousuf, Mohammad Abu
/
Kobayashi, Yoshinori
/
Kuno, Yoshinori
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
Proceedings of the 7th International Conference on Human-Robot Interaction
2012-03-05
p.281-282
© Copyright 2012 ACM
Summary: A mobile museum guide robot is expected to establish a proper spatial
formation with the visitors. After observing videotaped scenes of human
guide-visitor interaction at actual museum galleries, we have developed a
mobile robot that can guide multiple visitors inside the gallery from one
exhibit to another. The mobile guide robot is capable of establishing a spatial
formation known as "F-formation" at the beginning of an explanation. It can also
use a systematic procedure known as "pause and restart" depending on the
situation, through which a framework of mutual orientation between the speaker
(robot) and visitors is achieved. The effectiveness of our method has been
confirmed through experiments.
[6]
A techno-sociological solution for designing a museum guide robot: regarding
choosing an appropriate visitor
Conversation and proxemics
/
Yamazaki, Akiko
/
Yamazaki, Keiichi
/
Ohyama, Takaya
/
Kobayashi, Yoshinori
/
Kuno, Yoshinori
Proceedings of the 7th International Conference on Human-Robot Interaction
2012-03-05
p.309-316
© Copyright 2012 ACM
Summary: In this paper, we present our work designing a robot that explains an
exhibit to multiple visitors in a museum setting, based on ethnographic
analysis of interactions between expert human guides and visitors. During the
ethnographic analysis, we discovered that expert human guides employ a number
of common strategies and practices in their explanations. In particular, one of
these is to involve all visitors by posing a question to an appropriate visitor
among them, which we call the "creating a puzzle" sequence. This is done in
order to draw visitors' attention towards not only the exhibit but also the
guide's explanation. While creating a puzzle, the human guide can monitor
visitors' responses and choose an "appropriate" visitor (i.e. one who is likely
to provide an answer). Based on these findings, sociologists and engineers
together developed a guide robot that coordinates verbal and non-verbal actions
in posing a question or "a puzzle" that will draw visitors' attention, and then
explain the exhibit for multiple visitors. During the explanation, the robot
chooses an "appropriate" visitor. We tested the robot at an actual museum. The
results show that our robot increases visitors' engagement and interaction with
the guide, as well as interaction and engagement among visitors.
[7]
Implementing human questioning strategies into quizzing-robot
HRI 2012 video session
/
Ohyama, Takaya
/
Maeda, Yasutomo
/
Mori, Chiaki
/
Kobayashi, Yoshinori
/
Kuno, Yoshinori
/
Fujita, Rio
/
Yamazaki, Keiichi
/
Miyazawa, Shun
/
Yamazaki, Akiko
/
Ikeda, Keiko
Proceedings of the 7th International Conference on Human-Robot Interaction
2012-03-05
p.423-424
© Copyright 2012 ACM
Summary: From our ethnographic studies on various kinds of museums, we discovered
that guides routinely propose questions to visitors in order to draw their
attention towards both his/her explanation and the exhibit. The guides'
question sequences tend to begin with a pre-question, which serves not only to
monitor visitors' behavior and responses but also to alert visitors that a
primary question will follow. We implemented this questioning strategy with
our robot system and investigated whether this strategy would also work in
human-robot interaction. We developed a vision system that enables the robot to
choose an appropriate visitor by monitoring a visitor's response from the
initiation of a pre-question to the following pause. Results indicate that this
questioning strategy works effectively in human-robot interaction. In this
experiment, the robot asked visitors about a photograph. At the pre-question,
the robot delivered a rather easy question followed by a more challenging
question (Figure 1). More participants turned their heads away from the
exhibit when they were not sure of their answer to the question. They
either faced away from the robot, or smiled wryly at the robot or at each
other. These types of behaviors index participants' states of knowledge, which
we could utilize to develop a system by which the robot could choose an
appropriate candidate by computational recognition.
[8]
Robotic wheelchair moving with caregiver collaboratively depending on
circumstances
Works-in-progress
/
Kobayashi, Yoshinori
/
Kinpara, Yuki
/
Takano, Erii
/
Kuno, Yoshinori
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems
2011-05-07
v.2
p.2239-2244
© Copyright 2011 ACM
Summary: This paper introduces a robotic wheelchair that can automatically move
alongside a caregiver. Because wheelchair users are often accompanied by
caregivers, it is vital to consider how to reduce a caregiver's load and
support their activities, while simultaneously facilitating communication
between the caregiver and the wheelchair user. Moreover, it has been pointed
out that when a wheelchair user is accompanied by a companion, the latter is
inevitably seen by others as a caregiver rather than a friend. To address this
situation, we devised a robotic wheelchair able to move alongside a caregiver
or companion, and facilitate easy communication between them and the wheelchair
user. To confirm the effectiveness of the wheelchair in real-world situations,
we conducted experiments at an elderly care center in Japan.
[9]
A wheelchair which can automatically move alongside a caregiver
Video session
/
Kobayashi, Yoshinori
/
Kinpara, Yuki
/
Takano, Erii
/
Kuno, Yoshinori
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
Proceedings of the 6th International Conference on Human-Robot Interaction
2011-03-06
p.407-408
© Copyright 2011 ACM
Summary: This video presents our ongoing work developing a robotic wheelchair that
can move automatically alongside a caregiver. Recently, several
robotic/intelligent wheelchairs possessing autonomous functions for reaching a
goal and/or user-friendly interfaces have been proposed. Although ideally
wheelchair users may wish to go out alone, they are often accompanied by
caregivers. Therefore, it is important to consider how to reduce the
caregivers' load and support their activities and facilitate communication
between the wheelchair user and caregiver. Moreover, a sociologist pointed out
that when a wheelchair user is accompanied by a companion, the latter is
inevitably seen as a caregiver [1]. In other words, the equality of the
relationship is publicly undermined when the wheelchair is pushed by a
companion. Hence, we propose a robotic wheelchair which can move alongside a
caregiver or companion, and facilitate easy communication between them and the
wheelchair user. However, it is not always desirable for a caregiver to be
alongside a wheelchair. For instance, a caregiver may step in front of the
wheelchair to open a door, and pedestrians may be encumbered by the wheelchair
and companion if they move side by side in a narrow corridor. To cope
with these problems, our robotic wheelchair can move alongside a caregiver
collaboratively depending on the circumstances. A laser range sensor is
employed to track the caregiver and observe the environment around the
wheelchair [2]. When obstacles are detected in the wheelchair's path of motion,
it adjusts its position accordingly. In the video we demonstrate these
functions of our robotic wheelchair. We are now conducting experiments to
confirm the effectiveness of our wheelchair at an elderly care center in Japan.
[10]
Revealing Gauguin: engaging visitors in robot guide's explanation in an art
museum
New media experiences 2
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
/
Okada, Mai
/
Kuno, Yoshinori
/
Kobayashi, Yoshinori
/
Hoshi, Yosuke
/
Pitsch, Karola
/
Luff, Paul
/
vom Lehn, Dirk
/
Heath, Christian
Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems
2009-04-04
v.1
p.1437-1446
Keywords: computer vision, conversation analysis, guide robot, human-robot
interaction, interaction analysis, museum
© Copyright 2009 ACM
Summary: Designing technologies that support the explanation of museum exhibits is a
challenging domain. In this paper we develop an innovative approach --
providing a robot guide with resources to engage visitors in an interaction
about an art exhibit. We draw upon ethnographical fieldwork in an art museum,
focusing on how tour guides interrelate talk and visual conduct, specifically
how they ask questions of different kinds to engage and involve visitors in
lengthy explanations of an exhibit. From this analysis we have developed a
robot guide that can coordinate its utterances and body movements and monitor
visitors' responses to them. Detailed analysis of the interaction between
the robot and visitors in an art museum suggests that such simple devices
derived from the study of human interaction might be useful in engaging
visitors in explanations of complex artifacts.
[11]
Assisted-care robot initiation of communication in multiparty settings
Spotlight on work in progress session 1
/
Kobayashi, Yoshinori
/
Kuno, Yoshinori
/
Niwa, Hitoshi
/
Akiya, Naonori
/
Okada, Mai
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems
2009-04-04
v.2
p.3583-3588
Keywords: computer vision, ethnomethodology, human-robot interaction, non-verbal
communication, service robot
© Copyright 2009 ACM
Summary: This paper presents ongoing work in developing service robots that provide
assisted care to the elderly in multi-party settings. In typical Japanese
day-care facilities, multiple caregivers and visitors are co-present in the
same room, and any caregiver may provide assistance to any visitor. In order to
work effectively in such settings, a robot should behave in such a way that a
person who has a request can easily initiate communication with it. Based on
findings from observations at several day-care facilities, we have developed a
robot system that displays availability to multiple persons and then displays
recipiency to an individual person who wants to initiate interaction. Our robot
system and its experimental evaluation are detailed in this paper.
[12]
Effect of restarts and pauses on achieving a state of mutual orientation
between a human and a robot
Gaze and surveillance
/
Kuzuoka, Hideaki
/
Pitsch, Karola
/
Suzuki, Yuya
/
Kawaguchi, Ikkaku
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
/
Kuno, Yoshinori
/
Luff, Paul
/
Heath, Christian
Proceedings of ACM CSCW'08 Conference on Computer-Supported Cooperative Work
2008-11-08
p.201-204
© Copyright 2008 ACM
Summary: In this paper we consider the development of a museum guide robot that has
both autonomous and remotely controlled features. We focus on the capabilities
such a robot could have to help focus the attention of a visitor on an object
or artefact. Inspired by studies of social interaction, we investigate
whether the robot could deploy "restarts" and "pauses" at certain moments in
its talk to first elicit the visitor's attention/gaze towards the robot. We
report an experiment where we deployed such a robot to interact with real
visitors to a science museum. These experiments show that such a strategy does
seem to have a significant impact on obtaining the visitor's gaze.
[13]
Precision timing in human-robot interaction: coordination of head movement
and utterance
Human-Robot Interaction
/
Yamazaki, Akiko
/
Yamazaki, Keiichi
/
Kuno, Yoshinori
/
Burdelski, Matthew
/
Kawashima, Michie
/
Kuzuoka, Hideaki
Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems
2008-04-05
v.1
p.131-140
© Copyright 2008 ACM
Summary: Research over the last several decades has shown that non-verbal actions
such as face and head movement play a crucial role in human interaction; such
resources are also likely to play an important role in human-robot interaction.
In developing a robotic system that employs embodied resources such as face and
head movement, we cannot simply program the robot to move at random but rather
we need to consider the ways these actions may be timed to specific points in
the talk. This paper discusses our work in developing a museum guide robot that
moves its head at interactionally significant points during its explanation of
an exhibit. In order to proceed, we first examined the coordination of verbal
and non-verbal actions in human guide-visitor interaction. Based on this
analysis, we developed a robot that moves its head at interactionally
significant points in its talk. We then conducted several experiments to
examine human participant non-verbal responses to the robot's head and gaze
turns. Our results show that participants are more likely to display non-verbal
actions, and to do so with precision timing, when the robot turns its head and
gaze at interactionally significant points than when it does so at points that
are not interactionally significant. Based on these findings, we propose
several suggestions for the design of a guide robot.
[14]
Prior-to-request and request behaviors within elderly day care: Implications
for developing service robots for use in multiparty settings
/
Yamazaki, Keiichi
/
Kawashima, Michie
/
Kuno, Yoshinori
/
Akiya, Naonori
/
Burdelski, Matthew
/
Yamazaki, Akiko
/
Kuzuoka, Hideaki
Proceedings of the Tenth European Conference on Computer-Supported
Cooperative Work
2007-09-24
p.61-78
© Copyright 2007 Springer
Summary: The rapidly expanding elderly population in Japan and other industrialized
countries has posed an enormous challenge to the systems of healthcare that
serve elderly citizens. This study examines naturally occurring interaction
within elderly day care in Japan, and discusses the implications for developing
robotic systems that can provide service in elderly care contexts. The
interaction analysis focuses on prior-to-request and request behaviors
involving elderly visitors and caregivers in multiparty settings. In
particular, it delineates the ways caregivers' displays of availability affect
elderly visitors' behavior prior to initiating a request, revealing that
visitors observe caregivers prior to initiating a request, and initiation is
contingent upon caregivers' displayed availability. The findings are discussed
in relation to our work in designing an autonomous and remote controlled
robotic system that can be employed in elderly day care centers and other
service contexts.
[15]
Museum guide robot based on sociological interaction analysis
People, looking at people
/
Kuno, Yoshinori
/
Sadazuka, Kazuhisa
/
Kawashima, Michie
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
/
Kuzuoka, Hideaki
Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems
2007-04-28
v.1
p.1191-1194
© Copyright 2007 ACM
Summary: We are currently working on a museum guide robot with an emphasis on
"friendly" human-robot interaction displayed through nonverbal behaviors. In
this paper, we focus on head gestures during explanations of exhibits. The
outline of our research is as follows. We first examined human head gestures
through an experimental, sociological approach. From this research, we have
discovered how human guides coordinate their head movement along with their
talk when explaining exhibits. Second, we developed a robot system based on
these findings. Third, we evaluated human-robot interaction, again using an
experimental, sociological approach, and then modified the robot based on the
results. Our experimental results suggest that robot head turning may lead to
heightened engagement of museum visitors with the robot. Based on our
preliminary findings, we will describe a museum guide robot that first works
autonomously and, if necessary, can switch to a remote-control mode operated by a
human to engage in more complex interaction with visitors.
[16]
Mediating dual ecologies
Gesturing, moving and talking together
/
Kuzuoka, Hideaki
/
Kosaka, Jun'ichi
/
Yamazaki, Keiichi
/
Suga, Yasuko
/
Yamazaki, Akiko
/
Luff, Paul
/
Heath, Christian
Proceedings of ACM CSCW'04 Conference on Computer-Supported Cooperative Work
2004-11-06
p.477-486
© Copyright 2004 ACM
Summary: In this paper we investigate systems for supporting remote collaboration
using mobile robots as communication media. It is argued that the use of a
remote-controlled robot as a device to support communication involves two
distinct ecologies: an ecology at the remote (instructor's) site and an ecology
at the operator's (robot) site. In designing a robot as a viable communication
medium, it is essential to consider how these ecologies can be mediated and
supported. In this paper, we propose design guidelines to overcome the problems
inherent in dual ecologies, and describe the development of a robot named
GestureMan-3 based on these guidelines. Our experiments with GestureMan-3
showed that the system supports sequential aspects of the organization of
communication.
[17]
Dual ecologies of robot as communication media: thoughts on coordinating
orientations and projectability
/
Kuzuoka, Hideaki
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
/
Kosaka, Jun'ichi
/
Suga, Yasuko
/
Heath, Christian
Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems
2004-04-24
v.1
p.183-190
© Copyright 2004 ACM
Summary: The aim of our study is to investigate systems for supporting remote
instruction via a mobile robot. In the real world, instructions are typically
given through words and body orientations such as head movements, which make it
possible to project others' actions. Projectability is an important resource in
organizing multiple actions among multiple participants in co-ordination with
one another. It can likewise be said that in the case of robot-human
collaboration, it is necessary to design a robot's head so that a local
participant can project the robot's (and remote person's) actions. GestureMan
is a robot that is designed to support such projectability properties. It is
argued that a remote controlled mobile robot, designed as a communication
medium, makes relevant dual ecologies: ecology at a remote (robot operator's)
site and at a local participant's (robot's) site. In order to design a robot as
a viable communication medium, it is essential to consider how these ecologies
can be mediated and supported.
[18]
Embodied Spaces: Designing Remote Collaboration Systems Based on Body
Metaphor
/
Kuzuoka, H.
/
Yamazaki, K.
/
Yamashita, J.
/
Oyama, S.
/
Yamazaki, A.
/
Kato, H.
/
Suzuki, H.
/
Miki, H.
Proceedings of the Ninth International Conference on Human-Computer
Interaction
2001-08
v.1
p.763-767
[19]
GestureMan: A Mobile Robot that Embodies a Remote Instructor's Actions
Video Presentations
/
Kuzuoka, Hideaki
/
Oyama, Shinya
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
/
Mitsuishi, Mamoru
/
Suzuki, Kenji
Proceedings of ACM CSCW'00 Conference on Computer-Supported Cooperative Work
2000-12-02
p.354
© Copyright 2000 ACM
[20]
GestureLaser and GestureLaser Car: Development of an embodied space to
support remote instruction
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
/
Kuzuoka, Hideaki
/
Oyama, Shinya
/
Kato, Hiroshi
/
Suzuki, Hideyuki
/
Miki, Hiroyuki
Proceedings of the Sixth European Conference on Computer-Supported
Cooperative Work
1999-09-12
p.239
[21]
Agora: supporting multi-participant telecollaboration
/
Yamashita, J.
/
Kuzuoka, H.
/
Yamazaki, K.
/
Miki, H.
/
Yamazaki, A.
/
Kato, H.
/
Suzuki, H.
Proceedings of the Eighth International Conference on Human-Computer
Interaction
1999-08-22
v.2
p.543-547
© Copyright 1999 Lawrence Erlbaum Associates
[22]
Agora: a remote collaboration system that enables mutual monitoring
Late-breaking results: novel collaborative paradigms
/
Kuzuoka, Hideaki
/
Yamashita, Jun
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
Proceedings of ACM CHI 99 Conference on Human Factors in Computing Systems
1999-05-15
v.2
p.190-191
© Copyright 1999 ACM
Summary: We introduce a video-mediated remote collaboration system named Agora.
Agora is designed so that the embodiment of participants' conduct can be monitored
naturally. The design principles, architecture, and initial impressions of the
system are described.
[23]
GestureLaser: Supporting Hand Gestures in Remote Instruction
Videos
/
Kuzuoka, Hideaki
/
Oyama, Shinya
/
Kato, Hiroshi
/
Suzuki, Hideyuki
/
Yamazaki, Keiichi
/
Yamazaki, Akiko
/
Miki, Hiroyuki
Proceedings of ACM CSCW'98 Conference on Computer-Supported Cooperative Work
1998-11-14
p.424
© Copyright 1998 ACM
Summary: GestureLaser is a remote controlled laser pointer which allows an instructor
to gesture at real world objects over distances. To control the position of
the GestureLaser's spot, a laser beam is reflected by two mirrors each rotated
by a stepping motor. The remote instructor controls the motion of the laser's
spot using a computer mouse in the same way an ordinary mouse pointer is
controlled. The instructor can thus show position, rotation and direction by
moving the spot. The laser's low illumination mode is used to indicate
transitions between gestures while still allowing operators to track the spot.
We have already undertaken a few experiments in order to understand how users
can effectively use the laser's spot as a substitute for real hand gestures.