HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,288,282
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Yamada_S* Results: 40 Sorted by: Date
Records: 1 to 25 of 40
ReFabricator: Integrating Everyday Objects for Digital Fabrication Interactivity Demos / Yamada, Suguru / Morishige, Hironao / Nozaki, Hiroki / Ogawa, Masaki / Yonezawa, Takuro / Tokuda, Hideyuki Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3804-3807
ACM Digital Library Link
Summary: Because current digital fabrication relies heavily on 3D printing, it raises several concerns, such as printing cost (both financial and temporal) and the sometimes overly homogeneous impression of plastic filament. To address this problem, we propose ReFabricator, a computational fabrication tool that integrates everyday objects into digital fabrication. ReFabrication is a fabrication concept that mixes the ideas of reuse and digital fabrication, aiming to fabricate new functional shapes from ready-made products while effectively exploiting their behavior. As a system prototype, we have implemented a design tool that enables users to gather everyday objects and reassemble them into another functional shape, taking advantage of both analog and digital fabrication. In particular, the system calculates the optimal positional relationships among objects and generates joint objects that bond them together to achieve a given shape.

Effects of Agent Appearance on Customer Buying Motivations on Online Shopping Sites WIP Theme: AI and HCI / Terada, Kazunori / Jing, Liang / Yamada, Seiji Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.929-934
ACM Digital Library Link
Summary: Although product recommendation virtual agents (PRVAs) are used in a large number of online shopping websites, the optimal types of agents in this context remain unclear. In the present study, we tested whether agent appearance affects people's buying motivations and analyzed the key factors in persuading people to buy products. The experimental results confirmed that recommendation effects vary according to agent appearance. Furthermore, we obtained a partial order ranking of the agent types, representing the effectiveness of their recommendations. The factor analysis results indicated that the perceptions of familiarity and intelligence in relation to appearance are the key factors in persuading people to buy products.

Is Interpretation of Artificial Subtle Expressions Language-Independent?: Comparison among Japanese, German, Portuguese, and Mandarin Chinese WIP Theme: Users and UI Design / Komatsu, Takanori / Prada, Rui / Kobayashi, Kazuki / Yamada, Seiji / Funakoshi, Kotaro / Nakano, Mikio Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.2175-2180
ACM Digital Library Link
Summary: Several studies have shown that a speech interface system giving verbal suggestions accompanied by beeping sounds that decrease in pitch intuitively conveys a low system confidence level to users; these beeping sounds were named "artificial subtle expressions" (ASEs). However, the participants in those studies were all Japanese, so if a participant's mother tongue has a different sensitivity to variations in pitch than Japanese, the ASEs might be interpreted differently. We therefore investigated whether the ASEs are interpreted in the same way regardless of the users' mother tongues, focusing on three language categories in traditional phonological typology. We conducted a web-based experiment to examine whether speakers of German and Portuguese (stress accent languages), Mandarin Chinese (a tone language), and Japanese (a pitch accent language) interpret the ASEs differently. The results showed no difference in interpretation, suggesting that interpretation of the ASEs is language-independent.

Why Do Children Abuse Robots? Late-Breaking Reports -- Session 2 / Nomura, Tatsuya / Uratani, Takayuki / Kanda, Takayuki / Matsumoto, Kazutaka / Kidokoro, Hiroyuki / Suehiro, Yoshitaka / Yamada, Sachie Extended Abstracts of the 2015 ACM/IEEE International Conference on Human-Robot Interaction 2015-03-02 v.2 p.63-64
ACM Digital Library Link
Summary: We found that children sometimes abuse a social robot in the hallway of a shopping mall. They used abusive language, repeatedly obstructed the robot's path, and sometimes even kicked and punched the robot. To investigate why they abused it, we conducted a field study in which we let visiting children freely interact with the robot and interviewed them when they engaged in serious abusive behavior, including physical contact. In total, we obtained valid interviews from twenty-three children over 13 days of observation, all aged between five and nine. Adults and older children were rarely involved. We interviewed the children to learn whether they perceived the robot as a human-like other, why they abused it, and whether they thought the robot would suffer from their abusive behavior. We found that 1) the majority of the children abused the robot because they were curious about its reactions or enjoyed abusing it, even while considering it human-like, and 2) about half of the children believed the robot was capable of perceiving their abusive behavior.

Augmenting expressivity of artificial subtle expressions (ASEs): preliminary design guideline for ASEs 7. Driving / Komatsu, Takanori / Kobayashi, Kazuki / Yamada, Seiji / Funakoshi, Kotaro / Nakano, Mikio Proceedings of the 2014 Augmented Human International Conference 2014-03-07 p.38
ACM Digital Library Link
Summary: Unfortunately, there is little hope that information-providing systems will ever be perfectly reliable. Some studies have indicated that imperfect systems can reduce users' cognitive load by expressing their level of confidence to users. Artificial subtle expressions (ASEs), machine-like artificial sounds added just after a system's suggestions to express confidence information, have attracted attention because of their simplicity and efficiency. The purpose of the work reported here was to develop a preliminary design guideline for ASEs in order to determine their expandability. We believe that augmenting the expressivity of ASEs would reduce users' cognitive load in processing the information provided by such systems, and would in turn augment users' various cognitive capacities. Our experimental results showed that ASEs with decreasing pitch conveyed a low confidence level to users. These results were used to formulate a concrete design guideline for ASEs.

Shape changing device for notification Adjunct 4: posters / Kobayashi, Kazuki / Yamada, Seiji Adjunct Proceedings of the 2013 ACM Symposium on User Interface Software and Technology 2013-10-08 v.2 p.71-72
ACM Digital Library Link
Summary: In this paper, we describe a notification method based on peripheral cognition technology, which exploits a human cognitive characteristic. The method achieves notification without interrupting users' primary tasks. We developed a device that changes its shape to signal the arrival of information. Such behavior enables a user to easily find and accept notifications without interruption when their attention on the primary task decreases. The result of an experiment showed a successful notification rate of 45.5%.

Expressing a robot's confidence with motion-based artificial subtle expressions Emotions / Yamada, Seiji / Terada, Kazunori / Kobayashi, Kazuki / Komatsu, Takanori / Funakoshi, Kotaro / Nakano, Mikio Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing Systems 2013-04-27 v.2 p.1023-1028
ACM Digital Library Link
Summary: In this paper, a motion-based Artificial Subtle Expression (ASE) is described as a novel implementation of ASEs, in which a robot expresses confidence in its advice to a human. Confidence in advice is one of a robot's useful internal states, and developing a practical and inexpensive methodology to express it correctly is an important goal. To achieve this goal, we propose a motion-based ASE in which a robot hesitates by slowly turning toward a human before giving advice with low confidence. We conducted experiments with participants to evaluate the effectiveness of the motion-based ASE and obtained promising results.

Peripheral agent: implementation of peripheral cognition technology UI design / Yamada, Seiji / Mori, Naoki / Kobayashi, Kazuki Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing Systems 2013-04-27 v.2 p.1701-1706
ACM Digital Library Link
Summary: On-screen notification of e-mail arrival, micro-blog updates, and application updates is becoming increasingly important. We propose a novel notification method, the peripheral agent (PA), as an implementation of peripheral cognition technology (PCT). PCT exploits the human cognitive property that a person does not notice subtle changes in the peripheral area of cognition while concentrating on a task, but automatically notices them when not concentrating. By simply placing a PA in the peripheral area, a user automatically and easily accepts a notification only when his or her concentration breaks. We conducted two experiments to investigate a VFN area and to evaluate the effectiveness of PAs.

Estimating user interruptibility by measuring table-top pressure UI design / Tani, Takahisa / Yamada, Seiji Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing Systems 2013-04-27 v.2 p.1707-1712
ACM Digital Library Link
Summary: A user working with his/her desktop computer would benefit from notifications (e.g., the arrival of e-mail, micro-blogs, and application updates) being given at adequate times when he/she is interruptible. To do so, a notification system needs to determine the user's state of activity. In this paper, we propose a novel method for estimating user states with a pressure sensor on a desk. We use a lattice-like pressure sensor sheet and distinguish between two simple user states: interruptible or not. The pressure can be measured without the user being aware of it, and changes in the pressure reflect useful information such as typing, an arm resting on the desk, mouse operation, and so on. We carefully developed features that can be extracted from the sensed raw data, and we used a machine learning technique to identify the user's interruptibility. We conducted experiments for two different tasks to evaluate the accuracy of our proposed method and obtained promising results.
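The pipeline this abstract describes (features extracted from a lattice pressure sheet, then a learned two-class decision) can be sketched as follows. The paper's actual features and classifier are not given here, so the features (mean pressure, variance, active-cell count), the nearest-centroid rule, and the synthetic frames are all illustrative assumptions:

```python
import statistics

def features(frame):
    """frame: flat list of pressure readings from the lattice sheet.
    Hypothetical features: mean pressure, variance, cells under load."""
    return (statistics.mean(frame),
            statistics.pvariance(frame),
            sum(1 for p in frame if p > 10))

def nearest_centroid(train, x):
    """train: {label: [feature tuples]}; returns the closest class
    by squared Euclidean distance to each class centroid."""
    def dist(c, y):
        return sum((a - b) ** 2 for a, b in zip(c, y))
    centroids = {lab: tuple(statistics.mean(col) for col in zip(*rows))
                 for lab, rows in train.items()}
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

# Synthetic frames: typing presses several cells hard; an idle desk is quiet.
typing = [[50, 40, 0, 0, 60, 0, 30, 0], [45, 0, 55, 0, 70, 20, 0, 0]]
idle   = [[0, 2, 0, 1, 0, 0, 3, 0],     [1, 0, 0, 2, 0, 1, 0, 0]]
train = {"busy": [features(f) for f in typing],
         "interruptible": [features(f) for f in idle]}

print(nearest_centroid(train, features([55, 0, 45, 0, 65, 0, 25, 0])))
```

A real system would replace the toy centroid rule with the paper's (unspecified here) machine learning technique and stream frames from the sensor sheet.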

Experimental investigation of human adaptation to change in agent's strategy through a competitive two-player game Understanding gamers / Terada, Kazunori / Yamada, Seiji / Ito, Akira Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems 2012-05-05 v.1 p.2807-2810
ACM Digital Library Link
Summary: We conducted an experimental investigation of human adaptation to a change in an agent's strategy through a competitive two-player game. Modeling the process of human adaptation to agents is important for designing intelligent interface agents and adaptive user interfaces that learn a user's preferences and behavioral strategy. However, few studies on human adaptation to such agents have been done. We propose a human adaptation model for a two-player game. We prepared an on-line experimental system in which a participant and an agent play a repeated penny-matching game with a bonus round. We then conducted experiments in which different opponent agents (human or robot) change their strategy during the game. The experimental results indicated that, as expected, there is an adaptation phase when a human is confronted with a change in the opponent agent's strategy, and that adaptation is faster when a human is competing with a robot than with another human.

Can users live with overconfident or unconfident systems?: a comparison of artificial subtle expressions with human-like expression Work-in-progress / Komatsu, Takanori / Kobayashi, Kazuki / Yamada, Seiji / Funakoshi, Kotaro / Nakano, Mikio Extended Abstracts of ACM CHI'12 Conference on Human Factors in Computing Systems 2012-05-05 v.2 p.1595-1600
ACM Digital Library Citation
Summary: We assume that expressing the levels of confidence using human-like expressions will cause users to have a poorer impression of a system than if artificial subtle expressions (ASEs) were used when the quality of the presented information does not match the expressed level of confidence. We confirmed that this assumption was correct by conducting a psychological experiment.

Reminiscence Park Interface: personal spaces to listen to songs with memories and diffusions and overlaps of their spaces Multi-surface / Myojin, Seiko / Shimizu, Masumi / Nakatani, Mie / Yamada, Shuhei / Kato, Hirokazu / Nishida, Shogo Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011-11-13 p.248-249
ACM Digital Library Link
Summary: We propose the Reminiscence Park Interface. This interface provides personal spaces for listening to favorite songs by using original music boxes. It also visualizes the diffusion and overlap of the users' spaces with computer graphics on an original resonance table. Users can enjoy listening to their favorite songs alone or with others.

Effects of different types of artifacts on interpretations of artificial subtle expressions (ASEs) Works-in-progress / Komatsu, Takanori / Yamada, Seiji / Kobayashi, Kazuki / Funakoshi, Kotaro / Nakano, Mikio Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011-05-07 v.2 p.1249-1254
ACM Digital Library Link
Summary: We previously confirmed that artificial subtle expressions (ASEs) from a robot could convey its internal states to participants accurately and intuitively. In this paper, we investigated whether ASEs from an on-screen artifact could also convey the artifact's internal states to participants, in order to confirm whether ASEs are interpreted consistently across various types of artifacts. The results clearly showed that interpretations of the ASEs from the on-screen artifact were consistent with those from the robotic agent.

Exploring influences of robot anxiety into HRI Late-breaking reports/poster session / Nomura, Tatsuya / Kanda, Takayuki / Yamada, Sachie / Suzuki, Tomohiro Proceedings of the 6th International Conference on Human-Robot Interaction 2011-03-06 p.213-214
ACM Digital Library Link

Between real-world and virtual agents: the disembodied robot Late-breaking reports/poster session / Voisin, Thibault / Osawa, Hirotaka / Yamada, Seiji / Imai, Michita Proceedings of the 6th International Conference on Human-Robot Interaction 2011-03-06 p.281-282
ACM Digital Library Link
Summary: In this study, we propose a disembodied real-world agent and study the influence of this disembodiment on the social separation between the user and the agent. In order to give the user a cue to the robot's presence and to make visual feedback possible, we decided to use independent robotic body parts that mimic human hands and eyes. The robot is also able to share real-world space with the user and react to the user's presence through 3D detection and oral communication. We can thus obtain an agent with a strong presence while keeping good space efficiency and, as a result, removing any existing social barrier.

Who explains it?: avoiding the feeling of third-person helpers in auditory instruction for older people Video session / Osawa, Hirotaka / Orszulak, Jarrod / Godfrey, Kathryn M. / Yamada, Seiji / Coughlin, Joseph F. Proceedings of the 6th International Conference on Human-Robot Interaction 2011-03-06 p.409-410
ACM Digital Library Link
Summary: Auditory instruction is a widely used method for people of all ages because of its understandability. However, the additional voice may disturb the user's learning during instruction because it strongly implies the support of a third-person helper. This risk increases with older people because their confidence in their abilities may decline compared to younger people. The authors propose a method that anthropomorphizes the instructed target (a vacuum cleaner) to decrease the feeling of a third person during instruction. The authors conducted an experiment using this method to explain the features of a household appliance and evaluated the relationship between recalled features and older people's internal scales. The results show that older people remembered more features when using the method, and for female participants, their internal scales increased during the training. This demonstrates that the method can decrease the third-person feeling in female participants and increase the amount learned. Our findings suggest that auditory instruction may be an effective learning method for older adults.

How Does the Agents' Appearance Affect Users' Interpretation of the Agents' Attitudes: Experimental Investigation on Expressing the Same Artificial Sounds From Agents With Different Appearances / Komatsu, Takanori / Yamada, Seiji International Journal of Human-Computer Interaction 2011-02-08 v.27 n.3 p.260-279
Link to Article at Taylor & Francis
Summary: An experimental investigation into how the appearance of an agent, such as a robot or PC, affects people's interpretations of the agent's attitudes is presented. In general, people are said to create stereotypical behavioral models of agents in their minds based on the agents' appearances, and these appearances significantly affect the way they interact. It is therefore quite important to address the following research question: How does an agent's appearance affect its interactions with people? Specifically, a preliminary experiment was conducted to select eight artificial sounds from which people can estimate two specific primitive attitudes (e.g., positive or negative). Then an experiment was conducted in which the participants were presented with the selected artificial sounds through three kinds of agents: a MindStorms robot, an AIBO robot, and a laptop PC. In particular, the participants were asked to select the correct attitudes based on the sounds expressed by these three agents. The results showed that the participants had better interpretation rates when the PC presented the sounds and lower rates when the MindStorms and AIBO robots presented them, even though the sounds expressed by the agents were the same. The results of this study contribute to the design policy of interactive agents: What types of appearances should agents have to interact effectively with people, and which kinds of information should these agents express to people?

An Ultrasonic Blind Guidance System for Street Crossings Blind and Partially Sighted People: Mobility and Interaction without Sight / Hashino, Satoshi / Yamada, Sho ICCHP'10: International Conference on Computers Helping People with Special Needs 2010-07-14 v.2 p.235-238
Keywords: guidance system; ultrasonic sensor; cross-correlation
Link to Digital Content at Springer
Summary: This paper addresses the technical feasibility of a guidance system based on ultrasonic sensors to help visually impaired people cross a road easily and safely. A computer processes ultrasonic signals emitted by a transmitter carried by the user and provides real-time information on direction and distance to keep the user on the correct track. Instead of time of flight, the system estimates the user's position from the order in which the ultrasonic signals arrive at multiple receivers. Experimental results are presented to discuss the feasibility of this method.

How do users interact with a pet-robot and a humanoid Work-in-progress, April 12-13 / Austermann, Anja / Yamada, Seiji / Funakoshi, Kotaro / Nakano, Mikio Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010-04-10 v.2 p.3727-3732
Keywords: aibo, asimo, human-robot interaction, robots, user studies
ACM Digital Library Link
Summary: In this paper, we compare users' interaction with the humanoid robot ASIMO and the dog-shaped robot AIBO. We conducted a user study in which the participants had to teach object names and simple commands and give feedback to either AIBO or ASIMO. We did not find significant differences in the users' evaluations of the two robots or in the way commands were given to them. However, the way of giving positive and negative feedback differed significantly: users tended to reward the pet-robot AIBO much as they would a real dog, touching it and commenting on its performance with utterances like "well done" or "that was right". For the humanoid ASIMO, users did not use touch as a reward and instead used personal expressions like "thank you" to give positive feedback to the robot.

Artificial subtle expressions: intuitive notification methodology of artifacts Subtle expressions through sound and text / Komatsu, Takanori / Yamada, Seiji / Kobayashi, Kazuki / Funakoshi, Kotaro / Nakano, Mikio Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010-04-10 v.1 p.1941-1944
Keywords: accurate, artificial subtle expressions (ases), complementary, intuitive, simple
ACM Digital Library Link
Summary: We describe artificial subtle expressions (ASEs) as intuitive notification methodology for artifacts' internal states for users. We prepared two types of audio ASEs; one was a flat artificial sound (flat ASE), and the other was a sound that decreased in pitch (decreasing ASE). These two ASEs were played after a robot made a suggestion to the users. Specifically, we expected that the decreasing ASE would inform users of the robot's lower level of confidence about the suggestions. We then conducted a simple experiment to observe whether the participants accepted or rejected the robot's suggestion in terms of the ASEs. The results showed that they accepted the robot's suggestion when the flat ASE was used, whereas they rejected it when the decreasing ASE was used. Therefore, we found that the ASEs succeeded in conveying the robot's internal state to the users accurately and intuitively.
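The two audio ASEs described above (a flat beep versus one that decreases in pitch) can be sketched as synthesized waveforms. The frequencies, duration, and sample rate below are illustrative assumptions, not values from the paper:

```python
import math

RATE = 8000      # sample rate in Hz (assumed for illustration)
DURATION = 0.5   # beep length in seconds (assumed)

def ase(start_hz, end_hz):
    """Synthesize a beep whose pitch moves linearly from start to end,
    accumulating phase so the frequency sweep is smooth."""
    n = int(RATE * DURATION)
    samples, phase = [], 0.0
    for i in range(n):
        f = start_hz + (end_hz - start_hz) * i / n  # instantaneous pitch
        phase += 2 * math.pi * f / RATE
        samples.append(math.sin(phase))
    return samples

flat_ase = ase(400, 400)        # constant pitch
decreasing_ase = ase(400, 250)  # falling pitch -> signals low confidence

def zero_crossings(samples):
    """Crude cycle count: sign changes between adjacent samples."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)

# The decreasing ASE completes fewer cycles overall than the flat one.
print(zero_crossings(flat_ase), zero_crossings(decreasing_ase))
```

In a deployment, the chosen waveform would be written to an audio buffer and played immediately after the robot's spoken suggestion.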

Similarities and differences in users' interaction with a humanoid and a pet robot Late-breaking abstracts session/poster session 1 / Austermann, Anja / Yamada, Seiji / Funakoshi, Kotaro / Nakano, Mikio Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010-03-02 p.73-74
Keywords: human-robot interaction, humanoid, user study
ACM Digital Library Link
Summary: In this paper, we compare user behavior towards the humanoid robot ASIMO and the dog-shaped robot AIBO in a simple task in which the user has to teach commands and feedback to the robot.

A biologically inspired approach to learning multimodal commands and feedback for human-robot interaction Spotlight on work in progress session 1 / Austermann, Anja / Yamada, Seiji Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009-04-04 v.2 p.3553-3558
Keywords: human-robot-interaction, machine learning, multimodality, speech perception, user feedback
ACM Digital Library Link
Summary: In this paper we describe a method to enable a robot to learn how a user gives commands and feedback to it by speech, prosody and touch. We propose a biologically inspired approach based on human associative learning. In the first stage, which corresponds to the stimulus encoding in natural learning, we use unsupervised training of HMMs to model the incoming stimuli. In the second stage, the associative learning, these models are associated with a meaning using an implementation of classical conditioning. Top-down processing is applied to take into account the context as a bias for the stimulus encoding. In an experimental study we evaluated the learning of user feedback with our learning method using special training tasks, which allow the robot to explore and provoke situated feedback from the user. In this first study, the robot learned to discriminate between positive and negative feedback with an average accuracy of 95.97%.
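The second stage described above, associating encoded stimuli with meanings via classical conditioning, can be sketched with a delta-rule (Rescorla-Wagner-style) update. The abstract does not give the authors' exact rule, so the update equation, learning rate, and example utterances are illustrative assumptions:

```python
# Sketch of the associative-learning stage only: a delta-rule update
# linking encoded stimulus models to a meaning (here, "positive feedback").
# The rule and the learning rate are assumptions, not the authors' method.
ALPHA = 0.3  # learning rate

def condition(history, strengths=None):
    """history: list of (stimulus, meaning_present) pairs, e.g.
    ('well done', True) when the utterance co-occurs with positive feedback.
    Returns associative strength per stimulus in [0, 1]."""
    strengths = dict(strengths or {})
    for stimulus, present in history:
        v = strengths.get(stimulus, 0.0)
        target = 1.0 if present else 0.0
        strengths[stimulus] = v + ALPHA * (target - v)  # delta rule
    return strengths

# Utterances that reliably co-occur with positive feedback gain strength;
# inconsistently paired ones stay weak (partial extinction).
obs = [("well done", True)] * 8 + [("uh", True), ("uh", False)] * 4
s = condition(obs)
print(s)
```

In the full system, the stimulus keys would be HMM-encoded speech, prosody, or touch patterns rather than raw strings.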

Influences of concerns toward emotional interaction into social acceptability of robots HRI late-breaking abstracts / Nomura, Tatsuya / Kanda, Takayuki / Suzuki, Tomohiro / Yamada, Sachie / Kato, Kensuke Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction 2009-03-09 p.231-232
Keywords: human-robot interaction, negative attitudes, social acceptance
ACM Digital Library Link

Smoothing human-robot speech interactions by using a blinking-light as subtle expression Multimodal systems II (poster session) / Funakoshi, Kotaro / Kobayashi, Kazuki / Nakano, Mikio / Yamada, Seiji / Kitamura, Yasuhiko / Tsujino, Hiroshi Proceedings of the 2008 International Conference on Multimodal Interfaces 2008-10-20 p.293-296
Keywords: human-robot interaction, speech overlap, subtle expression, turn-taking
ACM Digital Library Link
Summary: Speech overlaps, undesired collisions of utterances between systems and users, harm smooth communication and degrade the usability of systems. We propose a method to enable smooth speech interactions between a user and a robot through subtle expressions by the robot in the form of a blinking LED attached to its chest. In concrete terms, we show that, by blinking the LED from the end of the user's speech until the start of the robot's speech, the number of undesirable repetitions, which are responsible for speech overlaps, decreases, while the number of desirable repetitions increases. In experiments, participants played a last-and-first game with the robot. The experimental results suggest that the blinking light can prevent speech overlaps between a user and a robot, speed up dialogues, and improve users' impressions.

Genetic algorithm can optimize hierarchical menus Menu and Command Selection / Matsui, Shouichi / Yamada, Seiji Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems 2008-04-05 v.1 p.1385-1388
ACM Digital Library Link
Summary: Hierarchical menus are now ubiquitous. The performance of a menu depends on many factors: structure, layout, colors, and so on. There has been extensive research on novel menus, but little work on improving performance by optimizing a menu's structure. This paper proposes an algorithm based on the genetic algorithm (GA) for optimizing the performance of menus. The algorithm aims to minimize the average selection time of menu items by considering movement and decision time. We show results for a static hierarchical menu on a cellular phone, where a small screen and a limited input device are assumed. Our work makes two contributions: a novel mathematical optimization model for hierarchical menus, and a novel optimization method based on the GA.
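The optimization objective described above, minimizing frequency-weighted average selection time over menu structures, can be sketched with a toy evolutionary search. The menu model, time constants, item frequencies, and the simple truncation-selection/swap-mutation scheme below are illustrative assumptions, not the paper's implementation:

```python
import random

# Toy model: 8 items placed on a 2-level menu (2 top-level groups of
# 4 slots).  Selection time per slot = a decision/movement cost at each
# level, growing with position (a crude stand-in for the paper's model).
# Item access frequencies are assumed and sum to 1.
FREQ = [0.30, 0.20, 0.15, 0.10, 0.10, 0.07, 0.05, 0.03]

def avg_selection_time(layout):
    """layout: permutation assigning item indices to the 8 leaf slots."""
    total = 0.0
    for slot, item in enumerate(layout):
        group, pos = divmod(slot, 4)
        time = 1.0 + 0.3 * group + 1.0 + 0.3 * pos  # decide+move per level
        total += FREQ[item] * time
    return total

def mutate(layout):
    """Swap two items: keeps the layout a valid permutation."""
    a, b = random.sample(range(len(layout)), 2)
    child = list(layout)
    child[a], child[b] = child[b], child[a]
    return child

def optimize(generations=200, pop_size=30, seed=0):
    random.seed(seed)
    pop = [random.sample(range(8), 8) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=avg_selection_time)
        survivors = pop[: pop_size // 2]  # elitist truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=avg_selection_time)

best = optimize()
print(best, avg_selection_time(best))  # frequent items land in cheap slots
```

With this separable cost, the optimum simply pairs the most frequent items with the fastest slots; the evolutionary search is only worthwhile once the real model adds interactions such as menu breadth/depth trade-offs.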