HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,876,776
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Sato_Y* Results: 27 Sorted by: Date
Records: 1 to 25 of 27
Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks Eye Gaze / Higuchi, Keita / Yonetani, Ryo / Sato, Yoichi Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.5180-5190
ACM Digital Library Link
Summary: In this work, we investigate how remote collaboration between a local worker and a remote collaborator changes when the collaborator's eye fixations are presented to the worker. We track the collaborator's points of gaze on a monitor screen displaying a physical workspace and visualize them in that workspace with a projector or through an optical see-through head-mounted display. Through a series of user studies, we found the following: 1) Eye fixations can serve as a fast and precise pointer to objects of the collaborator's interest. 2) Eyes and other modalities, such as hand gestures and speech, are used differently for object identification and manipulation. 3) Eyes are used for explicit instructions only when they are combined with speech. 4) The worker can predict some intentions of the collaborator, such as his/her current interest and next instruction.
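
The abstract does not detail how gaze points tracked on the monitor are mapped into the physical workspace. A minimal sketch of one common approach, a planar homography estimated from calibration correspondences (the coordinate values below are hypothetical, not the authors'):

import numpy as np
import cv2

# Calibration: four (or more) known correspondences between the collaborator's
# monitor coordinates and projector coordinates in the physical workspace.
monitor_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
projector_pts = np.float32([[35, 20], [1240, 28], [1230, 770], [40, 760]])

H, _ = cv2.findHomography(monitor_pts, projector_pts)

def gaze_to_workspace(gaze_xy):
    """Map a tracked gaze point on the monitor to projector coordinates."""
    pt = np.float32([[gaze_xy]])          # shape (1, 1, 2) for OpenCV
    return cv2.perspectiveTransform(pt, H)[0, 0]

print(gaze_to_workspace((960, 540)))      # fixation near the screen center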

Interactive Cheek Haptic Display with Air Vortex Rings for Stress Modification Late-Breaking Works: Extending User Capabilities / Ueoka, Ryoko / Yamaguchi, Mami / Sato, Yuka Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.1766-1771
ACM Digital Library Link
Summary: We conducted a preliminary evaluation of how people perceive haptic stimuli generated by air vortex rings of different intensities and how these stimuli affect their emotional experience with respect to stress. We developed a prototype cheek haptic display that generates air vortex rings at two air-pressure intensities, presenting a different haptic stimulus on the cheek when a subject feels stress while performing a task. Using the system, we conducted a preliminary experiment to evaluate cognitive awareness via task performance, physiological awareness via changes in autonomic activity, and subjective feelings via a visual analog scale test of three stress feelings, for quantitative evaluation of emotional experience. Although further experiments are needed, the results show that this is a promising method for effectively reducing stress and modifying emotional experience.

Development of a Speech-Driven Embodied Entrainment Character System with Pupil Response Information and Interaction for Learning and Education / Sejima, Yoshihiro / Sato, Yoichiro / Watanabe, Tomio / Jindai, Mitsuru HIMI 2015: 17th International Conference on Human Interface and the Management of Information, Symposium on Human Interface, Part II: Information and Knowledge in Context 2015-08-02 v.2 p.378-386
Keywords: Human interaction; Nonverbal communication; Avatar-Mediated communication; Line-of-Sight; Pupil response
Link to Digital Content at Springer
Summary: We have developed a speech-driven embodied entrainment character called "InterActor" that has the functions of both speaker and listener for supporting human interaction and communication. The character generates communicative actions and movements such as nodding, body movements, and eyeball movements using only speech input. In this paper, we analyze pupil response during face-to-face and non-face-to-face communication with typical users of the character system. On the basis of the analysis results, we enhance the character's functionality and develop an advanced speech-driven embodied entrainment character system that expresses pupil response.

Captioning System with Function of Inserting Mathematical Formula Images Accessible Media / Takeuchi, Yoshinori / Sato, Yuji / Horiike, Kazuki / Wakatsuki, Daisuke / Minagawa, Hiroki / Ohnishi, Noboru ICCHP'14: International Conference on Computers Helping People with Special Needs, Part 1 2014-07-09 v.1 p.33-40
Link to Digital Content at Springer
Summary: We propose a captioning system with a function for inserting mathematical formula images. We match mathematical formulas presented orally during a lecture with those simultaneously projected on a screen in the lecture room, then manually extract the mathematical formula images from the screen for display on the system's monitor. A captionist can input a mathematical formula by pressing the corresponding function key, which is much easier than typing it. We conducted an experiment in which participants evaluated the usefulness of the proposed captioning system. Experimental results showed that 14 of the 22 participants could input more sentences when using the formula-image insertion function than when not using it. Furthermore, the results of a questionnaire confirmed that the proposed system is effective.

Towards explaining the cognitive efficacy of Euler diagrams in syllogistic reasoning: A relational perspective / Mineshima, Koji / Sato, Yuri / Takemura, Ryo / Okada, Mitsuhiro Journal of Visual Languages & Computing 2014-06 v.25 n.3 p.156-169
Keywords: Diagrammatic reasoning; Euler diagram; Efficacy; Categorical syllogisms; Relational inferences; Mental model theory
Link to Article at ScienceDirect
Summary: Although diagrams have been widely used to introduce students to elementary logical reasoning, it is still open to debate in cognitive psychology whether logic diagrams can help untrained people conduct deductive reasoning successfully. In our previous work, we provided empirical evidence for the effectiveness of Euler diagrams in the process of solving categorical syllogisms. In this paper, we discuss why Euler diagrams have such inferential efficacy, in the light of a logical and proof-theoretical analysis of categorical syllogisms and diagrammatic reasoning. As a step towards an explanatory theory of reasoning with Euler diagrams, we argue that their effectiveness in supporting syllogistic reasoning derives from the fact that they are effective ways of representing and reasoning about the relational structures implicit in categorical sentences. Special attention is paid to how Euler diagrams can facilitate the task of checking the invalidity of an inference, a task known to be particularly difficult for untrained reasoners. The distinctive features of our conception of diagrammatic reasoning are made clear by comparing it with the model-theoretic conception of ordinary reasoning developed in mental model theory.

Influence of stimulus and viewing task types on a learning-based visual saliency model Poster abstracts / Ye, Binbin / Sugano, Yusuke / Sato, Yoichi Proceedings of the 2014 Symposium on Eye Tracking Research & Applications 2014-03-26 p.271-274
ACM Digital Library Link
Summary: Learning-based approaches using actual human gaze data have proven to be an efficient way to acquire accurate visual saliency models and have attracted much interest in recent years. However, it remains to be answered how different types of stimulus (e.g., fractal images, and natural images with or without human faces) and viewing tasks (e.g., free viewing or a preference rating task) affect learned visual saliency models. In this study, we quantitatively investigate how learned saliency models differ when using datasets collected in different settings (image contextual level and viewing task) and discuss the importance of choosing appropriate experimental settings.

Multiple robotic wheelchair system able to move with a companion using map information HRI2014 late breaking reports poster / Sato, Yoshihisa / Suzuki, Ryota / Arai, Masaya / Kobayashi, Yoshinori / Kuno, Yoshinori / Fukushima, Mihoko / Yamazaki, Keiichi / Yamazaki, Akiko Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction 2014-03-03 p.286-287
ACM Digital Library Link
Summary: In order to reduce the burden on caregivers facing an increased demand for care, particularly for the elderly, we developed a system whereby multiple robotic wheelchairs can automatically move alongside a companion. This enables a small number of people to assist a substantially larger number of wheelchair users effectively. The system uses an environmental map and position estimation to accurately identify the positional relations between the caregiver (or a companion) and each wheelchair. The wheelchairs are consequently able to follow along even if the caregiver cannot be directly recognized. Moreover, the system is able to establish and maintain appropriate positional relations.

Evaluating Human-like Behaviors of Video-Game Agents Autonomously Acquired with Biological Constraints Long Presentations / Fujii, Nobuto / Sato, Yuichi / Wakama, Hironori / Kazai, Koji / Katayose, Haruhiro Proceedings of the 2013 International Conference on Advances in Computer Entertainment 2013-11-12 p.61-76
Keywords: Autonomous strategy acquisition; Machine learning; Biological constraints; Video game agent; Infinite Mario Bros
Link to Digital Content at Springer
Summary: Designing the behavioral patterns of video game agents (non-player characters: NPCs) is a crucial aspect of developing video games. While various systems aimed at automatically acquiring behavioral patterns have been proposed, and some have successfully obtained patterns stronger than human players, those patterns have looked mechanical. When human players play video games with NPCs as their opponents/supporters, NPCs' behavioral patterns must be not only strong but also human-like. We propose the autonomous acquisition of NPC behaviors that emulate the behaviors of human players. Instead of implementing straightforward heuristics, the behaviors are acquired using reinforcement learning with Q-learning and pathfinding with an A* algorithm, under imposed biological constraints. Human-like behaviors that imply human cognitive processes were obtained by imposing sensory error, perceptual and motion delay, physical fatigue, and a balance between repetition and novelty as the biological constraints in computational simulations using "Infinite Mario Bros." We evaluated the human-like behavioral patterns through subjective assessments, and discuss the possibility of implementing the proposed system.
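
As a rough illustration of the acquisition method named here, the sketch below is tabular Q-learning with two of the listed biological constraints: sensory error (noisy state observation) and motion delay (a queued action). The environment interface and all parameter values are placeholder assumptions, not the authors' implementation:

import random
from collections import deque, defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
SENSE_NOISE = 0.1        # probability of misreading the state (sensory error)
MOTOR_DELAY = 2          # actions take effect a few steps late (motion delay)

Q = defaultdict(lambda: defaultdict(float))
pending = deque([None] * MOTOR_DELAY)    # chosen actions not yet executed

def observe(true_state, all_states):
    # Sensory error: occasionally perceive a wrong state instead of the true one.
    return random.choice(all_states) if random.random() < SENSE_NOISE else true_state

def choose(state, actions):
    # Epsilon-greedy action selection over the learned Q-values.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[state][a])

def step(env):
    """One interaction step; `env` is a placeholder object exposing
    .state, .states, .actions, and .apply(action) -> (reward, next_state)."""
    s = observe(env.state, env.states)
    pending.append(choose(s, env.actions))
    a = pending.popleft()                # motion delay: an older choice fires now
    if a is None:                        # warm-up: queue not yet filled
        return
    r, s_next = env.apply(a)
    best_next = max(Q[s_next].values(), default=0.0)
    # Simplification: the delayed action is credited to the current observation;
    # a fuller model would track the state in which the action was chosen.
    Q[s][a] += ALPHA * (r + GAMMA * best_next - Q[s][a])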

Robotic wheelchair easy to move and communicate with companions Interactivity: research / Kobayashi, Yoshinori / Suzuki, Ryota / Sato, Yoshihisa / Arai, Masaya / Kuno, Yoshinori / Yamazaki, Akiko / Yamazaki, Keiichi Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing Systems 2013-04-27 v.2 p.3079-3082
ACM Digital Library Link
Summary: Although it is desirable for wheelchair users to be able to go out alone by operating wheelchairs on their own, they are often accompanied by caregivers or companions. In designing robotic wheelchairs, therefore, it is important to consider not only how to assist the wheelchair user but also how to reduce companions' load and support their activities. We focus especially on communication between wheelchair users and companions, because face-to-face communication is known to be effective in improving elderly mental health. Hence, we propose a robotic wheelchair able to move alongside a companion. We demonstrate our robotic wheelchair; all attendees can try riding and controlling it.

Touch-consistent perspective for direct interaction under motion parallax Posters / Sugano, Yusuke / Harada, Kazuma / Sato, Yoichi Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces 2012-11-11 p.339-342
ACM Digital Library Link
Summary: A 3D display is a key component for presenting virtual space to users in an intuitive way. A motion parallax-based 3D display can be easily combined with multi-touch surfaces, and is expected to bring a natural experience of viewing and controlling 3D space. However, since virtual objects are rendered in accordance with the head position of the user, their projected positions are not fixed on the display surface. We propose a novel formulation of head-coupled perspective that adaptively changes the position of the projection image plane to maintain touch consistency for direct interaction.
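
One way to see the touch-consistency issue: in baseline head-coupled perspective onto the display plane, a virtual point lying exactly on the surface projects to itself for any head position, while points off the plane shift as the head moves. The sketch below illustrates only this baseline property, not the paper's adaptive image-plane formulation:

import numpy as np

def head_coupled_project(point, head):
    """Project a 3D point onto the display plane z = 0 along the ray
    from the tracked head position (all coordinates in display units)."""
    p, e = np.asarray(point, float), np.asarray(head, float)
    t = e[2] / (e[2] - p[2])            # ray parameter where the ray hits z = 0
    return e[:2] + t * (p[:2] - e[:2])

head_a, head_b = (0.0, 0.0, 60.0), (20.0, -5.0, 55.0)
on_surface = (10.0, 4.0, 0.0)           # a virtual object at the touch surface
print(head_coupled_project(on_surface, head_a))   # [10.  4.]
print(head_coupled_project(on_surface, head_b))   # [10.  4.]  -- touch-consistent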

Autonomously Acquiring a Video Game Agent's Behavior: Letting Players Feel Like Playing with a Human Player Extended Abstracts / Fujii, Nobuto / Sato, Yuichi / Wakama, Hironori / Katayose, Haruhiro Proceedings of the 2012 International Conference on Advances in Computer Entertainment 2012-11-03 p.490-493
Keywords: Strategy acquisition; Biological constraints; Video game
Link to Digital Content at Springer
Summary: Designing behavior patterns of video game agents (COM players) is a crucial aspect of video game development. While various systems aiming to automatically acquire behavior patterns have been proposed, and some have successfully obtained patterns stronger than human players, the obtained behavior patterns look mechanical. We present herein an autonomous acquisition of video game agent behavior that emulates the behavior of a human player. Instead of implementing straightforward heuristics, the behavior is acquired using Q-learning, a reinforcement learning technique, with biological constraints imposed. In experiments using Infinite Mario Bros., we observe that behaviors suggestive of human play are obtained by imposing sensory error, perceptual and motion delay, and fatigue as biological constraints.

Analysis of the correlation between the regularity of work behavior and stress indices based on longitudinal behavioral data Multimodal interaction / Okada, Shogo / Sato, Yusaku / Kamiya, Yuki / Yamada, Keiji / Nitta, Katsumi Proceedings of the 2012 International Conference on Multimodal Interfaces 2012-10-22 p.425-432
ACM Digital Library Link
Summary: Increasingly, longitudinal behavioral data captured by various sensors are being analyzed to improve workplace performance. In this paper, we analyze the correlation between the regularity of workers' behavior and their levels of stress. We used a 23-month behavioral dataset for 18 workers that recorded their use of PCs and their locations in the office. We found that the principal eigen-behaviors extracted from the dataset with PCA represented typical work behaviors, such as overwork using a PC and routine meeting times. We found that more than 80% of each of the 18 workers' individual behaviors could be reconstructed using nine principal eigen-behaviors. In addition, the deviation ranges of the reconstruction accuracies differed significantly for workers in different positions. We conducted a correlation analysis between the workers' behaviors and their stress levels. Our results show a significant negative correlation (r > 0.69, p < 0.01) between the accuracy of reconstructed work behaviors and physical stress levels, and a significant positive correlation between the accuracy of reconstructed behavior and stress dissolution abilities. Our results suggest that a correlation exists between workers' stress levels and the regularity of their work behavior; this correlation will be useful for occupational healthcare.
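
The eigen-behavior analysis described here can be sketched as a rank-k PCA reconstruction of a days-by-features behavior matrix. Reading "accuracy" as retained variance is an assumption, as are the matrix dimensions:

import numpy as np

def reconstruction_accuracy(B, k=9):
    """B: days x features behavior matrix (e.g., time-slot PC-use/location codes).
    Reconstruct B from its top-k principal eigen-behaviors and return the
    fraction of variance retained (one plausible reading of 'accuracy')."""
    B0 = B - B.mean(axis=0)                      # center before PCA
    U, S, Vt = np.linalg.svd(B0, full_matrices=False)
    B_hat = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]   # rank-k reconstruction
    return 1.0 - np.linalg.norm(B0 - B_hat)**2 / np.linalg.norm(B0)**2

rng = np.random.default_rng(0)
B = rng.random((700, 48))                        # ~23 months of daily 48-slot logs
print(reconstruction_accuracy(B, k=9))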

Incorporating visual field characteristics into a saliency map Systems, tools, methods / Kubota, Hideyuki / Sugano, Yusuke / Okabe, Takahiro / Sato, Yoichi / Sugimoto, Akihiro / Hiraki, Kazuo Proceedings of the 2012 Symposium on Eye Tracking Research & Applications 2012-03-28 p.333-336
ACM Digital Library Link
Summary: Characteristics of the human visual field are well known to differ between the central (foveal) and peripheral areas. Existing computational models of visual saliency, however, do not take this biological evidence into account. The existing models compute visual saliency uniformly over the retina and thus have difficulty in accurately predicting the next gaze (fixation) point. This paper proposes to incorporate human visual field characteristics into visual saliency, and presents a computational model for producing such a saliency map. Our model integrates image features obtained by bottom-up computation in such a way that the weights for the integration depend on the distance from the current gaze point; the weights are optimally learned using actual saccade data. The experimental results using a large number of fixation/saccade data with wide viewing angles demonstrate the advantage of our saliency map, showing that it can accurately predict the point where one looks next.
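
A minimal sketch of the proposed integration scheme, assuming Gaussian falloffs as stand-ins for the distance-dependent weights that the paper learns from saccade data:

import numpy as np

def fovea_weighted_saliency(feature_maps, gaze, sigmas):
    """Combine bottom-up feature maps so that each map's contribution depends
    on the distance from the current gaze point. The Gaussian falloffs below
    are illustrative; the paper learns the weights from actual saccade data."""
    h, w = feature_maps[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.hypot(xs - gaze[0], ys - gaze[1])       # distance from fixation
    saliency = np.zeros((h, w))
    for fmap, sigma in zip(feature_maps, sigmas):
        weight = np.exp(-d**2 / (2 * sigma**2))    # weight varies over the retina
        saliency += weight * fmap
    return saliency / saliency.max()

maps = [np.random.rand(60, 80) for _ in range(3)]  # e.g., color/intensity/orientation
print(fovea_weighted_saliency(maps, gaze=(40, 30), sigmas=[10, 25, 60]).shape)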

First Person Shooters as Collaborative Multiprocess Instruments / Berthaut, Florent / Katayose, Haruhiro / Wakama, Hironori / Totani, Naoyuki / Sato, Yuichi NIME 2011: New Interfaces for Musical Expression 2011-05-30 p.44-47
Keywords: the couacs, fps, first person shooters, collaborative, 3D interaction, multiprocess instrument
www.nime.org/proceedings/2011/nime2011_044.pdf
Summary: First Person Shooters are among the most played computer video games. They combine navigation, interaction and collaboration in 3D virtual environments using simple input devices, i.e. mouse and keyboard. In this paper, we study the possibilities brought by these games for musical interaction. We present the Couacs, a collaborative multiprocess instrument which relies on interaction techniques used in FPS together with new techniques adding the expressiveness required for musical interaction. In particular, the Faders For All game mode allows musicians to perform pattern-based electronic compositions.

Behavior-based stigmergic navigation Posters / Sato, Shin-ya / Nakamura, Tetsuya / Sato, Yoshiaki Proceedings of the 2010 International Conference on Ubiquitous Computing 2010-09-26 p.429-430
Keywords: ant colony optimization, destination advertisement, directional pheromone
ACM Digital Library Link
Summary: We propose a new approach for navigating people in a ubiquitous computing environment by using digital pheromone trails, similar to ants being led by pheromones to a food source. Unlike ants, humans can use their intelligence in selecting routes. Our idea is to compile such intelligence by accumulating the history of people's rational behaviors and leaving this history as digital pheromones in the environment for later use. In simulations of navigation services, we found that the original ant colony optimization (ACO), which is a metaheuristic based on the foraging activity of ants, does not completely fit our purpose. Therefore, two modifications were made to the original ACO. Our simulation results show that people can be successfully navigated by simulated services implemented using these modified ACOs.
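
For reference, the pheromone mechanics underlying ACO look roughly like the sketch below. The paper's two modifications (e.g., directional pheromones and destination advertisement) are only gestured at by keying pheromone on directed edges; all constants are assumptions:

import random
from collections import defaultdict

pheromone = defaultdict(lambda: 1.0)    # tau on directed edges (u, v)
RHO, DEPOSIT, BETA = 0.1, 1.0, 2.0      # evaporation, deposit, heuristic power

def choose_next(u, neighbors, dist):
    """Pick the next node with probability proportional to
    pheromone * (1/distance)^BETA, as in standard ACO."""
    weights = [pheromone[(u, v)] * (1.0 / dist[(u, v)]) ** BETA for v in neighbors]
    return random.choices(neighbors, weights=weights)[0]

def reinforce(path, length):
    """Evaporate and deposit along a completed route; shorter routes receive
    more pheromone. Keying tau on *directed* edges loosely mirrors the
    directional-pheromone idea without reproducing the paper's modifications."""
    for u, v in zip(path, path[1:]):
        pheromone[(u, v)] = (1 - RHO) * pheromone[(u, v)] + DEPOSIT / length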

Combining head tracking and mouse input for a GUI on multiple monitors Late breaking results: short papers / Ashdown, Mark / Oka, Kenji / Sato, Yoichi Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005-04-02 v.2 p.1188-1191
ACM Digital Library Link
Summary: The use of multiple LCD monitors is becoming popular as prices fall, but this creates problems for window management and switching between applications. For a single monitor, eye tracking can be combined with the mouse to reduce the amount of mouse movement, but with several monitors the head moves through a large range of positions and angles, which makes eye tracking difficult. We therefore use head tracking to switch the mouse pointer between monitors and the mouse to move within each monitor. In our experiment, users required significantly less mouse movement with the tracking system and preferred using it, although task time actually increased. A graphical prompt (a flashing star) prevented the user from losing the pointer when switching monitors. We discuss our results and ideas for further development.
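
A minimal sketch of the hybrid pointing policy described here — head orientation selects the monitor, the mouse moves within it — with hypothetical yaw sectors standing in for calibration:

def update_pointer(pointer, mouse_dx, mouse_dy, head_yaw_deg, monitors):
    """Hybrid pointing: head orientation picks the monitor, the mouse moves
    within it. Yaw sectors per monitor are hypothetical calibration values."""
    monitor, x, y = pointer
    # Coarse selection: find the monitor whose yaw sector contains the head yaw.
    for i, (lo, hi) in enumerate(m["yaw_range"] for m in monitors):
        if lo <= head_yaw_deg < hi and i != monitor:
            monitor, x, y = i, monitors[i]["w"] // 2, monitors[i]["h"] // 2
            # A real system would flash a prompt here so the pointer isn't lost.
    # Fine positioning: ordinary relative mouse motion, clamped to the monitor.
    m = monitors[monitor]
    x = min(max(x + mouse_dx, 0), m["w"] - 1)
    y = min(max(y + mouse_dy, 0), m["h"] - 1)
    return monitor, x, y

monitors = [{"w": 1920, "h": 1080, "yaw_range": (-60, -10)},
            {"w": 1920, "h": 1080, "yaw_range": (-10, 60)}]
print(update_pointer((0, 500, 400), 12, -3, 25.0, monitors))  # switches to monitor 1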

Ubiquitous display for dynamically changing environment Interactive posters: computers everywhere / Tokuda, Yasuhisa / Iwasaki, Shinsuke / Sato, Yoichi / Nakanishi, Yasuto / Koike, Hideki Proceedings of ACM CHI 2003 Conference on Human Factors in Computing Systems 2003-04-05 v.2 p.976-977
ACM Digital Library Link
Summary: This paper proposes a novel method for ubiquitous displays using projectors in indoor environments. Our method has two distinct features: automatic scene modeling of a dynamically changing indoor environment, and automatic selection of the surfaces onto which contents are displayed, taking into account both geometric and photometric properties. As a result, our method can be applied to dynamically changing scenes, such as a meeting room where furniture and other objects are moved frequently.
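
A toy sketch of the surface-selection idea, scoring candidate surfaces by a geometric term (depth flatness) and a photometric term (intensity uniformity); the metrics and weights are illustrative assumptions, not the paper's:

import numpy as np

def surface_score(depth_patch, gray_patch, w_geo=0.5, w_photo=0.5):
    """Score a candidate display surface from a depth patch (geometric:
    flat surfaces have low depth variance) and a grayscale patch
    (photometric: untextured, evenly lit surfaces have low intensity variance)."""
    flatness = 1.0 / (1.0 + np.var(depth_patch))
    uniformity = 1.0 / (1.0 + np.var(gray_patch))
    return w_geo * flatness + w_photo * uniformity

def pick_surface(candidates):
    """candidates: list of (name, depth_patch, gray_patch); return best name."""
    return max(candidates, key=lambda c: surface_score(c[1], c[2]))[0]

rng = np.random.default_rng(1)
wall = ("wall", np.full((8, 8), 2.0), np.full((8, 8), 180.0))
shelf = ("shelf", rng.random((8, 8)) * 2, rng.random((8, 8)) * 255)
print(pick_surface([wall, shelf]))   # -> "wall"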

Vision-Based Face Tracking System for Large Displays Perceptual Interfaces and Responsive Environments / Nakanishi, Yasuto / Fujii, Takashi / Kitajima, Kotaro / Sato, Yoichi / Koike, Hideki Proceedings of the 2002 International Conference on Ubiquitous Computing 2002-09-29 p.152-159
Link to Digital Content at Springer
Summary: In this paper, we present a stereo-based face tracking system which can track the 3D position and orientation of a user in real-time, and the system's application for interaction with a large display. Our tracking system incorporates dynamic update of template images for tracking facial features so that the system can successfully track a user's face for a large angle of rotation. Another advantage of our tracking system is that it does not require a user to manually initialize the tracking process, which is critical for natural and intuitive interaction. Based on our face tracking system, we have implemented several prototype applications which change information shown on a large display adaptively according to the location looked at by a user.

Two-handed drawing on augmented desk system Focusing attention / Chen, Xinlei / Koike, Hideki / Nakanishi, Yasuto / Oka, Kenji / Sato, Yoichi Proceedings of the 2002 International Conference on Advanced Visual Interfaces 2002-05-22 p.219-222
Keywords: augmented reality, computer vision, direct manipulation, finger/hand recognition, gesture recognition, perceptive user interface, two-handed interaction
ACM Digital Library Link
Summary: This paper describes a two-handed drawing tool developed on our augmented desk system. Using our real-time finger tracking method, a user can draw and manipulate objects interactively with his/her own fingers and hands. Following prior work on two-handed interaction, different roles are assigned to each hand. The right hand is used to draw and manipulate objects; using gesture recognition, primitive objects can be drawn from the user's handwriting. The left hand is used to manipulate menus and to assist the right hand. By closing all left-hand fingers, users can make structural radial menus appear around the left hand and select items with a left-hand finger. The left hand also assists in drawing tasks, e.g., specifying the center of a circle, the top-left corner of a rectangle, or the object to be copied.

Two-handed drawing on augmented desk Short Talks / Koike, Hideki / Chen, Xinlei / Nakanishi, Yasuto / Oka, Kenji / Sato, Yoichi Proceedings of ACM CHI 2002 Conference on Human Factors in Computing Systems 2002-04-20 v.2 p.760-761
ACM Digital Library Link
Summary: This paper describes a two-handed drawing tool on EnhancedDesk. In our experiments, the tool showed better performance than traditional drawing tools when drawing simple figures. The subjects also reported that the tool's usage was easy to learn.

SnapLink: Interactive Object Registration and Recognition for Augmented Desk Interface / Nishi, Takahiro / Sato, Yoichi / Koike, Hideki Proceedings of IFIP INTERACT'01: Human-Computer Interaction 2001-07-09 p.240-246

Vision-based face tracking system for window interface: prototype application and empirical studies Short talks: input by hand, eye, and brain / Kitajima, Kotaro / Sato, Yoichi / Koike, Hideki Proceedings of ACM CHI 2001 Conference on Human Factors in Computing Systems 2001-03-31 v.2 p.359-360
ACM Digital Library Link
Summary: In this paper, we study the effective use of gaze information for human-computer interaction based on a stereo-based vision system which can track the 3D position and orientation of a user in real-time. We have integrated our face-tracking system into the X Window interface system, and conducted experiments to evaluate the effectiveness of our proposed framework for using gaze information for window interfaces.

Interactive object registration and recognition for augmented desk interface Short talks: displaying beyond desktop / Nishi, Takahiro / Sato, Yoichi / Koike, Hideki Proceedings of ACM CHI 2001 Conference on Human Factors in Computing Systems 2001-03-31 v.2 p.371-372
ACM Digital Library Link
Summary: Identification of objects in the real world plays a key role in human-computer interaction in a computer-augmented environment based on augmented reality techniques. To provide natural interaction in such environments, an interface system must know which objects a user is using. In previously developed interface systems, real objects are identified using specially designed tags attached to the objects. In this work, we propose a new method of interactive object recognition and registration for more natural and intuitive interaction, without using any tags. In particular, we introduce interactive object registration and recognition that combines direct manipulation with the user's hands and a color-based object recognition algorithm.

Integrating paper and digital information on EnhancedDesk: a method for realtime finger tracking on an augmented desk system / Koike, Hideki / Sato, Yoichi / Kobayashi, Yoshinori ACM Transactions on Computer-Human Interaction 2001 v.8 n.4 p.307-322
Keywords: Augmented reality, computer vision, computer-supported learning, education, finger/hand recognition, infrared camera, perceptive user interfaces
ACM Digital Library Link
Summary: This article describes the design and implementation of an augmented desk system, named EnhancedDesk, which smoothly integrates paper and digital information on a desk. The system provides users with an intelligent environment that automatically retrieves and displays digital information corresponding to the real objects (e.g., books) on the desk by using computer vision. The system also gives users direct manipulation of digital information with their own hands and fingers for more natural and intuitive interaction. Experiments with our first prototype system identified some critical issues for augmented desk systems in pursuing rapid and fine recognition of hands and fingers. To overcome these issues, we developed a novel method for realtime finger tracking on an augmented desk system by introducing an infrared camera, pattern matching with normalized correlation, and a pan-tilt camera. We then show an interface prototype on EnhancedDesk: an application to a computer-supported learning environment, named Interactive Textbook. The system shows how effective the integration of paper and digital information is, and how natural and intuitive direct manipulation of digital information with the hands and fingers is.
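
The article names the key ingredients of the tracking method: an infrared camera and pattern matching with normalized correlation. A minimal sketch of that pipeline in OpenCV, with the threshold values as assumptions:

import cv2
import numpy as np

def find_fingertips(ir_frame, tip_template, body_temp_thresh=200, match_thresh=0.7):
    """Segment the hand in an 8-bit infrared image by temperature threshold,
    then locate fingertips by normalized correlation against a small fingertip
    template. Both threshold values are assumptions for illustration."""
    _, hand = cv2.threshold(ir_frame, body_temp_thresh, 255, cv2.THRESH_BINARY)
    scores = cv2.matchTemplate(hand, tip_template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= match_thresh)
    th, tw = tip_template.shape
    return [(x + tw // 2, y + th // 2) for x, y in zip(xs, ys)]   # tip centers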

Interactive Textbook and Interactive Venn Diagram: Natural and Intuitive Interfaces on Augmented Desk System Tangible UI Systems / Koike, Hideki / Sato, Yoichi / Kobayashi, Yoshinori / Tobita, Hiroaki / Kobayashi, Motoki Proceedings of ACM CHI 2000 Conference on Human Factors in Computing Systems 2000-04-01 v.1 p.121-128
Keywords: Augmented reality, Computer vision, Finger/hand recognition, Information retrieval, Venn diagram, Education, Computer supported learning
Broken Link to ACM Digital Library
Summary: This paper describes two interface prototypes that we have developed on our augmented desk interface system, EnhancedDesk. The first application is Interactive Textbook, which is aimed at providing an effective learning environment. When a student opens a page describing experiments or simulations, Interactive Textbook automatically retrieves digital contents from its database and projects them onto the desk; it also allows the student to interact hands-on with the digital contents. The second application is the Interactive Venn Diagram, which is aimed at supporting effective information retrieval. Instead of keywords, the system uses real objects such as books or CDs as keys for retrieval. The system projects a circle around each book; data corresponding to the book are then retrieved and projected inside the circle. By moving two or more circles so that they intersect, the user can compose a Venn diagram interactively on the desk. We also describe the new technologies introduced in EnhancedDesk that enable us to implement these applications.