HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,288,286
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Yonezawa_T* Results: 28 Sorted by: Date
Records: 1 to 25 of 28
ReFabricator: Integrating Everyday Objects for Digital Fabrication Interactivity Demos / Yamada, Suguru / Morishige, Hironao / Nozaki, Hiroki / Ogawa, Masaki / Yonezawa, Takuro / Tokuda, Hideyuki Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3804-3807
ACM Digital Library Link
Summary: Because current digital fabrication relies heavily on 3D printing, it raises several concerns, such as printing cost (both financial and temporal) and the sometimes overly homogeneous impression of plastic filament. To address this problem, we propose ReFabricator, a computational fabrication tool that integrates everyday objects into digital fabrication. ReFabrication is a fabrication concept that mixes the ideas of reuse and digital fabrication, aiming to fabricate new functional shapes from ready-made products by effectively utilizing their behavior. As a system prototype, we have implemented a design tool that enables users to gather everyday objects and reassemble them into another functional shape, taking advantage of both analog and digital fabrication. In particular, the system calculates the optimized positional relationships among objects and generates joint objects that bond them together to achieve a given shape.

Auditory Browsing Interface of Ambient and Parallel Sound Expression for Supporting One-to-many Communication Natural Interaction / Yonezawa, Tomoko DAPI 2014: 3rd International Conference on Distributed, Ambient, and Pervasive Interactions 2015-08-02 p.224-233
Keywords: Auditory space; One-to-many parallel communication; Browsing interface; Audience interaction
Link to Digital Content at Springer
Summary: In this paper, we introduce an auditory browsing system for supporting one-to-many communication in parallel with an ongoing discourse, lecture, or presentation. From the viewpoint of active participation, the audience's live reactions should be reflected back to the main speech. To browse numerous live comments from the audience, the speaker stretches her/his neck toward a particular section of the virtual audience group. We adopt the metaphor of "looking inside" toward the direction of a seating position, with the audience's voices repositioned and overlaid according to the length of each voice, regardless of where the real audience members are seated. As a result, the speaker could browse the audience's comments and show communicative behaviors when she/he was interested in a particular group of utterances.

Indirect Monitoring of Cared Person by Onomatopoeic Text of Environmental Sound and User's Physical State Location, Motion and Activity Recognition / Naka, Yusuke / Yoshida, Naoto / Yonezawa, Tomoko DAPI 2014: 3rd International Conference on Distributed, Ambient, and Pervasive Interactions 2015-08-02 p.506-517
Link to Digital Content at Springer
Summary: In this paper, we propose a nonverbal, descriptive method for creating daily-life logs, in text format, on behalf of people who require monitoring and/or assistance in taking care of themselves. The user's environmental situations are converted into and recorded as onomatopoeic texts in order to preserve privacy. The user's ambient context is detected by the accelerometer, gyro sensor, and microphone in her/his smart device. We propose a soft monitoring system, named Soundgram, that utilizes nonverbal expressions in the form of both onomatopoeic text logs and symbolic sound expressions. We investigated impressions regarding the monitoring of the elderly and the proposed system via a questionnaire distributed to two groups of potential users, the elderly and middle-aged people, capturing the viewpoints of both the recipient and the caregiver.
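The paper's conversion rules are not given here; the sketch below only illustrates the idea of rendering detected ambient events as privacy-preserving onomatopoeic log lines. The event classes and onomatopoeia are invented placeholders, not the system's vocabulary.
```python
# Illustrative sketch: render detected ambient events as onomatopoeic text.
# Event classes and onomatopoeia are placeholders, not the paper's vocabulary.
from datetime import datetime

ONOMATOPOEIA = {
    "footsteps": "pata-pata",
    "running_water": "jaa-jaa",
    "door": "batan",
    "snoring": "guu-guu",
}

def log_event(event_class: str, motion_level: float) -> str:
    """One privacy-preserving log entry: a time stamp plus onomatopoeia."""
    sound = ONOMATOPOEIA.get(event_class, "...")
    if motion_level > 0.5:           # strong accelerometer reading:
        sound = f"{sound} {sound}!"  # intensify the expression
    return f"{datetime.now():%H:%M} {sound}"

print(log_event("footsteps", 0.8))  # e.g. "14:02 pata-pata pata-pata!"
```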

Synchronized AR environment for multiple users using animation markers Poster abstracts / Yamazoe, Hirotake / Yonezawa, Tomoko Proceedings of the 2014 ACM Symposium on Virtual Reality Software and Technology 2014-11-11 p.237-238
ACM Digital Library Link
Summary: In this paper, we propose an AR environment in which multiple users can synchronously share and see AR contents, based on our proposed animation markers carrying time-sequence information. Many AR systems have been proposed, some of which consider simultaneous use by multiple users. However, these systems require a synchronization mechanism among users (devices) for simultaneous display, and such mechanisms complicate the systems. We therefore propose an animation marker that can transmit temporal (frame) information, enabling a synchronized AR environment among multiple users.
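The synchronization argument can be made concrete with a small sketch: because each animation-marker frame encodes its own index, any device that decodes the marker can place itself on the shared content timeline without exchanging messages. The marker rate and function names below are assumptions for illustration, not the paper's API.
```python
# Sketch of clock-free synchronization via an animated marker (assumed rate).
import time

MARKER_FPS = 10.0  # assumed rate at which the marker cycles its frames

def sync_offset(marker_index: int, local_clock: float) -> float:
    """Offset from this device's clock to the shared content timeline.

    marker_index is the frame index decoded from the animated marker;
    every device that sees the same marker derives the same timeline,
    so simultaneous rendering needs no device-to-device messaging."""
    content_time = marker_index / MARKER_FPS  # seconds into the AR content
    return local_clock - content_time

offset = sync_offset(marker_index=42, local_clock=time.monotonic())
playback_position = time.monotonic() - offset  # shared timeline, any time later
```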

SENSeTREAM: enhancing online live experience with sensor-federated video stream using animated two-dimensional code Mobile applications / Yonezawa, Takuro / Ogawa, Masaki / Kyono, Yutaro / Nozaki, Hiroki / Nakazawa, Jin / Nakamura, Osamu / Tokuda, Hideyuki Proceedings of the 2014 International Joint Conference on Pervasive and Ubiquitous Computing 2014-09-13 v.1 p.301-305
ACM Digital Library Link
Summary: We propose a novel technique that aggregates multiple sensor streams, generated by entirely different types of sensors, into a visually enhanced video stream. This paper presents the major features of SENSeTREAM and demonstrates how it enhances user experience in an online live music event. Since SENSeTREAM is a video stream with sensor values encoded in a two-dimensional graphical code, it can transmit multiple sensor data streams while maintaining their synchronization. A SENSeTREAM can be transmitted via existing live streaming services and saved to existing video archive services. We implemented a prototype SENSeTREAM generator and deployed it at an online live music event. Through this pilot study, we confirmed that SENSeTREAM works with popular streaming services and provides a new media experience for live performances. We also indicate future directions for visual stream aggregation and its applications.
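As a rough illustration of the encoding step, the sketch below serializes one time-slice of sensor readings and composites it onto a video frame as a QR code (one possible 2D code). It assumes the `qrcode` and `Pillow` Python libraries; the field names and layout are invented, not the SENSeTREAM format.
```python
# Sketch: embed one slice of sensor data into a video frame as a 2D code,
# so sensor values and video stay frame-synchronized through any pipeline
# that preserves the image. Field names are illustrative assumptions.
import json
import qrcode
from PIL import Image

def embed_sensors(frame: Image.Image, readings: dict, frame_no: int) -> Image.Image:
    payload = json.dumps({"frame": frame_no, **readings})
    code = qrcode.make(payload).resize((120, 120))
    frame.paste(code, (frame.width - 130, 10))  # overlay in the top-right corner
    return frame

frame = Image.new("RGB", (640, 360), "black")
frame = embed_sensors(frame, {"accel": [0.1, 0.0, 9.8], "cheer": 0.7}, frame_no=120)
```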

LiPS: linked participatory sensing for optimizing social resource allocation 3rd International Workshop on Mobile Systems for Computational Social Science / Sakamura, Mina / Yonezawa, Takuro / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Adjunct Proceedings of the 2014 International Joint Conference on Pervasive and Ubiquitous Computing 2014-09-13 v.2 p.1015-1024
ACM Digital Library Link
Summary: This paper proposes linked participatory sensing (LiPS), a concept that divides a complex sensing task into small tasks and links them to one another to optimize social resource allocation. Participatory sensing has recently been spreading, but its sensing tasks remain very simple and easy for participants to handle (e.g., "Please input the number of people standing in a queue."). To adapt to high-level tasks that require specific skills, such as those in engineering or the medical profession, or authority, such as that of an event organizer, we need to optimize social resource allocation because the number of such professionals is limited. To carry out complex sensing tasks efficiently, LiPS divides a complex sensing task into small tasks and links them by assigning appropriate sensors. LiPS can treat physical sensors and humans as hybrid multi-level sensors, and the task provider can arrange social resource allocation toward the goal of each divided sensing task. In this paper, we describe the design and development of the LiPS system. We also conducted an in-lab experiment with a first prototype of the hybrid sensing system and discuss a model for the further system based on users' feedback.
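As a rough illustration of the linking idea, a divided task list could be matched greedily against a pool of device and human "sensors" by required skill. The task and skill names below are invented for the example, not from the LiPS system.
```python
# Sketch: assign each divided subtask to a free resource with the right skill.
SUBTASKS = [("count_queue", "camera"),
            ("triage_patient", "medical"),
            ("approve_reroute", "organizer")]

POOL = {"camera": ["cam_03"], "medical": ["nurse_1"], "organizer": ["staff_a"]}

def allocate(subtasks, pool):
    """Greedy allocation; None marks a subtask left unstaffed."""
    plan = {}
    for task, skill in subtasks:
        free = pool.get(skill, [])
        plan[task] = free.pop(0) if free else None
    return plan

print(allocate(SUBTASKS, POOL))
# {'count_queue': 'cam_03', 'triage_patient': 'nurse_1', 'approve_reroute': 'staff_a'}
```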

A Structure of Wearable Message-Robot for Ubiquitous and Pervasive Services User Experience in Intelligent Environments / Yonezawa, Tomoko / Yamazoe, Hirotake DAPI 2014: 2nd International Conference on Distributed, Ambient, and Pervasive Interactions 2014-06-22 p.400-411
Link to Digital Content at Springer
Summary: In this paper, we introduce a haptic message-robot that gives user-friendly physical contact while it delivers messages to the user. This robot is expected to help elderly people who need to go out but feel anxious about doing so. The robot's pervasive support via the network will provide the user with a human-like service, as though it were a real caregiver. The system produces haptic stimuli corresponding to the user's clothing and posture. We investigated two implementations: the first combines haptic stimuli and anthropomorphic motion to express physical contact, and the second is a simplified system for smartphones that provides ubiquitous services. Subjective evaluations on a course with two branching points showed the effectiveness of both the robot's motion and the haptic stimuli for intelligibility and affective communication.

Breatter: a simulation of living presence with breath that corresponds to utterances HRI2014 late breaking reports poster / Nakatani, Yukari / Yonezawa, Tomoko Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction 2014-03-03 p.254-255
ACM Digital Library Link
Summary: We propose an expressive breathing method for a robot in accordance with its utterances. This synchronized expression is expected to serve as a new physical modality that makes the robot seem like a living presence. Especially when the human-robot distance is small, as in intimate communication, anthropomorphized presences should naturally imitate the activities of living beings, such as breathing. A stuffed-toy robot contains a speaker and a fan motor in its head to express breathing and vocal sounds simultaneously. The strength of the breath is determined by the total volume and the power in the high-frequency band.
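The stated mapping from utterance to breath strength is easy to sketch; the cutoff frequency and weights below are illustrative assumptions, not the paper's parameters.
```python
# Sketch: fan strength from an audio window's volume and high-frequency power.
# The 4 kHz cutoff and the 0.6/0.4 weights are assumptions for illustration.
import numpy as np

def breath_strength(samples: np.ndarray, rate: int = 16000) -> float:
    """Map one short audio window to a 0..1 fan-strength value."""
    volume = np.sqrt(np.mean(samples ** 2))            # RMS of the window
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    hf_ratio = spectrum[freqs > 4000].sum() / (spectrum.sum() + 1e-9)
    return float(np.clip(0.6 * volume + 0.4 * hf_ratio, 0.0, 1.0))
```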

Involuntary expression of embodied robot adopting goose bumps HRI2014 late breaking reports poster / Yonezawa, Tomoko / Meng, Xiaoshun / Yoshida, Naoto / Nakatani, Yukari Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction 2014-03-03 p.322-323
ACM Digital Library Link
Summary: In this paper, we propose an involuntary expression for embodied robots that adopts goose bumps. The goose bumps are caused not only by external stimuli, such as cold temperature, but also by the robot's internal state, such as fear. For more natural anthropomorphism, the combination of involuntary and voluntary expressions should enable realistic animacy and life-like agency. The bumps on the robot's skin are generated by changing the lengths of thin rods protruding from holes; the lengths are controlled by a servo motor that pulls nylon strings connected to the bases of the rods.

EverCopter: continuous and adaptive over-the-air sensing with detachable wired flying objects Poster, demo, & video presentations / Kyono, Yutaro / Yonezawa, Takuro / Nozaki, Hiroki / Ogawa, Masaki / Ito, Tomotaka / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.299-302
ACM Digital Library Link
Summary: This paper proposes EverCopter, which provides continuous and adaptive over-the-air sensing with detachable wired flying objects. While a major advantage of sensing systems built on battery-operated MAVs is wide sensing coverage, their sensing time is limited by the available energy. We propose dynamically rechargeable flying objects, called EverCopters, which achieve both long sensing time and wide sensing coverage through two characteristics. First, multiple EverCopters can be tied in a row by power supply cables. Since the root EverCopter in a row is connected to a DC power supply on the ground, each EverCopter can fly without a battery; their sensing time is therefore unlimited, as long as the power supply on the ground does not fail. Second, a leaf EverCopter can detach itself from the row to obtain wider sensing coverage. While detached, an EverCopter runs on its own battery; when the remaining energy becomes low, it flies back to the row to recharge.
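The detach-and-return behavior amounts to a simple control policy; a minimal sketch with assumed battery thresholds (not values from the paper):
```python
# Sketch of a leaf EverCopter's policy; thresholds are illustrative.
LOW_BATTERY = 0.2    # return to the powered row below this level
FULL_BATTERY = 0.95  # only detach with a nearly full battery

def next_action(battery: float, attached: bool, widen_coverage: bool) -> str:
    """Pick the copter's next move from its power state and mission."""
    if attached:
        # On the cable-powered row, flight time is unlimited; detach only
        # when wider coverage is wanted and the battery is topped up.
        return "detach" if widen_coverage and battery >= FULL_BATTERY else "stay_on_row"
    # Off the row, the copter runs on its own battery.
    return "return_to_row" if battery <= LOW_BATTERY else "keep_sensing"

print(next_action(battery=0.15, attached=False, widen_coverage=True))  # return_to_row
```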

FRAGWRAP: fragrance-encapsulated and projected soap bubble for scent mapping Poster, demo, & video presentations / Kyono, Yutaro / Yonezawa, Takuro / Nozaki, Hiroki / Nakazawa, Jin / Tokuda, Hideyuki Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.311-314
ACM Digital Library Link
Summary: This paper proposes FRAGWRAP, which maps scent to real objects in real time. To achieve this, we combine fragrance-encapsulated soap bubbles with projection mapping. Since human olfaction is known to work in combination with vision, we encapsulate fragrance in a soap bubble to stimulate the nose and also project a 3D image of the fragrance onto the bubble in real time. In this video, we present our first prototype, which automatically inserts fragrance into a soap bubble and projects images onto the moving bubble. The whole system is activated by speech recognition.

Reinforcing co-located communication practices through interactive public displays Workshop: human interfaces for civic and urban engagement / Ogawa, Masaki / Jurmu, Marko / Ito, Tomotaka / Yonezawa, Takuro / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.737-740
ACM Digital Library Link
Summary: In recent years, the steady emergence of digital communication, especially social media, has increased the "placelessness" of inter-person communication practices, i.e., it has lessened the need to be co-located in order to communicate. When these communication practices carry over to co-located settings, they introduce redundancy and can even harm the co-located context, since the use of personal technologies tends to isolate users from their surroundings. In this position paper, we want to raise awareness of how interactive public displays could alleviate this redundancy and potential isolation. We present a model of reinforcing co-located communications and illustrate it through example use cases.

pARnorama: 360 degree interactive video for augmented reality prototyping Workshop: wearable systems for industrial augmented reality applications / Berning, Matthias / Yonezawa, Takuro / Riedel, Till / Nakazawa, Jin / Beigl, Michael / Tokuda, Hide Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.1471-1474
ACM Digital Library Link
Summary: Designing novel and meaningful interactions in the domain of Augmented Reality (AR) requires an efficient and appropriate methodology. A user-centered design process requires the construction and evaluation of several prototypes of increasing technical fidelity. Although the main content of the application can already be conveyed with prerendered video, one of the main interactions in AR -- the user-selected viewpoint -- only becomes available at a very late stage. We propose the use of panoramic 360° video for scenario-based user evaluation, in which the user can select his point of view during playback. Initial users report a high degree of immersion in the constructed scenario, even for handheld AR.

Wearable partner agent with anthropomorphic physical contact with awareness of user's clothing and posture Context and awareness / Yonezawa, Tomoko / Yamazoe, Hirotake Proceedings of the 2013 International Symposium on Wearable Computers 2013-09-08 p.77-80
ACM Digital Library Link
Summary: In this paper, we introduce a wearable partner agent that makes physical contact corresponding to the user's clothing, posture, and detected contexts. Physical contact is generated by combining haptic stimuli with anthropomorphic motions of the agent. The agent performs two types of behavior: a) it notifies the user of a message by patting the user's arm, and b) it generates emotional expression by strongly enfolding the user's arm. Our experimental results demonstrate that haptic communication from the agent increases the intelligibility of the agent's messages and familiar impressions of the agent.

Enhancing communication and dramatic impact of online live performance with cooperative audience control On the body and on the move / Yonezawa, Takuro / Tokuda, Hideyuki Proceedings of the 2012 International Conference on Ubiquitous Computing 2012-09-05 p.103-112
ACM Digital Library Link
Summary: Recent progress in information technology enables people to easily broadcast events live on the Internet. Although the advantage of the Internet is live communication between a performer and listeners, the current mode of communication is writing comments via Twitter, Facebook, or similar messaging networks. In one type of live broadcast, musical performances, it is difficult for a musician to communicate with listeners by writing comments while playing an instrument. We propose a new communication mode between performers who play musical instruments and their listeners, which enables listeners to control the performer's camera or lighting remotely. The results of a four-week experiment confirm the emergence of nonverbal communication between performer and listeners, and among listeners, which increases camaraderie amongst listeners and performers. Additionally, the dramatic impact of a performance is increased by enabling listeners to control camera actions such as zooming in or panning in real time. The results also provide implications for the design of future interactive live broadcasting services.

LiDSN: a method to deploy wireless sensor networks securely based on light communication Demos / Doan, Giang / Nguyen, Minh / Takimoto, Takuya / Yonezawa, Takuro / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Proceedings of the 2012 International Conference on Ubiquitous Computing 2012-09-05 p.539-540
Summary: Deploying Wireless Sensor Networks (WSNs) securely still requires users to have certain skills and to exert effort. In the coming "sensors everywhere" future, a much simpler deployment method will be necessary for end-users. We propose LiDSN (Light Communication for Deploying Secure Wireless Sensor Networks), which enables users to accomplish deployment tasks through simple interaction. LiDSN leverages light-based communication between an LED and a light sensor to add a new sensor node securely to an existing WSN. Through a touching interaction, the new node's ID and secret key are transmitted to the WSN, which can then identify which node should be added while maintaining its security.
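A minimal sketch of such an LED-to-light-sensor channel: blink the node ID and key as on/off pulses behind a short preamble. The framing is invented for illustration and makes no claim to match the LiDSN protocol.
```python
# Sketch: frame a node ID and secret key as light pulses and decode them back.
def to_pulses(node_id: int, key: bytes) -> list[int]:
    """Encode ID and key as bits; a 1-0-1 preamble marks the start."""
    payload = node_id.to_bytes(2, "big") + key
    bits = [int(b) for byte in payload for b in f"{byte:08b}"]
    return [1, 0, 1] + bits

def from_pulses(bits: list[int]) -> tuple[int, bytes]:
    """Strip the preamble and rebuild (node_id, key) from 8-bit groups."""
    payload = bits[3:]
    data = bytes(int("".join(map(str, payload[i:i + 8])), 2)
                 for i in range(0, len(payload), 8))
    return int.from_bytes(data[:2], "big"), data[2:]

nid, key = from_pulses(to_pulses(7, b"\xde\xad\xbe\xef"))  # round-trips to (7, key)
```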

Detection, classification and visualization of place-triggered geotagged tweets Workshop on Location-Based Social Networks (LBSN 2012) / Hiruta, Shinya / Yonezawa, Takuro / Jurmu, Marko / Tokuda, Hideyuki Proceedings of the 2012 International Conference on Ubiquitous Computing 2012-09-05 p.956-963
Summary: This paper proposes and evaluates a method to detect and classify tweets that are triggered by the places where users are located. Recently, many related works have addressed detecting real-world events from social media such as Twitter. However, geotagged tweets often contain noise, i.e., tweets whose content is unrelated to the user's location, which is a problem for detecting real-world events. To address this, we define the Place-Triggered Geotagged Tweet: a tweet that has both a geotag and a content-based relation to the user's location. We designed and implemented a keyword-based matching technique to detect and classify place-triggered geotagged tweets. We evaluated the performance of our method against a ground truth provided by 18 human classifiers and achieved 82% accuracy. Additionally, we present two example applications for visualizing place-triggered geotagged tweets.
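A minimal sketch of a keyword-based matcher of this kind: a tweet counts as place-triggered when its text mentions a term tied to the category of the place it was posted from. The categories and keyword lists are illustrative, not the paper's.
```python
# Sketch: keyword matching between tweet text and the posting place's category.
PLACE_KEYWORDS = {
    "station":    ["train", "platform", "delay"],
    "restaurant": ["lunch", "delicious", "menu"],
    "stadium":    ["game", "goal", "kickoff"],
}

def is_place_triggered(tweet_text: str, place_category: str) -> bool:
    """True if the tweet's content relates to the place it was sent from."""
    text = tweet_text.lower()
    return any(kw in text for kw in PLACE_KEYWORDS.get(place_category, []))

print(is_place_triggered("Goal!! what a kickoff", "stadium"))  # True
print(is_place_triggered("so sleepy today...", "stadium"))     # False -> noise
```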

Lupe: information access method based on distance between user and sensor nodes using AR technology Demonstration sessions / Takimoto, Takuya / Karatsu, Yutaka / Yonezawa, Takuro / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Proceedings of the 2011 International Conference on Ubiquitous Computing 2011-09-17 p.479-480
ACM Digital Library Link
Summary: This paper proposes an information access method based on the distance between users and objects. In addition, we demonstrate the Lupe system, which visualizes WSN status information using our method. An evaluative experiment shows that our method is useful in environments where many sensors are installed. As a result, our method and the Lupe system enable end-users to easily browse WSN status information.

Transferring information from mobile devices to personal computers by using vibration and accelerometer Demonstration sessions / Yonezawa, Takuro / Ito, Tomotaka / Tokuda, Hideyuki Proceedings of the 2011 International Conference on Ubiquitous Computing 2011-09-17 p.487-488
ACM Digital Library Link
Summary: We propose a simple interaction for transferring information from a smartphone to a laptop/tablet PC. We often encounter situations in which we need to send a URL that we first accessed on a mobile device to a personal computer (PC) in order to see the web page on a wider screen. To support this transfer, we combine the vibrator in smartphones with the accelerometer in laptop/tablet PCs: the URL is encoded into vibration patterns, and these patterns are detected and decoded by the PC's accelerometer. We demonstrate the interaction's efficiency and practicality with actual products.
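A minimal sketch of the encode/decode idea, with an assumed bit duration and burst-detection threshold (the paper's actual modulation is not given here):
```python
# Sketch: URL bits -> vibration bursts; accelerometer magnitude -> URL.
import numpy as np

BIT_MS = 100  # assumed duration of one bit

def encode(url: str) -> list[int]:
    """URL -> bit sequence driving the phone's vibrator (1 = vibrate)."""
    return [int(b) for ch in url.encode() for b in f"{ch:08b}"]

def decode(accel_mag: np.ndarray, rate_hz: int) -> str:
    """Accelerometer magnitude samples -> URL, one bit per BIT_MS window."""
    per_bit = int(rate_hz * BIT_MS / 1000)
    bits = [int(accel_mag[i:i + per_bit].std() > 0.05)  # burst in this window?
            for i in range(0, len(accel_mag) - per_bit + 1, per_bit)]
    chars = [int("".join(map(str, bits[i:i + 8])), 2)
             for i in range(0, len(bits) // 8 * 8, 8)]
    return bytes(chars).decode(errors="ignore")
```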

The 5th ACM international workshop on context-awareness for self-managing systems (CASEMANS 2011) Workshop summaries / Yonezawa, Tomoko / Dargie, Waltenegus Proceedings of the 2011 International Conference on Ubiquitous Computing 2011-09-17 p.623-624
ACM Digital Library Link
Summary: The CASEMANS 2011 workshop provides a platform for researchers in context-aware computing and self-managing systems to investigate the usefulness of context-awareness in emerging applications such as rescue applications, disaster avoidance and mitigation mechanisms, and social networking. These applications typically require timely context information to localise people and to share information based on shared interests as well as situations. An interesting research question is how to define and capture mutual context and how to share information in an efficient manner. Hence, the workshop focuses on context acquisition, modelling, reasoning, and actuating techniques.

Remote gaze estimation with a single camera based on facial-feature tracking without special calibration actions Prediction, bias, estimation / Yamazoe, Hirotake / Utsumi, Akira / Yonezawa, Tomoko / Abe, Shinji Proceedings of the 2008 Symposium on Eye Tracking Research & Applications 2008-03-26 p.245-250
Keywords: daily-life situations, non-intrusive, remote gaze tracking
ACM Digital Library Link
Summary: We propose a real-time gaze estimation method based on facial-feature tracking using a single video camera that does not require any special user action for calibration. Many gaze estimation methods have already been proposed; however, most conventional gaze tracking algorithms can only be applied in experimental environments due to their complex calibration procedures and lack of usability. In this paper, we propose a gaze estimation method that can be applied in daily-life situations. Gaze directions are determined as 3D vectors connecting the eyeball and iris centers. Since the eyeball center and radius cannot be directly observed in images, the geometrical relationship between the eyeball centers and the facial features, along with the eyeball radius (the face/eye model), is calculated in advance. The 2D positions of the eyeball centers can then be determined by tracking the facial features. While conventional methods require instructing users to perform special actions, such as looking at several reference points, during calibration, the proposed method requires no such actions; it is realized by combining 3D eye-model-based gaze estimation with circle-based algorithms for eye-model calibration. Experimental results show that the gaze estimation accuracy of the proposed method is 5° horizontally and 7° vertically. With our method, various applications that require gaze information in daily-life situations, such as gaze-communication robots and gaze-based interactive signboards, become possible.
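The gaze-vector step reduces to simple geometry once the eyeball and iris centers are known; a minimal sketch, assuming both points are already expressed in a camera-centered 3D frame:
```python
# Sketch: gaze as the 3D vector from eyeball center to iris center,
# reported as horizontal/vertical angles. Coordinate frame is an assumption.
import numpy as np

def gaze_angles(eyeball_center: np.ndarray, iris_center: np.ndarray) -> tuple[float, float]:
    """Return (horizontal, vertical) gaze angles in degrees."""
    g = iris_center - eyeball_center
    g = g / np.linalg.norm(g)
    horizontal = np.degrees(np.arctan2(g[0], g[2]))  # yaw about the vertical axis
    vertical = np.degrees(np.arcsin(g[1]))           # pitch above/below horizontal
    return horizontal, vertical

h, v = gaze_angles(np.array([0.0, 0.0, 0.0]), np.array([0.01, 0.02, 0.95]))
```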

Gaze-communicative behavior of stuffed-toy robot with joint attention and eye contact based on ambient gaze-tracking Poster session 1 / Yonezawa, Tomoko / Yamazoe, Hirotake / Utsumi, Akira / Abe, Shinji Proceedings of the 2007 International Conference on Multimodal Interfaces 2007-11-12 p.140-145
Keywords: eye contact, gaze communication, joint attention, stuffed-toy robot
ACM Digital Library Link
Summary: This paper proposes a gaze-communicative stuffed-toy robot system with joint attention and eye-contact reactions based on ambient gaze-tracking. For free and natural interaction, we adopted our remote gaze-tracking method. Corresponding to the user's gaze, the gaze-reactive stuffed-toy robot is designed to gradually establish 1) joint attention using the direction of the robot's head and 2) eye-contact reactions through several sets of motions. From both subjective evaluations and observations of the user's gaze in the demonstration experiments, we found that i) joint attention draws the user's interest along with the user's guess at the robot's interest, ii) "eye contact" gives the user a favorable feeling toward the robot, and iii) this feeling is enhanced when "eye contact" is combined with "joint attention." These results support our embodied gaze-communication model.

Cross-modal coordination of expressive strength between voice and gesture for personified media Poster Session 1 / Yonezawa, Tomoko / Suzuki, Noriko / Abe, Shinji / Mase, Kenji / Kogure, Kiyoshi Proceedings of the 2006 International Conference on Multimodal Interfaces 2006-11-02 p.43-50
Keywords: cross-modality, perceptual experiment, personified puppet-interface, vocal-gestural expression
ACM Digital Library Link
Summary: The aim of this paper is to clarify the relationship between the expressive strengths of gestures and voice for embodied and personified interfaces. We conduct perceptual tests using a puppet interface, while controlling singing-voice expressions, to empirically determine the naturalness and strength of various combinations of gesture and voice. The results show that (1) the strength of cross-modal perception is affected more by gestural expression than by the expressions of a singing voice, and (2) the appropriateness of cross-modal perception is affected by expressive combinations between singing voice and gestures in personified expressions. As a promising solution, we propose balancing a singing voice and gestural expressions by expanding and correcting the width and shape of the curve of expressive strength in the singing voice.

u-Texture: Self-Organizable Universal Panels for Creating Smart Surroundings Systems / Kohtake, Naohiko / Ohsawa, Ryo / Yonezawa, Takuro / Matsukura, Yuki / Iwai, Masayuki / Takashio, Kazunori / Tokuda, Hideyuki Proceedings of the 2005 International Conference on Ubiquitous Computing 2005-09-11 p.19-36
Link to Digital Content at Springer
Summary: This paper introduces a novel way to allow non-expert users to create smart surroundings. Non-smart everyday objects, such as the furniture and appliances found in homes and offices, can be converted into smart ones by attaching computers, sensors, and devices. In our approach, the non-smart components that form such objects are made smart in advance. For our first prototype, we developed u-Texture, a self-organizable universal panel that works as a building block. A u-Texture can change its own behavior autonomously by recognizing its location, its inclination, and the surrounding environment as the panels are physically assembled. We have demonstrated several applications confirming that u-Textures can create smart surroundings easily, without expert users.

HandySinger: Expressive Singing Voice Morphing using Personified Hand-puppet Interface Papers and Report Sessions / Yonezawa, Tomoko / Suzuki, Noriko / Mase, Kenji / Kogure, Kiyoshi NIME 2005: New Interfaces for Musical Expression 2005-05-26 p.121-126
www.nime.org/proceedings/2005/nime2005_121.pdf