HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,288,284
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Nozaki_H* Results: 6 Sorted by: Date
ReFabricator: Integrating Everyday Objects for Digital Fabrication Interactivity Demos / Yamada, Suguru / Morishige, Hironao / Nozaki, Hiroki / Ogawa, Masaki / Yonezawa, Takuro / Tokuda, Hideyuki Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3804-3807
ACM Digital Library Link
Summary: Current digital fabrication relies heavily on 3D printing, which raises several concerns, such as printing cost (both financial and temporal) and the sometimes overly homogeneous look of plastic filament. To address these problems, we propose ReFabricator, a computational fabrication tool that integrates everyday objects into digital fabrication. ReFabrication is a fabrication concept mixing the ideas of Reuse and Digital Fabrication, which aims to fabricate new functional shapes from ready-made products, effectively utilizing their behavior. As a system prototype, we have implemented a design tool that enables users to gather everyday objects and reassemble them into new functional shapes, taking advantage of both analog and digital fabrication. In particular, the system calculates the optimized positional relationships among objects, and generates joint objects that bond them together to achieve a given shape.

Reducing users' perceived mental effort due to interruptive notifications in multi-device mobile environments Reshaping UbiComp environments / Okoshi, Tadashi / Ramos, Julian / Nozaki, Hiroki / Nakazawa, Jin / Dey, Anind K. / Tokuda, Hideyuki Proceedings of the 2015 International Conference on Ubiquitous Computing 2015-09-07 p.475-486
ACM Digital Library Link
Summary: In today's ubiquitous computing environment where users carry, manipulate, and interact with an increasing number of networked devices, applications and web services, human attention is the new bottleneck in computing. It is therefore important to minimize a user's mental effort due to notifications, especially in situations where users are mobile and using multiple wearable and mobile devices. To this end, we propose Attelia II, a novel middleware that identifies breakpoints in users' lives while using those devices, and delivers notifications at these moments. Attelia II works in real-time and uses only the mobile and wearable devices that users naturally use and wear, without any modifications to applications, and without any dedicated psycho-physiological sensors. Our in-the-wild evaluation in users' multi-device environment (smart phones and smart watches) with 41 participants for 1 month validated the effectiveness of Attelia. Our new physical activity-based breakpoint detection, in addition to the UI Event-based breakpoint detection, resulted in a 71.8% greater reduction of users' perception of workload, compared with our previous system that used UI events only. Adding this functionality to a smart watch reduced workload perception by 19.4% compared to random timing of notification deliveries. Our multi-device breakpoint detection across smart phones and watches resulted in about 3 times greater reduction in workload perception than our previous system.
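The deferral idea in the summary above, delivering notifications only at breakpoints, can be sketched in a few lines. This is a hypothetical illustration, not the Attelia II implementation: here a "breakpoint" is approximated simply as a change in the user's detected physical activity label, and the class names are invented for this sketch.

```python
from collections import deque

class BreakpointNotifier:
    """Hypothetical sketch: queue notifications and release them only
    at an activity transition (a crude stand-in for a breakpoint)."""

    def __init__(self):
        self.pending = deque()
        self.last_activity = None

    def notify(self, message):
        # Queue the notification instead of interrupting immediately.
        self.pending.append(message)

    def on_activity(self, activity):
        """Feed a new activity label; returns notifications released now."""
        released = []
        if self.last_activity is not None and activity != self.last_activity:
            # Activity transition = breakpoint: flush queued notifications.
            released = list(self.pending)
            self.pending.clear()
        self.last_activity = activity
        return released
```

For example, a notification queued while the user is walking would be held until the detector reports a transition (say, walking to still), at which point it is released.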

SENSeTREAM: enhancing online live experience with sensor-federated video stream using animated two-dimensional code Mobile applications / Yonezawa, Takuro / Ogawa, Masaki / Kyono, Yutaro / Nozaki, Hiroki / Nakazawa, Jin / Nakamura, Osamu / Tokuda, Hideyuki Proceedings of the 2014 International Joint Conference on Pervasive and Ubiquitous Computing 2014-09-13 v.1 p.301-305
ACM Digital Library Link
Summary: We propose a novel technique that aggregates multiple sensor streams, generated by entirely different types of sensors, into a visually enhanced video stream. This paper presents the major features of SENSeTREAM and demonstrates how it enhances the user experience at an online live music event. Since SENSeTREAM is a video stream with sensor values encoded in a two-dimensional graphical code, it can transmit multiple sensor data streams while maintaining their synchronization. A SENSeTREAM can be transmitted via existing live streaming services and saved to existing video archive services. We have implemented a prototype SENSeTREAM generator and deployed it at an online live music event. Through the pilot study, we confirmed that SENSeTREAM works with popular streaming services and provides a new media experience for live performances. We also indicate future directions for visual stream aggregation and its applications.
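The synchronization property described above follows from carrying the sensor data inside each video frame. A minimal sketch of that idea, assuming a hypothetical per-frame JSON payload (the real system renders an animated two-dimensional code; the function names here are invented):

```python
import json

def encode_frame_payload(frame_ts, streams):
    """Pack, for one video frame, the most recent sample of each stream.

    streams: dict mapping sensor name -> list of (timestamp, value) pairs.
    Returns a JSON string that a real system would render as a 2-D code
    drawn onto the frame, so the data travels with the video itself.
    """
    payload = {"t": frame_ts, "sensors": {}}
    for name, samples in streams.items():
        # Keep only samples taken at or before this frame's timestamp.
        past = [v for (ts, v) in samples if ts <= frame_ts]
        if past:
            payload["sensors"][name] = past[-1]
    return json.dumps(payload, sort_keys=True)

def decode_frame_payload(text):
    """Recover the sensor values carried by one frame."""
    return json.loads(text)
```

Because the payload is embedded per frame, any service that preserves the video (live streaming or archiving) also preserves the sensor data and its alignment with the footage.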

Flying display: a movable display pairing projector and screen in the air Student research competition / Nozaki, Hiroki Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.909-914
ACM Digital Library Link
Summary: We developed Flying Display, a novel movable public display system that can provide information to people anywhere, at any time. The system consists of two UAVs (Unmanned Aerial Vehicles), one carrying a projector and the other a screen. Flying Display can move freely and remain stable in 3-D space, and it can approach people to deliver information to them directly. To evaluate its performance, we performed two experiments to adapt the flight control algorithm. We also demonstrated the stability of the Flying Display system through the trajectories of each UAV. This paper highlights the performance of Flying Display and discusses its potential for public displays in physical space.

EverCopter: continuous and adaptive over-the-air sensing with detachable wired flying objects Poster, demo, & video presentations / Kyono, Yutaro / Yonezawa, Takuro / Nozaki, Hiroki / Ogawa, Masaki / Ito, Tomotaka / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.299-302
ACM Digital Library Link
Summary: The paper proposes EverCopter, which provides continuous and adaptive over-the-air sensing with detachable, wired flying objects. While a major advantage of sensing systems built on battery-operated MAVs is wide sensing coverage, their sensing time is limited by the small amount of energy they can carry. We therefore propose dynamically rechargeable flying objects, called EverCopter, that achieve both long sensing time and wide sensing coverage through two characteristics. First, multiple EverCopters can be tied in a row by power supply cables. Since the root EverCopter in a row is connected to a DC power supply on the ground, each EverCopter can fly without a battery, so sensing can continue indefinitely as long as the ground power supply does not fail. Second, the leaf EverCopter can detach itself from the row to gain wider sensing coverage. While detached, an EverCopter runs on its own battery. When its remaining energy becomes low, it flies back to the row to recharge.

FRAGWRAP: fragrance-encapsulated and projected soap bubble for scent mapping Poster, demo, & video presentations / Kyono, Yutaro / Yonezawa, Takuro / Nozaki, Hiroki / Nakazawa, Jin / Tokuda, Hideyuki Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.311-314
ACM Digital Library Link
Summary: This paper proposes FRAGWRAP, which maps scents onto real objects in real time. To achieve this, we combine fragrance-encapsulated soap bubbles with projection mapping. Since human olfaction is known to draw on both the eyes and the nose, we encapsulate fragrance in a soap bubble to stimulate the nose, while projecting a 3D image of the fragrance onto the bubble in real time. In this video, we present our first prototype, which automatically inserts fragrance into a soap bubble and projects images onto the moving bubble. The whole system is activated by speech recognition.