HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,288,287
director@hcibib.org
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Tokuda_H* Results: 20 Sorted by: Date
ReFabricator: Integrating Everyday Objects for Digital Fabrication Interactivity Demos / Yamada, Suguru / Morishige, Hironao / Nozaki, Hiroki / Ogawa, Masaki / Yonezawa, Takuro / Tokuda, Hideyuki Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3804-3807
ACM Digital Library Link
Summary: Because current digital fabrication relies heavily on 3D printers, it raises several concerns, such as printing cost (both financial and temporal) and the sometimes overly homogeneous impression of plastic filament. To address these problems, we propose ReFabricator, a computational fabrication tool that integrates everyday objects into digital fabrication. ReFabrication is a fabrication concept that mixes the ideas of reuse and digital fabrication, aiming to fabricate new functional shapes from ready-made products while effectively exploiting their existing behavior. As a system prototype, we have implemented a design tool that enables users to gather everyday objects and reassemble them into another functional shape, taking advantage of both analog and digital fabrication. In particular, the system calculates the optimized positional relationships among the objects and generates joint objects that bond them together to achieve a target shape.

Reducing users' perceived mental effort due to interruptive notifications in multi-device mobile environments Reshaping UbiComp environments / Okoshi, Tadashi / Ramos, Julian / Nozaki, Hiroki / Nakazawa, Jin / Dey, Anind K. / Tokuda, Hideyuki Proceedings of the 2015 International Conference on Ubiquitous Computing 2015-09-07 p.475-486
ACM Digital Library Link
Summary: In today's ubiquitous computing environment where users carry, manipulate, and interact with an increasing number of networked devices, applications and web services, human attention is the new bottleneck in computing. It is therefore important to minimize a user's mental effort due to notifications, especially in situations where users are mobile and using multiple wearable and mobile devices. To this end, we propose Attelia II, a novel middleware that identifies breakpoints in users' lives while using those devices, and delivers notifications at these moments. Attelia II works in real-time and uses only the mobile and wearable devices that users naturally use and wear, without any modifications to applications, and without any dedicated psycho-physiological sensors. Our in-the-wild evaluation in users' multi-device environment (smart phones and smart watches) with 41 participants for 1 month validated the effectiveness of Attelia. Our new physical activity-based breakpoint detection, in addition to the UI Event-based breakpoint detection, resulted in a 71.8% greater reduction of users' perception of workload, compared with our previous system that used UI events only. Adding this functionality to a smart watch reduced workload perception by 19.4% compared to random timing of notification deliveries. Our multi-device breakpoint detection across smart phones and watches resulted in about 3 times greater reduction in workload perception than our previous system.
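
As an illustration of the breakpoint idea described above, a minimal sketch might look like the following; the abstract does not give implementation details, so the class, thresholds, and method names here are assumptions, not the authors' Attelia II code. A notification is deferred until a stable physical-activity transition is seen or the UI has been idle for a short period.

from collections import deque
import time

IDLE_THRESHOLD_S = 5.0       # assumed: screen idle time that suggests a UI breakpoint
ACTIVITY_WINDOW = 3          # assumed: consecutive samples needed to accept a new activity

class BreakpointDetector:
    """Toy breakpoint detector combining activity transitions with UI idleness."""

    def __init__(self):
        self.recent_activities = deque(maxlen=ACTIVITY_WINDOW)
        self.current_activity = None
        self.last_ui_event = time.time()

    def on_ui_event(self):
        # Any touch or app-switch event resets the idle timer.
        self.last_ui_event = time.time()

    def on_activity_sample(self, label):
        # label: e.g. "still", "walking", "running" from an activity recognizer.
        self.recent_activities.append(label)
        if (len(self.recent_activities) == ACTIVITY_WINDOW
                and len(set(self.recent_activities)) == 1
                and label != self.current_activity):
            previous = self.current_activity
            self.current_activity = label
            return previous is not None   # a stable transition counts as a breakpoint
        return False

    def should_deliver_notification(self, activity_breakpoint):
        # Deliver only at a physical-activity breakpoint or once the UI has gone idle.
        ui_idle = time.time() - self.last_ui_event > IDLE_THRESHOLD_S
        return activity_breakpoint or ui_idle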

SENSeTREAM: enhancing online live experience with sensor-federated video stream using animated two-dimensional code Mobile applications / Yonezawa, Takuro / Ogawa, Masaki / Kyono, Yutaro / Nozaki, Hiroki / Nakazawa, Jin / Nakamura, Osamu / Tokuda, Hideyuki Proceedings of the 2014 International Joint Conference on Pervasive and Ubiquitous Computing 2014-09-13 v.1 p.301-305
ACM Digital Library Link
Summary: We propose a novel technique that aggregates multiple sensor streams, generated by very different types of sensors, into a visually enhanced video stream. This paper presents the major features of SENSeTREAM and demonstrates how it enhances user experience in an online live music event. Since a SENSeTREAM is a video stream with sensor values encoded in a two-dimensional graphical code, it can transmit multiple sensor data streams while maintaining their synchronization. A SENSeTREAM can be transmitted via existing live streaming services and saved to existing video archive services. We have implemented a prototype SENSeTREAM generator and deployed it at an online live music event. Through the pilot study, we confirmed that SENSeTREAM works with popular streaming services and provides a new media experience for live performances. We also indicate future directions for visual stream aggregation and its applications.
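
The paper does not specify the format of the animated two-dimensional code, so the following sketch is only a hypothetical approximation of the idea: one sensor sample per frame is serialized as JSON, rendered as a QR code with the third-party qrcode and Pillow libraries (stand-ins for the actual code format), and pasted into a corner of the frame, which keeps sensor data and video synchronized by construction.

import io
import json
import qrcode                      # third-party: pip install qrcode pillow
from PIL import Image

def embed_sensor_frame(frame, sensor_sample):
    """Overlay one sensor sample onto one video frame (a PIL RGB image) as a 2D code."""
    payload = json.dumps(sensor_sample, separators=(",", ":"))
    buf = io.BytesIO()
    qrcode.make(payload).save(buf)          # render the payload as a QR code image
    buf.seek(0)
    side = frame.height // 4
    code = Image.open(buf).resize((side, side))
    out = frame.copy()
    out.paste(code, (out.width - side, out.height - side))
    return out

# A viewer would scan the same corner of each received frame and parse the JSON,
# recovering the sensor stream in step with the video playback position.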

Attelia: sensing user's attention status on smart phones Posters / Okoshi, Tadashi / Tokuda, Hideyuki / Nakazawa, Jin Adjunct Proceedings of the 2014 International Joint Conference on Pervasive and Ubiquitous Computing 2014-09-13 v.2 p.139-142
ACM Digital Library Link
Summary: In today's ubiquitous computing, where the number of devices, applications, and web services is ever increasing, the human user's attention is the new bottleneck in computing. This paper proposes Attelia, a novel middleware that senses a user's attention status on their smart phone in real time, without any dedicated psycho-physiological sensors. To find better delivery timings for interruptive notifications from various applications and services to mobile users, Attelia detects breakpoints [16] in the user's activity on the smart phone, using our novel "Application as a Sensor" (AsaS) approach and machine learning techniques. Our initial evaluation shows that Attelia can detect users' breakpoints with 80-90% accuracy.

LiPS: linked participatory sensing for optimizing social resource allocation 3rd International Workshop on Mobile Systems for Computational Social Science / Sakamura, Mina / Yonezawa, Takuro / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Adjunct Proceedings of the 2014 International Joint Conference on Pervasive and Ubiquitous Computing 2014-09-13 v.2 p.1015-1024
ACM Digital Library Link
Summary: This paper proposes linked participatory sensing (LiPS), a concept that divides a complex sensing task into small tasks and links them to each other in order to optimize social resource allocation. Participatory sensing has recently been spreading, but its sensing tasks are still very simple and easy for participants to handle (e.g., "Please input the number of people standing in a queue."). To accommodate high-level tasks that require specific skills, such as those of engineering or the medical profession, or authority, such as that of an event organizer, social resource allocation must be optimized, because the number of such professionals is limited. To carry out complex sensing tasks efficiently, LiPS divides a complex sensing task into small tasks and links them by assigning appropriate sensors. LiPS treats physical sensors and humans as hybrid multi-level sensors, and a task provider can arrange the social resource allocation for the goal of each divided sensing task. In this paper, we describe the design and development of the LiPS system. We also conducted an in-lab experiment with a first prototype of the hybrid sensing system and discuss a model for the eventual system based on users' feedback.
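
One way to picture the task decomposition described above is as a small task graph in which each subtask records the skill it requires, the "sensor" (device or participant) assigned to it, and which subtasks it feeds. The sketch below is only an assumed data model, not the LiPS implementation; the greedy assignment rule and example names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class SubTask:
    name: str
    required_skill: str          # e.g. "none", "engineering", "medical"
    assigned_sensor: str = ""    # a device ID or a participant ID
    feeds: list = field(default_factory=list)   # names of downstream subtasks

def assign(subtasks, physical_sensors, participants):
    """Greedy assignment: use physical sensors for unskilled subtasks and
    reserve scarce human professionals for subtasks that need their skill."""
    for task in subtasks:
        if task.required_skill == "none" and physical_sensors:
            task.assigned_sensor = physical_sensors.pop()
        else:
            candidates = [p for p, skill in participants.items()
                          if skill == task.required_skill]
            if candidates:
                task.assigned_sensor = candidates[0]
                participants.pop(candidates[0])
    return subtasks

# Example: a crowd-safety task split into counting (a camera can do it)
# and triage (needs a medical professional), with counting feeding triage.
tasks = [
    SubTask("count_queue", "none", feeds=["assess_risk"]),
    SubTask("assess_risk", "medical"),
]
print(assign(tasks, ["camera-01"], {"alice": "medical", "bob": "engineering"}))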

EverCopter: continuous and adaptive over-the-air sensing with detachable wired flying objects Poster, demo, & video presentations / Kyono, Yutaro / Yonezawa, Takuro / Nozaki, Hiroki / Ogawa, Masaki / Ito, Tomotaka / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.299-302
ACM Digital Library Link
Summary: This paper proposes EverCopter, which provides continuous and adaptive over-the-air sensing with detachable wired flying objects. A major advantage of sensing systems based on battery-operated MAVs is their wide sensing coverage, but their sensing time is limited by the small amount of energy they can carry. We therefore propose dynamically rechargeable flying objects, called EverCopter, which achieve both long sensing time and wide sensing coverage through two characteristics. First, multiple EverCopters can be tied in a row by power supply cables. Since the root EverCopter in a row is connected to a DC power supply on the ground, each EverCopter can fly without a battery; they can keep sensing indefinitely unless the power supply on the ground fails. Second, the leaf EverCopter can detach itself from the row to obtain wider sensing coverage. While detached, an EverCopter runs on its own battery, and when the remaining energy becomes low, it flies back to the row to recharge.
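
The detach/return behavior described above is given only in prose; a speculative sketch of such a policy as a small state machine follows. The states, thresholds, and method names are assumptions for illustration, not the authors' controller.

class EverCopterController:
    """Illustrative detach/return policy for a leaf EverCopter (not the authors' code)."""

    DETACH_MIN_CHARGE = 0.9   # assumed: only detach with a nearly full battery
    RETURN_CHARGE = 0.3       # assumed: head back to the tether below this level

    def __init__(self):
        self.state = "TETHERED"   # wired into the row, powered from the ground

    def step(self, battery_level, target_outside_tether_range, docked=False):
        if self.state == "TETHERED":
            if target_outside_tether_range and battery_level >= self.DETACH_MIN_CHARGE:
                self.state = "DETACHED"      # fly free on battery for wider coverage
        elif self.state == "DETACHED":
            if battery_level <= self.RETURN_CHARGE:
                self.state = "RETURNING"     # battery low: fly back to the row
        elif self.state == "RETURNING":
            if docked:
                self.state = "TETHERED"      # reattached: recharge from the cable
        return self.state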

FRAGWRAP: fragrance-encapsulated and projected soap bubble for scent mapping Poster, demo, & video presentations / Kyono, Yutaro / Yonezawa, Takuro / Nozaki, Hiroki / Nakazawa, Jin / Tokuda, Hideyuki Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.311-314
ACM Digital Library Link
Summary: This paper proposes FRAGWRAP, which maps scents onto real objects in real time. To achieve this, we combine fragrance-encapsulated soap bubbles with projection mapping. Since human olfaction is known to involve the combined use of the eyes and the nose, we encapsulate a fragrance inside a soap bubble to stimulate the nose and simultaneously project a 3D image of the fragrance onto the bubble in real time. In this video, we present our first prototype, which automatically inserts a fragrance into a soap bubble and projects images onto the moving bubble. The entire system is activated by speech recognition.

Reinforcing co-located communication practices through interactive public displays Workshop: human interfaces for civic and urban engagement / Ogawa, Masaki / Jurmu, Marko / Ito, Tomotaka / Yonezawa, Takuro / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.737-740
ACM Digital Library Link
Summary: In recent years, the steady emergence of digital communication, especially social media, has increased the "placelessness" of interpersonal communication practices, i.e., lessened the need to be co-located in order to communicate. When these communication practices carry over to co-located settings, they introduce redundancy and can even harm the co-located context, since the use of personal technologies tends to isolate users from their surroundings. In this position paper, we want to raise awareness of how interactive public displays could alleviate this redundancy and potential isolation. We present a model for reinforcing co-located communication and illustrate it through example use cases.

pARnorama: 360 degree interactive video for augmented reality prototyping Workshop: wearable systems for industrial augmented reality applications / Berning, Matthias / Yonezawa, Takuro / Riedel, Till / Nakazawa, Jin / Beigl, Michael / Tokuda, Hide Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.1471-1474
ACM Digital Library Link
Summary: Designing novel and meaningful interactions in the domain of Augmented Reality (AR) requires an efficient and appropriate methodology. A user-centered design process requires the construction and evaluation of several prototypes of increasing technical fidelity. Although the main content of the application can already be conveyed with prerendered video, one of the main interactions in AR -- the user-selected viewpoint -- only becomes available at a very late stage. We propose the use of panoramic 360° video for scenario-based user evaluation, in which the user can select their point of view during playback. Initial users report a high degree of immersion in the constructed scenario, even for handheld AR.
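
The abstract does not say how the selected viewpoint is mapped onto the panoramic video. A common way to implement this, sketched below as an assumption rather than the authors' method, is to treat each frame as an equirectangular image and convert the chosen yaw/pitch into the pixel coordinates of the crop centre.

import math

def viewpoint_to_pixel(yaw_deg, pitch_deg, frame_width, frame_height):
    """Map a viewing direction to the centre pixel of the crop in an
    equirectangular 360-degree frame (yaw 0 = frame centre, pitch 0 = horizon)."""
    yaw = math.radians(yaw_deg)      # -180..180, positive to the right
    pitch = math.radians(pitch_deg)  # -90..90, positive upwards
    u = (yaw / (2 * math.pi) + 0.5) * frame_width
    v = (0.5 - pitch / math.pi) * frame_height
    return int(u) % frame_width, max(0, min(frame_height - 1, int(v)))

# Example: looking 90 degrees to the right, slightly up, in a 3840x1920 frame.
print(viewpoint_to_pixel(90, 10, 3840, 1920))   # -> (2880, 853)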

Waving to a touch interface: descriptive field study of a multipurpose multimodal public display Proxemic interaction / Jurmu, Marko / Ogawa, Masaki / Boring, Sebastian / Riekki, Jukka / Tokuda, Hideyuki Proceedings of the 2013 ACM International Symposium on Pervasive Displays 2013-06-04 p.7-12
ACM Digital Library Link
Summary: Multipurpose public displays are a promising platform, but more understanding is required of how users perceive and engage with them. In this paper, we present and discuss results and findings from a two-day descriptive field trial of a multipurpose public display prototype called FluiD. Our main objective was to uncover emerging interaction issues to inform future evaluations. During the field trial, held within a public research exhibition, people were able to interact freely with the prototype. Twenty-six persons filled out short questionnaires and gave free-form feedback; in addition, researchers in the vicinity of the display gathered observation data. Our main findings include the difficulties encountered with mid-air gesture commands, the lack of agency in the case of a larger interaction area, and the possibility of stepping out of the implicit-explicit continuum in the face of potential social conflicts.

Enhancing communication and dramatic impact of online live performance with cooperative audience control On the body and on the move / Yonezawa, Takuro / Tokuda, Hideyuki Proceedings of the 2012 International Conference on Ubiquitous Computing 2012-09-05 p.103-112
ACM Digital Library Link
Summary: Recent progress in information technology enables people to easily broadcast events live on the Internet. Although the advantage of the Internet is live communication between a performer and listeners, the current mode of communication is writing comments via Twitter, Facebook, or some similar messaging network. In one type of live broadcast, musical performances, it is difficult for a musician who is playing an instrument to communicate with listeners by writing comments. We propose a new mode of communication between performers who play musical instruments and their listeners, by enabling listeners to control the performer's camera or illumination remotely. The results of a four-week experiment confirm the emergence of nonverbal communication between the performer and listeners, and among listeners, which increases camaraderie amongst listeners and performers. Additionally, the dramatic impact of a performance is increased by enabling listeners to control camera actions such as zoom-in or pan in real time. The results also provide implications for the design of future interactive live broadcasting services.

LiDSN: a method to deploy wireless sensor networks securely based on light communication Demos / Doan, Giang / Nguyen, Minh / Takimoto, Takuya / Yonezawa, Takuro / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Proceedings of the 2012 International Conference on Ubiquitous Computing 2012-09-05 p.539-540
Summary: Deploying wireless sensor networks (WSNs) securely still requires users to have certain skills and to exert effort. In the near "sensor everywhere" future, a much simpler method for deploying WSNs will be necessary for end users. We propose LiDSN (Light Communication for Deploying Secure Wireless Sensor Networks), which enables users to accomplish deployment tasks through simple interaction. LiDSN leverages light-based communication between an LED and a light sensor to add a new sensor node securely to an existing WSN. Through a touch interaction, the new node's ID and secret key are transmitted to the WSN, which can then identify which node should be added while maintaining the security of the network.
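
The abstract states only that a node ID and secret key travel over light from an LED to a light sensor; the encoding below is a hypothetical sketch, not the LiDSN protocol. Each byte is sent as on/off pulses of fixed duration, which a receiver sampling its light sensor at the same rate can turn back into the payload.

ON, OFF = 1, 0
BIT_DURATION_MS = 50   # assumed pulse length; the real protocol is unspecified

def to_pulses(node_id: int, secret_key: bytes):
    """Serialize (node_id, key) into a flat list of LED on/off pulse values."""
    payload = node_id.to_bytes(2, "big") + secret_key
    pulses = []
    for byte in payload:
        for bit in range(7, -1, -1):
            pulses.append(ON if (byte >> bit) & 1 else OFF)
    return pulses

def from_pulses(pulses):
    """Inverse of to_pulses: rebuild the payload from sampled light levels."""
    data = bytearray()
    for i in range(0, len(pulses), 8):
        byte = 0
        for bit in pulses[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return int.from_bytes(data[:2], "big"), bytes(data[2:])

node_id, key = from_pulses(to_pulses(7, b"\x12\x34"))
assert (node_id, key) == (7, b"\x12\x34")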

DHT-based sensor data management for geographical range query Posters / Terayama, Junki / Nakazawa, Jin / Tokuda, Hideyuki Proceedings of the 2012 International Conference on Ubiquitous Computing 2012-09-05 p.623-624
Summary: Today, because each sensor network is managed within a single organization, its sensor data cannot be obtained from outside. If these sensor networks are virtualized, meaning that anyone can obtain data from anywhere without caring which sensor network the data belongs to, two features will be required. One is geographical range query; this research realizes it using Z-order, in the same way as related work [1][2][3][4]. The other is distributed sensor data management. Current systems either store the data on one (or a few) centralized servers, or store the data on many servers while keeping one centralized server that indexes the addresses of the data. This research proposes a method that does not tie real-space geographical information to the relative positions of peers in the ID space. With this method, even in places where the density of people and of sensor-rich smart phones suddenly increases, such as at the Super Bowl or the New Year countdown in New York, sensor data managed over the DHT do not concentrate on any single peer. This research evaluates the method through simulation.
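
The abstract names Z-order as the basis for geographical range queries but does not show the encoding. A standard Morton-code construction, given below as an illustrative sketch under assumed parameters, quantizes latitude and longitude and interleaves their bits so that a geographic rectangle maps to a small set of contiguous key ranges in the DHT's one-dimensional key space.

def morton_key(lat, lon, bits=16):
    """Interleave quantized latitude/longitude bits into one Z-order key."""
    # Quantize each coordinate into [0, 2**bits).
    y = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))
    x = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # even bit positions: longitude
        key |= ((y >> i) & 1) << (2 * i + 1)    # odd bit positions: latitude
    return key

# Nearby points usually get numerically close keys, so a geographic range query
# can be answered by scanning one or a few contiguous key intervals on the DHT.
tokyo = morton_key(35.68, 139.77)
yokohama = morton_key(35.44, 139.64)
new_york = morton_key(40.71, -74.01)
print(abs(tokyo - yokohama) < abs(tokyo - new_york))   # -> True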

Detection, classification and visualization of place-triggered geotagged tweets Workshop on Location-Based Social Networks (LBSN 2012) / Hiruta, Shinya / Yonezawa, Takuro / Jurmu, Marko / Tokuda, Hideyuki Proceedings of the 2012 International Conference on Ubiquitous Computing 2012-09-05 p.956-963
Summary: This paper proposes and evaluates a method to detect and classify tweets that are triggered by the places where their users are located. Recently, many related works have addressed detecting real-world events from social media such as Twitter. However, geotagged tweets often contain noise, i.e., tweets whose content is not related to the user's location. This noise is a problem for detecting real-world events. To address this problem, we define the Place-Triggered Geotagged Tweet: a tweet that has both a geotag and a content-based relation to the user's location. We designed and implemented a keyword-based matching technique to detect and classify place-triggered geotagged tweets. We evaluated the performance of our method against a ground truth provided by 18 human classifiers and achieved 82% accuracy. Additionally, we present two example applications for visualizing place-triggered geotagged tweets.
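
The method is described only as "keyword-based matching"; one plausible realization, sketched below with invented keyword lists rather than the paper's actual dictionaries, attaches a keyword list to each place category and classifies a geotagged tweet as place-triggered when its text matches a keyword of the category it was posted from.

# Assumed category -> keyword lists; the paper's real dictionaries are not given.
PLACE_KEYWORDS = {
    "station":    ["train", "platform", "delay", "commute"],
    "restaurant": ["lunch", "dinner", "delicious", "menu"],
    "stadium":    ["match", "goal", "cheer", "kickoff"],
}

def classify_tweet(text, place_category):
    """Return the matched category if the tweet is place-triggered, else None."""
    keywords = PLACE_KEYWORDS.get(place_category, [])
    lowered = text.lower()
    if any(word in lowered for word in keywords):
        return place_category
    return None   # geotagged but content-wise unrelated to the place: noise

print(classify_tweet("Another delay on the platform this morning...", "station"))  # station
print(classify_tweet("Thinking about my weekend plans", "station"))                # None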

Lupe: information access method based on distance between user and sensor nodes using AR technology Demonstration sessions / Takimoto, Takuya / Karatsu, Yutaka / Yonezawa, Takuro / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Proceedings of the 2011 International Conference on Ubiquitous Computing 2011-09-17 p.479-480
ACM Digital Library Link
Summary: This paper proposes an information access method based on the distance between users and objects. In addition, we demonstrate the Lupe system, which visualizes WSN status information using this method. An evaluation experiment shows that the method is useful in settings where a large number of sensors are set up. As a result, our method and the Lupe system enable end users to browse WSN status information easily.

Transferring information from mobile devices to personal computers by using vibration and accelerometer Demonstration sessions / Yonezawa, Takuro / Ito, Tomotaka / Tokuda, Hideyuki Proceedings of the 2011 International Conference on Ubiquitous Computing 2011-09-17 p.487-488
ACM Digital Library Link
Summary: We propose a simple interaction for transferring information from a smart phone to a laptop or tablet PC. We often encounter situations in which we need to send a URL that we first accessed on a mobile device to a personal computer (PC) in order to view the web page on a wider screen. To support this information transfer, we combine the vibrator in the smart phone with the accelerometer in the laptop or tablet PC. The URL information is encoded into vibration patterns, and these patterns are detected and decoded by the accelerometer in the PC. We demonstrate the interaction's efficiency and practicality with actual products.
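
The abstract does not specify the vibration code, so the following is a simplistic sketch under assumed parameters rather than the authors' scheme: each bit becomes a short or long vibration burst on the sender, and the receiver recovers bits by thresholding burst durations measured from accelerometer peaks.

SHORT_MS, LONG_MS = 100, 300   # assumed burst lengths for 0 and 1; inter-burst gap is fixed

def url_to_bursts(url: str):
    """Encode a URL as a list of vibration burst durations (one burst per bit)."""
    bursts = []
    for byte in url.encode("ascii"):
        for bit in range(7, -1, -1):
            bursts.append(LONG_MS if (byte >> bit) & 1 else SHORT_MS)
    return bursts

def bursts_to_url(burst_durations):
    """Decode burst durations (e.g. measured from accelerometer peaks) back to the URL."""
    bits = [1 if d > (SHORT_MS + LONG_MS) / 2 else 0 for d in burst_durations]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return data.decode("ascii")

assert bursts_to_url(url_to_bursts("http://example.com")) == "http://example.com"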

User grouping method for ad-hoc conversations based on proximity of users and speaking volumes acquired from portable sensors Poster presentations / Karatsu, Yutaka / Nakazawa, Jin / Tokuda, Hideyuki Proceedings of the 2011 International Conference on Ubiquitous Computing 2011-09-17 p.577-578
ACM Digital Library Link
Summary: Analyzing groups of people having a conversation makes it possible to provide context-aware services such as life logs, groupware, and the virtualization of social networks. We propose a novel method for extracting chatting groups by leveraging Bluetooth RSSI and voice data acquired from smart phones. Neighboring people are detected from Bluetooth RSSI, and conversation groups are then extracted from their talking states. The purpose of this paper is to define an algorithm that works efficiently on smart phones, which are common and widespread mobile devices.
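
The abstract names the two signals but not the grouping algorithm; one assumed approach, sketched below with invented thresholds, builds a proximity graph from pairwise RSSI, takes its connected components as candidate groups, and keeps a candidate only if its members actually share the speaking time.

RSSI_NEAR_DBM = -70   # assumed: stronger than this counts as "nearby"

def conversation_groups(rssi, speaking):
    """rssi: {(a, b): dBm} for user pairs; speaking: {user: fraction of time talking}."""
    users = set(speaking)
    # Connected components of the "nearby" graph are candidate groups.
    neighbours = {u: set() for u in users}
    for (a, b), value in rssi.items():
        if value > RSSI_NEAR_DBM:
            neighbours[a].add(b)
            neighbours[b].add(a)
    groups, seen = [], set()
    for u in users:
        if u in seen:
            continue
        stack, comp = [u], set()
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(neighbours[v] - comp)
        seen |= comp
        # Keep a component only if its members talk and no one monopolizes the
        # floor (a crude stand-in for turn-taking analysis).
        talk = [speaking[m] for m in comp]
        if len(comp) > 1 and sum(talk) > 0.2 and max(talk) < 0.9:
            groups.append(comp)
    return groups

print(conversation_groups(
    {("ann", "bob"): -60, ("ann", "carl"): -85, ("bob", "carl"): -88},
    {"ann": 0.4, "bob": 0.35, "carl": 0.05}))   # -> [{'ann', 'bob'}]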

Smart Furoshiki: A Context Sensitive Cloth for Supporting Everyday Activities Part 5: Emerging Interactive Technologies / Ohsawa, Ryo / Suzuki, Kei / Imaeda, Takuya / Iwai, Masayuki / Takashio, Kazunori / Tokuda, Hideyuki HCI International 2007: 12th International Conference on Human-Computer Interaction, Part II: Interaction Platforms and Techniques 2007-07-22 v.2 p.1193-1199
Keywords: Furoshiki; Smart Cloth; RFID; Context Awareness
Link to Digital Content at Springer
Summary: This paper introduces a novel system for supporting everyday activities. Recent research has proposed embedding computers and sensors in user environments so as to provide assistance in certain scenarios [1]. However, it is difficult for users to build such environments themselves. Our goal is to develop a technology that enables novice users to create such environments easily. In order to achieve this goal, we have developed a sensorized cloth called "Smart Furoshiki."

u-Texture: Self-Organizable Universal Panels for Creating Smart Surroundings Systems / Kohtake, Naohiko / Ohsawa, Ryo / Yonezawa, Takuro / Matsukura, Yuki / Iwai, Masayuki / Takashio, Kazunori / Tokuda, Hideyuki Proceedings of the 2005 International Conference on Ubiquitous Computing 2005-09-11 p.19-36
Link to Digital Content at Springer
Summary: This paper introduces a novel way to allow non-expert users to create smart surroundings. Non-smart everyday objects such as furniture and appliances found in homes and offices can be converted into smart ones by attaching computers, sensors, and devices. In our approach, the non-smart components that form such objects are made smart in advance. As a first prototype, we have developed u-Texture, a self-organizable universal panel that works as a building block. A u-Texture can change its own behavior autonomously by recognizing its location, its inclination, and the surrounding environment when the panels are physically assembled. We have demonstrated several applications to confirm that u-Textures allow smart surroundings to be created easily without expert users.

u-Photo: Interacting with Pervasive Services Using Digital Still Images Handheld Devices / Suzuki, Genta / Aoki, Shun / Iwamoto, Takeshi / Maruyama, Daisuke / Koda, Takuya / Kohtake, Naohiko / Takashio, Kazunori / Tokuda, Hideyuki Proceedings of Pervasive 2005: International Conference on Pervasive Computing 2005-05-08 p.190-207
Link to Digital Content at Springer
Summary: This paper presents u-Photo, an interactive digital still image that includes information about the pervasive services associated with networked appliances and sensors in a pervasive computing environment. U-Photo Tools can generate a u-Photo and provide methods for discovering contextual information about these pervasive services. Users can easily find this information through the metaphor of "taking a photograph": they use a u-Photo by clicking on a physical entity in the digital still image. In addition, u-Photo makes managing this information more efficient because the still image embeds it visually. Using u-Photo and U-Photo Tools, we conducted various demonstrations and performed usability tests. The results of these tests show that U-Photo Tools are easy to learn. We also show that the time expert u-Photo users take to find an object in a pile of u-Photos is shorter than the time it takes to find the object in a pile of text-based descriptions.
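
One simple way to realize the click-to-service lookup described above, sketched here as an assumption rather than the actual u-Photo format, is to store a bounding box and service endpoint per physical entity alongside the image and resolve clicks against those boxes. All names and addresses below are illustrative.

from dataclasses import dataclass

@dataclass
class EmbeddedService:
    label: str
    bbox: tuple          # (left, top, right, bottom) in image pixel coordinates
    endpoint: str        # e.g. a control URL for the networked appliance

def service_at_click(services, x, y):
    """Return the service whose bounding box contains the clicked pixel, if any."""
    for s in services:
        left, top, right, bottom = s.bbox
        if left <= x <= right and top <= y <= bottom:
            return s
    return None

photo_services = [
    EmbeddedService("ceiling light", (40, 10, 180, 90), "http://192.168.0.21/toggle"),
    EmbeddedService("printer", (300, 200, 420, 330), "http://192.168.0.35/status"),
]
clicked = service_at_click(photo_services, 350, 250)
print(clicked.label if clicked else "no service here")   # -> printer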