HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,288,285
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server 2015-05-12 and again 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Ogawa_M* Results: 12 Sorted by: Date
ReFabricator: Integrating Everyday Objects for Digital Fabrication Interactivity Demos / Yamada, Suguru / Morishige, Hironao / Nozaki, Hiroki / Ogawa, Masaki / Yonezawa, Takuro / Tokuda, Hideyuki Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3804-3807
ACM Digital Library Link
Summary: Current digital fabrication relies heavily on 3D printing, which raises several concerns, such as printing cost (both financial and temporal) and the often homogeneous impression of plastic filament. To address this problem, we propose ReFabricator, a computational fabrication tool that integrates everyday objects into digital fabrication. ReFabrication is a fabrication concept combining the ideas of reuse and digital fabrication, which aims to build new functional shapes from ready-made products while effectively exploiting their behavior. As a system prototype, we have implemented a design tool that enables users to gather everyday objects and reassemble them into another functional shape, taking advantage of both analog and digital fabrication. In particular, the system calculates the optimal positional relationships among objects and generates joint pieces that bond the objects together to achieve a desired shape.
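The joint-generation step described in the abstract can be sketched as a minimal stand-in: given placed objects, compute the gap each connecting joint must span. The data model (name/position tuples) and the `joint_specs` helper are hypothetical illustrations, not ReFabricator's actual API.

```python
import math

def joint_specs(objects):
    """Given placed objects as (name, (x, y, z)) tuples, emit a connector
    between each consecutive pair with the length it must span - a toy
    stand-in for ReFabricator's joint-generation step."""
    joints = []
    for (name_a, pos_a), (name_b, pos_b) in zip(objects, objects[1:]):
        length = math.dist(pos_a, pos_b)  # straight-line gap between objects
        joints.append({"between": (name_a, name_b), "length": length})
    return joints

parts = [("bottle", (0.0, 0.0, 0.0)), ("can", (3.0, 4.0, 0.0))]
print(joint_specs(parts))  # one joint of length 5.0
```

A real system would also have to solve for joint geometry and collision-free placement; this sketch only captures the pairwise-distance bookkeeping.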

Food Detection and Recognition Using Convolutional Neural Network Posters 3 / Kagaya, Hokuto / Aizawa, Kiyoharu / Ogawa, Makoto Proceedings of the 2014 ACM International Conference on Multimedia 2014-11-03 p.1085-1088
ACM Digital Library Link
Summary: In this paper, we apply a convolutional neural network (CNN) to the tasks of detecting and recognizing food images. Because of the wide diversity of types of food, image recognition of food items is generally very difficult. However, deep learning has recently been shown to be a very powerful image recognition technique, and CNN is a state-of-the-art approach to deep learning. We applied CNN to the tasks of food detection and recognition through parameter optimization. We constructed a dataset of the most frequent food items in a publicly available food-logging system and used it to evaluate recognition performance. CNN showed significantly higher accuracy than traditional support-vector-machine-based methods with handcrafted features. In addition, the learned convolution kernels show that color dominates the feature extraction process. For food image detection, CNN also showed significantly higher accuracy than a conventional method.
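The convolution operation at the core of a CNN can be illustrated in a few lines of plain Python. This shows only the basic per-channel building block (valid-mode, single kernel), not the paper's trained network; the example image plane and kernel are made up.

```python
def conv2d(channel, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in most
    deep-learning libraries) of one image channel with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(channel), len(channel[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(channel[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 1x1 identity kernel applied to the red plane of an RGB image passes
# colour intensity straight through - kernels weighted toward one colour
# channel are how colour can dominate feature extraction.
red_plane = [[1, 2], [3, 4]]
print(conv2d(red_plane, [[1]]))  # [[1, 2], [3, 4]]
```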

Examination of human factors for wearable line-of-sight detection system Posters / Ogawa, Miho / Sampei, Kota / Cortes, Carlos / Miki, Norihisa Proceedings of the 2014 International Symposium on Wearable Computers 2014-09-13 v.1 p.139-140
ACM Digital Library Link
Summary: We proposed a wearable line-of-sight (LOS) detection system that uses micro-fabricated transparent optical sensors on eyeglasses. These sensors detect the light reflected from the eye, whose intensity is stronger from the white of the eye than from the pupil, and can thus deduce the position of the pupil. LOS detection was successfully demonstrated with the proposed system, but careful calibration was required for each user. Therefore, in the current study, we investigated the dominant factors affecting LOS detection accuracy. We found experimentally that the distance between the sensors on the eyeglasses and the pupil was a dominant factor. We therefore designed a frame that can be adjusted according to this distance, which enabled LOS detection for all subjects.
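The detection principle described above reduces to a simple decision: because the pupil reflects less light than the white of the eye, the sensor seeing the weakest reflection is the one nearest the pupil. The 1-D sensor array and the readings below are hypothetical, not the paper's hardware.

```python
def pupil_position(sensor_readings):
    """Index of the sensor nearest the pupil: reflection intensity from the
    white of the eye is higher than from the pupil, so the minimum-intensity
    reading marks the pupil (hypothetical 1-D sensor array)."""
    return min(range(len(sensor_readings)), key=lambda i: sensor_readings[i])

# Readings from four transparent sensors spread across the lens.
print(pupil_position([0.92, 0.88, 0.41, 0.90]))  # 2 -> gaze toward sensor 2
```

As the abstract notes, the readings depend strongly on sensor-to-pupil distance, which is why per-user calibration or an adjustable frame is needed in practice.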

SENSeTREAM: enhancing online live experience with sensor-federated video stream using animated two-dimensional code Mobile applications / Yonezawa, Takuro / Ogawa, Masaki / Kyono, Yutaro / Nozaki, Hiroki / Nakazawa, Jin / Nakamura, Osamu / Tokuda, Hideyuki Proceedings of the 2014 International Joint Conference on Pervasive and Ubiquitous Computing 2014-09-13 v.1 p.301-305
ACM Digital Library Link
Summary: We propose a novel technique that aggregates multiple sensor streams, generated by entirely different types of sensors, into a visually enhanced video stream. This paper describes the major features of SENSeTREAM and demonstrates how it enhances the user experience at an online live music event. Because SENSeTREAM is a video stream with sensor values encoded in a two-dimensional graphical code, it can transmit multiple sensor data streams while maintaining their synchronization. A SENSeTREAM can be transmitted via existing live streaming services and saved to existing video archive services. We have implemented a prototype SENSeTREAM generator and deployed it at an online live music event. Through the pilot study, we confirmed that SENSeTREAM works with popular streaming services and provides a new media experience for live performances. We also indicate future directions for visual stream aggregation and its applications.
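The core idea, carrying synchronized sensor values inside each video frame as a 2-D code, can be sketched with a toy binary matrix. The real system presumably renders a proper animated 2-D barcode; the JSON payload, matrix layout, and function names here are assumptions for illustration only.

```python
import json

def encode_frame(readings, width=16):
    """Pack a dict of sensor readings into a 0/1 matrix that could be
    rendered as a 2-D code overlaid on one video frame."""
    bits = "".join(f"{byte:08b}" for byte in json.dumps(readings).encode())
    bits += "0" * (-len(bits) % width)  # zero-pad to complete the last row
    return [[int(b) for b in bits[i:i + width]]
            for i in range(0, len(bits), width)]

def decode_frame(matrix):
    """Recover the readings from a decoded matrix (inverse of encode_frame)."""
    bits = "".join(str(b) for row in matrix for b in row)
    data = bytes(int(bits[i:i + 8], 2)
                 for i in range(0, len(bits), 8)).rstrip(b"\x00")
    return json.loads(data)

frame = encode_frame({"temp": 23, "beat": 128})
print(decode_frame(frame))  # round-trips to the original readings
```

Because the values ride inside the frame itself, sensor data and video stay synchronized through any streaming or archiving service that preserves the pixels, which is the property the abstract emphasizes.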

Frequency statistics of words used in Japanese food records of FoodLog CEA 2014 -- Smart Technology for Cooking and Eating Activities / Amano, Sosuke / Ogawa, Makoto / Aizawa, Kiyoharu Adjunct Proceedings of the 2014 International Joint Conference on Pervasive and Ubiquitous Computing 2014-09-13 v.2 p.547-552
ACM Digital Library Link
Summary: Recording food enables us to improve our dietary habits. Food records contain a wide variety of meal descriptions because there is no standard way to express meal names. In this study, we analyze Japanese food records from the viewpoint of word frequency. We show that a very small number of words is sufficient to describe the majority of the records.
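The frequency claim can be illustrated with `collections.Counter`: the share of word occurrences covered by the k most frequent words rises quickly with k. The toy meal records below are invented, not FoodLog data.

```python
from collections import Counter

def coverage(records, vocab_size):
    """Fraction of all word occurrences covered by the `vocab_size`
    most frequent words across the given records."""
    counts = Counter(word for record in records for word in record.split())
    total = sum(counts.values())
    top = sum(n for _, n in counts.most_common(vocab_size))
    return top / total

meals = ["rice miso soup", "rice curry", "rice miso soup egg", "bread coffee"]
print(f"{coverage(meals, 3):.2f}")  # top 3 of 7 distinct words cover ~64%
```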

EverCopter: continuous and adaptive over-the-air sensing with detachable wired flying objects Poster, demo, & video presentations / Kyono, Yutaro / Yonezawa, Takuro / Nozaki, Hiroki / Ogawa, Masaki / Ito, Tomotaka / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.299-302
ACM Digital Library Link
Summary: This paper proposes EverCopter, which provides continuous and adaptive over-the-air sensing with detachable wired flying objects. While a major advantage of sensing systems based on battery-operated MAVs is their wide sensing coverage, sensing time is limited by the small amount of energy they can carry. We propose dynamically rechargeable flying objects, called EverCopter, which achieve both long sensing time and wide sensing coverage through two characteristics. First, multiple EverCopters can be tied in a row by power supply cables. Since the root EverCopter in a row is connected to a DC power supply on the ground, each EverCopter can fly without a battery; sensing can continue indefinitely, as long as the power supply on the ground does not fail. Second, a leaf EverCopter can detach itself from the row to extend sensing coverage. While detached, an EverCopter runs on its own battery; when the remaining energy becomes low, it flies back to the row to recharge.
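The leaf EverCopter's detach-and-return behavior amounts to a small battery-driven state decision. The thresholds and the `next_action` function are illustrative assumptions, not values from the paper.

```python
def next_action(battery_pct, attached, low=20, full=95):
    """Toy controller for a leaf EverCopter: detach to roam once charged,
    return to the tethered row to recharge when the battery runs low
    (threshold values are assumptions for illustration)."""
    if attached:
        return "detach" if battery_pct >= full else "charge"
    return "return" if battery_pct <= low else "roam"

print(next_action(100, attached=True))   # detach
print(next_action(15, attached=False))   # return
```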

Reinforcing co-located communication practices through interactive public displays Workshop: human interfaces for civic and urban engagement / Ogawa, Masaki / Jurmu, Marko / Ito, Tomotaka / Yonezawa, Takuro / Nakazawa, Jin / Takashio, Kazunori / Tokuda, Hideyuki Adjunct Proceedings of the 2013 International Joint Conference on Pervasive and Ubiquitous Computing 2013-09-08 v.2 p.737-740
ACM Digital Library Link
Summary: In recent years, the steady emergence of digital communication, especially social media, has increased the "placelessness" of interpersonal communication practices, i.e., it has lessened the need to be co-located in order to communicate. When these communication practices carry over to co-located settings, they introduce redundancy and can even harm the co-located context, since the use of personal technologies tends to isolate users from their surroundings. In this position paper, we want to raise awareness of how interactive public displays could alleviate this redundancy and potential isolation. We present a model for reinforcing co-located communication and illustrate it through example use cases.

Waving to a touch interface: descriptive field study of a multipurpose multimodal public display Proxemic interaction / Jurmu, Marko / Ogawa, Masaki / Boring, Sebastian / Riekki, Jukka / Tokuda, Hideyuki Proceedings of the 2013 ACM International Symposium on Pervasive Displays 2013-06-04 p.7-12
ACM Digital Library Link
Summary: Multipurpose public displays are a promising platform, but more understanding is required of how users perceive and engage with them. In this paper, we present and discuss results and findings from a two-day descriptive field trial of a multipurpose public display prototype called FluiD. Our main objective was to uncover emerging interaction issues to inform future evaluations. During the field trial, held within a public research exhibition, people were able to interact freely with the prototype. Twenty-six persons filled out short questionnaires and gave free-form feedback. In addition, researchers in the vicinity of the display gathered observational data. Our main findings include the difficulties encountered with mid-air gesture commands, the lack of agency in the case of a larger interaction area, and the possibility of stepping out of the implicit-explicit continuum in the face of potential social conflicts.

Software evolution storylines New visualization and interaction techniques / Ogawa, Michael / Ma, Kwan-Liu Proceedings of the ACM Symposium on Software Visualization 2010-10-25 p.35-42
ACM Digital Library Link
Summary: This paper presents a technique for visualizing the interactions between developers in software project evolution. The goal is to produce a visualization that shows more detail than animated software histories, like code_swarm [15], but keeps the same focus on aesthetics and presentation. Our software evolution storylines technique draws inspiration from XKCD's "Movie Narrative Charts" and the aesthetic design of metro maps. We provide the algorithm and design choices, and examine the results of using the storylines technique. Our conclusion is that it is able to show more detail than animated software project history videos. However, it does not scale to the largest projects, such as Eclipse and Mozilla.

Usage of IT and Electronic Devices, and Its Structure, for Community-Dwelling Elderly People with Cognitive Problems and the Aging Population / Ogawa, Madoka / Inagaki, Hiroki / Gondo, Yasuyuki ICCHP'06: International Conference on Computers Helping People with Special Needs 2006-07-11 p.752-758
Link to Digital Content at Springer
Summary: Electrical household appliances and IT (information technology) are believed to increase the QOL and well-being of the people who use them. The benefits of electronic devices would be more evident for elderly people than for younger people, because such equipment is assumed to compensate for the decline of functional ability in the elderly. However, there has been only very limited research on the actual usage and influence of such devices in relation to generation and age. The purposes of the present study were to clarify the actual use of IT and electronic devices by community-dwelling elderly people, and to characterize individuals according to their familiarity with such devices.

New findings on pupil response in gazing to flashed divided displays Work-in-progress / Tano, Shun'ichi / Ogawa, Masaru / Iwata, Mitsuru / Hashiyama, Tomonori Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006-04-22 v.2 p.1403-1408
ACM Digital Library Link
Summary: We show that the pupil responds to the small area being gazed at when that area is flashed. The experiments revealed that the pupil reacts to the brightness of the small area being gazed at. Surprisingly, the resolution is 1.6 degrees of visual angle, equivalent to the spatial resolution of a 21 x 16 mm area at a distance of 60 cm. Finally, the feasibility of the new findings is demonstrated by applying them to character input.
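The visual-angle arithmetic behind the stated resolution can be checked directly: a visual angle θ at viewing distance d subtends a linear extent of roughly d·tan θ. The helper name below is ours, not the paper's.

```python
import math

def span_mm(angle_deg, distance_mm=600.0):
    """Linear extent subtended by a visual angle at a given viewing distance."""
    return distance_mm * math.tan(math.radians(angle_deg))

print(round(span_mm(1.6), 1))  # ~16.8 mm at 60 cm, consistent with the 16 mm side
```

The 21 mm side of the quoted area presumably corresponds to a slightly larger horizontal angle (about 2.0 degrees at 60 cm).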

Experimental Method for Construction of a Knowledge-Based System for Shipping Berth Scheduling II. Software Tools / Ogawa, Masaichiro / Saito, Norio / Tabe, Tsutomu / Sugimura, Shinji Proceedings of the Fifth International Conference on Human-Computer Interaction 1993-08-08 v.2 p.315-320
Summary: This paper gives preliminary information on the methodology needed to construct a knowledge-based system for shipping berth scheduling. For this purpose, a computer simulator for shipping berth scheduling was developed to elicit knowledge from humans. The results of scheduling by humans were analyzed by protocol analysis using a GOMS model, a useful tool for eliciting knowledge for shipping berth scheduling in a real-time interactive environment. The elicited knowledge was then transferred to a computer as a knowledge-based system for shipping berth scheduling.