HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,359,010
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Dublon_G*  Results: 8  Sorted by: Date
Cilllia: 3D Printed Micro-Pillar Structures for Surface Texture, Actuation and Sensing Designing New Materials and Manufacturing Techniques / Ou, Jifei / Dublon, Gershon / Cheng, Chin-Yi / Heibeck, Felix / Willis, Karl / Ishii, Hiroshi Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.5753-5764
ACM Digital Library Link
Summary: This work presents a method for 3D printing hair-like structures on both flat and curved surfaces. It allows a user to design and fabricate hair geometries smaller than 100 microns. We built a software platform to let users quickly define the hair angle, thickness, density, and height. The ability to fabricate customized hair-like structures not only expands the library of 3D-printable shapes, but also enables us to design passive actuators and swipe sensors. We also present several applications that show how the 3D-printed hair can be used for designing everyday interactive objects.
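As a rough illustration of the kind of parameter set such a design platform might expose, the sketch below lays out micro-pillars on a flat patch from a density value. All names, units, and the grid layout are assumptions for illustration, not the paper's actual API.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class HairParams:
    angle_deg: float       # tilt from the surface normal (assumed unit)
    thickness_um: float    # pillar diameter in micrometers
    density_per_mm2: float # pillars per square millimeter
    height_um: float       # pillar height in micrometers

def pillar_positions(params: HairParams, patch_mm: float = 1.0):
    """Lay pillars on a square grid over a flat patch of side patch_mm."""
    pitch_mm = (1.0 / params.density_per_mm2) ** 0.5  # spacing implied by density
    n = int(patch_mm / pitch_mm)                      # pillars per side
    return [(i * pitch_mm, j * pitch_mm) for i, j in product(range(n), repeat=2)]

params = HairParams(angle_deg=30, thickness_um=50, density_per_mm2=100, height_um=800)
print(len(pillar_positions(params)))  # 100 pillars on a 1 mm x 1 mm patch
```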

HearThere: Networked Sensory Prosthetics Through Auditory Augmented Reality / Russell, Spencer / Dublon, Gershon / Paradiso, Joseph A. Proceedings of the 2016 Augmented Human International Conference 2016-02-25 p.20
ACM Digital Library Link
Summary: In this paper we present a vision for scalable indoor and outdoor auditory augmented reality (AAR), as well as HearThere, a wearable device and infrastructure demonstrating the feasibility of that vision. HearThere preserves the spatial alignment between virtual audio sources and the user's environment, using head tracking and bone conduction headphones to achieve seamless mixing of real and virtual sounds. To scale between indoor, urban, and natural environments, our system supports multi-scale location tracking, using fine-grained (20-cm) Ultra-WideBand (UWB) radio tracking when in range of our infrastructure anchors and mobile GPS otherwise. In our tests, users were able to navigate through an AAR scene and pinpoint audio source locations down to 1m. We found that bone conduction is a viable technology for producing realistic spatial sound, and show that users' audio localization ability is considerably better in UWB coverage zones than with GPS alone. HearThere is a major step towards realizing our vision of networked sensory prosthetics, in which sensor networks serve as collective sensory extensions into the world around us. In our vision, AAR would be used to mix spatialized data sonification with distributed, livestreaming microphones. In this concept, HearThere promises a more expansive perceptual world, or umwelt, where sensor data becomes immediately attributable to extrinsic phenomena, externalized in the wearer's perception. We are motivated by two goals: first, to remedy a fractured state of attention caused by existing mobile and wearable technologies; and second, to bring the distant or often invisible processes underpinning a complex natural environment more directly into human consciousness.
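The multi-scale tracking described above, fine-grained UWB when anchors are in range and GPS otherwise, can be sketched as a simple source-selection rule. The timeout threshold and fix format here are assumptions for illustration, not HearThere's implementation.

```python
UWB_TIMEOUT_S = 1.0  # assumed: a UWB fix older than this means we left coverage

def select_position(uwb_fix, gps_fix, now):
    """Pick a position source given optional (timestamp, position) fixes.

    Prefers a recent UWB fix (~20 cm accuracy in anchor range) and
    falls back to GPS (meters-level accuracy) outside coverage.
    """
    if uwb_fix is not None and now - uwb_fix[0] < UWB_TIMEOUT_S:
        return ("uwb", uwb_fix[1])
    if gps_fix is not None:
        return ("gps", gps_fix[1])
    return ("none", None)

# Usage: a fresh UWB fix wins over a concurrent GPS fix
print(select_position((10.0, (1.2, 3.4, 0.9)), (9.5, (1.0, 3.0, 1.0)), 10.4))
# → ('uwb', (1.2, 3.4, 0.9))
```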

ListenTree: Audio-Haptic Display In The Natural Environment Interactivity / Portocarrero, Edwina / Dublon, Gershon / Paradiso, Joseph / Bove, V. Michael, Jr. Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.395-398
ACM Digital Library Link
Summary: In this paper, we present ListenTree, an audio-haptic display embedded in the natural environment. A visitor to our installation notices a faint sound appearing to emerge from a tree, and might feel a slight vibration under their feet as they approach. By resting their head against the tree, they are able to hear sound through bone conduction. To create this effect, an audio exciter transducer is weatherproofed and attached to the tree trunk underground, transforming the tree into a living speaker that channels audio through its branches. Any source of sound can be played through the tree, including live audio or pre-recorded tracks. For example, we used the ListenTree to display live streaming sound from an outdoor ecological monitoring sensor network, bringing an urban audience into contact with a faraway wetland. Our intervention is motivated by a need for forms of display that fade into the background, inviting attention rather than requiring it. ListenTree points to a future where digital information might become a seamless part of the physical world.

Posters NIME 2014: New Interfaces for Musical Expression 2014-06-30 p.26
sched.co/RIsCh5
A Gesture Detection with Guitar Pickup and Earphones
	+ Suh, Sangwon
	+ Lee, Jeong-seob
	+ Yeo, Woon Seung
A Max/MSP Approach for Incorporating Digital Music via Laptops in Live Performances of Music Bands
	+ Amo, Yehiel
	+ Zissu, Gil
	+ Eloul, Shaltiel
	+ Shlomi, Eran
	+ Schukin, Dima
	+ Kalifa, Almog
A Real Time Common Chord Progression Guide on the Smartphone for Jamming Pop Song on the Music Keyboard
	+ Lui, Simon
An Exploration of Peg Solitaire as a Compositional Tool
	+ Keatch, Kirsty
Auraglyph: Handwritten Computer Music Composition and Design
	+ Salazar, Spencer
	+ Wang, Ge
Body As Instrument: Performing with Gestural Interfaces
	+ Mainsbridge, Mary
	+ Beilharz, Kirsty
Circle Squared and Circle Keys -- Performing on and with an unstable live algorithm for the Disklavier
	+ Dahlstedt, Palle
Composing Embodied Sonic Play Experiences: Towards Acoustic Feedback Ecology
	+ van Troyer, Akito
Design & Evaluation of an Accessible Hybrid Violin Platform
	+ Overholt, Dan
	+ Gelineck, Steven
Dynamical Interactions with Electronic Instruments
	+ Mudd, Tom
	+ Dalton, Nick
	+ Holland, Simon
	+ Mulholland, Paul
eMersion | Sensor-controlled Electronic Music Modules & Digital Data Workstation
	+ Udell, Chet
	+ Sain, James Paul
FingerSynth: Wearable Transducers for Exploring the Environment and Playing Music Everywhere
	+ Dublon, Gershon
	+ Paradiso, Joseph A.
Hand and Finger Motion-Controlled Audio Mixing Interface
	+ Ratcliffe, Jarrod
How to Make Embedded Acoustic Instruments
	+ Berdahl, Edgar
Interactive Parallax Scrolling Score Interface for Composed Networked Improvisation
	+ Canning, Rob
Mobile Device Percussion Parade
	+ Snyder, Jeff
	+ Sarwate, Avneesh
	+ Chen, Carolyn
	+ Fishman, Noah
	+ Collins, Quinn
	+ Ergun, Cenk
	+ Mulshine, Michael
Musical Interface to Audiovisual Corpora of Arbitrary Instruments
	+ Neupert, Max
	+ Goßmann, Joachim
New Open-Source Interfaces for Group Based Participatory Performance of Live Electronic Music
	+ Barraclough, Timothy J
	+ Murphy, Jim
	+ Kapur, Ajay
Orphion: A gestural multi-touch instrument for the iPad
	+ Trump, Sebastian
	+ Bullock, Jamie
Pd-L2Ork Raspberry Pi Toolkit as a Comprehensive Arduino Alternative in K-12 and Production Scenarios
	+ Bukvic, Ivica
PiaF: A Tool for Augmented Piano Performance Using Gesture Variation Following
	+ Van Zandt-Escobar, Alejandro
	+ Caramiaux, Baptiste
	+ Tanaka, Atau
Pitch Canvas: Touchscreen Based Mobile Music Instrument
	+ Strylowski, Bradley
	+ Allison, Jesse
Reappropriating Museum Collections: Performing Geology Specimens and Meteorology Data as New Instruments for Musical Expression
	+ Bowers, John
	+ Shaw, Tim
Rub Synth: A Study of Implementing Intentional Physical Difficulty Into Touch Screen Music Controllers
	+ Sarier, Ozan
Sound Analyser: A Plug-in for Real-Time Audio Analysis in Live Performances and Installations
	+ Stark, Adam
Tangle: a Flexible Framework for Performance with Advanced Robotic Musical Instruments
	+ Mathews, Paul
	+ Morris, Ness
	+ Murphy, Jim
	+ Kapur, Ajay
	+ Carnegie, Dale
The Politics of Laptop Ensembles
	+ Knotts, Shelly
	+ Collins, Nick

Patchwerk: Multi-User Network Control of a Massive Modular Synthesizer Posters / Mayton, Brian / Dublon, Gershon / Joliat, Nicholas / Paradiso, Joseph A. NIME 2012: New Interfaces for Musical Expression 2012-05-21 p.293
Keywords: Modular synthesizer, HTML5, tangible interface, collaborative musical instrument
www.eecs.umich.edu/nime2012/Proceedings/papers/293_Final_Manuscript.pdf
Summary: We present Patchwerk, a networked synthesizer module with tightly coupled web browser and tangible interfaces. Patchwerk connects to a pre-existing modular synthesizer using the emerging cross-platform HTML5 WebSocket standard to enable low-latency, high-bandwidth, concurrent control of analog signals by multiple users. Online users control physical outputs on a custom-designed cabinet that reflects their activity through a combination of motorized knobs and LEDs, and streams the resultant audio. In a typical installation, a composer creates a complex physical patch on the modular synth that exposes a set of analog and digital parameters (knobs, buttons, toggles, and triggers) to the web-enabled cabinet. Both physically present and online audiences can control those parameters, simultaneously seeing and hearing the results of each other's actions. By enabling collaborative interaction with a massive analog synthesizer, Patchwerk brings a broad audience closer to a rare and historically important instrument. Patchwerk is available online at synth.media.mit.edu.
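The WebSocket control path described above amounts to applying small control messages from many clients to shared synth state. The sketch below shows one plausible shape for such a handler; the JSON schema ("knob"/"value") and the normalized 0-1 range are assumptions, not Patchwerk's actual protocol.

```python
import json

knobs = {"cutoff": 0.5, "resonance": 0.2}  # shared state of the exposed parameters

def handle_message(raw: str) -> dict:
    """Apply one JSON control message and return the updated knob state."""
    msg = json.loads(raw)
    name, value = msg["knob"], float(msg["value"])
    if name in knobs:
        knobs[name] = min(1.0, max(0.0, value))  # clamp to the knob's range
    return dict(knobs)

state = handle_message('{"knob": "cutoff", "value": 1.7}')
print(state["cutoff"])  # out-of-range request is clamped to 1.0
```

In a real deployment the same handler would run once per incoming WebSocket frame, with the resulting state echoed to all connected clients and mirrored on the motorized knobs.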

Tongueduino: hackable, high-bandwidth sensory augmentation Video presentations / Dublon, Gershon / Paradiso, Joseph A. Extended Abstracts of ACM CHI'12 Conference on Human Factors in Computing Systems 2012-05-05 v.2 p.1453-1454
ACM Digital Library Link
Summary: The tongue is known to have an extremely dense sensing resolution, as well as an extraordinary degree of neuroplasticity, the ability to adapt to and internalize new input. Research has shown that electro-tactile tongue displays paired with cameras can be used as vision prosthetics for the blind or visually impaired; users quickly learn to read and navigate through natural environments, and many describe the signals as an innate sense. However, existing displays are expensive and difficult to adapt. Tongueduino is an inexpensive, vinyl-cut tongue display designed to interface with many types of sensors besides cameras. Connected to a magnetometer, for example, the system provides a user with an internal sense of direction, like a migratory bird. Piezo whiskers allow a user to sense orientation, wind, and the lightest touch. Through tongueduino, we hope to bring electro-tactile sensory substitution beyond the discourse of vision replacement, towards open-ended sensory augmentation that anyone can access.
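The magnetometer example above boils down to mapping a compass heading onto the display. A minimal sketch, assuming a hypothetical ring of electrodes (the count and layout are illustrative assumptions, not tongueduino's hardware):

```python
N_ELECTRODES = 12  # assumed ring of electrodes on the display

def heading_to_electrode(heading_deg: float) -> int:
    """Return the index of the electrode nearest the given compass heading."""
    step = 360 / N_ELECTRODES
    return round(heading_deg % 360 / step) % N_ELECTRODES

print(heading_to_electrode(0))   # 0: north maps to the first electrode
print(heading_to_electrode(90))  # 3: east maps a quarter of the way around
```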

Identifying people in camera networks using wearable accelerometers / Teixeira, Thiago / Jung, Deokwoo / Dublon, Gershon / Savvides, Andreas Proceedings of the 2nd International Conference on PErvasive Technologies Related to Assistive Environments 2009-07-09 p.20
Keywords: association problem, consistent labelling, unique identification
ACM Digital Library Link
Summary: We propose a system to identify people in a sensor network. The system fuses motion information measured by wearable accelerometer nodes with the motion traces of each person detected by a camera node. This allows people to be uniquely identified by the IDs of the accelerometer nodes they wear, while their positions are measured using the cameras. The system runs in real time with high precision and recall. A prototype implementation using iMote2s with camera boards and wearable TI EZ430 nodes with accelerometer sensorboards is also described.
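The core association step can be sketched as matching each camera track to the accelerometer trace it agrees with best. The greedy matching and plain dot-product similarity below are deliberate simplifications for illustration, not the paper's actual method.

```python
def score(a, b):
    """Similarity of two equal-length motion-magnitude traces (dot product)."""
    return sum(x * y for x, y in zip(a, b))

def associate(accel_traces: dict, camera_tracks: dict) -> dict:
    """Greedily assign each camera track the best-correlated node ID."""
    assignment, used = {}, set()
    for track_id, track in camera_tracks.items():
        best = max((nid for nid in accel_traces if nid not in used),
                   key=lambda nid: score(accel_traces[nid], track))
        assignment[track_id] = best
        used.add(best)
    return assignment

accel = {"node1": [0, 1, 1, 0], "node2": [1, 0, 0, 1]}
tracks = {"trackA": [0, 1, 1, 0], "trackB": [1, 0, 0, 1]}
print(associate(accel, tracks))  # {'trackA': 'node1', 'trackB': 'node2'}
```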

Methods of 3D Printing Micro-pillar Structures on Surfaces Demonstrations / Ou, Jifei / Cheng, Chin-Yi / Zhou, Liang / Dublon, Gershon / Ishii, Hiroshi Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.59-60
ACM Digital Library Link
Summary: This work presents a method of 3D printing hair-like structures on both flat and curved surfaces. It allows a user to design and fabricate hair geometries smaller than 100 microns. We built a software platform to let users quickly define a hair's angle, thickness, density, and height. The ability to fabricate customized hair-like structures expands the library of 3D-printable shapes. We then present several applications to show how the 3D-printed hair can be used for designing toy objects.