HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,228,366
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server 2015-05-12 and again 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Serafin_S* Results: 28 Sorted by: Date
Records: 1 to 25 of 28
From Ecological Sounding Artifacts Towards Sonic Artifact Ecologies alt.chi: See this, hear this, touch this, keep this / Erkut, Cumhur / Serafin, Stefania Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.560-570
ACM Digital Library Link
Summary: The discipline of sonic interaction design has been focused on the interaction between a single user and an artifact. This strongly limits one of the fundamental aspects of music as a social and interactive experience. In this paper we propose sonic artifact ecologies as a means to examine interactions between one or many users and one or many artifacts. Case studies from a recently run workshop on product sound design are examined.

Product Sound Design: Form, Function, and Experience / Erkut, Cumhur / Serafin, Stefania / Hoby, Michael / Sårde, Jonniy Proceedings of Audio Mostly 2015: A Conference on Interaction with Sound 2015-10-07 p.10
ACM Digital Library Link
Summary: Current interactive products, services, and environments are appraised by their sensory attributes, in addition to their form and function. Sound is an important factor in these multisensory product appraisals. Integrating this sound opportunity into the design and development of interactive products, which are fit for the real world yet constitute a strong brand identity, remains a challenge. We address this challenge by applying the research know-how of an academic institution and the business practices of a sound agency SME within the core R&D and production process of the third industrial partner. Our approach has clear application scenarios in, e.g., extended wireless headsets, car audio appliances, and portable entertainment devices. We describe the prototypes developed during the project life span, and the activities and outcomes of a half-day workshop designed to disseminate the project results.

Spatial Sound and Multimodal Interaction in Immersive Environments / Grani, Francesco / Overholt, Dan / Erkut, Cumhur / Gelineck, Steven / Triantafyllidis, Georgios / Nordahl, Rolf / Serafin, Stefania Proceedings of Audio Mostly 2015: A Conference on Interaction with Sound 2015-10-07 p.17
ACM Digital Library Link
Summary: Spatial sound and interactivity are key elements of investigation at the Sound And Music Computing master program at Aalborg University Copenhagen.
    We present a collection of research directions and recent results from work in these areas, with the focus on our multifaceted approaches to two primary problem areas: 1) creation of interactive spatial audio experiences for immersive virtual and augmented reality scenarios, and 2) production and mixing of spatial audio for cinema, music, and other artistic contexts. Several ongoing research projects are described, wherein the latest developments are discussed.
    These include elements in which we have provided sonic interaction in virtual environments, interactivity with volumetric sound sources using VBAP and Wave Field Synthesis (WFS), and binaural sound for virtual environments and spatial audio mixing. We show that the variety of approaches presented here is necessary in order to optimize interactivity with spatial audio for each particular type of task.
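The VBAP gain computation mentioned above can be sketched for the simplest two-loudspeaker, 2D case (a generic textbook formulation in Python, not the group's actual implementation):

```python
import math

def vbap_2d(source_az, spk1_az, spk2_az):
    """Pan a source between two loudspeakers using 2D VBAP.

    Angles are in degrees; returns power-normalized gains (g1, g2).
    Solves p = g1*l1 + g2*l2, where l1, l2 are the unit vectors
    pointing at the two speakers and p points at the virtual source.
    """
    def unit(az):
        r = math.radians(az)
        return (math.cos(r), math.sin(r))

    p = unit(source_az)
    l1, l2 = unit(spk1_az), unit(spk2_az)
    # Invert the 2x2 speaker matrix [l1 l2] and apply it to p.
    det = l1[0] * l2[1] - l2[0] * l1[1]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    # Normalize so g1^2 + g2^2 = 1 (constant perceived power).
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm
```

A source half-way between speakers at -45 and +45 degrees receives equal gains of 1/sqrt(2) on each channel; a source aligned with one speaker receives all of the signal.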

The StringPhone: a novel voice driven physically based synthesizer / Teglbjærg, David Stubbe / Andersen, Jesper S. / Serafin, Stefania Proceedings of Audio Mostly 2015: A Conference on Interaction with Sound 2015-10-07 p.31
ACM Digital Library Link
Summary: This paper describes the development of TheStringPhone, a physical modeling based polyphonic digital musical instrument that uses the human voice as input excitation. The core parts of the instrument include digital filters, waveguide sections and feedback delay networks for reverberation. We describe the components of the instrument and the results of an informal evaluation with different musicians.

The influence of step frequency on the range of perceptually natural visual walking speeds during walking-in-place and treadmill locomotion Perception / Nilsson, Niels Christian / Serafin, Stefania / Nordahl, Rolf Proceedings of the 2014 ACM Symposium on Virtual Reality Software and Technology 2014-11-11 p.187-190
ACM Digital Library Link
Summary: Walking-In-Place (WIP) techniques make relatively natural walking experiences within immersive virtual environments possible when the physical interaction space is limited in size. In order to facilitate such experiences it is necessary to establish a natural connection between steps in place and virtual walking speeds. This paper details a study investigating the effects of movement type (treadmill walking and WIP) and step frequency (1.4, 1.8 and 2.2 steps per second) on the range of perceptually natural visual walking speeds. The results suggest statistically significant main effects of both movement type and step frequency but no significant interaction between the two variables.
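The connection between steps in place and virtual walking speed can be illustrated with a deliberately simple linear gait model (hypothetical; the paper determines perceptually natural speeds empirically rather than from a formula, and the stride length here is an assumed constant):

```python
def wip_virtual_speed(step_frequency_hz, stride_length_m=0.75):
    """Map a walking-in-place step frequency to a virtual walking speed.

    Hypothetical linear gait model (not the paper's method): each step
    advances the viewpoint by one stride length, so v = f * stride.
    """
    return step_frequency_hz * stride_length_m

# The three step frequencies tested in the study:
for f in (1.4, 1.8, 2.2):
    print(f"{f} steps/s -> {wip_virtual_speed(f):.2f} m/s")
```

Under this toy model, each tested step frequency maps to a single speed; the study instead measures the range of visual speeds that feel natural at each frequency.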

The role of sound in the sensation of ownership of a pair of virtual wings in immersive VR / Sikström, Erik / de Götzen, Amalia / Serafin, Stefania Proceedings of Audio Mostly 2014: A Conference on Interaction with Sound 2014-10-01 p.24
ACM Digital Library Link
Summary: This paper describes an evaluation of the role of self-produced sounds in participants' sensation of ownership and control of virtual wings in an immersive virtual reality scenario where the participants were asked to complete an obstacle course flight while exposed to four different sound conditions. The experiment resulted in either no or very small differences between the experimental conditions.

Posters NIME 2014: New Interfaces for Musical Expression 2014-06-30 p.67
sched.co/1sEbd4Q
CHIMAERA -- the poly-magneto-phonic theremin -- an expressive touch-less hall-effect sensor array
	+ Portner, Hanspeter
Collaborative Live-Coding with an Immersive Instrument
	+ Wakefield, Graham
	+ Roberts, Charlie
	+ Wright, Matthew
	+ Wood, Timothy
	+ Yerkes, Karl
Composing for DMIs -- Entoa, a dedicated piece for Intonaspacio
	+ Mamedes, Clayton
	+ Rodrigues, Mailis
	+ Wanderley, Marcelo M.
	+ Manzolli, Jônatas
	+ Garcia, Denise H. L.
	+ Ferreira-Lopes, Paulo
Conducting collective instruments: A case study
	+ Comajuncosas, Josep
	+ Guaus, Enric
Conductive Music: Teaching Innovative Interface Design and Composition Techniques with Open-Source Hardware
	+ Bertelli, Enrico
	+ Robertson, Emily
Controlling Physically Based Virtual Musical Instruments Using The Gloves
	+ Serafin, Stefania
	+ Trento, Stefano
	+ Mitchell, Tom
	+ Grani, Francesco
	+ Madgwick, Seb
	+ Perner-Wilson, Hannah
Designing Mappings for the Sponge: Towards Spongistic Music
	+ Marier, Martin
Designing Sound for Recreation and Well-Being
	+ Andersson, Anders-Petter
	+ Cappelen, Birgitta
	+ Olofsson, Fredrik
Distributing Mobile Music Applications for Audience Participation Using Mobile Ad-hoc Network (MANET)
	+ Lee, Sang Won
	+ Essl, Georg
	+ Mao, Z. Morley
El-Lamellophone -- A Low-cost, DIY, Open Framework for Acoustic Lamellophone Based Hyperinstruments
	+ Trail, Shawn
Gesture and Embodied Metaphor in Spatial Music Performance Systems Design.
	+ Graham, Ricky
	+ Bridges, Brian
Improvasher: a real-time mashup system for live musical input
	+ Davies, Matthew
	+ Stark, Adam
	+ Goto, Masataka
	+ Gouyon, Fabien
In A State: Live Emotion Detection and Visualisation for Music Performance
	+ Klooster, Adinda van 't
	+ Collins, Nick
Musical composition by regressional mapping of physiological responses to acoustic features
	+ Wikström, Valtteri
Notation, mapping and composition for the Karlax
	+ Mays, Tom
	+ Faber, Francis
Polus: The Design and Development of a New, Mechanically Bowed String Instrument Ensemble
	+ Johnston, Blake
	+ Thrush, Henry Dengate
	+ Moleta, Tane
	+ Murphy, Jim
	+ Kapur, Ajay
Reunion2012: A novel interface for sound producing actions through the game of chess
	+ Bugge, Magnus
	+ Wilmers, Hans
	+ Tveit, Anders
	+ Thelle, Notto
	+ Johansen, Thom
	+ Sæther, Eskil Muan
Robot: Tune Yourself! Automatic Tuning for Musical Robotics
	+ Murphy, Jim
	+ Mathews, Paul
	+ Carnegie, Dale
	+ Kapur, Ajay
Sketch-Based Musical Composition and Performance
	+ Diao, Haojing
	+ Zhou, Yanchao
	+ Harte, Christopher Andrew
	+ Bryan-Kinns, Nick
Smartphone-based Music conducting
	+ Lim, Yang Kyu
	+ Yeo, Woon Seung
SOUND TOSSING Audio Devices in the Context of Street Art
	+ Gupfinger, Reinhard
	+ Kaltenbrunner, Martin
The Birl: An Electronic Wind Instrument Based on an Artificial Neural Network Parameter Mapping Structure
	+ Snyder, Jeff
	+ Ryan, Danny
The Manipuller II: Strings within a Force Sensing Ring
	+ Barenca, Adrian
	+ Corak, Milos
The Space Between Us. A live performance with musical score generated via emotional levels measured in EEG of one performer and an audience member
	+ Eaton, Joel
	+ Jin, Weiwei
	+ Miranda, Eduardo
Unsounding Objects: Audio Feature for Control of Sound Synthesis in a Digital Percussion Instrument
	+ Hattwick, Ian
	+ Beebe, Preston
	+ Hale, Zachary
	+ Wanderley, Marcelo
	+ Leroux, Philippe
	+ Marandola, Fabrice
Use of Body Motion to Enhance Traditional Musical Instruments
	+ Visi, Federico
	+ Schramm, Rodrigo
	+ Miranda, Eduardo
Visualizing Gestures in the Control of a Digital Musical Instrument
	+ Perrotin, Olivier
	+ d'Alessandro, Christophe

Design and evaluation of interactive musical fruit Wednesday short papers / Erkut, Cumhur / Serafin, Stefania / Fehr, Jonas / Figueira, Henrique M. R. Fernandes / Hansen, Theis B. / Kirwan, Nicholas J. / Zakarian, Mariam R. Proceedings of ACM IDC'14: Interaction Design and Children 2014-06-17 p.197-200
ACM Digital Library Link
Summary: In this paper we describe the design and evaluation of a novel, tangible user interface for interaction with sound, to be implemented in a museum setting. Our work-in-progress is part of a larger concept for an installation prioritizing a collaborative, explorative, multimodal experience. Focus has been centered on novice children, in order to accommodate all potential users of the museum, and to minimize the risk of excluding users based on skill or previous musical know-how. We have developed four instances of a multimodal device for interacting with sounds via a tangible interface, and called them Interactive Musical Fruits (IMFs). The IMF consists of an embedded processing system, which can detect its orientation. Qualitative testing with children has been performed, to better evaluate the current design state. Positive feedback from the test subjects upholds the validity and the potential of the IMF as an interface in a museum context. However, further research is required to improve the interactive and collaborative aspects of the device, as well as the aural and visual properties of the IMF.

SiMPE: 8th workshop on speech and sound in mobile and pervasive environments Workshops / Nanavati, Amit A. / Rajput, Nitendra / Srivastava, Saurabh / Erkut, Cumhur / Jylhä, Antti / Rudnicky, Alexander I. / Serafin, Stefania / Turunen, Markku Proceedings of 2013 Conference on Human-computer interaction with mobile devices and services 2013-08-27 2013-08-27 p.626-629
ACM Digital Library Link
Summary: The SiMPE workshop series started in 2006 with the goal of enabling speech processing on mobile and embedded devices. The SiMPE 2012 workshop extended the notion of audio to non-speech "Sounds" and thus the expansion became "Speech and Sound". SiMPE 2010 and 2011 brought together researchers from the speech and the HCI communities. Speech User interaction in cars was a focus area in 2009. Multimodality got more attention in SiMPE 2008. In SiMPE 2007, the focus was on developing regions.
    With SiMPE 2013, the 8th in the series, we continue to explore the area of speech along with sound. Akin to language processing and text-to-speech synthesis in the voice-driven interaction loop, sensors can track continuous human activities such as singing, walking, or shaking the mobile phone, and non-speech audio can facilitate continuous interaction. The technologies underlying speech processing and sound processing are quite different, and these communities have been working mostly independently of each other. And yet, for multimodal interactions on the mobile, it is perhaps natural to ask whether and how speech and sound can be mixed and used more effectively and naturally.

Mobile rhythmic interaction in a sonic tennis game Demos (2) / Baldan, Stefano / De Götzen, Amalia / Serafin, Stefania NIME 2013: New Interfaces for Musical Expression 2013-05-27 p.96
Keywords: Audio game, mobile devices, sonic interaction design, rhythmic interaction, motion-based
nime2013.kaist.ac.kr/program/papers/day2/demo2/288/288_Paper.pdf
Summary: This paper presents an audio-based tennis simulation game for mobile devices, which uses motion input and non-verbal audio feedback as exclusive means of interaction. Players have to listen carefully to the provided auditory cues, like racquet hits and ball bounces, rhythmically synchronizing their movements in order to keep the ball in play. The device can be swung freely and act as a full-fledged motion-based controller, as the game does not rely at all on visual feedback and the device display can thus be ignored. The game aims to be entertaining but also effective for educational purposes, such as ear training or improvement of the sense of timing, and enjoyable both by visually-impaired and sighted users.

Mobile rhythmic interaction in a sonic tennis game Interactivity: exploration / Baldan, Stefano / de Götzen, Amalia / Serafin, Stefania Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing Systems 2013-04-27 v.2 p.2903-2906
ACM Digital Library Link
Summary: This paper presents a game for mobile devices which simulates a tennis match between two players. It is an audio-based game, so the majority of information and feedback to the user is given through sound instead of being displayed on a screen. As users are not requested to keep their eyes on the display, the device can be used as a motion-based controller, exploiting its internal motion sensors to their full potential. The game aims to be useful for both entertainment and educational purposes, and enjoyable both by visually-impaired (the main target audience for audio-based games nowadays) and sighted users.

Rhythmic walking interactions with auditory feedback: an exploratory study / Jylhä, Antti / Serafin, Stefania / Erkut, Cumhur Proceedings of the 2012 Audio Mostly Conference: A Conference on Interaction with Sound 2012-09-26 p.68-75
ACM Digital Library Link
Summary: Walking is a natural rhythmic activity that has become of interest as a means of interacting with software systems such as computer games. Therefore, designing multimodal walking interactions calls for further examination. This exploratory study presents a system capable of different kinds of interactions based on varying the temporal characteristics of the output, using the sound of human walking as the input. The system either provides a direct synthesis of a walking sound based on the detected amplitude envelope of the user's footstep sounds, or provides a continuous synthetic walking sound as a stimulus for the walking human, either with a fixed tempo or a tempo adapting to the human gait. In a pilot experiment, the different interaction modes are studied with respect to their effect on the walking tempo and the experience of the subjects. The results tentatively outline different user profiles in interacting with such a system.
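The tempo-adapting interaction mode described above can be sketched as an inter-onset-interval tracker over detected footstep onsets (Python; the exponential smoothing scheme is an assumption, since the abstract does not specify how the tempo adapts):

```python
class AdaptiveStepTempo:
    """Track a walker's tempo from footstep onset times.

    Hypothetical smoothing scheme (not from the paper): an exponential
    moving average over inter-onset intervals, so the synthetic walking
    sound can gradually follow the human gait.
    """

    def __init__(self, initial_bpm=100.0, alpha=0.3):
        self.interval = 60.0 / initial_bpm  # seconds between steps
        self.alpha = alpha                  # smoothing factor in (0, 1]
        self.last_onset = None

    def on_step(self, t):
        """Register a footstep onset at time t (seconds); return current BPM."""
        if self.last_onset is not None:
            ioi = t - self.last_onset
            # Pull the tracked interval toward the newest observation.
            self.interval += self.alpha * (ioi - self.interval)
        self.last_onset = t
        return 60.0 / self.interval
```

Fed a steady stream of onsets 0.5 s apart, the tracker converges from its initial guess toward 120 steps per minute; a larger `alpha` adapts faster but follows timing noise more closely.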

Audio-Haptic Simulation of Walking on Virtual Ground Surfaces to Enhance Realism Supporting Experiences and Activities / Nilsson, Niels C. / Nordahl, Rolf / Turchet, Luca / Serafin, Stefania HAID 2012: International Workshop on Haptic and Audio Interaction Design 2012-08-23 p.61-70
Link to Digital Content at Springer
Summary: In this paper we describe two experiments whose goal is to investigate the role of physics-based auditory and haptic feedback provided at feet level to enhance realism in a virtual environment. To achieve this goal, we designed a multimodal virtual environment where subjects could walk on a platform overlooking a canyon. Subjects were asked to visit the environment wearing a head-mounted display and a custom-made pair of sandals enhanced with sensors and actuators. A 12-channel surround sound system delivered a soundscape which was consistent with the visual environment. In the first experiment, passive haptics was provided by having a physical wooden platform present in the laboratory. In the second experiment, no passive haptics was present. In both experiments, subjects reported having a more realistic experience when auditory and haptic feedback were present. However, the measured physiological data and post-experimental presence questionnaires did not show significant differences when audio-haptic feedback was provided.

Towards an open sound card: bare-bones FPGA board in context of PC-based digital audio: based on the AudioArduino open sound card system / Dimitrov, Smilen / Serafin, Stefania Proceedings of the 2011 Audio Mostly Conference: A Conference on Interaction with Sound 2011-09-07 p.47-54
ACM Digital Library Link
Summary: The architecture of a sound card can, in simple terms, be described as an electronic board containing a digital bus interface hardware, and analog-to-digital (A/D) and digital-to-analog (D/A) converters; then, a soundcard driver software on a personal computer's (PC) operating system (OS) can control the operation of the A/D and D/A converters on board the soundcard, through a particular bus interface of the PC -- acting as an intermediary for high-level audio software running in the PC's OS.
    This project provides open-source software for a do-it-yourself (DIY) prototype board based on a Field-Programmable Gate Array (FPGA) that interfaces to a PC through the USB bus -- and demonstrates full-duplex, mono 8-bit/44.1 kHz soundcard operation. Thus, the inclusion of FPGA technology in this paper -- along with previous work with discrete-part- and microcontroller-based designs -- completes an overview of architectures currently available for DIY implementations of soundcards, serving as a broad introductory tutorial to practical digital audio.

Audio Arduino -- an ALSA (Advanced Linux Sound Architecture) Audio Driver for FTDI-based Arduinos / Dimitrov, Smilen / Serafin, Stefania NIME 2011: New Interfaces for Musical Expression 2011-05-30 p.211-216
Keywords: Sound card, Arduino, audio, driver, ALSA, Linux
www.nime.org/proceedings/2011/nime2011_211.pdf
Summary: A contemporary PC user typically expects a sound card to be a piece of hardware that: can be manipulated by 'audio' software (most typically exemplified by 'media players'); and allows interfacing of the PC to audio reproduction and/or recording equipment. As such, a 'sound card' can be considered to be a system that encompasses design decisions on both hardware and software levels, which also demand a certain understanding of the architecture of the target PC operating system. This project outlines how an Arduino Duemilanove board (containing a USB interface chip manufactured by the Future Technology Devices International Ltd [FTDI] company) can be demonstrated to behave as a full-duplex, mono, 8-bit 44.1 kHz soundcard, through an implementation of: a PC audio driver for ALSA (Advanced Linux Sound Architecture); a matching program for the Arduino's ATmega microcontroller; and nothing more than headphones (and a couple of capacitors). The main contribution of this paper is to bring a holistic aspect to the discussion of soundcard implementation, by referring to the open-source driver, microcontroller code and test methods; and to outline a complete implementation of an open yet functional soundcard system.
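To make the "mono, 8-bit, 44.1 kHz" format concrete, here is a minimal generator for the kind of sample stream such a soundcard consumes (a sketch of the data format only; how bytes actually reach the FTDI chip is the ALSA driver's job, and this is not the project's code):

```python
import math

SAMPLE_RATE = 44100  # Hz, as in the AudioArduino soundcard

def sine_8bit(freq_hz, duration_s, amplitude=1.0):
    """Render a sine tone as unsigned 8-bit mono samples at 44.1 kHz.

    Each sample is one byte: the signal in [-1, 1] is mapped to the
    unsigned range [0, 255], with silence at the midpoint ~128.
    """
    n = int(SAMPLE_RATE * duration_s)
    samples = bytearray()
    for i in range(n):
        x = amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        samples.append(int(round(127.5 + 127.5 * x)))
    return bytes(samples)
```

A 10 ms burst at 440 Hz yields 441 bytes; at this rate, one second of full-duplex mono audio is 44,100 bytes in each direction, comfortably inside the FTDI chip's 2 Mbaud serial bandwidth.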

Influence of auditory and visual feedback for perceiving walking over bumps and holes in desktop VR Interactions / Turchet, Luca / Marchal, Maud / Lécuyer, Anatole / Nordahl, Rolf / Serafin, Stefania Proceedings of the 2010 ACM Symposium on Virtual Reality Software and Technology 2010-11-22 p.139-142
ACM Digital Library Link
Summary: In this paper, we present an experiment whose goal is to investigate the role of sound and vision in the recognition of different surface profiles in a walking scenario. Fifteen subjects participated in two within-subjects experiments where they were asked to interact with a desktop system simulating bumps, holes and flat surfaces by means of audio, visual and audio-visual cues. Results of the first experiment show that participants are able to successfully identify the surface profiles provided through the proposed audio-visual techniques. Results of a second experiment, in which conflicting audio-visual stimuli were presented, reveal that for some of the proposed visual effects the visual feedback is dominant over the auditory one, while for the others the role of dominance is inverted.

Conflicting Audio-haptic Feedback in Physically Based Simulation of Walking Sounds Walking and Navigation Interfaces / Turchet, Luca / Serafin, Stefania / Dimitrov, Smilen / Nordahl, Rolf HAID 2010: International Workshop on Haptic and Audio Interaction Design 2010-09-16 p.97-106
Link to Digital Content at Springer
Summary: We describe an audio-haptic experiment conducted using a system which simulates in real-time the auditory and haptic sensation of walking on different surfaces. The system is based on physical models that drive both the haptic and audio synthesizers, and a pair of shoes enhanced with sensors and actuators. The experiment was run to examine the ability of subjects to recognize the different surfaces with both coherent and incoherent audio-haptic stimuli. Results show that in this kind of task the auditory modality is dominant over the haptic one.

A Quantitative Evaluation of the Differences between Knobs and Sliders / Gelineck, Steven / Serafin, Stefania NIME 2009: New Interfaces for Musical Expression 2009-06-04 p.13-18
www.nime.org/proceedings/2009/nime2009_013.pdf

Sound design and perception in walking interactions Sonic Interaction Design / Visell, Y. / Fontana, F. / Giordano, B. L. / Nordahl, R. / Serafin, S. / Bresin, R. International Journal of Human-Computer Studies 2009 v.67 n.11 p.947-959
Keywords: Auditory display; Vibrotactile display; Interaction design; Walking interfaces
Link to Article at ScienceDirect
1. Introduction
1.1. Foot-ground interactions and their signatures
1.2. Overview
2. Human perception
2.1. Isolated impact sounds
2.2. Acoustic and multimodal walking events
3. Augmented ground surfaces as walking interfaces
3.1. Physical interaction design
3.2. Control design
3.3. Sound synthesis
3.3.1. Solid surfaces
3.3.2. Aggregate surfaces
3.4. Augmented ground surfaces developed to date
3.5. Example: Eco Tile
4. Affective footstep sounds
5. VR applications and presence studies
5.1. Auditory feedback and motion
6. Conclusions
Summary: This paper reviews the state of the art in the display and perception of walking generated sounds and tactile vibrations, and their current and potential future uses in interactive systems. As non-visual information sources that are closely linked to human activities in diverse environments, such signals are capable of communicating about the spaces we traverse and activities we encounter in familiar and intuitive ways. However, in order for them to be effectively employed in human-computer interfaces, significant knowledge is required in areas including the perception of acoustic signatures of walking, and the design, engineering, and evaluation of interfaces that utilize them. Much of this expertise has accumulated in recent years, although many questions remain to be explored. We highlight past work and current research directions in this multidisciplinary area of investigation, and point to potential future trends.

Developing Block-Movement, Physical-Model Based Objects for the Reactable / Dimitrov, Smilen / Alonso, Marcos / Serafin, Stefania NIME 2008: New Interfaces for Musical Expression 2008-06-05 p.211-214
www.nime.org/proceedings/2008/nime2008_211.pdf

Sonic interaction design: sound, information and experience Workshops / Rocchesso, Davide / Serafin, Stefania / Behrendt, Frauke / Bernardini, Nicola / Bresin, Roberto / Eckel, Gerhard / Franinovic, Karmen / Hermann, Thomas / Pauletto, Sandra / Susini, Patrick / Visell, Yon Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems 2008-04-05 v.2 p.3969-3972
ACM Digital Library Link
Summary: Sonic Interaction Design (SID) is an emerging field that is positioned at the intersection of auditory display, ubiquitous computing, interaction design, and interactive arts. SID can be used to describe practice and inquiry into any of various roles that sound may play in the interaction loop between users and artifacts, services, or environments, in applications that range from the critical functionality of an alarm, to the artistic significance of a musical creation. This field is devoted to the privileged role the auditory channel can assume in exploiting the convergence of computing, communication, and interactive technologies. An over-emphasis on visual displays has constrained the development of interactive systems that are capable of making more appropriate use of the auditory modality. Today the ubiquity of computing and communication resources allows us to think about sounds in a proactive way. This workshop puts a spotlight on such issues in the context of the emerging domain of SID.

PHYSMISM: A Control Interface for Creative Exploration of Physical Models / Bottcher, Niels / Gelineck, Steven / Serafin, Stefania NIME 2007: New Interfaces for Musical Expression 2007-06-06 p.31-36
www.nime.org/proceedings/2007/nime2007_031.pdf

Synthesis and control of everyday sound reconstructing Russolo's Intonarumori Paper Session 4: Instrument Design / Serafin, Stefania / de Götzen, Amalia / Böttcher, Niels / Gelineck, Steven NIME 2006: New Interfaces for Musical Expression 2006-06-04 p.240-245
www.nime.org/proceedings/2006/nime2006_240.pdf

A simple practical approach to a wireless data acquisition board Poster Session 2: Gesture Controlled Audio Systems / Dimitrov, Smilen / Serafin, Stefania NIME 2006: New Interfaces for Musical Expression 2006-06-04 p.184-187
www.nime.org/proceedings/2006/nime2006_184.pdf

Connecting strangers at a train station Papers and Report Sessions / Gregersen, Ole / Pellarin, Lars / Olsen, Jakob / Böttcher, Niels / Guglielmi, Michel / Serafin, Stefania NIME 2005: New Interfaces for Musical Expression 2005-05-26 p.152-155
www.nime.org/proceedings/2005/nime2005_152.pdf