| NEXUS: Collaborative Performance for the Masses, Handling Instrument Interface Distribution through the Web | | BIBAK | PDF | 1 | |
| Jesse Allison; Yemin Oh; Benjamin Taylor | |||
| Distributed performance systems present many challenges to the artist in
managing performance information, distributing and coordinating interfaces to
many users, and providing cross-platform support for a reasonable level of
interaction to the widest possible user base.
Now that many features of HTML5 are implemented, powerful browser-based interfaces can be utilized for distribution across a variety of static and mobile devices. The authors propose leveraging the power of a web application to handle distribution of user interfaces and passing interactions via OSC to and from realtime audio/video processing software. Interfaces developed in this fashion can reach potential performers by distributing a unique user interface to any device with a browser anywhere in the world. Keywords: NIME, distributed performance systems, Ruby on Rails, collaborative
performance, distributed instruments, distributed interface, HTML5, browser
based interface | |||
| MirrorFugue III: Conjuring the Recorded Pianist | | BIBAK | PDF | 2 | |
| Xiao Xiao; Anna Pereira; Hiroshi Ishii | |||
| The body channels rich layers of information when playing music, from
intricate manipulations of the instrument to vivid personifications of
expression. But when music is captured and replayed across distance and time,
the performer's body is too often trapped behind a small screen or absent
entirely.
This paper introduces MirrorFugue III, an interface to conjure the recorded performer by combining the moving keys of a player piano with life-sized projection of the pianist's hands and upper body. Inspired by reflections on a lacquered grand piano, our interface evokes the sense that the virtual pianist is playing the physically moving keys. Through MirrorFugue III, we explore the question of how to viscerally simulate a performer's presence to create immersive experiences. We discuss design choices, outline a space of usage scenarios and report reactions from users. Keywords: piano performance, musical expressivity, body language, recorded music,
player piano, augmented reality, embodiment | |||
| Expressive Control of Indirect Augmented Reality During Live Music Performances | | BIBAK | PDF | 3 | |
| Lode Hoste; Beat Signer | |||
| Nowadays many music artists rely on visualisations and light shows to
enhance and augment their live performances. However, the visualisation and
triggering of lights in popular music concerts is normally scripted in advance
and synchronised with the music, limiting the artist's freedom for
improvisation, expression and ad-hoc adaptation of their show. We argue that
these limitations can be overcome by combining emerging non-invasive tracking
technologies with an advanced gesture recognition engine.
We present a solution that uses explicit gestures and implicit dance moves to control the visual augmentation of a live music performance. We further illustrate how our framework overcomes limitations of existing gesture classification systems by providing a precise recognition solution based on a single gesture sample in combination with expert knowledge. The presented approach enables more dynamic and spontaneous performances and, in combination with indirect augmented reality, leads to a more intense interaction between artist and audience. Keywords: Expressive control, augmented reality, live music performance, 3D gesture
recognition, Kinect, declarative language | |||
| Muscular Interactions. Combining EMG and MMG sensing for musical practice | | BIBAK | PDF | 4 | |
| Marco Donnarumma; Baptiste Caramiaux; Atau Tanaka | |||
| We present the first combined use of the electromyogram (EMG) and
mechanomyogram (MMG), two biosignals that result from muscular activity, for
interactive music applications. We exploit differences between these two
signals, as reported in the biomedical literature, to create bi-modal
sonification and sound synthesis mappings that allow performers to distinguish
the two components in a single complex arm gesture. We study non-expert
players' ability to articulate the different modalities. Results show that
purposely designed gestures and mapping techniques enable novices to rapidly
learn to independently control the two biosignals. Keywords: NIME, sensorimotor system, EMG, MMG, biosignal, multimodal, mapping | |||
| Fluid Simulation as Full Body Audio-Visual Instrument | | BIBAK | PDF | 5 | |
| Andrew Johnston | |||
| This paper describes an audio-visual performance system based on real-time
fluid simulation. The aim is to provide a rich environment for works which blur
the boundaries between dance and instrumental performance -- and sound and
visuals -- while maintaining transparency for audiences and new performers. The
system uses infra-red motion tracking to allow performers to manipulate a
real-time fluid simulation, which in turn provides control data for
computer-generated audio and visuals. It also provides a control and
configuration system which allows the behaviour of the interactive system to be
changed over time, enabling the structure within which interactions take place
to be 'composed'. Keywords: performance, dance, fluid simulation, composition | |||
| Digiti Sonus: Advanced Interactive Fingerprint Sonification Using Visual Feature Analysis | | BIBAK | PDF | 6 | |
| Yoon Chung Han; Byeong-jun Han; Matthew Wright | |||
| This paper presents a framework that transforms fingerprint patterns into
audio. We describe Digiti Sonus, an interactive installation performing
fingerprint sonification and visualization, including novel techniques for
representing user-intended fingerprint expression as audio parameters. In order
to enable personalized sonification and broaden the timbre of the sound, the
installation employs sound synthesis based on various visual feature analyses
such as minutiae extraction, area, angle, and push pressure of fingerprints.
The sonification results are discussed and the diverse timbres of sound
retrieved from different fingerprints are compared. Keywords: Fingerprint, Fingerprint sonification, interactive sonification, sound
synthesis, biometric data | |||
| Filtering Motion Capture Data for Real-Time Applications | | BIBA | PDF | 7 | |
| Ståle A. Skogstad; Kristian Nymoen; Mats Hovin; Sverre Holm; Alexander Refsum Jensenius | |||
| In this paper we present some custom designed filters for real-time motion capture applications. Our target application is motion controllers, i.e. systems that interpret hand motion for musical interaction. In earlier research we found effective methods to design nearly optimal filters for realtime applications. However, to be able to design suitable filters for our target application, it is necessary to establish the typical frequency content of the motion capture data we want to filter. This in turn allows us to determine a reasonable cutoff frequency for the filters. We have therefore conducted an experiment in which we recorded the hand motion of 20 subjects. The frequency spectra of these data, together with a method similar to the residual analysis method, were then used to determine reasonable cutoff frequencies. Based on this experiment, we propose three cutoff frequencies for different scenarios and filtering needs: 5, 10 and 15 Hz, which correspond to heavy, medium and light filtering, respectively. Finally, we propose a range of real-time filters applicable to motion controllers, in particular low-pass filters and low-pass differentiators of degrees one and two, which in our experience are the most useful filters for our target application. | |||
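A minimal sketch of the kind of filtering proposed above (not the authors' own nearly optimal filter design): a causal low-pass filter and a low-pass differentiator at the 10 Hz "medium" cutoff. The 100 Hz capture rate and the Butterworth design are assumptions for illustration.

```python
# Illustrative sketch: causal low-pass filtering of motion-capture position
# data at the paper's proposed 10 Hz "medium" cutoff (5/10/15 Hz options).
# The 100 Hz sample rate is an assumed value for the capture system.
import numpy as np
from scipy.signal import butter, lfilter

FS = 100.0        # assumed motion-capture sample rate (Hz)
CUTOFF = 10.0     # "medium" filtering per the paper

def lowpass(position, cutoff=CUTOFF, fs=FS, order=2):
    """Causal Butterworth low-pass suitable for real-time use."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return lfilter(b, a, position)

def velocity(position, fs=FS):
    """Low-pass differentiate: filter the position, then take first differences."""
    smoothed = lowpass(position)
    return np.diff(smoothed, prepend=smoothed[0]) * fs

if __name__ == "__main__":
    t = np.arange(0, 2, 1 / FS)
    hand_x = np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)
    print(velocity(hand_x)[:5])
```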
| Musical Poi (mPoi) | | BIBAK | PDF | 8 | |
| Sangbong Nam; Jay Kim; Benjamin Martinson; Mara Helmuth | |||
| This paper describes the Musical Poi (mPoi), a unique sensor-based musical
instrument inspired by the ancient art of Jwibulnori and Poi spinning. The
trajectory of the circular motion drawn by the performer and the momentum of the
mPoi instrument are converted into energetic and vibrant sound, creating
a spiritual and meditative soundscape that opens up everyone's aura and
clears the mind. The mPoi project and its concepts will be introduced first,
followed by a discussion of its interaction with a performer. Keywords: mPoi, Musical Poi, Jwibulnori, Poi, sensor-based musical instrument | |||
| Portable Measurement and Mapping of Continuous Piano Gesture | | BIBAK | PDF | 9 | |
| Andrew McPherson | |||
| This paper presents a portable optical measurement system for capturing
continuous key motion on any piano. Very few concert venues have MIDI-enabled
pianos, and many performers depend on the versatile but discontinued Moog
PianoBar to provide MIDI from a conventional acoustic instrument. The scanner
hardware presented in this paper addresses the growing need for alternative
solutions while surpassing existing systems in the level of detail measured.
Continuous key position on both black and white keys is gathered at a 1 kHz sample
rate. Software extracts traditional and novel features of keyboard touch from
each note, which can be flexibly mapped to sound using MIDI or Open Sound
Control. RGB LEDs provide rich visual feedback to assist the performer in
interacting with more complex sound mapping arrangements. An application to
the magnetic resonator piano is presented: an electromagnetically augmented
acoustic grand piano performed using continuous key position
measurements. Keywords: Piano, keyboard, optical sensing, gesture sensing, visual feedback, mapping,
magnetic resonator piano | |||
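As a rough illustration of how continuous key-position samples can be turned into per-note features and forwarded for mapping, the sketch below derives an onset velocity from 1 kHz position data and sends it over OSC. The python-osc client, the /key/onset address and the thresholds are assumptions, not the scanner's actual software or namespace.

```python
# Hypothetical sketch: extract one keyboard-touch feature (onset velocity)
# from 1 kHz continuous key-position samples and forward it as OSC.
# python-osc and the /key/onset address are assumptions for illustration.
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

FS = 1000.0            # key position sample rate (Hz), per the paper
PRESS_THRESHOLD = 0.9  # normalized key depth considered "fully pressed"

client = SimpleUDPClient("127.0.0.1", 57120)  # assumed OSC destination

def onset_velocity(key_position):
    """Mean key speed (normalized depth per second) from first motion to full press."""
    moving = np.flatnonzero(key_position > 0.05)
    pressed = np.flatnonzero(key_position > PRESS_THRESHOLD)
    if moving.size == 0 or pressed.size == 0:
        return 0.0
    dt = (pressed[0] - moving[0]) / FS
    depth = key_position[pressed[0]] - key_position[moving[0]]
    return float(depth / dt) if dt > 0 else 0.0

def send_key_event(midi_note, key_position):
    # Forward the extracted feature for flexible mapping in external software.
    client.send_message("/key/onset", [int(midi_note), onset_velocity(key_position)])
```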
| Reactive Environment for Network Music Performance | | BIBA | PDF | 10 | |
| Dalia El-Shimy; Jeremy R. Cooperstock | |||
| For a number of years, musicians in different locations have been able to perform with one another over a network as though present on the same stage. However, rather than attempt to re-create an environment for Network Music Performance (NMP) that mimics co-present performance as closely as possible, we propose focusing on providing musicians with novel controls that can help increase the level of interaction between them. To this end, we have developed a reactive environment for distributed performance that provides participants with dynamic, real-time control over several aspects of their performance, enabling them to change volume levels and experience exaggerated stereo panning. In addition, our reactive environment reinforces a feeling of a "shared space" between musicians. Our system -- intended for use in more relaxed, informal settings, such as loose rehearsals and jam sessions, rather than live performances before an audience -- differs most notably from standard ventures into the design of novel musical interfaces and installations in its reliance on user-centric methodologies borrowed from the field of Human-Computer Interaction (HCI). Not only does this research enable us to closely examine the communicative aspects of performance, it also allows us to explore new interpretations of the network as a performance space. This paper describes the motivation and background behind our project, the work that has been undertaken towards its realization and the future steps that have yet to be explored. | |||
| Rouages: Revealing the Mechanisms of Digital Musical Instruments to the Audience | | BIBAK | PDF | 11 | |
| Florent Berthaut; Mark T. Marshall; Sriram Subramanian; Martin Hachet | |||
| Digital musical instruments bring new possibilities for musical performance.
They are also more complex for the audience to understand, due to the diversity
of their components and the magical aspect of the musicians' actions when
compared to acoustic instruments. This complexity results in a loss of liveness
and possibly a poor experience for the audience. Our approach, called Rouages,
is based on a mixed-reality display system and a 3D visualization application.
Rouages reveals the mechanisms of digital musical instruments by amplifying
musicians' gestures with virtual extensions of the sensors, by representing the
sound components with 3D shapes and specific behaviors and by showing the
impact of musicians' gestures on these components. In its current
implementation, it focuses on MIDI controllers as input devices. We believe
that Rouages opens up new perspectives for instrument makers and musicians to
improve audience experience with their digital musical instruments. Keywords: rouages; digital musical instruments; mappings; 3D interface; mixed-reality | |||
| Audience Experience in Sound Performance | | BIBAK | PDF | 12 | |
| Chi-Hsia Lai; Till Bovermann | |||
| This paper presents observations from investigating the audience experience of a
practice-based research project on live sound performance with electronics. In seeking
to understand the communication flow and the engagement between performer and
audience in this particular performance context, we designed an experiment that
involved the following steps: (a) performing WOSAWIP at a new media festival,
(b) conducting a qualitative research study with audience members and (c)
analyzing the data for new insights. Although this study is only at an initial
stage, we already found that the post-performance interviews with the audience
members are a valuable method for identifying instrument design and
performance considerations. Keywords: Audience Experience Study; Live Performance; Evaluation, Research Methods | |||
| SWARMED: Captive Portals, Mobile Devices, and Audience Participation in Multi-User Music Performance | | BIBAK | PDF | 13 | |
| Abram Hindle | |||
| Audience participation in computer music has long been limited by resources
such as sensor technology or the material goods necessary to share such an
instrument. A recent paradigm is to take advantage of the incredible popularity
of the smart-phone, a pocket sized computer, and other mobile devices, to
provide the audience an interface into a computer music instrument. In this
paper we discuss a method of sharing a computer music instrument's interface
with an audience to allow them to interact via their smartphones. We propose a
method that is relatively cross-platform and device-agnostic, yet still allows
for a rich user-interactive experience. By emulating a captive-portal or
hotspot we reduce the adoptability issues and configuration problems facing
performers and their audience. We share our experiences with this system, as
well as an implementation of the system itself. Keywords: WiFi, Smartphone, Audience Interaction, Adoption, Captive Portal,
Multi-User, Hotspot | |||
| Towards an Interface for Music Mixing based on Smart Tangibles and Multitouch | | BIBAK | PDF | 14 | |
| Steven Gelineck; Dan Overholt; Morten Büchert; Jesper Andersen | |||
| This paper presents the continuous work towards the development of an
interface for music mixing targeted towards expert sound technicians and
producers. The mixing interface uses a stage metaphor mapping scheme where
audio channels are represented as digital widgets on a 2D surface. These can be
controlled by multi-touch or by smart tangibles, which are tangible blocks with
embedded sensors. The smart tangibles developed for this interface are able to
sense how they are grasped by the user. The paper presents the design of the
mixing interface including the smart tangible as well as a preliminary user
study involving a hands-on focus group session where 5 different control
technologies are contrasted and discussed. Preliminary findings suggest that
smart tangibles were preferred, but that an optimal interface would include a
combination of touch, smart tangibles and an extra function control tangible
for extending the functionality of the smart tangibles. Finally, the interface
should incorporate both an edit and mix mode -- the latter displaying very
limited visual feedback in order to force users to focus their attention on
listening rather than on the interface. Keywords: music mixing, tangibles, smart objects, multi-touch, control surface,
graspables, physical-digital interface, tangible user interface, wireless
sensing, sketching in hardware | |||
| Adaptive mapping for improved pitch accuracy on touch user interfaces | | BIBAK | PDF | 15 | |
| Olivier Perrotin; Christophe d'Alessandro | |||
| Touch user interfaces such as touchpad or pen tablet are often used for
continuous pitch control in synthesis devices. Usually, pitch is set at the
contact point on the interface, thus introducing possible pitch inaccuracies at
the note onset. This paper proposes a new algorithm, based on an adaptive
attraction mapping, for improving initial pitch accuracy with touch user
interfaces with continuous control. At each new contact on the interface, the
algorithm adjusts the mapping to produce the most likely targeted note of the
scale in the vicinity of the contact point. Then, pitch remains continuously
adjustable as long as the contact is maintained, allowing for vibrato,
portamento and other subtle melodic control. The results of experiments
comparing the users' pitch accuracy with and without the help of the algorithm
show that such a correction enables playing accurately in tune from the moment of contact with
the interface, regardless of the musical background of the player. Therefore, the
dynamic mapping algorithm allows for a clean and accurate attack when playing
touch user interfaces for controlling continuous pitch instruments like voice
synthesizers. Keywords: Sound synthesis control, touch user interfaces, pen tablet, automatic
correction, accuracy, precision | |||
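A minimal sketch of an attraction-style mapping of the kind described above: the first contact is snapped to the nearest note of the scale, and subsequent motion stays fully continuous relative to that corrected origin. The position-to-pitch scaling and the scale itself are assumed values, not parameters from the paper.

```python
# Sketch of an adaptive attraction mapping: snap the initial contact to the
# nearest scale note, keep later motion continuous (vibrato, portamento).
# SEMITONES_PER_MM and the C major scale are assumptions for illustration.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # C major, MIDI note numbers
SEMITONES_PER_MM = 0.1                      # assumed tablet scaling

class AdaptivePitchMapper:
    def __init__(self):
        self.offset = 0.0   # correction applied at the last contact

    def on_contact(self, x_mm):
        """New touch: adjust the mapping so the output lands on a scale note."""
        raw_pitch = x_mm * SEMITONES_PER_MM
        target = min(SCALE, key=lambda n: abs(n - raw_pitch))
        self.offset = target - raw_pitch
        return float(target)

    def on_move(self, x_mm):
        """While contact is held, pitch follows position relative to the corrected origin."""
        return x_mm * SEMITONES_PER_MM + self.offset

mapper = AdaptivePitchMapper()
print(mapper.on_contact(636.0))   # raw pitch 63.6 snaps to 64
print(mapper.on_move(641.0))      # continuous motion around the corrected origin
```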
| LOLOL: Laugh Out Loud On Laptop | | BIBAK | PDF | 16 | |
| Jieun Oh; Ge Wang | |||
| Significant progress in the domains of speech- and singing-synthesis has
enhanced communicative potential of machines. To make computers more vocally
expressive, however, we need a deeper understanding of how nonlinguistic social
signals are patterned and perceived. In this paper, we focus on laughter
expressions: how a phrase of vocalized notes that we call "laughter" may be
modeled and performed to implicate nuanced meaning imbued in the acoustic
signal. In designing our model, we emphasize (1) using high-level descriptors
as control parameters, (2) enabling real-time performable laughter, and (3)
prioritizing expressiveness over realism. We present an interactive system
implemented in ChucK that allows users to systematically play with the musical
ingredients of laughter. A crowdsourced study on the perception of synthesized
laughter showed that our model is capable of generating a range of laughter
types, suggesting an exciting potential for expressive laughter synthesis. Keywords: laughter, vocalization, synthesis model, real-time controller, interface for
musical expression | |||
| Toward The Future Practice Room: Empowering Musical Pedagogy through Hyperinstruments | | BIBAK | PDF | 17 | |
| Jordan Hochenbaum; Ajay Kapur | |||
| Music education is a rich subject with many approaches and methodologies
that have developed over hundreds of years. More than ever, technology plays
important roles at many levels of a musician's practice. This paper begins to
explore some of the ways in which technology developed out of the NIME
community (specifically hyperinstruments) can inform a musician's daily
practice, through short and long term metrics tracking and data visualization. Keywords: Hyperinstruments, Pedagogy, Metrics, Ezither, Practice Room | |||
| The Web Browser As Synthesizer And Interface | | BIBAK | PDF | 18 | |
| Charles Roberts; Graham Wakefield; Matthew Wright | |||
| Our research examines the use and potential of native web technologies for
musical expression. We introduce two JavaScript libraries towards this end:
Gibberish.js, a heavily optimized audio DSP library, and Interface.js, a GUI
toolkit that works with mouse, touch and motion events. Together these
libraries provide a complete system for defining musical instruments that can
be used in both desktop and mobile web browsers. Interface.js also enables
control of remote synthesis applications via a server application that
translates the socket protocol used by web interfaces into both MIDI and OSC
messages. Keywords: mobile devices, javascript, browser-based NIMEs, web audio, websockets | |||
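A hypothetical sketch of the translation step described above, in which a JSON widget event arriving from a browser socket is re-emitted as an OSC message. The field names, the OSC address scheme and the use of python-osc are assumptions; the actual Interface.js server is a separate application.

```python
# Hypothetical sketch of translating a browser widget event into OSC.
# The JSON message format shown here is an assumption, not Interface.js's
# actual socket protocol.
import json
from pythonosc.udp_client import SimpleUDPClient

osc = SimpleUDPClient("127.0.0.1", 57120)  # e.g. SuperCollider's default port

def handle_socket_message(raw_text):
    """Translate one browser message, e.g. '{"address": "/slider1", "value": 0.42}'."""
    msg = json.loads(raw_text)
    osc.send_message(msg["address"], float(msg["value"]))

handle_socket_message('{"address": "/slider1", "value": 0.42}')
```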
| Rainboard and Musix: Building Dynamic Isomorphic Interfaces | | BIBAK | PDF | 19 | |
| Brett Park; David Gerhard | |||
| Musix (an iOS application) and Rainboard (a physical device) are two new
musical instruments built to overcome limitations of existing isomorphic
instruments. Musix was developed to allow experimentation with a wide variety
of different isomorphic layouts to assess the advantages and disadvantages of
each. The Rainboard consists of a hexagonal array of arcade buttons embedded
with RGB-LEDs, which are used to indicate characteristics of the isomorphism
currently in use on the Rainboard. The creation of these two
instruments/experimentation platforms allows for isomorphic layouts to be
explored in ways that are not possible with existing isomorphic instruments. Keywords: isomorphic, mobile application, hexagon, keyboard | |||
| Embedded Networking and Hardware-Accelerated Graphics with Satellite CCRMA | | BIBAK | PDF | 20 | |
| Edgar Berdahl; Spencer Salazar; Myles Borins | |||
| Satellite CCRMA is a platform for making embedded musical instruments and
embedded installations. The project aims to help prototypes live longer by
providing a complete prototyping platform in a single, small, and stand-alone
embedded form factor. A set of scripts makes it easier for artists and
beginning technical students to access powerful features, while advanced users
enjoy the flexibility of the open-source software and open-source hardware
platform.
This paper focuses primarily on networking capabilities of Satellite CCRMA and new software for enabling interactive control of the hardware-accelerated graphical output. In addition, new results are presented showing that the Satellite CCRMA distribution allows the lifespan of the flash memory to be greatly increased in comparison with other embedded Linux distributions. Consequently, we believe that embedded instrument and installation designers will prefer using Satellite CCRMA for enhanced long-term reliability. Keywords: Satellite CCRMA, embedded musical instruments, embedded installations,
Node.js, Interface.js, hardware-accelerated graphics, OpenGLES,
SimpleGraphicsOSC, union file system, write endurance | |||
| Digitartic: Bi-manual Gestural Control of Articulation in Performative Singing Synthesis | | BIBAK | PDF | 21 | |
| Lionel Feugère; Christophe d'Alessandro | |||
| Digitartic, a system for bi-manual gestural control of Vowel-Consonant-Vowel
performative singing synthesis is presented. This system is an extension of a
real-time gesture-controlled vowel singing instrument developed in the Max/MSP
language. In addition to pitch, vowels and voice strength controls, Digitartic
is designed for gestural control of articulation parameters, including various
places and manners of articulation. The phases of articulation between two
phonemes are continuously controlled and can be driven in real time without
noticeable delay, at any stage of the synthetic phoneme production. Thus, as in
natural singing, very accurate rhythmic patterns are produced and adapted while
playing with other musicians. The instrument features two (augmented) pen
tablets for controlling voice production: one deals with the glottal
source and vowels, the other with consonant/vowel articulation.
The results show very natural consonant and vowel synthesis. Virtual choral
practice confirms the effectiveness of Digitartic as an expressive musical
instrument. Keywords: singing voice synthesis, gestural control, syllabic synthesis, articulation,
formant synthesis | |||
| A New Wi-Fi based Platform for Wireless Sensor Data Collection | | BIBAK | PDF | 22 | |
| Jim Torresen; Yngve Hafting; Kristian Nymoen | |||
| A custom designed WLAN (Wireless Local Area Network) based sensor interface
is presented in this paper. It is aimed at wirelessly interfacing a large
variety of sensors to supplement built-in sensors in smart phones and media
players. The target application area is the collection of human motion and
condition data for use in musical applications. The interface is based on
commercially available units and allows for up to nine sensors. The benefit of
using WLAN-based communication is a high data rate with low latency. Our
experiments show that the average transmission time is less than 2 ms for a
single sensor. Further, it is operational for a whole day without battery
recharging. Keywords: wireless communication, sensor data collection, WLAN, Arduino | |||
| A Gesture Control Interface for a Wave Field Synthesis System | | BIBAK | PDF | 23 | |
| Wolfgang Fohl; Malte Nogalski | |||
| This paper presents the design and implementation of a gesture control
interface for a wave field synthesis system. The user's motion is tracked by an
IR-camera-based tracking system. The developed connecting software processes
the tracker data to modify the positions of the virtual sound sources of the
wave field synthesis system. Due to the modular design of the software, the
triggered actions of the gestures may easily be modified. Three elementary
gestures were designed and implemented: Select / deselect, circular movement
and radial movement. The gestures are easy to execute and allow robust
detection. The guidelines for gesture design and detection are presented, and
the user experiences are discussed. Keywords: Wave field synthesis, gesture control | |||
| Agile Interface Development using OSC Expressions and Process Migration | | BIBAK | PDF | 24 | |
| John MacCallum; Adrian Freed; David Wessel | |||
| This paper introduces "o.expr," an expression language for dynamic,
object- and agent-oriented computation of gesture signal processing workflows
using OSC bundles. The use of o.expr is shown for a range of gesture processing
tasks. Aspects of o.expr, including statelessness and homoiconicity, simplify
agile applications development and provide support for heterogeneous
computational networks. Keywords: Gesture Signal Processing, Open Sound Control, Functional Programming,
Homoiconicity, Process Migration | |||
| An Easily Removable, wireless Optical Sensing System (EROSS) for the Trumpet | | BIBAK | PDF | 25 | |
| Leonardo Jenkins; Wyatt Page; Shawn Trail; George Tzanetakis; Peter Driessen | |||
| This paper presents a minimally-invasive, wireless optical sensor system for
use with any conventional piston valve acoustic trumpet. It is designed to be
easy to install and remove by any trumpeter. Our goal is to offer the extended
control afforded by hyperinstruments without the hard-to-reverse or
irreversible invasive modifications that are typically used for adding digital
sensing capabilities. We utilize optical sensors to track the continuous
position displacement values of the three trumpet valves. These values are
transmitted wirelessly and can be used by an external controller. The hardware
has been designed to be reconfigurable by having the housing 3D printed so that
the dimensions can be adjusted for any particular trumpet model. The result is
a low cost, low power, easily replicable sensor solution that offers any
trumpeter the ability to augment their own existing trumpet without
compromising the instrument's structure or playing technique. The extended
digital control afforded by our system is interwoven with the natural playing
gestures of an acoustic trumpet. We believe that this seamless integration is
critical for enabling effective and musical human computer interaction. Keywords: hyperinstrument, trumpet, minimally-invasive, gesture sensing, wireless,
I²C | |||
| Multi Sensor Tracking for Live Sound Transformation | | BIBAK | PDF | 26 | |
| Anton Fuhrmann; Johannes Kretz; Peter Burwik | |||
| This paper demonstrates how to use multiple Kinect™ sensors to map a
performer's motion to music. Skeleton data streams from multiple sensors are
merged in order to compensate for occlusions of the performer. The skeleton
joint positions drive the performance via open sound control data. We discuss
how to register the different sensors to each other, how to smoothly merge
the resulting data streams, and how to map position data in a general framework
to the live electronics applied to a chamber music ensemble. Keywords: kinect, multi sensor, sensor fusion, open sound control, motion tracking,
parameter mapping, live electronics | |||
| Near-Field Optical Reflective Sensing for Bow Tracking | | BIBAK | PDF | 27 | |
| Laurel Pardue; Andrew McPherson | |||
| This paper explores the potential of near-field optical reflective sensing
for musical instrument gesture capture. Near-field optical sensors are
inexpensive, portable and nonintrusive, and their high spatial and temporal
resolution makes them ideal for tracking the finer motions of instrumental
performance. The paper discusses general optical sensor performance with
detailed investigations of three sensor models. An application to violin bow
position tracking using reflective sensors mounted on the stick is presented. Bow
tracking remains a difficult task, and many existing solutions are expensive,
bulky, or offer limited temporal resolution. Initial results indicate that bow
position and pressure can be derived from optical measurements of the
hair-string distance, and that similar techniques may be used to measure bow
tilt. Keywords: optical sensor reflectance, LED, photodiode, photo-transistor, violin, bow
tracking, gesture, near-field sensing | |||
| Live Coding The Mobile Music Instrument | | BIBAK | PDF | 28 | |
| Sang Won Lee; Georg Essl | |||
| We introduce a form of networked music performance where a performer plays a
mobile music instrument while it is being implemented on the fly by a live
coder. This setup poses a set of challenges in performing a musical instrument
which changes over time, and we suggest design guidelines such as making a
smooth transition, varying the adoption of change, and sharing information between
the two performers. A proof-of-concept instrument is implemented on a
mobile device using UrMus, applying the suggested guidelines. We hope that this
model will expand the scope of live coding to distributed interactive
systems, drawing on existing performance ideas from NIMEs. Keywords: live coding, network music, on-the-fly instrument, mobile music | |||
| WAAX: Web Audio API eXtension | | BIBAK | PDF | 29 | |
| Hongchan Choi; Jonathan Berger | |||
| The introduction of the Web Audio API in 2011 marked a significant advance
for web-based music systems by enabling real-time sound synthesis on web
browsers simply by writing JavaScript code. While this powerful functionality
has arrived, there is a yet-unaddressed need for an extension to the API to
fully reveal its potential. To meet this need, a JavaScript library dubbed WAAX
was created to facilitate music and audio programming based on the Web Audio API,
handling underlying tasks and adding useful features. In this paper, we
describe common issues in web audio programming, illustrate how WAAX can speed
up the development, and discuss future developments. Keywords: Web Audio API, Chrome, JavaScript, web-based music system, collaborative
music making, audience participation | |||
| SonNet: A Code Interface for Sonifying Computer Network Data | | BIBAK | PDF | 30 | |
| KatieAnna Wolf; Rebecca Fiebrink | |||
| As any computer user employs the Internet to accomplish everyday activities,
a flow of data packets moves across the network, forming patterns in
response to his or her actions. Artists and sound designers who are interested
in accessing that data to make music must currently possess low-level knowledge
of Internet protocols and spend significant effort with low-level networking
code. We have created SonNet, a new software tool that lowers these practical
barriers to experimenting and composing with network data. SonNet executes
packet-sniffing and network connection state analysis automatically, and it
includes an easy-to-use ChucK object that can be instantiated, customized, and
queried from a user's own code. In this paper, we present the design and
implementation of the SonNet system, and we discuss a pilot evaluation of the
system with computer music composers. We also discuss compositional
applications of SonNet and illustrate the use of the system in an example
composition. Keywords: Sonification, network data, compositional tools | |||
| A Self-Organizing Gesture Map for a Voice-Controlled Instrument Interface | | BIBAK | PDF | 31 | |
| Stefano Fasciani; Lonce Wyse | |||
| Mapping gestures to digital musical instrument parameters is not trivial
when the dimensionality of the sensor-captured data is high and the model
relating the gesture to sensor data is unknown. In these cases, a front-end
processing system for extracting gestural information embedded in the sensor
data is essential. In this paper we propose an unsupervised offline method that
learns how to reduce and map the gestural data to a generic instrument
parameter control space. We make an unconventional use of the Self-Organizing
Maps to obtain only a geometrical transformation of the gestural data, while
dimensionality reduction is handled separately. We introduce a novel training
procedure to overcome two main Self-Organizing Map limitations which otherwise
degrade the interface usability. For evaluation, we apply this method to our
existing Voice-Controlled Interface for musical instruments, obtaining appreciable
usability improvements. Keywords: Self-Organizing Maps, Gestural Controller, Multi Dimensional Control,
Unsupervised Gesture Mapping, Voice Control | |||
| Machine Learning of Musical Gestures | | BIBAK | PDF | 32 | |
| Baptiste Caramiaux; Atau Tanaka | |||
| We present an overview of machine learning (ML) techniques and their
application in interactive music and new digital instrument design. We first
provide the non-specialist reader an introduction to two ML tasks,
classification and regression, that are particularly relevant for gestural
interaction. We then present a review of the literature in current NIME
research that uses ML in musical gesture analysis and gestural sound control.
We describe the ways in which machine learning is useful for creating
expressive musical interaction, and in turn why live music performance presents
a pertinent and challenging use case for machine learning. Keywords: Machine Learning, Data mining, Musical Expression, Musical Gestures,
Analysis, Control, Gesture, Sound | |||
| KIB: Simplifying Gestural Instrument Creation Using Widgets | | BIBAK | PDF | 33 | |
| Edward Zhang; Rebecca Fiebrink | |||
| The Microsoft Kinect is a popular and versatile input device for musical
interfaces. However, using the Kinect for such interfaces requires not only
significant programming experience, but also the use of complex geometry or
machine learning techniques to translate joint positions into higher level
gestures. We created the Kinect Instrument Builder (KIB) to address these
difficulties by structuring gestural interfaces as combinations of gestural
widgets. KIB allows the user to design an instrument by configuring gestural
primitives, each with a set of simple but attractive visual feedback elements.
After designing an instrument on KIB's web interface, users can play the
instrument on KIB's performance interface, which displays visualizations and
transmits OSC messages to other applications for sound synthesis or further
remapping. Keywords: Kinect, gesture, widgets, OSC, mapping | |||
| Towards Mapping Timbre to Emotional Affect | | BIBAK | PDF | 34 | |
| Niklas Klügel; Georg Groh | |||
| Controlling the timbre generated by an audio synthesizer in a goal-oriented
way requires a profound understanding of the synthesizer's manifold structural
parameters. Especially shaping timbre expressively to communicate emotional affect
requires expertise. Therefore, novices in particular may not be able to
adequately control timbre in view of articulating the wealth of affects musically.
In this context, the focus of this paper is the development of a model that can
represent a relationship between timbre and an expected emotional affect. The
results of the evaluation of the presented model are encouraging and thus
support its use in steering or augmenting the control of the audio synthesis.
We explicitly envision this paper as a contribution to the field of Synthesis
by Analysis in the broader sense, albeit being potentially suitable to other
related domains. Keywords: Emotional affect, Timbre, Machine Learning, Deep Belief Networks, Analysis by
Synthesis | |||
| Cross-modal Sound Mapping Using Deep Learning | | BIBAK | PDF | 35 | |
| Ohad Fried; Rebecca Fiebrink | |||
| We present a method for automatic feature extraction and cross-modal mapping
using deep learning. Our system uses stacked autoencoders to learn a layered
feature representation of the data. Feature vectors from two (or more)
different domains are mapped to each other, effectively creating a cross-modal
mapping. Our system can either run fully unsupervised, or it can use high-level
labeling to fine-tune the mapping according to a user's needs. We show several
applications for our method, mapping sound to or from images or gestures. We
evaluate system performance both in standalone inference tasks and in
cross-modal mappings. Keywords: Deep learning, feature learning, mapping, gestural control | |||
| The Quarterstaff, a Gestural Sensor Instrument | | BIBAK | PDF | 36 | |
| Jan C. Schacher | |||
| This article describes the motivations and reflections that led to the
development of a gestural sensor instrument called the Quarterstaff. In an
iterative design and fabrication process, several versions of this interface
were built, tested and evaluated in performances. A detailed explanation of the
design choices concerning the shape but also the sensing capabilities of the
instrument illustrates the emphasis on establishing an 'enactive' instrumental
relationship. A musical practice for this type of instrument is shown by
discussing the methods used in the exploration of the gestural potential of the
interface and the strategies deployed for the development of mappings and
compositions. Finally, to gain more information about how this instrument
compares with similar designs, two dimension-space analyses are made that show
a clear relationship to instruments that precede the Quarterstaff. Keywords: Gestural sensor interface, instrument design, body-object relation,
composition and performance practice, dimension space analysis | |||
| A Corpus-based Method for Controlling Guitar Feedback | | BIBA | PDF | 37 | |
| Sam Ferguson; Andrew Johnston; Aengus Martin | |||
| The use of feedback created by electric guitars and amplifiers is problematic in musical settings. For example, it is difficult for a performer to accurately obtain specific pitch and loudness qualities. This is due to the complex relationship between these quantities and other variables such as the string being fretted and the positions and orientations of the guitar and amplifier. This research investigates corpus-based methods for controlling the level and pitch of the feedback produced by a guitar and amplifier. A guitar-amplifier feedback system was built in which the feedback is manipulated using (i) a simple automatic gain control system, and (ii) a band-pass filter placed in the signal path. A corpus of sounds was created by recording the sound produced for various combinations of the parameters controlling these two components. Each sound in the corpus was analysed so that the control parameter values required to obtain particular sound qualities can be recalled in the manner of concatenative sound synthesis. As a demonstration, a recorded musical target phrase is recreated on the feedback system. | |||
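A minimal sketch of the corpus-lookup idea, with invented numbers: each corpus unit stores the control settings (gain and band-pass centre frequency) that produced a measured feedback pitch and level, and a nearest-neighbour query recalls the settings for a target sound, in the manner of concatenative synthesis.

```python
# Sketch of corpus-based recall of feedback control parameters; all values
# here are hypothetical, not measurements from the paper.
import numpy as np

# columns: analysed pitch (MIDI), analysed level (dB)  |  gain, filter centre (Hz)
corpus_features = np.array([[52.0, -12.0], [55.0, -9.0], [59.0, -6.0], [64.0, -3.0]])
corpus_controls = np.array([[0.40, 110.0], [0.55, 196.0], [0.62, 247.0], [0.75, 330.0]])

def recall_controls(target_pitch, target_level):
    """Return (gain, centre_hz) of the corpus unit closest to the target sound."""
    target = np.array([target_pitch, target_level])
    distances = np.linalg.norm(corpus_features - target, axis=1)
    return tuple(corpus_controls[np.argmin(distances)])

print(recall_controls(56.0, -8.0))   # settings of the nearest analysed unit
```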
| MAGE 2.0: New Features and its Application in the Development of a Talking Guitar | | BIBAK | PDF | 38 | |
| Maria Astrinaki; Nicolas d'Alessandro; Loïc Reboursière; Alexis Moinet; Thierry Dutoit | |||
| This paper describes the recent progress in our approach to generate
performative and controllable speech. The goal of the performative HMM-based
speech and singing synthesis library, called Mage, is to have the ability to
generate natural sounding speech with arbitrary speaker's voice
characteristics, speaking styles and expressions and at the same time to have
accurate reactive user control over all the available production levels. Mage
allows the user to arbitrarily switch between voices, control speaking style or vocal
identity, manipulate voice characteristics or alter the targeted context
on-the-fly and also maintain the naturalness and intelligibility of the output.
To achieve these controls, it was essential to redesign and improve the initial
library. This paper focuses on the improvements of the architectural design,
the additional user controls and provides an overview of a prototype, where a
guitar is used to reactively control the generation of a synthetic voice in
various levels. Keywords: speech synthesis, augmented guitar, hexaphonic guitar | |||
| Further Finger Position and Pressure Sensing Techniques for Strings and Keyboard Instruments | | BIBAK | PDF | 39 | |
| Tobias Grosshauser; Gerhard Troester | |||
| Several new technologies to capture motion, gesture and forces for musical
instrument players' analyses have been developed in recent years. In research
and for augmented instruments, one parameter has been underrepresented so far:
finger position and pressure measurement, applied by the musician while playing
the musical instrument. In this paper we show a flexible linear-potentiometer
and force-sensitive-resistor (FSR) based solution for position, pressure and
force sensing between the contact point of the fingers and the musical
instrument. A flexible matrix printed circuit board (PCB) is fixed on a piano
key. We further introduce linear potentiometer based left hand finger position
sensing for string instruments, integrated into a violin and a guitar finger
board. Several calibration and measurement scenarios are shown. The violin
sensor was evaluated with 13 music students regarding playability and
robustness of the system. The main focus was the integration of the sensors into
these two traditional musical instruments as unobtrusively as possible to keep
natural haptic playing sensation. The musicians playing the violin in different
performance situations stated good playability and no differences in the haptic
sensation while playing. Based on interviews conducted after testing it in a
conventional keyboard, the piano sensor was also rated as quite unobtrusive, but it still evokes
a different haptic sensation. Keywords: Sensor, Piano, Violin, Guitar, Position, Pressure, Keyboard | |||
| Designing and Building Expressive Robotic Guitars | | BIBAK | PDF | 40 | |
| Jim Murphy; James McVay; Ajay Kapur; Dale Carnegie | |||
| This paper provides a history of robotic guitars and bass guitars as well as
a discussion of the design, construction, and evaluation of two new robotic
instruments. Throughout the paper, a focus is made on different techniques to
extend the expressivity of robotic guitars. Swivel and MechBass, two new
robots, are built and discussed. Construction techniques of likely interest to
other musical roboticists are included. These robots use a variety of
techniques, both new and inspired by prior work, to afford composers and
performers the ability to precisely control pitch and plucking parameters.
Both new robots are evaluated to test their precision, repeatability, and
speed. The paper closes with a discussion of the compositional and performative
implications of such levels of control, and how it might affect humans who wish to
interface with the systems. Keywords: musical robotics, kinetic sculpture, mechatronics | |||
| The Third Room: A 3D Virtual Music Framework | | BIBAK | PDF | 41 | |
| Colin Honigman; Andrew Walton; Ajay Kapur | |||
| This paper describes a new paradigm for music creation using 3D audio and
visual techniques. It describes the Third Room, which uses a Kinect to place
users in a virtual environment to interact with new instruments for musical
expression. Users can also interact with smart objects, including the Ember
(modified mbira digital interface) and the Fluid (a wireless six degrees of
freedom and touch controller). This project also includes new techniques for 3D
audio connected to a 3D virtual space using multi-channel speakers and
distributed robotic instruments. Keywords: Kinect Camera, Third Space, Interface, Virtual Reality, Natural Interaction,
Robotics, Arduino | |||
| Lantern Field: Exploring Participatory Design of a Communal, Spatially Responsive Installation | | BIBAK | PDF | 42 | |
| Brennon Bortz; Aki Ishida; Ivica Ico Bukvic; R. Benjamin Knapp | |||
| Lantern Field is a communal, site-specific installation that takes shape as
a spatially responsive audio-visual field. The public participates in the
creation of the installation, resulting in shared ownership of the work between
both the artists and participants. Furthermore, the installation takes new
shape in each realization, both to incorporate the constraints and affordances
of each specific site, as well as to address the lessons learned from the
previous iteration. This paper describes the development and execution of
Lantern Field over its most recent version, with an eye toward the next
iteration at the Smithsonian's Freer Gallery during the 2013 National Cherry
Blossom Festival in Washington, D.C. Keywords: Participatory creation, communal interaction, fields, interactive
installation, Japanese lanterns | |||
| Hybrid Musicianship -- Teaching Gestural Interaction with Traditional and Digital Instruments | | BIBAK | PDF | 43 | |
| Jan C. Schacher | |||
| This article documents a class that teaches gestural interaction and
juxtaposes traditional instrumental skills with digital musical instrument
concepts. In order to show the principles and reflections that informed the
choices made in developing this syllabus, fundamental elements of an
instrument-body relationship and the perceptual import of sensory-motor
integration are investigated. The methods used to let participants learn in
practical experimental settings are discussed, showing a way to conceptualise
and experience the entire work flow from instrumental sound to electronic
transformations by blending gestural interaction with digital musical
instrument techniques and traditional instrumental playing skills. The
technical interfaces and software that were deployed are explained, focusing on
the interactive potential offered by each solution. In an attempt to summarise
and evaluate the impact of this course, a number of insights relating to this
specific pedagogical situation are put forward. Finally, concrete examples of
interactive situations that were developed by the participants are shown in
order to demonstrate the validity of this approach. Keywords: gestural interaction, digital musical instruments, pedagogy, mapping,
enactive approach | |||
| Generating an Integrated Musical Expression with a Brain-Computer Interface | | BIBAK | PDF | 44 | |
| Takayuki Hamano; Tomasz Rutkowski; Hiroko Terasawa; Kazuo Okanoya; Kiyoshi Furukawa | |||
| Electroencephalography (EEG) has been used to generate music for over 40
years, but the most recent developments in brain-computer interfaces (BCI)
allow greater control and more flexible expression when using new musical
instruments via EEG. We developed a real-time musical performance system using
BCI technology and sonification techniques to generate chords with organically
fluctuating timbre. We aimed to emulate the expressivity of traditional
acoustic instruments by adding "non-coded" expressions that were not marked in
the score. The BCI part of the system classifies patterns of neural
activity while a performer imagines a chord. The sonification part of the
system captures non-stationary changes in the brain waves and reflects them in
the timbre by additive synthesis. In this paper, we discuss the conceptual
design, system development, and the performance of this instrument. Keywords: Brain-computer interface (BCI), qualitative and quantitative information,
classification, sonification | |||
| Computer Assisted Melo-rhythmic Generation of Traditional Chinese Music from Ink Brush Calligraphy | | BIBAK | PDF | 45 | |
| Will W. W. Tang; Stephen Chan; Grace Ngai; Hong-va Leong | |||
| CalliMusic is a system developed for users to generate traditional Chinese
music by writing Chinese ink brush calligraphy, turning the long-believed
strong linkage between the two art forms with rich histories into reality. In
addition to traditional calligraphy writing instruments (brush, ink and paper),
a camera is the only addition needed to convert the motion of the ink brush
into musical notes through a variety of mappings such as human-inspired,
statistical, and hybrid. The design of the system, including details of each
mapping and research issues encountered are discussed. A user study of system
performance suggests that the result is quite encouraging. The technique is,
obviously, applicable to other related art forms with a wide range of
applications. Keywords: Chinese Calligraphy, Chinese Music, Assisted Music Generation | |||
| PAMDI Music Box: Primarily Analogico-Mechanical, Digitally Iterated Music Box | | BIBAK | PDF | 46 | |
| Rebecca Kleinberger | |||
| PAMDI is an electromechanical music controller based on an expansion of the
common metal music box. Our system enables the augmentation of a music box by
adding different musical channels triggered and parameterized by natural
gestures during the "performance". All the channels are generated from the
original melody recorded once at the start.
We made a platform composed of a metallic structure supporting sensors that will be triggered by different natural and intentional gestures. The values we measure are processed by an Arduino system that sends the results via serial communication to a Max/MSP patch for signal treatment and modification. We explain how our embedded instrument aims to give the player a certain awareness of the mapping and of the potential musical freedom of the very specific -- and not so automatic -- instrument that is a music box. We also address how our design tackles the questions of mapping, ergonomics and expressiveness, and how we chose the controller modalities and the parameters to be sensed. Keywords: Tangible interface, musical controller, music box, mechanical and electronic
coupling, mapping. | |||
| Stompboxes: Kicking the Habit | | BIBAK | PDF | 47 | |
| Gregory Burlet; Marcelo M. Wanderley; Ichiro Fujinaga | |||
| Sensor-based gesture recognition is investigated as a possible solution to
the problem of managing an overwhelming number of audio effects in live guitar
performances. A realtime gesture recognition system, which automatically
toggles digital audio effects according to gestural information captured by an
accelerometer attached to the body of a guitar, is presented. To supplement the
several predefined gestures provided by the recognition system, personalized
gestures may be trained by the user. Upon successful recognition of a gesture,
the corresponding audio effects are applied to the guitar signal and visual
feedback is provided to the user. An evaluation of the system yielded 86%
accuracy for user-independent recognition and 99% accuracy for user-dependent
recognition, on average. Keywords: Augmented instrument, gesture recognition, accelerometer, pattern
recognition, performance practice | |||
| Enabling Multimodal Mobile Interfaces for Musical Performance | | BIBAK | PDF | 48 | |
| Charles Roberts; Angus Forbes; Tobias Höllerer | |||
| We present research that extends the scope of the mobile application
Control, a prototyping environment for defining multimodal interfaces that
control real-time artistic and musical performances. Control allows users to
rapidly create interfaces employing a variety of modalities, including: speech
recognition, computer vision, musical feature extraction, touchscreen widgets,
and inertial sensor data. Information from these modalities can be transmitted
wirelessly to remote applications. Interfaces are declared using JSON and can
be extended with JavaScript to add complex behaviors, including the concurrent
fusion of multimodal signals. By simplifying the creation of interfaces via
these simple markup files, Control allows musicians and artists to make novel
applications that use and combine both discrete and continuous data from the
wide range of sensors available on commodity mobile devices. Keywords: Music, mobile, multimodal, interaction | |||
| Towards Note-Level Prediction for Networked Music Performance | | BIBAK | PDF | 49 | |
| Reid Oda; Adam Finkelstein; Rebecca Fiebrink | |||
| The Internet allows musicians and other artists to collaborate remotely.
However, network latency presents a fundamental challenge for remote
collaborators who need to coordinate and respond to each other's performance in
real time. In this paper, we investigate the viability of predicting percussion
hits before they have occurred, so that information about the predicted drum
hit can be sent over a network, and the sound can be synthesized at a
receiver's location at approximately the same moment the hit occurs at the
sender's location. Such a system would allow two percussionists to play in
perfect synchrony despite the delays caused by computer networks. To
investigate the feasibility of such an approach, we record vibraphone mallet
strikes with a high-speed camera and track the mallet head position. We show
that 30 ms before the strike occurs, it is possible to predict strike time and
velocity with acceptable accuracy. Our method fits a second-order polynomial to
the data to produce a strike-time prediction that is within 10 ms of the actual
strike, and a velocity estimate that will enable the sound pressure level of
the synthesized strike to be accurate within 3 dB. Keywords: Networked performance, prediction, computer vision | |||
| Motion and Synchronization Analysis of Musical Ensembles with the Kinect | | BIBAK | PDF | 50 | |
| Aristotelis Hadjakos; Tobias Grosshauser; Werner Goebl | |||
| Music ensembles have to synchronize their performances with the highest
precision in order to achieve the desired musical results. For that purpose the
musicians do not only rely on their auditory perception but also perceive and
interpret the movements and gestures of their ensemble colleagues. In this paper
we present a method for motion analysis of musical ensembles based on head
tracking with a Kinect camera. We discuss first experimental results with a
violin duo performance and present ways of analyzing and visualizing the
recorded head motion data. Keywords: Kinect, Ensemble, Synchronization, Strings, Functional Data Analysis,
Cross-Correlogram | |||
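As an illustration of the kind of synchronization analysis mentioned above (not the authors' code), the sketch below estimates the lag between two players' head-motion signals with a cross-correlation; the 30 Hz rate matches a typical Kinect skeleton stream and is assumed here.

```python
# Illustrative sketch: estimate the lag between two players' vertical
# head-motion signals via normalized cross-correlation.
import numpy as np

FS = 30.0   # assumed Kinect skeleton frame rate (Hz)

def estimate_lag_seconds(head_a, head_b):
    """Positive result: head_a trails head_b (player A reacts after player B)."""
    a = (head_a - head_a.mean()) / head_a.std()
    b = (head_b - head_b.mean()) / head_b.std()
    corr = np.correlate(a, b, mode="full")
    lag_samples = np.argmax(corr) - (len(b) - 1)
    return lag_samples / FS

t = np.arange(0, 10, 1 / FS)
lead = np.sin(2 * np.pi * 0.5 * t)            # leader's head bobbing
follow = np.roll(lead, 3)                     # follower trails by 3 frames (100 ms)
print(estimate_lag_seconds(follow, lead))     # approx. +0.1 s: follower trails leader
```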
| Towards Gestural Sonic Affordances | | BIBAK | PDF | 51 | |
| Alessandro Altavilla; Baptiste Caramiaux; Atau Tanaka | |||
| We present a study that explores the affordance evoked by sound and
sound-gesture mappings. In order to do this, we make use of a sensor system
with minimal form factor in a user study that minimizes cultural association.
The present study focuses on understanding how participants describe sounds and
gestures produced while playing designed sonic interaction mappings. This
approach seeks to move from object-centric affordance towards investigating
embodied gestural sonic affordances. Keywords: Gestural embodiment of sound, Affordances, Mapping | |||
| Sound Spray -- Can-shaped Sound Effect Device | | BIBAK | PDF | 52 | |
| Gibeom Park; Kyogu Lee | |||
| This paper is about a development process of a novel sound effect device,
which resembles an ordinary spray can, with the purpose of adding sound elements
to existing spray paint art. To this end, we first investigate the processes
and the characteristics of spray paint art and find the common elements that
can be expressed in an auditory form. We then design a prototype using Arduino
and various sensors, such as force sensing resistors, accelerometers, and
inclinometers, and examine the elements that would be necessary to apply the
proposed device to spray paint art activities. Experiments with the prototype
indicate that there is a significant potential in adding sound elements to
spray paint art to enrich its artistic expressions. Keywords: Sound effect device, Spray paint art, Arduino, Pure Data | |||
| Applied and Proposed Installations with Silent Disco Headphones for Multi-Elemental Creative Expression | | BIBAK | PDF | 53 | |
| Russell Eric Dobda | |||
| Breaking musical and creative expression into elements, layers, and
formulas, we explore how live listeners create unique sonic experiences from a
palette of these elements and their interactions. A social and historical
overview of silent disco brings us to present-day creative applications. The
advantages of this active listening interface are outlined by
the author's expressions requiring discrete elements, such as binaural beats,
3D audio effects, and multiple live music acts in the same space. Events and
prototypes as well as hardware and software proposals for live multi-listener
manipulation of multi-elemental sound and music are presented. Examples in
audio production, sound healing, music composition, tempo phasing, and spatial
audio illustrate the applications. Keywords: wireless headphones, music production, silent disco, headphone concert,
binaural beats, multi-track audio, active music listening, sound healing,
mobile clubbing, smart-phone apps | |||
| A Compact Spectrum-Assisted Human Beatboxing Reinforcement Learning Tool On Smartphone | | BIBAK | PDF | 54 | |
| Simon Lui | |||
| Music is expressive and hard to describe in words. Learning music is
therefore not a straightforward task, especially for vocal music such as human
beatboxing. People usually learn beatboxing in the traditional way, by imitating
audio samples without structured steps or instructions. A spectrogram contains a
lot of information about audio, but it is too complicated to be understood in
real time. Reinforcement learning is a psychological method that makes use of
reward and/or punishment as stimuli to train human decision-making. We propose
a novel music learning approach based on the reinforcement learning method,
which uses compact and easy-to-read spectrum information as a visual cue to
assist human beatboxing learning on a smartphone. Experimental results show that
the visual information is easy to understand in real time, which improves the
effectiveness of beatboxing self-learning. Keywords: Audio analysis, music learning tool, reinforcement learning, smartphone app,
audio information retrieval | |||
| Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration | | BIBAK | PDF | 55 | |
| Saebyul Park; Seonghoon Ban; Dae Ryong Hong; Woon Seung Yeo | |||
| SSN (Sound Surfing Network) is a system that provides a new musical
experience by incorporating mobile phone-based spatial sound control into
collaborative music performance. SSN enables both the performer and the
audience to manipulate the spatial distribution of sound, using the audience's
smartphones as a distributed speaker system. Proposing a new perspective on
the social aspect of music appreciation, SSN offers new possibilities for
mobile music performances in the context of interactive audience collaboration
as well as sound spatialization. Keywords: Mobile music, smartphone, audience participation, spatial sound control,
digital performance | |||
| Toward an Emotionally Intelligent Piano: Real-Time Emotion Detection and Performer Feedback via Kinesthetic Sensing in Piano Performance | | BIBAK | PDF | 56 | |
| Matan Ben-Asher; Colby Leider | |||
| A system is presented for detecting common gestures, musical intentions and
emotions of pianists in real-time using kinesthetic data retrieved by wireless
motion sensors. The algorithm can detect six performer-intended emotions, such
as cheerful, mournful, and vigorous, based solely on
low-sample-rate motion sensor data. The algorithm can be trained in real-time
or can work based on previous training sets. Based on the classification, the
system offers feedback by mapping the emotions to a color set and presenting
them as a flowing emotional spectrum on the background of a piano roll. It also
presents a small circular object floating in the emotion space of Hevner's
adjective circle. This allows a performer to get real-time feedback regarding
the emotional content conveyed in the performance. The system was trained and
tested on a group of pianists using the standard paradigm, detected and
displayed structures and emotions, and provided some insightful results and
conclusions. Keywords: Motion Sensors, IMUs, Expressive Piano Performance, Machine Learning,
Computer Music, Music and Emotion | |||
| New Interfaces for Traditional Korean Music and Dance | | BIBAK | PDF | 57 | |
| Ajay Kapur; Dae Hong Kim; Raakhi Kapur; Kisoon Eom | |||
| This paper describes the creation of new interfaces that extend traditional
Korean music and dance. Specifically, this research resulted in the design of
the eHaegum (Korean bowed instrument), eJanggu (Korean drum), and ZiOm wearable
interfaces. The paper describes the process of making these new interfaces as
well as how they have been used to create new music and forms of digital art
making that blend traditional practice with modern techniques. Keywords: Hyperinstrument, Korean interface design, wearable sensors, dance
controllers, bowed controllers, drum controllers | |||
| Multidimensional Sound Spatialization by Means of Chaotic Dynamical Systems | | BIBAK | PDF | 58 | |
| Edmar Soria; Roberto Morales-Manzanares | |||
| We present an instrument that explores an algorithmic sound spatialization
system developed with the SuperCollider programming language. We consider the
implementation of spatial multidimensional panning through the simultaneous use
of polygonal-shaped horizontal and vertical loudspeaker arrays. This framework
uses chaotic dynamical systems to generate discrete data series from the orbit
of any specific system, which in this case is the logistic equation. The general
panning structure is built from vectors in R^n, called system vectors, whose
entries are drawn from data series of different orbits of a specific dynamical
system; the orbits thus create ordered paths between these system vectors.
Finally, interpolating that result with
a fixed sample value, we obtain specific and independent multidimensional
panning trajectories for each speaker array and for any number of sound
sources. Keywords: NIME, spatialization, dynamical systems, chaos | |||
| Hand-Controller for Combined Tactile Control and Motion Tracking | | BIBAK | PDF | 59 | |
| Laurel Pardue; William Sebastian | |||
| The Hand-Controller is a new interface designed to enable a performer to
achieve detailed control of audio and visual parameters through a tangible
interface combined with motion tracking of the hands to capture large scale
physical movement. Such movement empowers an expressive dynamic for both
performer and audience. However, movement in free space is notoriously difficult
for virtuosic performance that requires spatially exact, repetitive placement.
The lack of tactile feedback leads to difficulty learning the repeated muscle
movements required for precise reliable control. In comparison, the hands have
shown an impressive ability to master complex motor tasks through feel. The
HandController uses both modes of interaction. Electro-magnetic field tracking
enables 6D hand motion tracking while two options provide tactile interaction
-- a set of tracks that provide linear positioning and applied finger pressure,
or a set of trumpet like slider keys that provide continuous data describing
key depth. Thumbs actuate additional pressure sensitive buttons. The two haptic
interfaces are mounted to a comfortable hand grip that allows a significant
range of reach, and pressure to be applied without restricting hand movement
highly desirable in expressive motion. Keywords: hand, interface, free gesture, force sensitive resistor, new musical
instrument, tactile feedback, position tracking | |||
| Rocking the Keys with a Multi-Touch Interface | | BIBAK | PDF | 60 | |
| Thomas Walther; Damir Ismailovic; Bernd Brügge | |||
| Although multi-touch user interfaces have become a widespread form of human
computer interaction in many technical areas, they haven't found their way into
live performances of musicians and keyboarders yet. In this paper, we present a
novel multi-touch interface method aimed at professional keyboard players. The
method, which is inspired by computer trackpads, allows controlling up to ten
continuous parameters of a keyboard with one hand, without requiring the user
to look at the touch area -- a significant improvement over traditional
keyboard input controls. We discuss optimizations needed to make our interface
reliable, and show in an evaluation with four keyboarders of different skill
levels that this method is both intuitive and powerful, and allows users to more
quickly alter the sound of their keyboard than they could with current input
solutions. Keywords: multi-touch, mobile, keyboard, interface | |||
| PESI Extended System: In Space, On Body, with 3 Musicians | | BIBAK | PDF | 61 | |
| Koray Tahiroglu; Nuno N. Correia; Miguel Espada | |||
| This paper introduces a novel collaborative environment in which performers
are not only free to move and interact with each other but where their social
interactions contribute to the sonic outcome. The PESI system is designed for
co-located collaboration and provides embodied and spatial opportunities for
musical exploration. To evaluate the system with skilled musicians, a user-test
jam session was conducted. Musicians' comments indicate that the system
facilitates fine-grained group interaction and encourages further musical
ideas. PESI can shift the collaborative music activity to a more engaging and
active experience. Keywords: Affordances, collaboration, social interaction, mobile music, extended
system, NIME | |||
| A Real-time Musical Performance Feedback System for Beginner Musician | | BIBAK | PDF | 62 | |
| Yoonchang Han; Sejun Kwon; Kibeom Lee; Kyogu Lee | |||
| This paper proposes a musical performance feedback system based on real-time
audio-score alignment for musical instrument education of beginner musicians.
In the proposed system, we do not make use of symbolic data such as MIDI, but
acquire real-time audio input from the on-board microphone of a smartphone. The
system then detects the onset and pitch of each note from the signal and aligns
this information with the ground-truth musical score. Real-time alignment allows
the system to evaluate whether the user played the correct note or not,
regardless of its timing, which enables users to play at their own speed, since
playing at the same tempo as the original musical score is problematic for
beginners. As an output of the evaluation, the system notifies the user about
which part they are currently performing and which notes were played
incorrectly. Keywords: Music performance analysis, Music education, Real-time score following | |||
| Enactive Mandala: Audio-visualizing Brain Waves | | BIBAK | PDF | 63 | |
| Tomohiro Tokunaga; Michael Lyons | |||
| We are exploring the design and implementation of artificial expressions,
kinetic audio-visual representations of realtime physiological data which
reflect emotional and cognitive state. In this work we demonstrate a prototype,
the Enactive Mandala, which maps real-time EEG signals to modulate ambient
music and animated visual music. Transparent real-time audio-visual feedback of
brainwave qualities supports intuitive insight into the connection between
thoughts and physiological states. Keywords: Brain-computer Interfaces, BCI, EEG, Sonification, Visualization, Artificial
Expressions, NIME, Visual Music | |||
| Notesaaz: a new controller and performance idiom | | BIBAK | PDF | 64 | |
| Erfan Abdi Dezfouli; Edwin van der Heide | |||
| Notesaaz is both a new physical interface meant for musical performance and
a proposal for a three-stage process where the controller is used to navigate
within a graphical score that in turn controls the sound generation. It can
be seen as a dynamic and understandable way of using dynamic mapping between
the sensor input and the sound generation. Furthermore, by presenting the
graphical score to both the performer and the audience, a new engagement of the
audience can be established. Keywords: musical instrument, custom controller, gestural input, dynamic score | |||
| Air Violin: A Body-centric Style Musical Instrument | | BIBAK | PDF | 65 | |
| Xin Fan; Georg Essl | |||
| We show how body-centric sensing can be integrated into a musical interface to
enable more flexible gestural control. We present a barehanded body-centric
interaction paradigm where users are able to interact in a spontaneous way
through performing gestures. The paradigm employs a wearable camera and
see-through display to enable flexible interaction in the 3D space. We designed
and implemented a prototype called Air Violin, a virtual musical instrument
using a depth camera, to demonstrate the proposed interaction paradigm. We
describe the design and implementation details. Keywords: NIME, musical instrument, interaction, gesture, Kinect | |||
| Remix_Dance 3: Improvisatory Sound Displacing on Touch Screen-Based Interface | | BIBAK | PDF | 66 | |
| Jaeseong You; Red Wierenga; Arsid Ketjuntra; Teerapat Parnmongkol | |||
| Remix_Dance Music 3 is a quadraphonic quasi-fixed tape piece; an improviser
can operate 60 pre-designed audio files using the Max/MSP-based interface. The
audio files appear as icons, and the improviser can touch and drag them to four
speakers in a miniature room on a tablet. Within the fixed duration of six
minutes, the performer can freely activate and deactivate the audio files and
realize 32 different sound-diffusing motions, generating a sonic structure to
one's liking out of the given network of musical possibilities. The interface is
designed to invite integral musical structuring along performatively
underexplored (but still sonically viable) parameter dimensions, largely based on
MPEG-7 audio descriptors. Keywords: Novel controllers, interface for musical expression, musical mapping
strategy, music cognition, music perception, MPEG-7 | |||
| Multi Sensor Tracking for Live Sound Transformation | | BIB | 67 | |
| Anton Fuhrmann; Johannes Kretz; Peter Burwik | |||
| A Self-Organizing Gesture Map for a Voice-Controlled Instrument Interface | | BIB | 68 | |
| Stefano Fasciani; Lonce Wyse | |||
| Rainboard and Musix: Building dynamic isomorphic interfaces | | BIB | 69 | |
| Brett Park; David Gerhard | |||
| Muscular Interactions. Combining EMG and MMG sensing for musical practice | | BIB | 70 | |
| Marco Donnarumma; Baptiste Caramiaux; Atau Tanaka | |||
| Fluid Simulation as Full Body Audio-Visual Instrument | | BIB | 71 | |
| Andrew Johnston | |||
| Synchronous Data Flow Modeling for DMIs | | BIBAK | PDF | 72 | |
| Danielle Bragg | |||
| This paper proposes a graph-theoretic model that supports the design and
analysis of data flow within digital musical instruments (DMIs). The state of
the art in DMI design does not provide standards for the scheduling of
computations within a DMI's data flow. Without a theoretical framework,
analysis of different scheduling protocols and their impact on the DMI's
performance is extremely difficult. As a result, the mapping between the DMI's
sensory inputs and sonic outputs is classically treated as a black box. DMI
builders are forced to design and schedule the flow of data through this black
box on their own. Improper design of the data flow can produce undesirable
results, ranging from overflowing buffers that cause system crashes to
misaligned sensory data that result in strange or disordered sonic events. In
this paper, we attempt to remedy this problem by providing a framework for the
design and analysis of the DMI data flow closely modeled after a framework for
digital signal processing. We also propose the use of a scheduling algorithm
built upon that framework, and prove that it guarantees desirable properties
for the resulting DMI. Keywords: DMI design, data flow, mapping function | |||
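As a rough illustration of the kind of scheduling the paper argues for, the sketch below treats a DMI's data flow as a directed acyclic graph and fires every node once per tick in topological order, so no edge buffer accumulates unread values. This is a minimal single-rate reading of synchronous data flow, not the paper's formal model, and all node names and processing functions are hypothetical.

```python
from collections import deque

graph = {                       # node -> list of downstream nodes
    "accelerometer": ["smooth"],
    "smooth":        ["map"],
    "map":           ["synth"],
    "synth":         [],
}
process = {                     # node -> function of its input values (all hypothetical)
    "accelerometer": lambda ins: 0.42,                    # pretend sensor read
    "smooth":        lambda ins: 0.9 * ins[0],            # trivial low-pass stand-in
    "map":           lambda ins: 220.0 + 660.0 * ins[0],  # map to a frequency
    "synth":         lambda ins: print(f"play {ins[0]:.1f} Hz"),
}

def topological_order(g):
    """Order nodes so every node fires after all of its producers."""
    indeg = {n: 0 for n in g}
    for n in g:
        for m in g[n]:
            indeg[m] += 1
    queue = deque(n for n in g if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in g[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

def tick(order):
    """One scheduling round: each node consumes exactly what was produced this tick."""
    values = {}
    for n in order:
        inputs = [values[p] for p in graph if n in graph[p]]
        values[n] = process[n](inputs)

tick(topological_order(graph))
```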
| SoundCraft: Transducing StarCraft 2 | | BIBAK | PDF | 73 | |
| Mark Cerqueira; Spencer Salazar; Ge Wang | |||
| SoundCraft is a framework that enables real-time data gathering from a
StarCraft 2 game to external software applications, allowing for musical
interpretation of the game's internal structure and strategies in novel ways.
While players battle each other for victory within the game world, a custom
StarCraft 2 map collects and writes out data about players' decision-making,
performance, and current focus on the map. This data is parsed and transmitted
over Open Sound Control (OSC) [9] in real-time, becoming the source for the
soundscape that accompanies the player's game. Using SoundCraft, we have
composed a musical work for two StarCraft 2 players, entitled GG Music. This
paper details the technical and aesthetic development of SoundCraft, including
data collection and sonic mapping. Keywords: interactive sonification, interactive game music, StarCraft 2 | |||
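A bridge of this kind typically ends in a small OSC sender. The sketch below, which assumes the third-party python-osc package plus invented address names and game statistics, shows the general shape of relaying parsed game data to an audio environment listening on a local port.

```python
# Minimal sketch of relaying parsed game statistics over OSC.
# Requires the third-party `python-osc` package; all addresses and fields are hypothetical.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)   # e.g. a SuperCollider or Max listener

def send_game_state(player, apm, minerals, army_supply):
    client.send_message(f"/soundcraft/{player}/apm", apm)
    client.send_message(f"/soundcraft/{player}/minerals", minerals)
    client.send_message(f"/soundcraft/{player}/army", army_supply)

# Pretend readings polled from the game map's data export.
for apm in (80, 95, 110):
    send_game_state("player1", apm, minerals=350, army_supply=24)
    time.sleep(0.5)
```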
| Construction of a System for Recognizing Touch of Strings for Guitar | | BIBAK | PDF | 74 | |
| Hayami Tobise; Yoshinari Takegawa; Tsutomu Terada; Masahiko Tsukamoto | |||
| In guitar performance, fingering is an important factor. In particular, the
fingering of the left hand comprises various relationships between the finger
and the string, such as the finger touching/pressing/releasing the strings. The
recognition of the precise fingering is applied to a self-learning support
system, which is able to detect strings being muted by a finger, and which
transcribes music automatically, including the details of fingering techniques.
Therefore, the goal of our study is to construct a system for recognizing the
touch of strings for the guitar. We propose a method for recognizing the touch
of strings based on the conductive characteristics of strings and frets. We
develop a prototype system, and evaluate its effectiveness. Furthermore, we
propose an application that utilizes our system. Keywords: Guitar, Touched strings, Fingering recognition | |||
| Using Audio and Haptic Feedback to Improve Pitched Percussive Instrument Performance in Humanoids | | BIBAK | PDF | 75 | |
| Alyssa Batula; Manu Colacot; David Grunberg; Youngmoo Kim | |||
| We present a system that determines whether an adult-sized humanoid has
correctly played a pitched percussive instrument in real time. Human musicians
utilize sensory feedback to determine if they are playing their instruments
correctly and robot performers should be capable of the same feat. We present a
classification algorithm that uses auditory and haptic feedback to decide if a
note was well- or poorly-struck. This system is demonstrated using Hubo, an
adult-sized humanoid, which is able to play pitched pipes using paddles. We
show that this system is able to determine whether a note was played correctly
with 97% accuracy. Keywords: Musical robots, humanoids, auditory feedback, haptic feedback | |||
| Mobile DJ: a Tangible, Mobile Platform for Active and Collaborative Music Listening | | BIBAK | PDF | 76 | |
| Kenneth W. K. Lo; Chi Kin Lau; Michael Xuelin Huang; Wai Wa Tang; Grace Ngai; Stephen C. F. Chan | |||
| This paper presents Mobile DJ, a tangible, mobile platform for active music
listening, designed to augment internet-based social interaction with the
element of active music listening. A tangible interface allows users to
manipulate musical effects, such as incorporating chords or "scratching" the
record. A communication and interaction server further enables multiple users
to connect over the Internet and collaborate and interact through their music.
User tests indicate that the device is successful at allowing user immersion
into the active listening experience, and that users enjoy the added sensory
input as well as the novel way of interacting with the music and each other. Keywords: Mobile, music, interaction design, tangible user interface | |||
| Mining Unlabeled Electronic Music Databases through 3D Interactive Visualization of Latent Component Relationships | | BIBAK | PDF | 77 | |
| Parag Kumar Mital; Mick Grierson | |||
| We present an interactive content-based MIR environment specifically
designed to aid in the exploration of databases of experimental electronic
music, particularly in cases where little or no metadata exist. In recent
years, several rare archives of early experimental electronic music have become
available. The Daphne Oram Collection contains one such archive, consisting of
approximately 120 hours of 1/4 inch tape recordings and representing a period
dating from circa 1957. This collection is recognized as an important
musicological resource, representing aspects of the evolution of electronic
music practices, including early tape editing methods, experimental synthesis
techniques and composition. However, it is extremely challenging to derive
meaningful information from this dataset, primarily for three reasons. First,
the dataset is very large. Second, there is limited metadata -- some titles,
track lists, and occasional handwritten notes exist, but where this is true,
the reliability of the annotations is unknown. Finally, and most
significantly, as this is a collection of early experimental electronic music,
the sonic characteristics of the material are often not consistent with
traditional musical information. In other words, there is no score, no known
instrumentation, and often no recognizable acoustic source. We present a method
for the construction of a frequency component dictionary derived from the
collection via Probabilistic Latent Component Analysis (PLCA), and demonstrate
how an interactive 3D visualization of the relationships between the
PLCA-derived dictionary and the archive is facilitating researchers'
understanding of the data. Keywords: mir, plca, mfcc, 3d browser, daphne oram, content-based information
retrieval, interactive visualization | |||
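The dictionary-building step can be approximated with off-the-shelf tools. The sketch below uses scikit-learn's NMF, a close relative of PLCA, to factor a magnitude spectrogram of a synthetic signal into a small set of spectral components and their activations; it is a stand-in for the paper's PLCA pipeline, not a reproduction of it.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic two-partial signal standing in for archive audio.
sr, dur = 8000, 2.0
t = np.arange(int(sr * dur)) / sr
signal = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

# Magnitude spectrogram via a simple hop-by-hop FFT.
win, hop = 512, 256
frames = [signal[i:i + win] * np.hanning(win)
          for i in range(0, len(signal) - win, hop)]
spec = np.abs(np.fft.rfft(np.array(frames), axis=1)).T   # (freq bins, frames)

# Learn a small dictionary of spectral components and their per-frame activations.
model = NMF(n_components=4, init="random", random_state=0, max_iter=500)
activations = model.fit_transform(spec.T)   # (frames, components)
components = model.components_              # (components, freq bins)
print(components.shape, activations.shape)
```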
| Updating the Classifications of Mobile Music Projects | | BIBAK | PDF | 78 | |
| David John | |||
| This paper reviews the mobile music projects that have been presented at
NIME in the past ten years in order to assess whether the changes in technology
have affected the activities of mobile music research. An overview of mobile
music projects is presented using the categories that describe the main
activities: projects that explore the influence of and make use of location;
applications that share audio or promote collaborative composition; interaction
using wearable devices; the use of mobile phones as performance devices;
projects that explore HCI design issues. The relative activity between
different of categories of mobile music is assessed in order to identify
trends. The classification according to technological, social or geographic
showed an overwhelming bias to the technological, followed by social
investigations. An alternative classification of survey, product, or artefact
reveals an increase in the number of products described with a corresponding
decline in the number of surveys and artistic projects. The increase in
technical papers appears to be due to an enthusiasm to make use of increased
capability of mobile phones, although there are signs that the initial interest
has already peaked, and researchers are again interested to explore
technologies and artistic expression beyond what can be provided by existing
mobile phones. Keywords: Mobile Music, interactive music, proximity sensing, wearable devices, mobile
phone performance, interaction design | |||
| Visual Associations in Augmented Keyboard Performance | | BIBAK | PDF | 79 | |
| Qi Yang; Georg Essl | |||
| What is the function of visuals in the design of an augmented keyboard
performance device with projection? We address this question by thinking
through the impact of design choices made in three examples on notions of locus
of attention, visual anticipation and causal gestalt to articulate a space of
design factors. Visuals can emphasize and deemphasize aspects of performance
and help clarify the role input has to the performance. We suggest that this
process might help thinking through visual feedback design in NIMEs with
respect to the performer or the audience. Keywords: Visual feedback, interaction, NIME, musical instrument, interaction,
augmented keyboard, gesture, Kinect | |||
| Variator: A Creativity Support Tool for Music Composition | | BIBAK | PDF | 80 | |
| Avneesh Sarwate; Rebecca Fiebrink | |||
| The Variator is a compositional assistance tool that aims to let users
quickly produce and experiment with variations on musical objects, such as
chords, melodies, and chord progressions. The transformations performed by the
Variator can range from standard counterpoint transformations (inversion,
retrograde, transposition) to more complicated custom transformations, and the
system is built to encourage the writing of custom transformations. This paper
explores the design decisions involved in creating a compositional assistance
tool, describes the Variator interface and a preliminary set of implemented
transformation functions, analyzes the results of the evaluations of a
prototype system, and lays out future plans for expanding upon that system,
both as a stand-alone application and as the basis for an open
source/collaborative community where users can implement and share their own
transformation functions. Keywords: Composition assistance tool, computer-aided composition, social composition | |||
| Netpixl: Towards a New Paradigm for Networked Application Development | | BIBAK | PDF | 81 | |
| Dimitri Diakopoulos; Ajay Kapur | |||
| Netpixl is a new micro-toolkit built to network devices within interactive
installations and environments. Using a familiar client-server model, Netpixl
centrally wraps an important aspect of ubiquitous computing: real-time
messaging. In the context of sound and music computing, the role of Netpixl is
to fluidly integrate endpoints like OSC and MIDI within a larger multi-user
system. This paper considers useful design principles that may be applied to
toolkits like Netpixl while also emphasizing recent approaches to application
development via HTML5 and Javascript, highlighting an evolution in networked
creative computing. Keywords: networking, ubiquitous computing, toolkits, html5 | |||
| Multi-Touch Interfaces for Phantom Source Positioning in Live Sound Diffusion | | BIBAK | PDF | 82 | |
| Bridget Johnson; Ajay Kapur | |||
| This paper presents a new technique for interface-driven diffusion
performance. Details outlining the development of a new tabletop surface-based
performance interface, named tactile.space, are discussed. User interface and
amplitude panning processes employed in the creation of tactile.space are
focused upon, and are followed by a user study-based evaluation of the
interface. It is hoped that the techniques described in this paper afford
performers and composers an enhanced level of creative expression in the
diffusion performance practice. Keywords: Multi touch, diffusion, VBAP, tabletop surface | |||
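For readers unfamiliar with the amplitude panning the keywords refer to, the sketch below computes standard two-dimensional pairwise VBAP gains (Pulkki's formulation) for a source direction between two loudspeakers; the speaker angles are hypothetical and the code is not taken from tactile.space itself.

```python
import numpy as np

def vbap_pair_gains(source_deg, spk1_deg, spk2_deg):
    """Return constant-power gains for a source panned between two loudspeakers."""
    p = np.array([np.cos(np.radians(source_deg)), np.sin(np.radians(source_deg))])
    L = np.array([[np.cos(np.radians(spk1_deg)), np.sin(np.radians(spk1_deg))],
                  [np.cos(np.radians(spk2_deg)), np.sin(np.radians(spk2_deg))]])
    g = np.linalg.solve(L.T, p)          # p = g1*l1 + g2*l2  ->  L.T @ g = p
    return g / np.linalg.norm(g)         # constant-power normalization

# Source at 10 degrees, speaker pair at -30 and +30 degrees (hypothetical layout).
print(vbap_pair_gains(10.0, -30.0, 30.0))
```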
| GrainProc: a real-time granular synthesis interface for live performance | | BIBAK | PDF | 83 | |
| Mayank Sanganeria; Kurt Werner | |||
| GrainProc is a touchscreen interface for real-time granular synthesis
designed for live performance. The user provides a real-time audio input
(electric guitar, for example) as a granularization source and controls various
synthesis parameters with their fingers or toes. The control parameters are
designed to give the user access to intuitive and expressive live granular
manipulations. Keywords: Granular synthesis, touch screen interface, toe control, realtime, CCRMA | |||
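A bare-bones offline version of the granularization idea might look like the sketch below: grains are cut from a source buffer at jittered read positions, windowed, and overlap-added. Parameter names and values are illustrative only and do not describe GrainProc's actual controls.

```python
import numpy as np

def granulate(source, grain_len=1024, hop=256, jitter=2000, pitch_ratio=1.0,
              rng=np.random.default_rng(0)):
    """Overlap-add windowed grains read from jittered positions in `source`."""
    out = np.zeros(len(source) + grain_len)
    window = np.hanning(grain_len)
    read_step = int(hop * pitch_ratio)
    read = 0
    for write in range(0, len(source) - grain_len, hop):
        pos = int(np.clip(read + rng.integers(-jitter, jitter),
                          0, len(source) - grain_len))
        out[write:write + grain_len] += source[pos:pos + grain_len] * window
        read += read_step
    return out[:len(source)]

sr = 44100
t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 330 * t)           # stand-in for a live guitar input
wet = granulate(dry, jitter=5000)
print(wet.shape)
```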
| Graphic Waveshaping | | BIBAK | PDF | 84 | |
| Shawn Greenlee | |||
| In the design of recent systems, I have advanced techniques that position
graphic synthesis methods in the context of solo, improvisational performance.
Here, the primary interfaces for musical action are prepared works on paper,
scanned by digital video cameras which in turn pass image data on to software
for analysis and interpretation as sound synthesis and signal processing
procedures. The focus of this paper is on one of these techniques, a process I
describe as graphic waveshaping. A discussion of graphic waveshaping in basic
form and as utilized in my performance work, Impellent, is offered. In the
latter case, the performer's objective is to guide the interpretation of images
as sound, constantly tuning and retuning the conversion while selecting and
scanning images from a large catalog. Due to the erratic nature of the system
and the precondition that image to sound relationships are unfixed, the
performance situation is replete with the discovery of new sounds and the
circumstances that bring them into play. Keywords: Graphic waveshaping, graphic synthesis, waveshaping synthesis, graphic
sound, drawn sound | |||
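One plausible reading of graphic waveshaping is to treat a greyscale scanline as a waveshaping transfer function and drive it with an oscillator, as in the sketch below; the scanline here is synthetic, whereas in the work described it would come from a camera scanning prepared works on paper.

```python
import numpy as np

def scanline_waveshape(signal, scanline):
    """Map input samples in [-1, 1] through a transfer curve derived
    from pixel intensities in [0, 255]."""
    curve = scanline.astype(float) / 127.5 - 1.0        # pixels -> [-1, 1]
    x = np.linspace(-1.0, 1.0, len(curve))
    return np.interp(signal, x, curve)

sr = 44100
t = np.arange(sr) / sr
carrier = np.sin(2 * np.pi * 110 * t)

# Hypothetical scanline: a jagged intensity profile across one image row.
scanline = (127.5 * (1 + np.sin(np.linspace(0, 8 * np.pi, 256)) *
                     np.linspace(1, 0.2, 256))).astype(np.uint8)
shaped = scanline_waveshape(carrier, scanline)
print(shaped.min(), shaped.max())
```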
| Illusio: A Drawing-Based Digital Musical Instrument | | BIBAK | PDF | 85 | |
| Jeronimo Barbosa; Filipe Calegario; Geber Ramalho; Veronica Teichrieb; Giordano Cabral | |||
| This paper presents an innovative digital musical instrument, the Illusio,
based on an augmented multi-touch interface that combines a traditional
multi-touch surface and a device similar to a guitar pedal. Illusio allows
users to perform by drawing and by associating the sketches with live loops.
These loops are manipulated based on a concept called hierarchical live
looping, which extends traditional live looping through the use of a musical
tree, in which any music operation applied to a given node affects all its
child nodes. Finally, we evaluate the instrument considering the performer
and the audience, who are two of the most important stakeholders involved in
the use, conception, and perception of a musical device. The results achieved
are encouraging and led to useful insights about how to improve instrument
features, performance and usability. Keywords: Digital musical instruments, augmented multi-touch, hierarchical live
looping, interaction techniques, evaluation methodology | |||
| Kontrol: Hand Gesture Recognition for Music and Dance Interaction | | BIBAK | PDF | 86 | |
| Kameron Christopher; Jingyin He; Raakhi Kapur; Ajay Kapur | |||
| This paper describes Kontrol, a new hand interface that extends the
intuitive control of electronic music to traditional instrumentalists and
dancers. The goal of the authors has been to provide users with a device that
is capable of detecting the highly intricate and expressive gestures of the
master performer, in order for that information to be interpreted and used for
control of electronic music. This paper discusses related devices, the
architecture of Kontrol, its potential as a gesture recognition device, and
several performance applications. Keywords: Hand controller, computational ethnomusicology, dance interface, conducting
interface, Wekinator, wearable sensors | |||
| Development of A Learning Environment for Playing Erhu by Diagnosis and Advice regarding Finger Position on Strings | | BIBAK | PDF | 87 | |
| Fumitaka Kikukawa; Sojiro Ishihara; Masato Soga; Hirokazu Taki | |||
| So far, there are few existing studies on skill-learning support
environments for playing bowed string instruments, because acquiring the skills
involves many parameters and these parameters are difficult to measure.
The aim of this paper is therefore to propose the design of a learning
environment that helps a novice learner acquire accurate finger positioning.
To achieve this aim, we developed a learning environment that can diagnose a
learner's finger position and give the learner advice by using magnetic
position sensors. The system shows three windows: a finger-position window for
visualizing finger position, a score window for diagnosing finger position
along the score, and a command-prompt window for showing the system state and
advice. Finally, we evaluated the system in an experiment. Compared with the
control group, the experimental group improved the accuracy of finger positions
and of the resulting pitches, and these differences were significant. Keywords: Magnetic Position Sensors, String Instruments, Skill, Learning Environment,
Finger Position | |||
| BioSync: An Informed Participatory Interface for Audience Dynamics and Audiovisual Content Co-creation using Mobile PPG and EEG | | BIBAK | PDF | 88 | |
| Yuan-Yi Fan; Myles Sciotto | |||
| The BioSync interface presented in this paper merges the paradigms of
heart-rate and brain-wave into one mobile unit which is scalable for large
audience real-time applications. The goal of BioSync is to provide a hybrid
interface, which uses audience biometric responses for audience participation
techniques and methods. To provide an affordable and scalable solution, BioSync
collects the user's heart rate via mobile device pulse oximetry and the EEG
data via Bluetooth communication with the off-the-shelf MindWave Mobile
hardware. Various interfaces have been designed and implemented in the
development of audience participation techniques and systems. In the design and
concept of BioSync, we first summarize recent interface research for audience
participation within the NIME-related context, followed by the outline of the
BioSync methodology and interface design. We then present a technique for
dynamic tempo control based on the audience biometric responses and an early
prototype of a mobile dual-channel pulse oximetry and EEG bi-directional
interface for iOS device (BioSync). Finally, we present discussions and ideas
for future applications, as well as plans for a series of experiments, which
investigate whether temporal parameters of an audience's physiological metrics
encourage crowd synchronization during a live event or performance, a
characteristic we see as having great potential in the creation of
future live musical, audiovisual and performance applications. Keywords: Mobile, Biometrics, Synchronous Interaction, Social, Audience, Experience | |||
| Laptap: Laptop Computer as a Musical Instrument using Audio Feedback | | BIBAK | PDF | 89 | |
| Dae Ryong Hong; Woon Seung Yeo | |||
| Laptap is a laptop-based, real-time sound synthesis/control system for music
and multimedia performance. The system produces unique sounds by positive audio
feedback between the on-board microphone and the speaker of a laptop computer.
Users can make a variety of sounds by touching the laptop computer in several
different ways, and control their timbre with the gestures of the other hand
above the microphone and the speaker to manipulate the characteristics of the
acoustic feedback path. We introduce the basic concept of this audio feedback
system, describe its features for sound generation and manipulation, and
discuss the result of an experimental performance. Finally we suggest some
relevant research topics that might follow in the future. Keywords: Laptop music, laptop computer, audio feedback, hand gesture, gestural
control, musical mapping, audio visualization, musical notation | |||
| A Function-Oriented Interface for Music Education and Musical Expressions: "the Sound Wheel" | | BIBAK | PDF | 90 | |
| Shoken Kaneko | |||
| In this paper, a function-oriented musical interface, named the sound
wheel, is presented. This interface is designed to manipulate musical functions
like pitch class sets, tonal centers and scale degrees, rather than the musical
surface, i.e. the individual notes with concrete note heights. The sound wheel
has an interface summarizing harmony theory, and the playing actions have
explicit correspondence with musical functions. Easy usability is realized by
semi-automatizing the conversion process from musical functions into the
musical surface. Thus, the player can use this interface with concentration on
the harmonic structure, without having their attention caught by manipulating the
musical surface. Subjective evaluation indicated the effectiveness of this
interface as a tool helpful for understanding the music theory. Because of such
features, this interface can be used for education and interactive training of
tonal music theory. Keywords: Music education, Interactive tonal music generation | |||
| note~ for Max -- An extension for Max/MSP for Media Arts & Music | | BIBAK | PDF | 91 | |
| Thomas Resch | |||
| note~ for Max consists of four objects for the software Max/MSP that allow
sequencing in floating-point resolution and provide a graphical user interface
and a scripting interface for generating events within a timeline. Due to its
complete integration into Max/MSP, note~ can control almost any type of client,
such as other software, audio and video, or external hardware, and can itself
be controlled by other software and hardware. Keywords: Max/MSP, composing, timeline, GUI, sequencing, score, notation | |||
| Sonifying Chemical Evolution | | BIBAK | PDF | 92 | |
| Steve Everett | |||
| This presentation-demonstration discusses the creation of FIRST LIFE, a
75-minute mixed media performance for string quartet, live audio processing,
live motion capture video, and audience participation utilizing stochastic
models of chemical data provided by Martha Grover's Research Group at the
School of Chemical and Biomolecular Engineering at Georgia Institute of
Technology. Each section of this work is constructed from contingent outcomes
drawn from biochemical research exploring possible early Earth formations of
organic compounds. Keywords: Data-driven composition, sonification, live electronics-video | |||
| Fortissimo: Force-Feedback for Mobile Devices | | BIBAK | PDF | 93 | |
| Tae Hong Park; Oriol Nieto | |||
| In this paper we present a highly expressive, robust, and easy-to-build
system that provides force-feedback interaction for mobile computing devices
(MCD). Our system, which we call fortissimo (ff), utilizes standard built-in
accelerometer measurements in conjunction with generic foam padding that can be
easily placed under a device to render an expressive force-feedback performance
setup. fortissimo allows for musically expressive user-interaction with added
force-feedback, which is integral to any musical controller -- a feature that
is absent from touchscreen-centric MCDs. This paper details ff core concepts,
hardware and software designs, and expressivity of musical features. Keywords: force-feedback, expression, mobile computing devices, mobile music | |||
| cutting record -- a record without (or with) prior acoustic information | | BIBAK | PDF | 94 | |
| Kazuhiro Jo; Mitsuhito Ando | |||
| In this paper, we present a method to produce analog records with vector
graphics software and two different types of cutting machine: a laser cutter
and a paper cutter. The method enables us to engrave a variety of waveforms on
the surface of diverse materials such as paper, wood, acrylic, and leather,
without or with prior acoustic information. The results can be played as analog
records on standard record players. We present the method with its technical
specification and explain our initial findings through practice. The work
examines the role of musical reproduction in the age of personal fabrication. Keywords: Analog Record, Personal Fabrication, Media Archaeology | |||
| Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment | | BIBAK | PDF | 95 | |
| Miles Thorogood; Philippe Pasquier | |||
| Soundscape composition in improvisation and performance contexts involves
many processes that can become overwhelming for a performer, impacting the
quality of the composition. One important task is evaluating the mood of a
composition for evoking accurate associations and memories of a soundscape. We
present a new system called Impress that uses supervised machine learning for
the acquisition and realtime feedback of soundscape affect. We use a feature
vector of audio descriptors to represent an audio signal and fit
multiple regression models to predict soundscape affect. A model of soundscape
affect is created by users entering evaluations of audio environments using a
mobile device. The same device then provides feedback to the user of the
predicted mood of other audio environments. The evaluation of the Impress
system suggests the tool is effective in predicting soundscape affect. Keywords: soundscape, performance, machine learning, audio features, affect grid | |||
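The supervised setup outlined in the abstract can be miniaturized as follows: each soundscape excerpt is represented by a small audio-feature vector and separate regressors predict the two affect-grid dimensions. The features, labels, and model choice below are fabricated placeholders, not Impress's actual feature set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Rows: hypothetical [rms, spectral centroid (kHz), zero-crossing rate] per excerpt.
X_train = rng.random((40, 3))
# Fabricated affect-grid labels loosely tied to the features, plus noise.
valence = 0.6 * X_train[:, 1] - 0.3 * X_train[:, 0] + 0.1 * rng.random(40)
arousal = 0.8 * X_train[:, 0] + 0.2 * X_train[:, 2] + 0.1 * rng.random(40)

valence_model = LinearRegression().fit(X_train, valence)
arousal_model = LinearRegression().fit(X_train, arousal)

new_excerpt = np.array([[0.4, 0.7, 0.2]])
print(valence_model.predict(new_excerpt), arousal_model.predict(new_excerpt))
```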
| Mobile rhythmic interaction in a sonic tennis game | | BIBAK | PDF | 96 | |
| Stefano Baldan; Amalia De Götzen; Stefania Serafin | |||
| This paper presents an audio-based tennis simulation game for mobile
devices, which uses motion input and non-verbal audio feedback as exclusive
means of interaction. Players have to listen carefully to the provided auditory
clues, like racquet hits and ball bounces, rhythmically synchronizing their
movements in order to keep the ball in play. The device can be swung freely
and act as a full-fledged motion-based controller, as the game does not rely at
all on visual feedback and the device display can thus be ignored. The game
aims to be entertaining but also effective for educational purposes, such as
ear training or improvement of the sense of timing, and enjoyable both by
visually-impaired and sighted users. Keywords: Audio game, mobile devices, sonic interaction design, rhythmic interaction,
motion-based | |||
| Kinectofon: Performing with Shapes in Planes | | BIBAK | PDF | 97 | |
| Alexander Refsum Jensenius | |||
| The paper presents the Kinectofon, an instrument for creating sounds through
free-hand interaction in a 3D space. The instrument is based on the RGB and
depth image streams retrieved from a Microsoft Kinect sensor device. These two
image streams are used to create different types of motiongrams, which, again,
are used as the source material for a sonification process based on inverse
FFT. The instrument is intuitive to play, allowing the performer to create
sound by "touching" a virtual sound wall. Keywords: Kinect, motiongram, sonification, video analysis | |||
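A compact way to picture the motiongram-to-sound pipeline: each time slice of the motiongram (motion energy per image row) is treated as the magnitude spectrum of a short audio frame and resynthesized with an inverse FFT. The sketch below uses random frames standing in for Kinect video and is only an approximation of the described process.

```python
import numpy as np

frames = np.random.rand(30, 240, 320)                 # fake video: (time, rows, cols)
diffs = np.abs(np.diff(frames, axis=0))               # per-pixel motion between frames
motiongram = diffs.sum(axis=2)                        # (time-1, rows): motion per image row

def sonify(motiongram, frame_len=512):
    audio = []
    for column in motiongram:
        # Interpret row-wise motion energy as rfft magnitudes (zero phase).
        mags = np.interp(np.linspace(0, 1, frame_len // 2 + 1),
                         np.linspace(0, 1, len(column)), column)
        audio.append(np.fft.irfft(mags, n=frame_len))
    return np.concatenate(audio)

print(sonify(motiongram).shape)
```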
| Providing a Feeling of Other Remote Learners' Presence in An Online Learning Environment via Realtime Sonification of Moodle Access Log | | BIBAK | PDF | 98 | |
| Toshihiro Kita; Naotoshi Osaka | |||
| When people learn using Web-based educational resources, they are sitting in
front of their own computer at home and are physically isolated from other
online learners. In this study, an Open Sound Control based prototype system
for sonification of the access log of Moodle, a popular e-learning system, has
been developed as a way to provide a feeling of other learners' presence. To
generate sound from access log in Moodle, we designed a mapping of the log data
to sound parameters. Keywords: e-learning, online learners, Moodle, Csound, realtime sonification, OSC
(Open Sound Control) | |||
| Digiti Sonus: Advanced Interactive Fingerprint Sonification Using Visual Feature Analysis | | BIB | 99 | |
| Yoon Chung Han; Byeong-jun Han; Matthew Wright | |||
| MAGE 2.0: New Features and its Application in the Development of a Talking Guitar | | BIB | 100 | |
| Maria Astrinaki; Nicolas d'Alessandro; Loïc Reboursière; Alexis Moinet; Thierry Dutoit | |||
| A New Wi-Fi based Platform for Wireless Sensor Data Collection | | BIB | 101 | |
| Jim Torresen; Yngve Hafting; Kristian Nymoen | |||
| Musical Poi (mPoi) | | BIB | 102 | |
| Sangbong Nam | |||
| Towards an Interface for Music Mixing based on Smart Tangibles and Multitouch | | BIB | 103 | |
| Steven Gelineck; Dan Overholt; Morten Büchert; Jesper Andersen | |||
| An Interactive 3D Network Music Space | | BIBAK | PDF | 104 | |
| Chad McKinney; Nick Collins | |||
| In this paper we present Shoggoth, a 3D graphics based program for
performing network music. In Shoggoth, users utilize video game style controls
to navigate and manipulate a grid of malleable height maps. Sequences can be
created by defining paths through the maps which trigger and modulate audio
playback. With respect to a context of computer music performance, and specific
problems in network music, design goals and technical challenges are outlined.
The system is evaluated through established taxonomies for describing
interfaces, followed by an enumeration of the merits of 3D graphics in
networked performance. In discussing proposed improvements to Shoggoth, design
suggestions for other developers and network musicians are drawn out. Keywords: 3D, Generative, Network, Environment | |||
| echobo: Audience Participation Using The Mobile Music Instrument | | BIBAK | PDF | 105 | |
| Sang Won Lee; Jason Freeman | |||
| This work aims to create a musical performance for large-scale audience
participation using mobile phones as musical instruments. Utilizing ubiquitous
smartphones, we attempted to facilitate audience engagement with networked
phones as musical instruments. Drawing on lessons learnt from previous work in
mobile music, audience participation practices, and networked instrument
design, we developed echobo. Audience members download the app, play the
instrument instantly, interact with other audience members, and contribute to
the music via sound generated on their mobile phones. Surveys of participants
indicated that it was easy to play and that participants felt connected to the
music and other musicians. Keywords: mobile music, audience participation, networked instrument | |||
| Jam On: A New Interface for Web-based Collective Music Performance | | BIBAK | PDF | 106 | |
| Ulysse Rosselet; Alain Renaud | |||
| This paper presents the musical interactions aspects of the design and
development of Jam On, a web-based interactive music collaboration system.
Based on a design science approach, this system is being built according to
principles taken from usability engineering and human computer interaction
(HCI). The goal of the system is to allow people with little to no musical
background to play a song collaboratively. The musicians control the musical
content and structure of the song thanks to an interface relying on the free
inking metaphor. The design of Jam On is based on a set of quality criteria
aimed at ensuring the musicality of the performance and the interactivity of
the technical system. The paper compares two alternative interfaces used in the
development of the system and explores the various stages of the design process
aimed at making the system as musical and interactive as possible. Keywords: Networked performance, interface design, mapping, web-based music
application | |||
| Modelling Gestures in Music Performance with Statistical Latent-State Models | | BIBAK | PDF | 107 | |
| Taehun Kim; Stefan Weinzierl | |||
| We try to identify "gestures" in music performances by observing
patterns in both compositional and expressive properties, and by modelling them
with a statistical approach. Assuming a finite number of latent states on each
property value, we can describe those gestures with statistical latent state
models, and train them by unsupervised learning algorithms. Results for several
recorded performances indicate that the trained models could identify the
gestures observed, and detect their boundaries. An entropy-based measure was
used to estimate the relevance of each property for the identified gestures.
Results for a larger corpus of recorded and annotated musical performances are
promising and reveal potential for further improvements. Keywords: Musical gestures, performance analysis, unsupervised machine learning | |||
| Toward DMI Evaluation Using Crowd-Sourced Tagging Techniques | | BIBAK | PDF | 108 | |
| Michael Everman; Colby Leider | |||
| Few formal methods exist for evaluating digital musical instruments (DMIs).
We propose a novel method of DMI evaluation using crowd-sourced tagging.
Tagging is already used to classify websites and musical genres, which, like
DMIs, do not lend themselves to simple categorization or parameterization.
Using the social tagging method, participating individuals assign descriptive labels, or tags, to a DMI. A DMI can then be evaluated by analyzing the tags associated with it. Metrics can be generated from the tags assigned to the instrument, and comparisons made to other instruments. This can give the designer valuable insight into where the strengths of the DMI lie and where improvements may be needed. Keywords: Evaluation, tagging, digital musical instrument | |||
| Paralinguistic Microphone | | BIBAK | PDF | 109 | |
| Alex McLean; EunJoo Shin; Kia Ng | |||
| The human vocal tract is considered for its sonorous qualities in carrying
prosodic information, which implicates vision in the perceptual processes of
speech. These considerations are put in the context of previous work in NIME,
forming background for the introduction of two sound installations:
"Microphone", which uses a camera and computer vision to translate mouth shapes
to sounds, and "Microphone II", a work-in-progress, which adds physical
modelling synthesis as a sound source, and visualisation of mouth movements. Keywords: face tracking, computer vision, installation, microphone | |||
| Coral -- a Physical and Haptic Extension of a Swarm Simulation | | BIBAK | PDF | 110 | |
| Daniel Bisig; Sébastien Schiesser | |||
| This paper presents a proof of concept implementation of an interface
entitled Coral. The interface serves as a physical and haptic extension of a
simulated complex system, which will be employed as an intermediate mechanism
for the creation of generative music and imagery. The paper discusses the
motivation and concept that underlies the implementation, describes its
technical realisation and presents first interaction experiments. The focus
lies on the following two aspects: the interrelation between the physical and
virtual behaviours and properties of the interface and simulation, and the
capability of the interface to enable an intuitive and tangible exploration of
a hybrid dynamical system. Keywords: haptic interface, swarm simulation, generative art | |||
| Personalized Song Interaction Using a Multi Touch Interface | | BIBAK | PDF | 111 | |
| Jeffrey Scott; Mickey Moorhead; Justin Chapman; Ryan Schwabe; Youngmoo E. Kim | |||
| Digital music technology is a catalyst for transforming the way people
listen to music and creates new avenues for creative interaction and expression
within the musical domain. The barrier to music creation, distribution and
collaboration has been reduced, leading to entirely new ecosystems of musical
experience. Software editing tools such as digital audio workstations (DAW)
allow nearly limitless manipulation of source audio into new sonic elements and
textures and have promoted a culture of recycling and repurposing of content
via mashups and remixes. We present a multi-touch application that allows a
user to customize their listening experience by blending various versions of a
song in real time. Keywords: Multi-track, Multi-touch, Mobile devices, Interactive media | |||
| Sonifying Game-Space Choreographies With UDKOSC | | BIBAK | PDF | 112 | |
| Rob Hamilton | |||
| With a nod towards digital puppetry and game-based film genres such as
machinima, recent additions to UDKOSC offer an Open Sound Control (OSC) input
layer for external control over third-person "pawn" entities, first-person
"player" actors and camera controllers in fully rendered game-space. Real-time
OSC input, driven by algorithmic process or parsed from a human-readable timed
scripting syntax allows users to shape intricate choreographies of timed
gesture, in this case actor motion and action, as well as an audience's view
into a game-space environment. As UDKOSC outputs real-time coordinate and
action data generated by UDK pawns and players with OSC, individual as well as
aggregate virtual actor gestures and motion can be leveraged as drivers for
both creative and procedural/adaptive gaming music and audio concerns. Keywords: procedural music, procedural audio, interactive sonification, game music,
Open Sound Control | |||
| "Old" is the New "New": a Fingerboard Case Study in Recrudescence as a NIME Development Strategy | | BIBAK | PDF | 113 | |
| Adrian Freed; John MacCallum; Sam Mansfield | |||
| This paper addresses the problem that most electrophones and computer-based
musical instruments are ephemera, lasting only long enough to signal academic and
technical prowess. They are rarely used in more than a few musical
performances. We offer a case study that suggests that longevity of use depends
on stabilizing the interface and innovating the implementation to maintain the
required stability of performance for players. Keywords: Fingerboard controller, Best practices, Recrudescence, Organology,
Unobtainium | |||
| SoloTouch: A Capacitive Touch Controller with Lick-based Note Selector | | BIBAK | PDF | 114 | |
| A Jackie; Yi Tang Chui; Mubarak Marafa; A Samson; Ka Fai Young | |||
| This paper describes the design of a guitar-inspired, pocket-sized
controller system, SoloTouch. SoloTouch consists of a capacitive touch trigger
and an automated note selector program. Requiring only one finger, the touch
trigger allows intuitive execution of both velocity-sensitive notes and
aftertouch messages. The automated note selector program selects consecutive
consonant notes from preprogrammed solo phrases. A companion iPhone app
displays and controls the phrases that are being performed. The interface
focuses on the balance between ease of playing and the degree of expressive
controls available. Players without prior musical training could perform
musical and expressive solos suitable for improvisational contexts in the style
of blues and rock. Keywords: Capacitive touch controller, automated note selector, virtual instrument
MIDI controller, novice musicians | |||
| Designing Empowering Vocal and Tangible Interaction | | BIBAK | PDF | 115 | |
| Anders-Petter Andersson; Birgitta Cappelen | |||
| Our voice and body are important parts of our self-experience, and our
communication and relational possibilities. They gradually become more
important for Interaction Design due to increased development of tangible
interaction and mobile communication. In this paper we present and discuss our
work with voice and tangible interaction in our ongoing research project RHYME.
The goal is to improve health for families, adults and children with
disabilities through use of collaborative, musical, tangible media. We build on
the use of voice in Music Therapy and on a humanistic health approach. Our
challenge is to design vocal and tangible interactive media that through use
reduce isolation and passivity and increase empowerment for the users. We use
sound recognition, generative sound synthesis, vibrations and cross-media
techniques to create rhythms, melodies and harmonic chords to stimulate
voice-body connections, positive emotions and structures for actions. Keywords: Vocal Interaction, Tangible Interaction, Music & Health, Voice, Empowerment,
Music Therapy, Resource-Oriented | |||
| Cloud Bridge: a Data-driven Immersive Audio-Visual Software Interface | | BIBAK | PDF | 116 | |
| Qian Liu; Yoon Chung Han; JoAnn Kuchera-Morin; Matthew Wright; George Legrady | |||
| Cloud Bridge is an immersive interactive audiovisual software interface for
both data exploration and artistic creation. It explores how information
sonification and visualization can facilitate findings, by creating interactive
visual/musical compositions. Cloud Bridge is a multi-user, multimodal
instrument based on a data set representing the history of items checked out by
patrons of the Seattle Public Library. A single user or a group of users
functioning as a performance ensemble participates in the piece by
interactively querying the database using iOS devices. Each device is
associated with a unique timbre and color for contributing to the piece, which
appears on large shared screens and a surround-sound system for all
participants and observers. Cloud Bridge leads to a new media interactive
interface utilizing audio synthesis, visualization and real-time interaction. Keywords: Data Sonification, Data Visualization, Sonification, User Interface, Sonic
Interaction Design, Open Sound Control | |||
| Mira: Liveness in iPad Controllers for Max/MSP | | BIBAK | PDF | 117 | |
| Sam Tarakajian; David Zicarelli; Joshua Clayton | |||
| Mira is an iPad app for mirroring Max patchers in real time with minimal
configuration. The Mira iPad app discovers open Max patchers automatically
using the Bonjour protocol, connects to them over WiFi and negotiates a
description of the Max patcher. As objects change position and appearance, Mira
makes sure that the interface on the iPad stays up to date. Mira eliminates the
need for an explicit mapping step between the interface and the system being
controlled. The user is never asked to input an IP address, nor to configure
the mapping between interface objects on the iPad and those in the Max patcher.
The composer prototyping an interface is therefore free to rapidly configure
and reconfigure it. Keywords: NIME, Max/MSP/Jitter, Mira, iPad, osc, bonjour, zeroconf | |||
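To illustrate the zero-configuration discovery described above, the following sketch browses the local network for a hypothetical Bonjour/zeroconf service from a desktop machine; the service type `_mira._tcp.local.` and all other details are assumptions, not Mira's documented advertisement.

```python
# A minimal sketch of Bonjour/zeroconf service discovery, analogous to how Mira
# finds open Max patchers on the LAN. The service type is an assumed placeholder.
import time
from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

class PatcherListener(ServiceListener):
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print(f"Found patcher service {name} at {info.parsed_addresses()[0]}:{info.port}")

    def remove_service(self, zc, type_, name):
        print(f"Patcher service {name} went away")

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_mira._tcp.local.", PatcherListener())
time.sleep(5)   # browse for a few seconds, then clean up
zc.close()
```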
| VOSIS: a Multi-touch Image Sonification Interface | | BIBAK | PDF | 118 | |
| Ryan McGee | |||
| VOSIS is an interactive image sonification interface that creates complex
wavetables by raster scanning greyscale image pixel data. Using a multi-touch
screen to play image regions of unique frequency content rather than a linear
scale of frequencies, it becomes a unique performance tool for experimental and
visual music. A number of image filters controlled by multi-touch gestures add
variation to the sound palette. On a mobile device, parameters controlled by
the accelerometer add another layer of expressivity to the resulting audio-visual
montages. Keywords: image sonification, multi-touch, visual music | |||
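A minimal sketch of the raster-scan principle described above follows: a rectangular region of a greyscale image is flattened row by row into a wavetable and read back at an audible rate. The region coordinates and playback settings are arbitrary, and the paper's multi-touch filters and accelerometer mappings are omitted.

```python
# Raster-scan a greyscale image region into a wavetable and read it back cyclically.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.png").convert("L"), dtype=np.float32)

# Select a touched region (x, y, width, height are assumptions) and raster-scan it.
x, y, w, h = 100, 80, 64, 64
region = img[y:y + h, x:x + w]
wavetable = (region.flatten() / 127.5) - 1.0       # map 0..255 to -1..1

# Render one second of audio by reading the wavetable at ~220 Hz.
sr, freq, dur = 44100, 220.0, 1.0
phase = (np.arange(int(sr * dur)) * freq * len(wavetable) / sr) % len(wavetable)
signal = wavetable[phase.astype(int)]
```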
| Feeling for Sound: Mapping Sonic Data to Haptic Perceptions | | BIBAK | PDF | 119 | |
| Tom Mudd | |||
| This paper presents a system for exploring different dimensions of a sound
through the use of haptic feedback. The Novint Falcon force feedback interface
is used to scan through soundfiles as a subject moves their hand horizontally
from left to right, and to relay information about volume, frequency content,
envelopes, or potentially any analysable parameter back to the subject through
forces acting on their hand.
General practicalities of mapping sonic elements to physical forces are considered, such as the problem of representing detailed data through vague physical sensation, approaches to applying forces to the hand that do not interfere with the smooth operation of the device, and the relative merits of discrete and continuous mappings. Three approaches to generating the force vector are discussed: 1) the use of simulated detents to identify areas where an audio parameter exceeds a certain threshold, 2) applying friction proportional to the level of the audio parameter along the axis of movement, and 3) creating forces perpendicular to the subject's hand movements. Potential uses of such a device are also discussed, such as 'pre-feeling' as a method for selecting material to play during a live performance, an aid for visually impaired audio engineers, and a general augmentation of standard audio editing environments. Keywords: Haptics, force feedback, mapping, human-computer interaction | |||
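As an illustration of the second force-generation approach listed above (friction proportional to the audio parameter along the axis of movement), a hedged sketch follows; the envelope data and scaling constants are placeholders rather than values from the paper.

```python
# An opposing "friction" force along the axis of hand movement, scaled by the
# analysed audio parameter (e.g. RMS level) at the current scan position.
import numpy as np

envelope = np.abs(np.random.randn(1000))        # stand-in for an analysed RMS envelope
envelope /= envelope.max()

def friction_force(x_position, x_velocity, max_force=2.0):
    """Return a 3D force opposing horizontal motion, proportional to the audio
    parameter at the hand's current scan position (x_position in 0..1)."""
    idx = int(np.clip(x_position, 0.0, 1.0) * (len(envelope) - 1))
    magnitude = max_force * envelope[idx]
    direction = -np.sign(x_velocity)            # oppose the direction of travel
    return np.array([direction * magnitude, 0.0, 0.0])
```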
| POWDER BOX: An Interactive Device with Sensor Based Replaceable Interface For Musical Session | | BIBAK | PDF | 120 | |
| Yoshihito Nakanishi; Seiichiro Matsumura; Chuichi Arakawa | |||
| In this paper, the authors introduce an interactive device, "POWDER BOX" for
use by novices in musical sessions. "POWDER BOX" is equipped with sensor-based
replaceable interfaces, which enable participants to discover and select their
favorite playing styles of musical instruments during a musical session. In
addition, it has a wireless communication function that synchronizes musical
scale and BPM between multiple devices. "POWDER BOX" provides novice
participants with opportunities to experience a cooperative music performance.
Here, the interaction design and configuration of the device is presented. Keywords: Musical instrument, synthesizer, replaceable interface, sensors | |||
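The wireless scale/BPM synchronisation mentioned above could, in principle, look like the following broadcast sketch; the message format, port and transport are assumptions for illustration, since the device's actual wireless protocol is not specified in the abstract.

```python
# One device broadcasts its BPM and scale over UDP so that other devices on the
# same network can parse the message and lock to the shared session settings.
import json
import socket

MSG = json.dumps({"bpm": 120, "scale": "C_minor_pentatonic"}).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(MSG, ("255.255.255.255", 5005))   # receivers adopt bpm/scale on arrival
```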
| Performing with a Mobile Computer System for Vibraphone | | BIBAK | PDF | 121 | |
| Charles Martin | |||
| This paper describes the development of an Apple iPhone based mobile
computer system for vibraphone and its use in a series of the author's
performance projects in 2011 and 2012.
This artistic research was motivated by a desire to develop an alternative to laptop computers for the author's existing percussion and computer performance practice. The aims were to develop a light, compact and flexible system using mobile devices that would allow computer music to infiltrate solo and ensemble performance situations where it is difficult to use a laptop computer. The project began with a system that brought computer elements to Nordlig Vinter, a suite of percussion duos, using an iPhone, RjDj, Pure Data and a home-made pickup system. This process was documented with video recordings and analysed using ethnographic methods. The mobile computer music setup proved to be elegant and convenient in performance situations with very little time and space to set up, as well as in performance classes and workshops. The simple mobile system encouraged experimentation and the platforms used enabled sharing with a wider audience. Keywords: percussion, mobile computer music, Apple iOS, collaborative performance
practice, ethnography, artistic research | |||
| NoiseBear: A Malleable Wireless Controller Designed In Participation with Disabled Children | | BIBAK | PDF | 122 | |
| Mick Grierson; Chris Kiefer | |||
| NoiseBear is a wireless malleable controller designed for, and in
participation with, physically and cognitively disabled children. The aim of
the project was to produce a musical controller that was robust and flexible
enough to be used in a wide range of interactive scenarios in participatory
design workshops. NoiseBear demonstrates an open ended system for designing
wireless malleable controllers in different shapes. It uses pressure sensitive
material made from conductive thread and polyester cushion stuffing, to give
the feel of a soft toy. The sensor networks with other devices using the
Bluetooth Low Energy protocol, running on a BlueGiga BLE112 chip. This contains
an embedded 8051 processor which manages the sensor. NoiseBear has undergone an
initial formative evaluation in workshop sessions with four autistic children,
and continues to evolve in a series of participatory design workshops. The
evaluation showed that the controller could be engaging for the children to use,
and highlighted some technical limitations of the design. Solutions to these
limitations are discussed, along with plans for future design iterations. Keywords: malleable controllers, assistive technology, multi-parametric mapping | |||
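To illustrate how a host might receive sensor data from a Bluetooth Low Energy controller such as the one described above, a hedged sketch using the bleak library follows; the device address, characteristic UUID and byte encoding are hypothetical, since the abstract does not describe the BLE112 firmware's GATT layout.

```python
# Subscribe to pressure notifications from a hypothetical BLE characteristic.
import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"                     # hypothetical address
PRESSURE_CHAR = "0000aaaa-0000-1000-8000-00805f9b34fb"   # hypothetical UUID

def on_pressure(_sender, data: bytearray):
    # Interpret the first byte as a raw pressure reading (assumed encoding).
    print("pressure:", data[0])

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(PRESSURE_CHAR, on_pressure)
        await asyncio.sleep(10)                          # stream readings for 10 s
        await client.stop_notify(PRESSURE_CHAR)

asyncio.run(main())
```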
| AlphaSphere | | BIBAK | PDF | 123 | |
| Adam Place; Liam Lacey; Thomas Mitchell | |||
| The AlphaSphere is an electronic musical instrument featuring a series of
tactile, pressure sensitive touch pads arranged in a spherical form. It is
designed to offer a new playing style, while allowing for the expressive
real-time modulation of sound available in electronic-based music. It is also
designed to be programmable, enabling the flexibility to map a series of
different notational arrangements to the pad-based interface.
The AlphaSphere functions as an HID, MIDI and OSC device, which connects to a computer and/or independent MIDI device, and its control messages can be mapped through the AlphaLive software. Our primary motivation for creating the AlphaSphere is to design an expressive music interface which can exploit the sound palette of synthesizers in a design that allows for the mapping of notational arrangements. Keywords: AlphaSphere, MIDI, HID, polyphonic aftertouch, open source | |||
| The Black Box | | BIBAK | PDF | 124 | |
| Romain Michon; Myles Borins; David Meisenholder | |||
| The Black Box is a site-based installation that allows users to create
unique sounds through physical interaction. The installation consists of a
geodesic dome, surround-sound speakers, and a custom cube-shaped instrument suspended from
the apex of the dome. Audience members entering the space are able to create
sound by striking or rubbing the cube, and are able to control a delay system
by moving the cube within the space. Keywords: Satellite CCRMA, Beagleboard, PureData, Faust, EmbeddedLinux, Open Sound
Control | |||
| Gamelan Sampul: Laptop Sleeve Gamelan | | BIBAK | PDF | 125 | |
| Antonius Wiriadjaja | |||
| The Gamelan Sampul is a laptop sleeve with embedded circuitry that allows
users to practice playing Javanese gamelan instruments without a full set of
instruments. It is part of a larger project that aims to develop a set of
portable and mobile tools for learning, recording and performing classical
Javanese gamelan music.
The accessibility of a portable Javanese gamelan set introduces the musical genre to audiences who have never experienced this traditional music before, passing down long-established customs to future generations. But it also raises the question of what is and what is not appropriate to the musical tradition. The Gamelan Sampul attempts to introduce new technology to traditional folk music while staying sensitive to cultural needs. Keywords: Physical computing, product design, traditional folk arts, gamelan | |||
| Plum St: Live Digital Storytelling with Remote Browsers | | BIBAK | PDF | 126 | |
| Ben Taylor; Jesse Allison | |||
| What is the place for Internet Art within the paradigm of remote music
performance? In this paper, we discuss techniques for live audiovisual
storytelling with the Web browsers of remote viewers. We focus on the
incorporation of socket technology to create a real-time link between performer
and audience, enabling control of audiovisual media directly within the
audience's browsers. Finally, we describe Plum Street, an online multimedia
performance, and suggest that by appropriating Web media such as Google Maps,
social media, and Web Audio into the genre of remote audio performance, we can
tell stories in a way that more accurately addresses modern life and
holistically fulfills the Web browser's capabilities as a contemporary
performance instrument. Keywords: Remote Performance, Network Music, Internet Art, Storytelling | |||
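A minimal sketch of the performer-to-audience socket link described above is given below, using WebSockets; the message schema and media references are hypothetical, and the actual Plum Street implementation may differ.

```python
# Performer-side server that pushes control cues to every connected audience
# browser over WebSockets. The "/scene" message schema is an assumed example.
import asyncio
import json
import websockets

clients = set()

async def handler(ws):
    clients.add(ws)                      # each audience browser registers here
    try:
        await ws.wait_closed()
    finally:
        clients.discard(ws)

async def broadcast(message):
    data = json.dumps(message)
    await asyncio.gather(*(c.send(data) for c in clients), return_exceptions=True)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        while True:
            # Example cue: pan the shared map and start a sound in every browser.
            await broadcast({"address": "/scene", "map": "plum_st", "sound": "train.mp3"})
            await asyncio.sleep(5)

asyncio.run(main())
```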
| PENny: An Extremely Low-Cost Pressure-Sensitive Stylus for Existing Capacitive Touchscreens | | BIBAK | PDF | 127 | |
| Johnty Wang; Nicolas d'Alessandro; Aura Pon; Sidney Fels | |||
| By building a wired passive stylus we have added pressure sensitivity to
existing capacitive touch screen devices for less than $10 in materials, about
1/10th the cost of existing solutions. The stylus makes use of the built-in
audio interface that is available on most smartphones and tablets on the market
today. Limitations of the device include the physical constraint of wires, the
occupation of one audio input and one audio output channel, and added latency
of at least one audio buffer period. The stylus has been
demonstrated in two NIME applications thus far: a visual musical score-drawing
application and a singing synthesis application. Keywords: input interfaces, touch screens, tablets, pressure-sensitive, low-cost | |||
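Since the abstract above states only that the stylus occupies one audio output and one audio input channel, the following sketch illustrates one plausible sensing scheme (drive a probe tone out, read back its level, map the level to pressure); this is an assumption for illustration, not the published circuit or signal chain.

```python
# Hedged sketch of audio-channel pressure sensing: excite via the output channel,
# estimate pressure from the returned signal level on the input channel.
import numpy as np
import sounddevice as sd

SR, BUF = 44100, 256
probe = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(BUF) / SR).astype(np.float32)

def callback(indata, outdata, frames, time_info, status):
    outdata[:, 0] = probe[:frames]                    # drive the stylus circuit
    level = np.sqrt(np.mean(indata[:, 0] ** 2))       # returned signal level (RMS)
    pressure = np.clip(level / 0.5, 0.0, 1.0)         # crude level-to-pressure map
    print(f"pressure ~ {pressure:.2f}")

with sd.Stream(samplerate=SR, blocksize=BUF, channels=1, callback=callback):
    sd.sleep(2000)                                    # run the duplex stream for 2 s
```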
| An Easily Removable, Wireless Optical Sensing System (EROSS) for the Trumpet | | BIB | 128 | |
| Leonardo Jenkins; Shawn Trail; George Tzanetakis; Peter Driessen; Wyatt Page | |||
| Embedded Networking and Hardware-Accelerated Graphics with Satellite CCRMA | | BIB | 129 | |
| Edgar Berdahl; Spencer Salazar; Myles Borins | |||
| Agile Interface Development using OSC Expressions and Process Migration | | BIB | 130 | |
| Adrian Freed; John MacCallum; David Wessel | |||
| NEXUS: Collaborative Performance for the Masses, Handling Instrument Interface Distribution through the Web | | BIB | 131 | |
| Jesse Allison; Yemin Oh; Benjamin Taylor | |||
| Further Finger Position and Pressure Sensing Techniques for Strings and Keyboard Instruments | | BIB | 132 | |
| Tobias Grosshauser | |||