
NIME 2014: New Interfaces for Musical Expression

Fullname: NIME 2014: New Interfaces for Musical Expression
Editors: Atau Tanaka; Rebecca Fiebrink; Baptiste Caramiaux; Koray Tahiroğlu
Location: London, England
Dates: 2014-Jun-30 to 2014-Jul-04
Standard No: hcibib: NIME14
Papers: 179
Links: Conference Home Page | Online Proceedings
  1. Workshops
  2. Papers: Musicianship, Practice-Based Research
  3. Papers: Collaborative Music Making
  4. Sensing and Augmented Instruments
  5. Demos
  6. Posters
  7. Papers: Motion and Gesture
  8. Papers: Musical Interaction Design
  9. Papers: Networked Wireless Systems
  10. Papers: Machine Learning Applied
  11. Demos
  12. Posters
  13. Papers: Audio Applications and Installations
  14. Papers: Interfaces: Development, Deployment, Evaluation
  15. Papers: Robotic and Mechatronic Systems
  16. Demos
  17. Posters
  18. Papers: Tangible Interaction and Interfaces
  19. Panels

Workshops

Interactive Music Notation & Representation
  Jean Bresson; Pierre Couprie; Dominique Fober; Yann Gueslin; Richard Hoadley
Computer music tools for music notation have long been restricted to conventional approaches and dominated by a few systems, mainly oriented towards music engraving. During the last decade and driven by artistic and technological evolutions, new tools and new forms of music representation have emerged. The recent advent of systems like Bach, MaxScore or INScore (to cite just a few), clearly indicates that computer music notation tools have become mature enough to diverge from traditional approaches and to explore new domains and usages such as interactive and live notation.
   The aim of the workshop is to gather artists, researchers and application developers, to compare the views and the needs inspired by contemporary practices, with a specific focus on interactive and live music, including representational forms emerging from live coding. Special consideration will be given to new instrumental forms emerging from the NIME community.
Practice-based Research and NIMEs
  Ernest Edmonds; Samuel Ferguson; Andrew Johnston
Practitioner-researchers in new musical instrument/interface design often set themselves multiple challenges: they seek to design and implement new technologies, create and perform new works, examine and evaluate what they have done and, finally, articulate what has been learned in the process.
   To do this effectively requires careful consideration of the links between creative work and research. Failing to do so can lead to technical research which lacks relevance to creative practice or, conversely, creative work where the broader contribution is unclear.
   This workshop focuses on the relationships between creative practice and research -- and blends of the two -- with particular emphasis on new musical interface/instrument design.
A NIME Primer
  Michael Lyons; Sidney Fels
Attending NIME for the first time can be an overwhelming experience. Beginners may find it difficult to make sense of the vast array of topics presented during the busy program of talks and posters, or to appreciate the significance of the wide variety of demos and concerts. This half-day tutorial is intended to provide a general and gentle introduction to the theory and practice of the design of interactive systems for music creation and performance. Our target audience consists of newcomers to the field who would like to start research projects, as well as interested students, people from other fields and members of the public with a general interest in the potential of NIME. We aim to give our audience an entry point to the theory and practice of musical interface design by drawing on case studies from previous years of the conference. Past attendees of the tutorial have told us that they gained a perspective that increased their understanding and appreciation of their first NIME.
Keywords: New interfaces for musical expression, musical instrument design, human computer interaction, expressive interfaces, multimodal interaction, digital musical instruments
Human Harp
  Di Mainstone; Adam Stark; Becky Stewart
The Human Harp is a project led by artist Di Mainstone with hardware and software engineering by Becky Stewart and Adam Stark. The project draws inspiration from suspension bridges and explores how they can be transformed into musical instruments through augmentation with physical computing interfaces.
   The proposed workshop will lead participants through the hardware and software central to the installation -- in particular the hardware string interface and the sound generation software. Participants will get hands-on experience programming and testing the Arduino-based hardware and then a guided tour through the software generating the audio.
Learning to Programme Haptic Interactions
  Edgar Berdahl; Alexandros Kontogeorgakopoulos
In this workshop, participants will learn how to program force-feedback haptic interactions in Max. During the workshop, each participant will borrow a FireFader haptic device with the option of purchasing it at the end of the workshop. This workshop aims to get participants easily up to speed by examining simple example haptic interactions in the familiar Max programming environment. Many of these examples are based on physical models and leverage Max's palette of visualization objects to help communicate the means of operation to participants. More advanced examples help provide participants with specific insight into how haptics can be integrated into novel music compositions and sound art. Several music compositions will be used as examples: "Metronom," "Transmogrified Strings," "Engraving-Hammering-Casting," and "When The Robots Get Loose."
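   As a point of reference for the physical-model examples mentioned above, here is a minimal sketch (in Python rather than the workshop's Max environment) of a mass-spring-damper haptic interaction: the fader position drives a virtual mass through a damped spring, and the spring's reaction force is returned to the fader's motor. All constants and names are illustrative, not taken from the FireFader examples.

```python
# Minimal mass-spring-damper haptic loop (illustrative constants).
class SpringMassHaptics:
    def __init__(self, k=40.0, b=0.5, m=0.05, dt=0.001):
        self.k, self.b, self.m, self.dt = k, b, m, dt
        self.pos, self.vel = 0.0, 0.0        # virtual mass state

    def step(self, fader_pos):
        """One ~1 kHz tick: return the force to command on the fader."""
        stretch = fader_pos - self.pos
        force_on_mass = self.k * stretch - self.b * self.vel
        self.vel += (force_on_mass / self.m) * self.dt   # semi-implicit Euler
        self.pos += self.vel * self.dt
        return -self.k * stretch   # pulls the fader back toward the mass

h = SpringMassHaptics()
for _ in range(3):
    print(h.step(fader_pos=0.2))
```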
NIME & Accessibility
  Conor Barry; Nick Bryan-Kinns; Fiore Martin; Oussama Metatla; Sile O'Modhrain; Tony Stockman; Adam Parkinson
We propose to bring together different communities, including the NIME community and the community centred around the AIMS (Accessible Interfaces for Music and Sound) group, to participate in a one-day workshop at NIME 2014 to explore both the accessibility of NIMEs and what NIMEs can bring to accessible audio interfaces. This will take the shape of a day of demos of NIME-related technologies, presentations, and sessions for learning and the exchange of skills and techniques surrounding accessibility and interactive music. The workshop aims to balance information exchange with practical hands-on experience, and will feature a 90-minute special session led by Sile O'Modhrain on movement and interactivity, giving participants experience working with NIME-related technologies that they can extend into their own practices.
Sonic Bikes
  Dave Griffiths; Kaffe Matthews
The NIME BRI workshop will present the sonic bicycle as an instrument, along with the research behind the technical work that made 'Pedalling Games' playable in the park during the conference. (Pedalling Games was commissioned by NIME.)
   The workshop will run as an open lab in which conference delegates and participants can give input on different kinds of interactivity between bikes, and on how these could be developed sonically (and even visually) into musical outcomes by pedalling alone and together.
   Five sonic bikes will be available for participants to try on the lawn or tennis court beside the conference centre, and participants may be able to bring their own bikes for sonic augmentation. Participants will be able to give feedback on their experience and offer technical or philosophical input on how the sonic bike could be improved, developed or altered. Some of these ideas will be built in the workshop and taken out to try, and so on.
Keyboard Salon: Connecting Instrument Designers and Artistic Practitioners
  Andrew McPherson; Thomas Walther; Xiao Xiao
Within the current conference format, there are two ways to experience new interfaces -- as a demo or in a concert performance. Demo sessions display many projects concurrently, with a focus on the technical capabilities of an interface rather than its artistic potential. Concerts focus on the performative aspects of each interface, which is given full attention on stage, but there are few opportunities for critical feedback from the community. In this workshop, we would like to experiment with a new format for trying out and talking about new interfaces. Inspired by the salons of the 18th and 19th centuries, where composers, virtuosos and aficionados gathered around a piano to play and discuss new music, we propose a salon-style workshop on the topic of new keyboard instruments and interfaces. We hope to gather both designers and artists from established musical practices outside NIME to share their expertise both in discussion and in hands-on experimentation.
   Through the workshop, we want to focus on the experience of new keyboard instruments. For instrument-builders within NIME, this means exploring how to extend a NIME into a wider artistic community; for artists, we are interested in exploring how their creative practice can influence other musicians and designers.
Musical Metacreation Tutorial (MUME)
  Oliver Bown; Arne Eigenfeldt; Philippe Pasquier
This three-hour tutorial introduces the field of musical metacreation (MUME) and its current developments, promises, and challenges, with a particular focus on NIME-relevant aspects of the field.
   Thanks to continued progress in artistic and scientific research, a new possibility has emerged in our musical relationship with technology: Musical Metacreation. MUME involves using tools and techniques from artificial intelligence, artificial life, and machine learning, themselves often inspired by cognitive and life sciences, to endow machines with musical creativity. Concretely, it brings together artists, practitioners and researchers interested in developing systems that autonomously (or interactively) recognize, learn, represent, compose, complete, accompany, or interpret musical data.
   Besides introducing the field of musical metacreation (MUME) and its current developments, the tutorial will bring to the fore NIME-relevant aspects of the field, such as musical interfaces for collaboration between human performers and creative software "partners", and the development of interfaces and instruments that foster and support computer-assisted musical creativity.

Papers: Musicianship, Practice-Based Research

NIME, Musicality and Practice-led Methods
  Owen Green
To engage with questions of musicality is to invite into consideration a complex network of topics beyond the mechanics of soundful interaction with our interfaces. Drawing on the work of Born, I sketch an outline of the reach of these topics. I suggest that practice-led methods, by dint of focussing on the lived experience where many of these topics converge, may be able to serve as a useful methodological 'glue' for NIME by helping stimulate useful agonistic discussion on our objects of study, and map the untidy contours of contemporary practices. I contextualise this discussion by presenting two recently developed improvisation systems and drawing from these some starting suggestions for how attention to the grain of lived practice could usefully contribute to considerations for designers in terms of the pursuit of musicality and the care required in considering performances in evaluation.
Hybrid Resonant Assemblages: Rethinking Instruments, Touch and Performance in New Interfaces for Musical Expression
  John Bowers; Annika Haas
This paper outlines a concept of hybrid resonant assemblages, combinations of varied materials excited by sound transducers, feeding back to themselves via digital signal processing. We ground our concept as an extension of work by David Tudor, Nicolas Collins and Bowers and Archer [NIME 2005] and draw on a variety of critical perspectives in the social sciences and philosophy to explore such assemblages as an alternative to more familiar ideas of instruments and interfaces. We lay out a conceptual framework for the exploration of hybrid resonant assemblages and describe how we have approached implementing them. Our performance experience is presented and implications for work are discussed. In the light of our work, we urge a reconsideration of the implicit norms of performance which underlie much research in NIME. In particular, drawing on the philosophical work of Jean-Luc Nancy, we commend a wider notion of touch that also recognises the performative value of withholding contact.
Examining the Perception of Liveness and Activity in Laptop Music: Listeners' inference about what the performer is doing from the audio alone
  Oliver Bown; Renick Bell; Adam Parkinson
Audiences of live laptop music frequently express dismay at the opacity of performer activity and question how "live" performances actually are. Yet motionless laptop performers endure as musical spectacles from clubs to concert halls, suggesting that for many this is a non-issue. Understanding these perceptions might help performers better achieve their intentions, inform interface design within the NIME field and help develop theories of liveness and performance. To this end, a study of listeners' perception of liveness and performer control in laptop performance was carried out, in which listeners examined several short audio-only excerpts of laptop performances and answered questions about their perception of the performance: what they thought was happening and its sense of liveness. Our results suggest that audiences are likely to associate liveness with perceived performer activity such as improvisation and the audibility of gestures, whereas perceptions of generative material, backing tracks, or other preconceived material do not appear to inhibit perceptions of liveness.
Improvising with the Threnoscope: Integrating Code, Hardware, GUI, Network, and Graphic Scores
  Thor Magnusson
Live coding emphasises improvisation. It is an art practice that merges the act of musical composition and performance into a public act of projected writing. This paper introduces the Threnoscope system, which includes a live coding micro-language for drone-based microtonal composition. The paper discusses the aims and objectives of the system, elucidates the design decisions, and introduces in particular the code score feature present in the Threnoscope. The code score is a novel element in the design of live coding systems allowing for improvisation through a graphic score, rendering a visual representation of past and future events in a real-time performance. The paper demonstrates how the system's methods can be mapped ad hoc to GUI- or hardware-based control.

Papers: Collaborative Music Making

Instrumenting the Interaction: Affective and Psychophysiological Features of Live Collaborative Musical Improvisation
  Evan Morgan; Hatice Gunes; Nick Bryan-Kinns
New technologies have led to the design of exciting interfaces for collaborative music making. However we still have very little understanding of the underlying affective and communicative processes which occur during such interactions. To address this issue, we carried out a pilot study where we collected continuous behavioural, physiological, and performance related measures from pairs of improvising drummers. This paper presents preliminary findings, which could be useful for the evaluation and design of user-centred collaborative interfaces for musical creativity and expression.
Augmented Stage for Participatory Performances
  Dario Mazzanti; Victor Zappi; Darwin Caldwell; Andrea Brogni
Designing a collaborative performance requires the use of paradigms and technologies which can deeply influence the experience of the whole piece. In this paper we define a set of six variables, and use them to describe and evaluate a number of platforms for participatory performances. Based on this evaluation, the Augmented Stage is introduced. This concept describes how Augmented Reality techniques can be used to superimpose a virtual environment, populated with interactive elements, onto a performance stage. The manipulation of these objects allows spectators to contribute to the visual and sonic outcome of the performance through their mobile devices, while keeping their freedom to focus on the stage. An interactive acoustic rock performance based on this concept was staged. Questionnaires distributed to the audience and the performers' comments were analyzed, contributing to an evaluation of the presented concept and platform in terms of the defined variables.
Touch Screen Collaborative Music: Designing NIME for Older People with Dementia
  Stu Favilla; Sonja Pedell
This paper presents new touch-screen collaborative music interaction for people with dementia. The authors argue that dementia technology has yet to focus on collaborative multi-user group musical interactions. The project aims to contribute to dementia care while addressing a significant gap in the current literature. Two trials explore contrasting musical scenarios: the performance of abstract electronic music and the distributed performance of J.S. Bach's Goldberg Variations. Findings presented in this paper demonstrate that people with dementia can successfully perform and engage in collaborative music performance activities with little or no scaffolded instruction. Further findings suggest that people with dementia can develop and retain musical performance skill over time. This paper proposes a number of guidelines and design solutions.
SoundXY4: Supporting Tabletop Collaboration and Awareness with Ambisonics Spatialisation
  Anna Xambó; Gerard Roma; Robin Laney; Chris Dobbyn; Sergi Jordà
Co-located tabletop tangible user interfaces (TUIs) for music performance are known for promoting multi-player collaboration with a shared interface, yet it is still unclear how best to support workspace awareness in terms of understanding one's own actions and other group members' actions in parallel. In this paper, we investigate the effects of providing auditory feedback using ambisonics spatialisation, aimed at informing users about the location of the tangibles on the tabletop surface, with groups of mixed musical backgrounds. Participants were asked to improvise music on "SoundXY4: The Art of Noise", a tabletop system that includes sound samples inspired by Russolo's taxonomy of noises. We compared spatialisation vs. no-spatialisation conditions, and findings suggest that, with spatialisation, there was clearer workspace awareness and greater engagement in the musical activity as an immersive experience.
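   As an illustration of the spatialisation condition described above, the following sketch encodes a tangible's sound into first-order Ambisonics (B-format) from its tabletop position, so that each object is heard from its own direction. This is a generic sketch under our own assumptions, not SoundXY4's actual audio pipeline.

```python
import numpy as np

def encode_bformat(mono, x, y, cx=0.5, cy=0.5):
    """First-order Ambisonic (B-format) encoding of a mono signal at the
    azimuth of tabletop position (x, y) relative to centre (cx, cy)."""
    azimuth = np.arctan2(y - cy, x - cx)
    w = mono / np.sqrt(2.0)          # omnidirectional component
    xc = mono * np.cos(azimuth)      # front-back component
    yc = mono * np.sin(azimuth)      # left-right component
    return w, xc, yc

mono = np.random.default_rng(0).normal(size=44100) * 0.1   # placeholder sound
W, X, Y = encode_bformat(mono, x=0.9, y=0.2)               # tangible's position
```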
Experio: a Design for Novel Audience Participation in Club Settings
  Bastiaan van Hout; Luca Giacolini; Bart Hengeveld; Mathias Funk; Joep Frens
When looking at modern music club settings, especially in the area of electronic music, music is consumed in a unidirectional way -- from DJ or producer to the audience -- with few direct means for the audience to influence or participate. In this paper we challenge this phenomenon and aim for a new bond between the audience and the DJ through the creation of an interactive dance concept: Experio. Experio allows multiple audience participants to influence the musical performance through dance, facilitated by a musical moderator using a tailored interface. This co-creation of electronic music on both novice and expert levels is a new participatory live performance approach, which is evaluated on the basis of thousands of visitors who interacted with Experio during several international exhibitions.
TreeQuencer: Collaborative Rhythm Sequencing -- A Comparative Study
  Niklas Klügel; Georg Groh; Gerhard Hagerer
In this contribution we show three prototypical applications that allow users to collaboratively create rhythmic structures, with successively more degrees of freedom for generating rhythmic complexity. By means of a user study we analyze the impact of this on the users' satisfaction, and further compare it to data logged during the experiments, which allows us to measure the rhythmic complexity created.

Sensing and Augmented Instruments

Low-Latency Audio Pitch Tracking: A Multi-Modal Sensor-Assisted Approach
  Laurel Pardue; Dongjuan Nian; Christopher Harte; Andrew McPherson
This paper presents a multi-modal approach to musical instrument pitch tracking combining audio and position sensor data. Finger location on a violin fingerboard is measured using resistive sensors, allowing rapid detection of approximate pitch. The initial pitch estimate is then used to restrict the search space of an audio pitch tracking algorithm. Most audio-only pitch tracking algorithms face a fundamental tradeoff between accuracy and latency, with longer analysis windows producing better pitch estimates at the cost of noticeable lag in a live performance environment. Conversely, sensor-only strategies struggle to achieve the fine pitch accuracy a human listener would expect. By combining the two approaches, high accuracy and low latency can be simultaneously achieved.
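   A minimal sketch of the core idea as described: the coarse sensor estimate restricts the lag range searched by an autocorrelation pitch tracker, so a short, low-latency window suffices. The function and parameter names are our own, not the paper's implementation.

```python
import numpy as np

def pitch_track(frame, sr, sensor_hz, semitones=2.0):
    """Refine a coarse sensor pitch by searching autocorrelation lags
    only within +/- `semitones` of the sensor's estimate."""
    lo_hz = sensor_hz / 2 ** (semitones / 12.0)
    hi_hz = sensor_hz * 2 ** (semitones / 12.0)
    min_lag, max_lag = int(sr / hi_hz), int(sr / lo_hz)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag = min_lag + int(np.argmax(ac[min_lag:max_lag + 1]))
    return sr / lag

sr = 44100
t = np.arange(1024) / sr                        # ~23 ms window
frame = np.sin(2 * np.pi * 441.0 * t)           # true pitch 441 Hz
print(pitch_track(frame, sr, sensor_hz=435.0))  # sensor only roughly right
```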
The Actuated guitar: Implementation and user test on children with Hemiplegia
  Jeppe Larsen; Thomas Moeslund; Dan Overholt
People with a physical handicap are often unable to engage with and embrace the world of music on the same terms as able-bodied people. Musical instruments have been refined over the centuries into highly specialized devices, nearly all of which require two functioning hands. In this study we enable people with hemiplegia to play a real electric guitar by modifying it so that they can actually play it. We developed the guitar platform to use sensors that capture the rhythmic motion of alternative, fully functioning limbs, such as a foot, knee or the head, to drive a motorized fader moving a pick back and forth across the strings. The approach employs the flexibility of a programmable digital system, which allows us to scale and map different ranges of data from various sensors to the motion of the actuator, making it easier to adapt to individual users. To validate and test the instrument platform we collaborated with the Helene Elsass Center during their 2013 Summer Camp to see whether we had succeeded in creating an electric guitar that children with hemiplegia could play. The initial user studies showed that children with hemiplegia were able to play the actuated guitar, producing rhythmical movement across the strings, and thereby enter a world of music they so often see as closed.
Visualizing Song Structure on Timecode Vinyls
  Florian Heller; Jan Borchers
Although the turntable is an analog technology, many DJs still value it as an irreplaceable performance tool. Digital vinyl systems combine the distinct haptic nature of the analog turntable with the advantages of digital media. They use special records containing a digital timecode which is processed by a computer and mapped to properties like playback speed and direction. These records, however, are generic and, in contrast to traditional vinyl, do not provide visual cues representing the structure of the track. We present a system that augments the timecode record with a visualization of song information such as artist, title, and track length, but also with a waveform that allows the DJ to visually navigate to a certain beat. We conducted a survey examining the acceptance of such tools in the DJ community and a user study with professional DJs. The system was widely accepted as a tool in the DJ community and received very positive feedback during observational mixing sessions with four professional DJs.
Optical Measurement of Acoustic Drum Strike Locations
  Janis Sokolovskis; Andrew McPherson
This paper presents a method for locating the position of a strike on an acoustic drumhead. Near-field optical sensors were installed underneath the drumhead of a commercially available snare drum. By implementing a time difference of arrival (TDOA) algorithm, accuracy within 2 cm was achieved in approximating the location of strikes. The system can be used for drum performance analysis and timbre analysis, and can form the basis of an augmented drum performance system.
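   The following sketch illustrates TDOA localisation in the spirit of the paper, assuming known sensor positions and a known wave speed in the membrane; a coarse grid search stands in for the paper's (unspecified) solver, and all values are illustrative.

```python
import numpy as np

SENSORS = np.array([[0.0, 0.0], [0.14, 0.0], [0.07, 0.12]])  # m, illustrative
WAVE_SPEED = 100.0                    # m/s in the membrane, illustrative

def locate(tdoa_01, tdoa_02, grid=200, radius=0.15):
    """Grid point whose predicted TDOAs (sensor 0 as reference) best
    match the measured ones."""
    xs = np.linspace(-radius, radius, grid)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            d = np.hypot(SENSORS[:, 0] - x, SENSORS[:, 1] - y)
            err = ((d[1] - d[0]) / WAVE_SPEED - tdoa_01) ** 2 \
                + ((d[2] - d[0]) / WAVE_SPEED - tdoa_02) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

true = np.array([0.05, 0.04])         # simulate a strike to test
d = np.hypot(SENSORS[:, 0] - true[0], SENSORS[:, 1] - true[1])
print(locate((d[1] - d[0]) / WAVE_SPEED, (d[2] - d[0]) / WAVE_SPEED))
```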
Techniques in Swept Frequency Capacitive Sensing: An Open Source Approach
  Colin Honigman; Jordan Hochenbaum; Ajay Kapur
This paper introduces a new technique for creating Swept Frequency Capacitive Sensing with open source technology for use in creating richer and more complex musical gestures. This new style of capacitive touch sensing is extremely robust compared to older versions and will allow greater implementation of gesture recognition and touch control in the development of NIMEs. Inspired by the Touché project, this paper discusses how to implement this technique using the community standard hardware Arduino instead of custom designed electronics. The technique requires only passive components and can be used to enhance the touch sensitivity of many everyday objects and even biological materials and substances such as plants, which this paper will focus on as a case study through the project known as Cultivating Frequencies. This paper will discuss different techniques of filtering data captured by this system, different methods for creating gesture recognition unique to the object being used, and the implications of this technology as it pertains to the goal of ubiquitous sensing. Furthermore, this paper will introduce a new Arduino Library, SweepingCapSense, which simplifies the coding required to implement this technique.
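   The following sketch illustrates the swept-frequency principle, not the SweepingCapSense library itself: the object is excited across a range of frequencies, the response amplitude at each frequency forms a profile, and a nearest-template match labels the gesture. The measurement function is a hypothetical stand-in for the Arduino-side reading.

```python
import numpy as np

FREQS = np.linspace(1e3, 3.5e6, 160)        # sweep range, illustrative

def read_amplitude(freq):
    """Hypothetical stand-in for the Arduino-side measurement."""
    raise NotImplementedError("replace with a serial read from the sensor")

def capture_profile():
    """Response amplitude at each swept frequency -> gesture profile."""
    return np.array([read_amplitude(f) for f in FREQS])

def classify(profile, templates):
    """templates: dict of gesture name -> stored profile (same length)."""
    return min(templates,
               key=lambda name: np.linalg.norm(profile - templates[name]))
```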

Demos


Posters


Papers: Motion and Gesture

Dynamics in Music Conducting: A Computational Comparative Study Among Subjects
  Alvaro Sarasua; Enric Guaus
Many musical interfaces have used the musical conductor metaphor, allowing users to control the expressive aspects of a performance by imitating the gestures of conductors. In most of them, the rules to control these expressive aspects are predefined and users have to adapt to them. Other works have studied conductors' gestures in relation to the performance of the orchestra. The goal of this study is to analyze, following the path initiated by this latter kind of works, how simple motion capture descriptors can explain the relationship between the loudness of a given performance and the way in which different subjects move when asked to impersonate the conductor of that performance. Twenty-five subjects were asked to impersonate the conductor of three classical music fragments while listening to them. The results of different linear regression models with motion capture descriptors as explanatory variables show that, by studying how descriptors correlate to loudness differently among subjects, different tendencies can be found and exploited to design models that better adjust to their expectations.
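   The analysis style described lends itself to a simple sketch: per subject, an ordinary least-squares fit of loudness against motion-capture descriptors, whose coefficients can then be compared across subjects. Descriptor names and data below are illustrative, not the study's.

```python
import numpy as np

def fit_subject(descriptors, loudness):
    """OLS fit of loudness (frames,) on motion descriptors (frames, k);
    returns intercept + one coefficient per descriptor."""
    X = np.column_stack([np.ones(len(descriptors)), descriptors])
    coef, *_ = np.linalg.lstsq(X, loudness, rcond=None)
    return coef

# Toy data standing in for e.g. [hand speed, quantity of motion, height]:
rng = np.random.default_rng(0)
motion = rng.normal(size=(500, 3))
loudness = 0.8 * motion[:, 0] + 0.2 * motion[:, 2] + rng.normal(0, 0.1, 500)
print(fit_subject(motion, loudness))   # compare these across subjects
```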
Triggering sounds from discrete air gestures: What movement feature has the best timing?
  Luke Dahl
The recent proliferation of affordable motion sensing technologies (e.g. Kinect) has led to a surge in new musical interfaces where a performer moves their body "in the air" without manipulating or contacting a physical object. These interfaces work well when the movement and control of sound are smooth and continuous. However it has proven difficult to heuristically design a system which will trigger discrete sounds with a precision that would allow for a complex rhythmic performance. In such systems the relationship between a gesture and the timing of the resulting sound often feels wrong to the performer.
The Composing Hand: Musical Creation with Leap Motion and the BigBang Rubette
  Daniel Tormoen; Florian Thalmann; Guerino Mazzola
This paper introduces an extension of the Rubato Composer software's BigBang rubette module for gestural composition. The extension enables composers and improvisers to operate BigBang using the Leap Motion controller, which uses two cameras to detect hand motions in three-dimensional space. The low latency and high precision of the device make it a good fit for BigBang's functionality, which is based on immediate visual and auditive feedback. With the new extensions, users can define an infinite variety of musical objects, such as oscillators, pitches, chord progressions, or frequency modulators, in real-time and transform them in order to generate more complex musical structures on any level of abstraction.
Harmonic Motion: A toolkit for processing gestural data for interactive sound
  Tim Murray-Browne; Mark Plumbley
We introduce Harmonic Motion, a free open source toolkit for artists, musicians and designers working with gestural data. Extracting musically useful features from captured gesture data can be challenging, with projects often requiring bespoke processing techniques developed through iterations of tweaking equations involving a number of constant values -- sometimes referred to as 'magic numbers'. Harmonic Motion provides a robust interface for rapid prototyping of patches to process gestural data and a framework through which approaches may be encapsulated, reused and shared with others. In addition, we describe our design process in which both personal experience and a survey of potential users informed a set of specific goals for the software.
To gesture or not? An analysis of terminology in NIME proceedings 2001-2013
  Alexander Refsum Jensenius
The term 'gesture' has represented a buzzword in the NIME community since the beginning of its conference series. But how often is it actually used, what is it used to describe, and how does its usage here differ from its usage in other fields of study? This paper presents a linguistic analysis of the motion-related terminology used in all of the papers published in the NIME conference proceedings to date (2001-2013). The results show that 'gesture' is in fact used in 62% of all NIME papers, which is a significantly higher percentage than in other music conferences (ICMC and SMC), and much more frequently than it is used in the HCI and biomechanics communities. The results from a collocation analysis support the claim that 'gesture' is used broadly in the NIME community, and indicate that it ranges from the description of concrete human motion and system control to quite metaphorical applications.
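   A toy version of the underlying counting step might look like the sketch below: given a folder of plain-text papers (an assumed layout), compute the share of papers using the term. This is our own illustration, not the author's analysis pipeline.

```python
import re
from pathlib import Path

def share_using_term(folder, term="gesture"):
    """Fraction of .txt papers in `folder` that use `term` (any suffix)."""
    papers = list(Path(folder).glob("*.txt"))
    pattern = re.compile(rf"\b{term}\w*\b", re.IGNORECASE)
    hits = sum(1 for p in papers
               if pattern.search(p.read_text(errors="ignore")))
    return hits / len(papers) if papers else 0.0

# share_using_term("nime_proceedings/")  # the paper reports 62% for NIME
```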

Papers: Musical Interaction Design

Manhattan: End-User Programming for Music
  Chris Nash
This paper explores the concept of end-user programming languages in music composition, and introduces the Manhattan system, which integrates formulas with a grid-based style of music sequencer. Following the paradigm of spreadsheets, an established model of end-user programming, Manhattan is designed to bridge the gap between traditional music editing methods (such as MIDI sequencing and typesetting) and generative and algorithmic music -- seeking both to reduce the learning threshold of programming and support flexible integration of static and dynamic musical elements in a single work. Interaction draws on rudimentary knowledge of mathematics and spreadsheets to augment the sequencer notation with programming concepts such as expressions, built-in functions, variables, pointers and arrays, iteration (for loops), branching (goto), and conditional statements (if-then-else). In contrast to other programming tools, formulas emphasise the visibility of musical data (e.g. notes), rather than code, but also allow composers to interact with notated music from a more abstract perspective of musical processes. To illustrate the function and use cases of the system, several examples of traditional and generative music are provided, the latter drawing on minimalism (process-based music) as an accessible introduction to algorithmic composition. Throughout, the system and approach are evaluated using the cognitive dimensions of notations framework, together with early feedback for use by artists.
The Divergent Interface: Supporting Creative Exploration of Parameter Spaces
  Robert Tubb; Simon Dixon
This paper outlines a theoretical framework for creative technology based on two contrasting processes: divergent exploration and convergent optimisation. We claim that these two cases require different gesture-to-parameter mapping properties. Results are presented from a user experiment that motivates this theory. The experiment was conducted using a publicly available iPad app: "Sonic Zoom". Participants were encouraged to conduct an open ended exploration of synthesis timbre using a combination of two different interfaces. The first was a standard interface with ten sliders, hypothesised to be suited to the "convergent" stage of creation. The second was a mapping of the entire 10-D combinatorial space to a 2-D surface using a space filling curve. This novel interface was intended to support the "divergent" aspect of creativity. The paths of around 250 users through both 2-D and 10-D space were logged and analysed. Both the interaction data and questionnaire results show that the different interfaces tended to be used for different aspects of sound creation, and a combination of these two navigation styles was deemed to be more useful than either individually. The study indicates that the predictable, separate parameters found in most music technology are more appropriate for convergent tasks.
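   The paper does not specify its space-filling curve here, but the idea of folding a 10-D parameter space onto a 2-D surface can be sketched with a Morton (Z-order) interleaving: the bits of the touch position index a 1-D curve whose code is dealt out across the ten parameters. Purely illustrative.

```python
BITS = 10   # resolution per screen axis -> 2 bits per parameter

def touch_to_params(x, y):
    """Map x, y in [0, 1) to ten parameter values in [0, 1] by walking a
    Z-order (Morton) curve: interleave the position bits into one code,
    then split the code across the parameters."""
    xi, yi = int(x * (1 << BITS)), int(y * (1 << BITS))
    code = 0
    for b in range(BITS):                    # interleave x and y bits
        code |= ((xi >> b) & 1) << (2 * b)
        code |= ((yi >> b) & 1) << (2 * b + 1)
    return [((code >> (2 * p)) & 0b11) / 3.0 for p in range(10)]

print(touch_to_params(0.25, 0.75))
```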
A Methodological Framework for Teaching, Evaluating and Informing NIME Design with a Focus on Mapping and Expressiveness
  Sebastian Mealla; Sergi Jordà
The maturation of the NIME field has brought a growing interest in teaching the design and implementation of Digital Music Instruments (DMI), as well as in finding objective evaluation methods to assess the suitability of the outcomes. In this paper we propose a methodology for teaching NIME design and a set of tools meant to inform the design process. This approach has been applied in a master course focused on the exploration of expressiveness and on the role of the mapping component in the NIME creation chain, through a hands-on and self-reflective approach based on a restrictive setup consisting of smartphones and the Pd programming language. Working groups were formed, and a 2-step DMI design process with 2 performance stages was applied. The evaluation tools assessed both system and performance aspects of each project, according to listeners' impressions after each performance; listeners' previous musical knowledge was also considered. Through this methodology, students with different backgrounds were able to engage effectively in the NIME design process, developing working DMI prototypes that met the stated requirements; the assessment tools proved consistent for evaluating NIME systems and performances, and informing the design process with the outcomes of the evaluation showed traceable progress in the students' work.
Rapid Creation and Publication of Digital Musical Instruments
  Charlie Roberts; Matthew Wright; JoAnn Kuchera-Morin; Tobias Hollerer
We describe research enabling the rapid creation of digital musical instruments and their publication to the Internet. This research comprises both high-level abstractions for making continuous mappings between audio, interactive, and graphical elements, and a centralized database for storing and accessing instruments. Published instruments run on any device capable of running a modern web browser. The notation used for instrument design is considered and optimized for readability, expressivity and simplicity.
TrAP: An Interactive System to Generate Valid Raga Phrases from Sound-Tracings
  Udit Roy; Tejaswinee Kelkar; Bipin Indurkhya
We propose a new musical interface, TrAP (TRace-A-Phrase) for generating phrases of Hindustani Classical Music (HCM). In this system the user traces melodic phrases on a tablet interface to create phrases in a raga. We begin by analyzing tracings drawn by 28 participants, and train a classifier to categorize them into one of four melodic categories from the theory of Hindustani Music. Then we create a model based on note transitions from the raga grammar for the notes used in the singable octaves in HCM. Upon being given a new tracing, the system segments the tracing and computes a final phrase that best approximates the tracing.
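   The note-transition stage can be sketched as a first-order Markov walk over raga-legal transitions; the toy transition table below is a stand-in, not an actual raga grammar or the paper's model.

```python
import random

TRANSITIONS = {              # note -> allowed next notes (toy, not a raga)
    "S": ["R", "G", "P"],
    "R": ["G", "S"],
    "G": ["P", "R"],
    "P": ["D", "G", "S"],
    "D": ["S", "P"],
}

def generate_phrase(start="S", length=8, seed=None):
    """First-order Markov walk over the allowed transitions."""
    rng = random.Random(seed)
    phrase = [start]
    for _ in range(length - 1):
        phrase.append(rng.choice(TRANSITIONS[phrase[-1]]))
    return phrase

print(generate_phrase(seed=1))
```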
Rich Contacts: Corpus-Based Convolution of Contact Interaction Sound for Enhanced Musical Expression
  Diemo Schwarz; Pierre Alexandre Tremblay; Alex Harker
Contact gestures are an intuitive way to express musical rhythm and dynamics on any surface or object by hitting, scratching, or strumming. It is striking to observe how easily one can express a dynamic rhythm by table-drumming, the hands giving a range of timbres using the fingernails, fingertips, knuckles, and thumb ball.

Papers: Networked Wireless Systems

Making the Most of Wi-Fi: Optimisations for Robust Wireless Live Music Performance
  Thomas Mitchell; Sebastian Madgwick; Simon Rankine; Geoffrey Hilton; Adrian Freed; Andrew Nix
Wireless technology is growing increasingly prevalent in the development of new interfaces for live music performance. However, with a number of different wireless technologies operating in the 2.4 GHz band, there is a high risk of interference and congestion, which has the potential to severely disrupt live performances. With its high transmission power, channel bandwidth and throughput, Wi-Fi (IEEE 802.11) presents an opportunity for highly robust wireless communications. This paper presents our preliminary work optimising the components of a Wi-Fi system for live performance scenarios. We summarise the manufacture and testing of a prototype directional antenna that is designed to maximise sensitivity to a performer's signal while suppressing interference from elsewhere. We also propose a set of recommended Wi-Fi configurations to reduce latency and increase throughput. Practical investigations utilising these arrangements demonstrate a single x-OSC device achieving a latency of <3 ms and a distributed network of 15 devices achieving a net throughput of 4800 packets per second (~320 per device); where each packet is a 104-byte OSC message containing 16 analogue input channels acquired by the device.
Simplified Expressive Mobile Development with NexusUI, NexusUp, and NexusDrop
  Benjamin Taylor; Jesse Allison; Daniel Holmes; William Conlin; Yemin Oh
Developing for mobile and multimodal platforms is more important now than ever, as smartphones and tablets proliferate and mobile device orchestras become commonplace. We detail NexusUI, a JavaScript framework that enables rapid prototyping and development of expressive multitouch electronic instrument interfaces within a web browser.
Communication, Control, and State Sharing in Collaborative Live Coding
  Sang Won Lee; Georg Essl
In the setting of collaborative live coding, a number of issues emerge: (1) the need for communication, (2) conflicts in sharing program state space, and (3) remote control of code execution. In this paper, we propose solutions to these problems. In a recent extension of UrMus, a programming environment for mobile music application development, we introduce a paradigm of shared and individual namespaces that safeguards against conflicts in parallel coding activities. We also develop a live variable view that communicates live changes in state among live coders, networked performers, and the audience. Lastly, we integrate collaborative aspects of program execution into a built-in live chat, which enables not only communication with others but also distributed execution of code.
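   The namespace paradigm can be illustrated with a toy model: each coder writes to a private scope by default, and only explicitly shared names are visible to collaborators, limiting accidental state conflicts. This is not UrMus's implementation.

```python
class CoderNamespace:
    """Private scope per coder; only explicitly shared names are global."""
    def __init__(self, shared):
        self.private, self.shared = {}, shared  # `shared` is one dict for all

    def set(self, name, value, share=False):
        (self.shared if share else self.private)[name] = value

    def get(self, name):
        return self.private.get(name, self.shared.get(name))  # private shadows

shared = {}
alice, bob = CoderNamespace(shared), CoderNamespace(shared)
alice.set("tempo", 120, share=True)           # visible to bob
bob.set("tempo", 90)                          # bob's private override
print(alice.get("tempo"), bob.get("tempo"))   # -> 120 90
```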
A Simple Architecture for Server-based (Indoor) Audio Walks
  Thomas Resch; Matthias Krebs
This paper proposes a simple architecture for creating (indoor) audio walks and games, using a server running Max/MSP together with the external object fhnw.audiowalk.state, and smartphone clients running LibPd under either Android or iOS. Server and smartphone clients communicate over WLAN by exchanging OSC messages. Both have been designed so that artists with only little programming experience can create position-based audio walks. The source code of the Max/MSP object (including help files for the setup) and of the Android application is available on GitHub as open source software.
Ping-Pong: Musically Discovering Locations
  Hyung Suk Kim; Jorge Herrera; Ge Wang
We describe a recently developed system that uses pitched sounds to discover the relative 3D positions of a group of devices located in the same physical space. The measurements are coordinated over an IP network in a decentralized manner, and are carried out by measuring the time-of-flight of the notes played by the different devices.
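   The core ranging step reduces to time-of-flight arithmetic, sketched below; the hard parts the system actually solves (clock offsets, note detection, decentralized coordination) are omitted.

```python
SPEED_OF_SOUND = 343.0   # m/s at ~20 C

def distance(emit_time, arrival_time):
    """Both timestamps in seconds on a common clock."""
    return (arrival_time - emit_time) * SPEED_OF_SOUND

print(distance(0.0, 0.0065))   # ~2.23 m between the two devices
```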
CloudOrch: A Portable SoundCard in the Cloud
  Abram Hindle
One problem with live computer music performance is transporting computers to a venue and then setting them up. The more computers involved, the longer the setup and tear-down of a performance, and each computer adds power and cabling requirements that the venue must accommodate. Cloud computing can change all of this by simplifying the setup of many (tens, hundreds) of machines at the click of a button. But there is a catch: the cloud is not physically near you, and you cannot run an audio cable to it. The audio from a computer music instrument in the cloud needs to be streamed back to the performer and listeners. There are many solutions for streaming audio over networks and the internet, but most of them suffer from high latency, heavy buffering, or proprietary/non-portable clients. In this paper we propose a portable, cloud-friendly method of streaming -- almost a cloud soundcard -- whereby performers can use mobile devices (Android, iOS, laptops) to stream audio from the cloud with far lower latency than technologies like Icecast. This technology enables near-realtime control over powerful computer music networks, enabling performers to travel light and perform live with more computers than ever before.
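   The low-latency streaming idea can be sketched generically as sending small, paced chunks of raw PCM over UDP so that no deep buffer builds up; this is our own illustration, not CloudOrch's actual transport or client.

```python
import socket
import time

CHUNK_SAMPLES = 256                  # small packets keep latency low
SR = 44100
DEST = ("192.0.2.10", 9000)          # placeholder listener address

def stream(pcm_bytes):
    """Send 16-bit mono PCM in small, real-time-paced UDP packets."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    step = CHUNK_SAMPLES * 2         # bytes per packet (2 bytes/sample)
    for i in range(0, len(pcm_bytes), step):
        sock.sendto(pcm_bytes[i:i + step], DEST)
        time.sleep(CHUNK_SAMPLES / SR)   # pace packets at playback rate
```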

Papers: Machine Learning Applied

AudioQuilt: 2D Arrangements of Audio Samples using Metric Learning and Kernelized Sorting
  Ohad Fried; Zeyu Jin; Reid Oda; Adam Finkelstein
The modern musician enjoys access to a staggering number of audio samples. Composition software can ship with many gigabytes of data, and there are many more to be found online. However, conventional methods for navigating these libraries are still quite rudimentary, and often involve scrolling through alphabetical lists. We present a system for sample exploration that allows audio clips to be sorted according to user taste, and arranged in any desired 2D formation such that similar samples are located near each other. Our method relies on two advances in machine learning. First, metric learning allows the user to shape the audio feature space to match their own preferences. Second, kernelized sorting finds an optimal arrangement for the samples in 2D. We demonstrate our system with two new interfaces for exploring audio samples, and evaluate the technology qualitatively and quantitatively via a pair of user studies.
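   The 2-D arrangement step can be approximated with a linear assignment of samples to grid cells; the sketch below is a crude stand-in for kernelized sorting (it matches two standardized feature dimensions to the grid coordinates), included only to make the idea concrete.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def arrange(features, grid_w, grid_h):
    """Assign n == grid_w*grid_h samples to grid cells so that two
    standardized feature dimensions line up with the cell coordinates."""
    gx, gy = np.meshgrid(np.arange(grid_w), np.arange(grid_h))
    cells = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
    f = (features - features.mean(0)) / (features.std(0) + 1e-9)
    c = (cells - cells.mean(0)) / (cells.std(0) + 1e-9)
    cost = ((f[:, None, :2] - c[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one placement
    placement = np.empty(len(cells), dtype=int)
    placement[cols] = rows
    return placement.reshape(grid_h, grid_w)   # sample index per cell

feats = np.random.default_rng(0).normal(size=(12, 5))
print(arrange(feats, 4, 3))
```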
Probabilistic Models for Designing Motion and Sound Relationships
  Jules Françoise; Norbert Schnell; Riccardo Borghesi; Frédéric Bevilacqua
We present a set of probabilistic models that support the design of movement and sound relationships in interactive sonic systems. We focus on a mapping-by-demonstration approach in which the relationships between motion and sound are defined by a machine learning model that learns from a set of user examples. We describe four probabilistic models with complementary characteristics in terms of multimodality and temporality. We illustrate the practical use of each of the four models with a prototype application for sound control built using our Max implementation.
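   The mapping-by-demonstration workflow itself can be illustrated with a deliberately simple baseline: learn a motion-to-sound-parameter mapping from recorded examples and predict for new motion. The paper's four models are probabilistic and partly temporal; the k-NN regressor below only shows the learn-from-examples loop.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
motion = rng.uniform(size=(200, 3))              # demonstrated motion features
sound = np.column_stack([motion[:, 0] ** 2,      # demonstrated sound params
                         motion.sum(axis=1) / 3.0])

# "Training is demonstration": fit on the user's examples, then map live input.
mapper = KNeighborsRegressor(n_neighbors=5).fit(motion, sound)
print(mapper.predict(rng.uniform(size=(1, 3))))  # sound params for new motion
```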
Musical Instrument Mapping Design with Echo State Networks
  Chris Kiefer
Echo State Networks (ESNs), a form of recurrent neural network developed in the field of Reservoir Computing, show significant potential for use as a tool in the design of mappings for digital musical instruments. They have, however, seldom been used in this area, so this paper explores their possible uses. This project contributes a new open source library, which was developed to allow ESNs to run in the Pure Data dataflow environment. Several use cases were explored, focusing on addressing current issues in mapping research. ESNs were found to work successfully in scenarios of pattern classification, multiparametric control, explorative mapping and the design of nonlinearities and uncontrol. Un-trained behaviours are proposed as augmentations to the conventional reservoir system that allow the player to introduce potentially interesting non-linearities and uncontrol into the reservoir. Interactive evolution-style controls are proposed as strategies to help design these behaviours, which are otherwise dependent on arbitrary parameters. A study on sound classification shows that ESNs can reliably differentiate between two drum sounds, and also generalise to other similar input. Following evaluation of the use cases, heuristics are proposed to aid the use of ESNs in computer music scenarios.
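   For readers new to reservoir computing, a minimal ESN reduces to a fixed random reservoir driven by the input, with only a linear ridge-regression readout trained, as sketched below; sizes and constants are illustrative, and the paper's library targets Pure Data rather than Python.

```python
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_RES = 2, 100
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(inputs, leak=0.3):
    """Leaky-integrator reservoir states for an input sequence."""
    x, states = np.zeros(N_RES), []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout: the only trained part of an ESN."""
    A = states.T @ states + ridge * np.eye(N_RES)
    return np.linalg.solve(A, states.T @ targets)

inputs = rng.normal(size=(500, N_IN))
targets = np.sin(inputs[:, :1].cumsum(axis=0))    # toy mapping target
S = run_reservoir(inputs)
W_out = train_readout(S, targets)
print(((S @ W_out - targets) ** 2).mean())        # training error
```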
Funky Sole Music: Gait Recognition and Adaptive Mapping
  Kristian Nymoen; Sichao Song; Yngve Hafting; Jim Torresen
We present Funky Sole Music, a musical interface employing a sole embedded with three force sensitive resistors in combination with a novel algorithm for continuous movement classification. A heuristics-based music engine has been implemented, allowing users to control high-level parameters of the musical output. This provides a greater degree of control to users without musical expertise compared to what they get with traditional media players. By using the movement classification result not as a direct control action in itself, but as a way to change mapping spaces and musical sections, the control possibilities offered by the simple interface are greatly increased.
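   The key design move, using the classification result to switch mapping spaces rather than as a control signal, can be sketched as follows; class names and mappings are illustrative.

```python
MAPPINGS = {   # gait class -> how sensor values drive musical parameters
    "walking":  lambda heel, toe: {"tempo": 90 + 40 * heel, "filter": toe},
    "running":  lambda heel, toe: {"tempo": 140 + 40 * heel, "filter": 1.0},
    "standing": lambda heel, toe: {"tempo": 0, "filter": 0.2 * toe},
}

def control(gait_class, heel, toe):
    """The classifier output selects the mapping; the raw sensor
    values are interpreted differently under each one."""
    return MAPPINGS[gait_class](heel, toe)

print(control("walking", heel=0.5, toe=0.3))
```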

Demos


Posters


Papers: Audio Applications and Installations

A Morphological Analysis of Audio-Objects and their Control Methods for 3D Audio
  Justin Mathew; Stephane Huot; Alan Blum
Recent technological improvements in audio reproduction systems have increased the possibilities for spatializing sources in a listening environment. The spatialization of reproduced audio is, however, highly dependent on the recording technique, the rendering method, and the loudspeaker configuration. While object-based audio production has been shown to reduce the dependency on loudspeaker configurations, authoring tools are still considered difficult to interact with in current production environments. In this paper, we investigate the issues of spatialization techniques for object-based audio production and introduce the Spatial Audio Design Spaces (SpADS) framework, which provides insights into the spatial manipulation of object-based audio. Based on interviews with professional sound engineers, this morphological analysis clarifies the relationships between the recording and rendering techniques that define audio-objects for 3D speaker configurations, allowing the analysis and design of advanced object-based controllers as well.
Evaluating the Perceived Similarity Between Audio-Visual Features Using Corpus-Based Concatenative Synthesis
  Augoustinos Tsiros
This paper presents the findings of two exploratory studies in which participants performed a series of image-sound association tasks. The aim of the studies was to investigate the perceived similarity and the efficacy of two multidimensional mappings, each consisting of three audio-visual associations, whose purpose is to enable visual control of corpus-based concatenative synthesis. More specifically, the stimuli in the first study were designed to test the perceived similarity of six audio-visual associations between the two mappings, using three corpora, resulting in 18 audio-visual stimuli. The corpora differ in terms of two sound characteristics: harmonic content and continuity. Data analysis revealed no significant differences in the participants' responses between the three corpora or between the two mappings. However, highly significant differences were revealed between the individual audio-visual association pairs. The second study investigated the effects of the mapping and the corpus on the ability of participants to detect which of three similar images was used to generate six audio stimuli. The analysis revealed significant differences in the participants' ability to detect the correct image depending on which corpus was used; the effect of the mapping on the success rate of responses was less significant.
CollideFx: A Physics-Based Audio Effects Processor
  Chet Gnegy
CollideFx is a real-time audio effects processor that integrates the physics of real objects into the parameter space of the signal chain. Much like a traditional signal chain, the user can choose a series of effects and control their various parameters in real time. In this work, we introduce a means of creating tree-like signal graphs that dynamically change their routing in response to changes in the locations of the unit generators in a virtual space. Signals are rerouted using a crossfading scheme that avoids the harsh clicks and pops associated with amplitude discontinuities. The unit generators are easily controllable using a click-and-drag interface that responds using familiar physics. CollideFx brings the interactivity of a video game to the purpose of creating interesting and complex audio effects. With little difficulty, users can craft custom effects or, alternatively, fling a unit generator into a cluster of several others to obtain more surprising results, letting the physics engine do the decision making.
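   The click-free rerouting can be sketched as an equal-power crossfade between the old and new routes, so the summed gains keep constant power and no amplitude discontinuity occurs; this is a generic sketch, not CollideFx's code.

```python
import numpy as np

def crossfade(old_route, new_route, sr=44100, ms=30):
    """Equal-power blend from one route's output to the other's:
    cos^2 + sin^2 = 1, so summed power stays constant and no click occurs."""
    n = min(len(old_route), len(new_route), int(sr * ms / 1000))
    t = np.linspace(0.0, np.pi / 2, n)
    faded = old_route[:n] * np.cos(t) + new_route[:n] * np.sin(t)
    return np.concatenate([faded, new_route[n:]])

a, b = np.ones(2048), -np.ones(2048)   # stand-ins for two routes' buffers
out = crossfade(a, b)                  # smooth transition, no discontinuity
```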
VOCAL VIBRATIONS: A Multisensory Experience of the Voice
  Charles Holbrow; Elena Jessop; Rebecca Kleinberger
Vocal Vibrations is a new project by the Opera of the Future group at the MIT Media Lab that seeks to engage the public in thoughtful singing and vocalizing, while exploring the relationship between human physiology and the resonant vibrations of the voice. This paper describes the motivations, the technical implementation, and the experience design of the Vocal Vibrations public installation. This installation consists of a space for reflective listening to a vocal composition (the Chapel) and an interactive space for personal vocal exploration (the Cocoon). In the interactive experience, the participant also experiences a tangible exteriorization of his voice by holding the ORB, a handheld device that translates his voice and singing into tactile vibrations. This installation encourages visitors to explore the physicality and expressivity of their voices in a rich musical context.
Timbre morphing: near real-time hybrid synthesis in a musical installation
  Duncan Williams; Eduardo Miranda
This paper presents an implementation of a near real-time timbre morphing signal processing system, designed to facilitate an element of 'liveness' and unpredictability in a musical installation. The timbre morpher is a hybrid analysis and synthesis technique based on Spectral Modeling Synthesis (an additive and subtractive modeling technique). The musical installation forms an interactive soundtrack in response to the series of Rosso Luana marble sculptures Shapes in the Clouds, I, II, III, IV & V by artist Peter Randall-Page, exhibited at the Peninsula Arts Gallery in Devon, UK, from 1 February to 29 March 2014.
Acoustic Localisation as an Alternative to Positioning Principles in Applications presented at NIME 2001-2013
  Dominik Schlienger; Sakari Tervo
This paper provides a rationale for choosing acoustic localisation techniques as an alternative to other principles for providing spatial positions in interactive locative audio applications (ILAA). By comparing the positioning technology in existing ILAAs to the expected performance of acoustic positioning systems (APS), we can evaluate whether APS would perform equivalently in a particular application. The titles of NIME conference proceedings from 2001 to 2013 were searched for presentations on ILAA using positioning technology, and over 80 relevant articles were found. For each of these systems we evaluated whether and why APS would be a contender. The results showed that for over 73 percent of the reviewed applications, APS could provide competitive alternatives, and at very low cost.

Papers: Interfaces: Development, Deployment, Evaluation

Forming Shapes to Bodies: Design for Manufacturing in the Prosthetic Instruments
  Ian Hattwick; Joseph Malloch; Marcelo Wanderley
Moving new DMIs from the research lab to professional artistic contexts places new demands on both their design and manufacturing. Through a discussion of the Prosthetic Instruments, a family of digital musical instruments we designed for use in an interactive dance performance, we discuss four different approaches to manufacturing -- artisanal, building block, rapid prototyping, and industrial. We discuss our use of these different approaches as we strove to reconcile the many conflicting constraints placed upon the instruments' design due to their use as hypothetical prosthetic extensions to dancers' bodies, as aesthetic objects, and as instruments used in a professional touring context. Experiences and lessons learned during the design and manufacturing process are discussed in relation both to these manufacturing approaches as well as to Bill Buxton's concept of artist-spec design.
Constraining Movement as a Basis for DMI Design and Performance
  Giuseppe Torre; Nicholas Ward
In this paper we describe the application of a movement-based design process for digital musical instruments which led to the development of a prototype DMI named the Twister. The development is described in two parts. First, we consider the design of the interface, or physical controller. Following this, we describe the development of a specific sonic character, mapping approach and performance. In both parts, an explicit consideration of the type of movement we would like the device to engender in performance drove the design choices. By considering these two parts separately we draw attention to two different levels at which movement might be considered in the design of DMIs: at a general level of ranges of movement in the creation of the controller, and at a more specific, but still quite open, level in the creation of the final instrument and a particular performance. In light of the results of this process, the limitations of existing representations of movement within the DMI design discourse are discussed, as is the utility of a movement-focused design approach.
Dimensionality and Appropriation in Digital Musical Instrument Design
  Victor Zappi; Andrew McPherson
This paper investigates the process of appropriation in digital musical instrument performance, examining the effect of instrument complexity on the emergence of personal playing styles. Ten musicians of varying background were given a deliberately constrained musical instrument, a wooden cube containing a touch/force sensor, speaker and embedded computer. Each cube was identical in construction, but half the instruments were configured for two degrees of freedom while the other half allowed only a single degree. Each musician practiced at home and presented two performances, in which their techniques and reactions were assessed through video, sensor data logs, questionnaires and interviews. Results show that the addition of a second degree of freedom had the counterintuitive effect of reducing the exploration of the instrument's affordances; this suggested the presence of a dominant constraint in one of the two configurations which strongly differentiated the process of appropriation across the two groups of participants.
The Prospects For Eye-Controlled Musical Performance BIBAFull-Text 59
  Anthony Hornof
As new sensor devices and data streams are harnessed for musical expression, and as eye-tracking devices become increasingly cost-effective and prevalent, both in research and as a means of communication for people with severe motor impairments, eye-controlled musical expression nonetheless remains largely unexplored. This paper (a) identifies a number of fundamental human oculomotor capabilities that constrain what can be musically expressed with eye movements, (b) reviews prior work on eye-controlled musical expression, (c) discusses how careful consideration of these human constraints in the design of eye-controlled interfaces can nonetheless lead to usable and expressive eye-controlled instruments that can, in turn, be used to create compelling audience experiences, and (d) presents a taxonomy to classify prior and future work on eye-controlled musical performance. The main dimension of the taxonomy is whether the goal of a particular eye-controlled instrument or composition is to create (a) an avant-garde musical experience for a critical audience or (b) a simpler though perhaps more profound opportunity for a person with disabilities to express themselves musically. We argue that the only reasonable way to achieve the second goal is to design the instrument or composition in direct collaboration with the people with disabilities for whom it is intended. We conclude that, overall, the prospects for eye-controlled musical performance are somewhat constrained.
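   The paper is a survey and taxonomy rather than a system description, but the oculomotor constraints it identifies are commonly handled with dwell-time selection, since the eyes cannot "click". A hypothetical sketch (the thresholds and gaze-sample format are assumptions):

     class DwellTrigger:
         """Fire a note only after the gaze dwells within a small radius."""
         def __init__(self, dwell_ms=400.0, radius_px=40.0):
             self.dwell_ms = dwell_ms    # assumed threshold; tune per user
             self.radius_px = radius_px  # jitter tolerated within a fixation
             self._anchor = None
             self._elapsed = 0.0

         def update(self, x, y, dt_ms):
             # Feed one gaze sample per tracker frame; returns the fixated
             # point once per completed dwell, otherwise None.
             if self._anchor is None:
                 self._anchor, self._elapsed = (x, y), 0.0
                 return None
             ax, ay = self._anchor
             if (x - ax) ** 2 + (y - ay) ** 2 > self.radius_px ** 2:
                 self._anchor, self._elapsed = (x, y), 0.0  # saccade: restart
                 return None
             self._elapsed += dt_ms
             if self._elapsed >= self.dwell_ms:
                 self._elapsed = float("-inf")  # fire once, hold until saccade
                 return self._anchor
             return None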
Musical Interface Design: An Experience-oriented Framework BIBAFull-Text 60
  Fabio Morreale; Antonella De Angeli; Sile O'Modhrain
This paper presents MINUET, a framework for musical interface design grounded in the experience of the player. MINUET aims to provide new perspectives on the design of musical interfaces, a term we use broadly to encompass both digital musical instruments and interactive installations. Its ultimate purpose is to reduce the complexity of the design space by emphasizing the experience of the player. MINUET is structured as a design process consisting of two stages: goal and specifications. The reliability of MINUET is tested through a systematic comparison with related work and through a case study. To this end, we present the design and prototyping of Hexagon, a new musical interface for learning.

Papers: Robotic and Mechatronic Systems

Rasper: a Mechatronic Noise-Intoner BIBAFull-Text 61
  Mo Zareei; Ajay Kapur; Dale A. Carnegie
Over the past few decades, an increasing number of musical instruments and works of sound art have incorporated robotics and mechatronics. This paper proposes a new approach to the classification of such works and focuses on those whose ideological roots can be traced to Luigi Russolo's noise-intoners (intonarumori). It discusses works in which mechatronics is used to investigate new -- and traditionally "extra-musical" -- sonic territories, and introduces Rasper: a new mechatronic noise-intoner whose electromechanical apparatus creates noise physically while regulating it rhythmically and timbrally.
The Robotic Taishogoto: A New Plug 'n Play Desktop Performance Instrument BIBAFull-Text 62
  Jason Long
This paper describes the Robotic Taishogoto, a new robotic musical instrument for performance, musical installations, and education. The primary goal of its creation is to provide an easy-to-use, cost-effective, compact, and integrated acoustic instrument that is fully automated and controllable via standard MIDI commands. The paper details its design and implementation, including the mechanics, electronics, and firmware, and outlines various control methodologies and use cases for the instrument.
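   Because the instrument accepts standard MIDI, driving it requires no custom protocol. A minimal sketch using the mido library; the port name, notes, and timing are hypothetical:

     import time
     import mido

     # Hypothetical output port name; list ports with mido.get_output_names().
     with mido.open_output("Robotic Taishogoto") as port:
         for note in (60, 64, 67):  # C major arpeggio
             port.send(mido.Message("note_on", note=note, velocity=96))
             time.sleep(0.25)
             port.send(mido.Message("note_off", note=note))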
Imitation Framework for Percussion BIBAFull-Text 63
  Ozgur Izmirli; Jake Faris
We present a framework for imitating percussion performances, with parameter-based learning for accurate reproduction. We constructed a robotic setup of pull-solenoids attached to drum sticks, driven by a computer through an Arduino microcontroller. The imitation framework adapts its parameters to different mechanical constructions by learning the capabilities of the overall system in use. For the rhythmic vocabulary, we considered regular stroke, flam, and drag styles. A learning and calibration system was developed to efficiently perform grace notes for the drag rudiment as well as the single stroke and the flam rudiment. A second pre-performance process minimizes the latency difference between the individual drum sticks in our mechanical setup. We also developed an offline onset detection method to reliably recognize onsets in the microphone input. Once these pre-performance steps are taken, our setup listens to a human drummer's performance, analyzes it for onsets, loudness, and rudiment patterns, and then plays it back using the parameters learned for the particular system. We conducted three different evaluations of the constructed system.
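   The paper's exact onset detector is not specified in the abstract; a generic stand-in is short-time energy flux with an adaptive threshold, sketched below (the frame sizes and threshold are assumptions):

     import numpy as np

     def detect_onsets(signal, sr, frame=512, hop=256, k=3.0):
         """Return onset times (s) where frame energy rises sharply."""
         frames = np.lib.stride_tricks.sliding_window_view(signal, frame)[::hop]
         energy = (frames ** 2).mean(axis=1)
         # Positive energy flux: rises only, falls are ignored.
         flux = np.maximum(np.diff(energy, prepend=energy[0]), 0.0)
         thresh = np.median(flux) + k * flux.std()
         # Rising crossings of the threshold mark onsets.
         onsets = np.flatnonzero((flux[1:] >= thresh) & (flux[:-1] < thresh)) + 1
         return onsets * hop / sr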
Distributed Control in a Mechatronic Musical Instrument BIBAFull-Text 64
  Michael Gurevich
Drawing on concepts from systemics, cybernetics, and musical automata, this paper proposes a mechatronic, electroacoustic instrument that allows for shared control between programmed, mechanized motion and a human interactor. We suggest that such an instrument, situated somewhere between a robotic musical instrument and a passive controller, will foster the emergence of new, complex, and meaningful modes of musical interaction. In line with the methodological principles of practice as research, we describe the development and design of one such instrument -- Stringtrees. The design process also reflects the notion of ambiguity as a resource in design: The instrument was endowed with a collection of sensors, controls, and actuators without a highly specific or prescriptive model for how a musician would interact with it.
Rhythm Apparatus on Overhead BIBAFull-Text 65
  Christian Faubel
In this paper I present a robotic device that offers new ways of interacting to produce rhythmic patterns. The apparatus is placed on an overhead projector, and a visual presentation of the rhythmic patterns is delivered as a shadow play. The patterns can be manipulated by modifying the environment of the robot, through direct physical interaction with the robot, by rewiring its internal connectivity, and by adjusting internal parameters. The theory of embodied cognition provides the theoretical basis of the device: its core postulate is that biological behavior can only be understood through the real-time interactions of an organism's nervous system, its body, and the environment. On the one hand, the device illustrates this theory, because the patterns it creates depend equally on the real-time interactions of the electronics, the physical structure of the device, and the environment. On the other hand, the device is a synthesis of these ideas, and it can effectively be played at all three levels: the electronics, the physical configuration of the robot, and the environment.
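   As a loose software analogy for how such patterns emerge, consider two pulse-coupled oscillators: changing their rates or coupling (the software counterpart of rewiring the device) reorganises the rhythm. All parameters here are illustrative:

     def pulse_coupled(rate_a=1.00, rate_b=1.37, coupling=0.15,
                       steps=2000, dt=0.01):
         """Two relaxation oscillators; each firing nudges the other."""
         phase = [0.0, 0.0]
         rates = [rate_a, rate_b]
         events = []                        # (time, oscillator) firing pattern
         for i in range(steps):
             for j in (0, 1):
                 phase[j] += rates[j] * dt
                 if phase[j] >= 1.0:        # fire, reset, nudge the other
                     phase[j] = 0.0
                     phase[1 - j] += coupling
                     events.append((i * dt, j))
         return events

     print(pulse_coupled(coupling=0.0)[:6])  # independent pulse trains
     print(pulse_coupled(coupling=0.3)[:6])  # entrained, reorganised rhythm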

Demos

BIBFull-Text 66
 

Posters

BIBFull-Text 67
 

Papers: Tangible Interaction and Interfaces

Tangible Scores: Shaping the Inherent Instrument Score BIBAFull-Text 68
  Enrique Tomás; Martin Kaltenbrunner
Tangible Scores are a new paradigm for musical instrument design with a physical configuration inspired by graphic scores. In this paper we will focus on the design aspects of this new interface as well as on some of the related technical details. Creating an intuitive, modular and expressive instrument for textural music was the primary driving force. Following these criteria, we literally incorporated a musical score onto the surface of the instrument as a way of continuously controlling several parameters of the sound synthesis. Tangible Scores are played with both hands and they can adopt multiple physical forms. Complex and expressive sound textures can be easily played over a variety of timbres, enabling precise control in a natural manner.
Striso, a compact expressive instrument based on a new isomorphic note layout BIBAFull-Text 69
  Piers Titus van der Torren
The Striso is a new expressive music instrument with an acoustic feel, designed to be intuitive to play and playable everywhere. The sound of every note can be precisely controlled using the direction- and pressure-sensitive buttons, combined with instrument motion such as tilting or shaking. It works standalone, with an internal speaker and battery, and is meant as a self-contained instrument with its own distinct sound, but it can also be connected to a computer to control other synthesizers. The notes are arranged in an easy and systematic way, according to the new DCompose note layout that is also presented in this paper. The DCompose note layout is designed to be compact, ergonomic, easy to learn, and closely bound to the harmonic properties of the notes.
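   In any isomorphic layout, pitch is a linear function of button coordinates, so a chord shape transposes without changing fingering. The sketch below uses Wicki-Hayden offsets as a familiar example; the DCompose layout uses its own, different offsets:

     def isomorphic_pitch(col, row, origin=62, col_step=2, row_step=7):
         # MIDI note for the button at (col, row): pitch is linear in
         # position. col_step=2, row_step=7 gives the Wicki-Hayden layout.
         return origin + col * col_step + row * row_step

     # One fingering shape works in every key: a major triad is always
     # the coordinate pattern (0,0), (2,0), (0,1).
     shape = [(0, 0), (2, 0), (0, 1)]                        # 0, +4, +7
     d_major = [isomorphic_pitch(c, r) for c, r in shape]      # 62, 66, 69
     e_major = [isomorphic_pitch(c + 1, r) for c, r in shape]  # 64, 68, 71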
SPINE: A TUI Toolkit and Physical Computing Hybrid BIBAFull-Text 70
  Aristotelis Hadjakos; Simon Waloschek
Physical computing platforms such as the Arduino have significantly simplified the development of physical musical interfaces. However, those platforms typically target everyday programmers rather than composers and media artists. On the other hand, tangible user interface (TUI) toolkits, which provide an integrated, easy-to-use solution, have not gained momentum in modern music creation. We propose a concept that hybridizes the physical computing and TUI toolkit approaches, tackling typical TUI toolkit weaknesses, namely quick sensor obsolescence and limited sensor choice. We developed a physical realization based on the idea of "universal pins," which can be configured to perform a variety of duties, making it possible to connect different sensor breakouts and modules. We evaluated our prototype through performance measurements and a user study, demonstrating the feasibility of our approach.
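   SPINE's actual firmware and API are not given in the abstract, but the "universal pin" idea can be sketched as a pin whose role is selected in software, so new sensor breakouts do not obsolete the board. A purely hypothetical model:

     from enum import Enum, auto

     class PinMode(Enum):
         DIGITAL_IN = auto()
         DIGITAL_OUT = auto()
         ANALOG_IN = auto()

     class UniversalPin:
         """One physical pin whose role is chosen at configuration time."""
         def __init__(self, number):
             self.number, self.mode, self._level = number, PinMode.DIGITAL_IN, 0

         def configure(self, mode):
             self.mode = mode      # real hardware would switch a multiplexer

         def write(self, level):
             assert self.mode is PinMode.DIGITAL_OUT
             self._level = level

         def read(self):
             assert self.mode is not PinMode.DIGITAL_OUT
             return self._level    # real hardware would sample GPIO or ADC

     # A force sensor today, an LED tomorrow -- same pin, reconfigured:
     pin = UniversalPin(3)
     pin.configure(PinMode.ANALOG_IN)
     pressure = pin.read()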
Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion BIBAFull-Text 71
  Xiao Xiao; Basheer Tome; Hiroshi Ishii
We present Andante, a representation of music as animated characters walking along the piano keyboard, appearing to play the physical keys with each step. Based on a view of music pedagogy that emphasizes expressive, full-body communication early in the learning process, Andante promotes an understanding of music rooted in the body, taking advantage of walking as one of the most fundamental human rhythms. We describe three example visualizations on a preliminary prototype, as well as applications extending these examples to practice feedback, improvisation, and composition. Through this project, we reflect on some high-level considerations for the NIME community.
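   The core mapping Andante implies can be sketched simply: each melody note becomes a footstep at that key's horizontal position. The key geometry below is simplified (white keys only, uniform width) and purely illustrative:

     WHITE_PITCH_CLASSES = [0, 2, 4, 5, 7, 9, 11]     # C D E F G A B

     def key_x(midi_note, white_key_width=23.5):
         """Approximate x (mm) of a key's centre, counting from A0 (21)."""
         whites_below = sum(1 for n in range(21, midi_note)
                            if n % 12 in WHITE_PITCH_CLASSES)
         return (whites_below + 0.5) * white_key_width

     def footsteps(melody):
         """(note, duration_beats) pairs -> (time, x) steps for the walker."""
         t, steps = 0.0, []
         for note, dur in melody:
             steps.append((t, key_x(note)))
             t += dur
         return steps

     print(footsteps([(60, 1), (62, 1), (64, 2)]))    # C4, D4, E4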

Panels

Gender, Education, Creativity in Digital Music and Sound Art BIBAFull-TextWeb Page 72
  Georgina Born; Kyle Devine; Sally-Jane Norman; Mark Taylor
This panel broadly examines issues of gender in relation to both higher education and creative practice in the fields of electronic and computer music and sound art. The enormous growth of music technology degree provision in British higher education since the mid-1990s has been accompanied by a clear demographic bifurcation between music technology and traditional music degrees. Our goal is to set these research findings in dialogue with panelists and discussants concerned with gender in relation to creative processes, in terms of technological design and use as well as performance, installation, and compositional practices. The panel will therefore offer a basis on which to reflect on questions of gender within the NIME community and beyond.
Moco Panel discussion on Movement and Computing BIBAFull-Text 73
  Frédéric Bevilacqua; Sarah Fdili Alaoui; Jules Françoise; Philippe Pasquier; Thecla Schiphorst
The MOCO panel bridges interdisciplinary communities in movement, computing, music, and interaction. We will present the outcomes of a workshop called MOCO 2014 (Movement and Computing), premiering at Ircam in Paris in June 2014. Although the primary focus of MOCO is movement and computing, we address a community that overlaps with the NIME community, sharing topics such as expressivity, embodied interaction, interactive machine learning, compositional modeling, and generative systems. We are interested in creating a dialogue between researchers and artists involved in these communities, as well as the larger community interested in the intersection of arts, science, and technology. The panel will include contributors to MOCO alongside researchers and artists in the NIME community who explore the space between sound and movement. Our goal is to share research concepts and to develop future relationships that will benefit both communities.