
Proceedings of Audio Mostly 2014: A Conference on Interaction with Sound

Fullname: AudioMostly 2014: Conference on Interaction with Sound
Editors: Mark Grimshaw; Mads Walther-Hansen
Location: Aalborg, Denmark
Dates: 2014-Oct-01 to 2014-Oct-03
Publisher: ACM
Standard No: ISBN 978-1-4503-3032-9; ACM DL: Table of Contents; hcibib: AM14
Papers: 29
Links: Conference Website
AudioMetro: directing search for sound designers through content-based cues
  Christian Frisson; Stéphane Dupont; Willy Yvart; Nicolas Riche; Xavier Siebert; Thierry Dutoit
Sound designers source sounds from massive collections, heavily tagged by themselves and by sound librarians. For each query, once successive keywords have reached their limit in filtering down the results, hundreds of sounds are still left to review. AudioMetro combines a new content-based information visualization technique with instant audio feedback to facilitate this part of their workflow. We show through user evaluations by known-item search in collections of textural sounds that a default grid layout ordered by filename unexpectedly outperforms content-based similarity layouts resulting from a recent dimension reduction technique (Student-t Stochastic Neighbor Embedding), even when complemented with content-based glyphs that emphasize local neighborhoods and cue perceptual features. We propose a solution borrowed from image browsing: a proximity grid, whose density we optimize for nearest-neighborhood preservation among the closest cells. Not only does it remove overlap, but a subsequent user evaluation shows that it also helps to direct the search. We based our experiments on an open dataset (the OLPC sound library) for replicability.
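The overlap-removal step of a proximity grid (snapping a continuous 2-D similarity layout onto unique grid cells) can be sketched independently of the paper's implementation. The following is a minimal greedy version, assuming layout coordinates normalized to [0, 1] and at most one sound per cell; the function and variable names are illustrative, not taken from AudioMetro.

```python
def proximity_grid(points, cols, rows):
    """Greedily snap 2-D layout points to unique grid cells.

    Each point claims the nearest still-free cell, so overlapping
    items are spread out while local neighbourhoods are roughly
    preserved. Assumes len(points) <= cols * rows and that the
    coordinates lie in [0, 1].
    """
    free = {(c, r) for c in range(cols) for r in range(rows)}
    placement = {}
    for i, (x, y) in enumerate(points):
        # Target cell position from the continuous layout coordinates.
        tx, ty = x * (cols - 1), y * (rows - 1)
        # Claim the free cell closest to the target (squared distance).
        cell = min(free, key=lambda c: (c[0] - tx) ** 2 + (c[1] - ty) ** 2)
        free.remove(cell)
        placement[i] = cell
    return placement
```

Note that greedy assignment only illustrates the overlap-removal idea; it does not optimize nearest-neighborhood preservation globally, which is what the authors tune the grid density for.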
Imagining sound
  Mark Grimshaw; Tom Garner
We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.
Sound through the rabbit hole: sound design based on reports of auditory hallucination
  Jonathan Weinel; Stuart Cunningham; Darryl Griffiths
As video game developers seek to provide increasing levels of realism and sophistication, there is a need for game characters to be able to exhibit psychological states, including 'altered states of consciousness' (ASC), realistically. 'Auditory hallucination' (AH) is a feature of ASC in which an individual may perceive distortions to auditory perception, or hear sounds with no apparent acoustic origin. Appropriate use of game sound may enable realistic representations of these sounds in video games. However, achieving this requires rigorous, research-informed approaches. This paper seeks to inform the process of designing sounds based on auditory hallucination by reporting the outcomes of analysing nearly 2000 experience reports that describe drug-induced intoxication. Many of these reports include descriptions of auditory hallucination. Through analysis of these reports, our research establishes a classification system, which we propose can be used for designing sounds based on auditory hallucination.
Vocalmetrics: an interactive software for visualization and classification of music
  Felix Schönfeld; Axel Berndt; Tilo Hähnel; Martin Pfleiderer; Rainer Groh
Vocalmetrics is an interactive software tool that provides scientific techniques for interactive visualization and classification of musical data. The application supports the classification of music data as a pivotal aim of music education and analysis. The paper, in particular, introduces Vocalmetrics' prototype semantics and the egg cell metaphor. The former provides an intuitive and playful approach for exploring and classifying multidimensional musical data, whereas the latter is a direct-manipulation interaction technique for rating features of musical data, particularly suitable for subjective assessments.
Imaginary soundscapes: the SoDA project
  Matteo Casu; Marinos Koutsomichalis; Andrea Valle
The SoDA (Sound Design Accelerator) project aims at providing a flexible software environment for soundscape generation. Based on semantic information, it provides both an annotation schema and an annotated library of sound files that operates in relation to a generative system that delivers the final audio content. SoDA provides the user with various forms of interaction: from incremental assisted exploration of semantic and audio content in real-time to completely automated off-line soundscape composition. In this paper we describe the semantic and audio components and the various interaction modes.
Imagining between ourselves: a group interview approach in exploring listening experiences
  Kai Tuuri; Henna-Riikka Peltola
In this paper, listening experiences are explored through verbal interactions in a group interview situation. By focusing on (1) ordinary and (2) evocative modes of dealing with listening experiences, the present study investigated how imagination mediates between pre-reflective and reflective consciousness and how imagining is shared between persons. Analysis revealed that participants utilised both modes in their discussions of both music and everyday sound samples, although the ordinary mode was the dominant one. The evocative mode was utilised relatively more frequently with the music samples. Group dynamics had an effect on how mental images and meanings were described and shared within the group interviews. With regard to the sharing of listening experiences, three degrees of group agreement were unveiled.
ACERemix: a tool for glitch music remix production and performance
  Stuart Cunningham; Jonathan Weinel; Darryl Griffiths
In this paper we discuss the use of a recently developed audio compression approach, Audio Compression Exploiting Repetition (ACER), as a compositional tool for glitch composition and remixing. ACER functions by identifying similar sections of audio where they occur in a file and discarding the repetitive data. Thresholds for similarity can be defined using this approach, allowing for various degrees of (dis)similarity between materials identified as 'repetitive'. Through our initial subjective evaluation of ACER, we unexpectedly discovered that the compression method produced musically interesting results on some materials at higher levels of compression. Whilst listeners judged this level of fidelity loss to be unacceptable for the purposes of compression, it shows potential as a performance or production tool. When applied to pop songs, the predictable form of the music was disrupted, introducing moments of novelty while retaining the songs' quantized rhythmic structure. In this paper we propose ACER as a suitable method for producing sonic materials for 'glitch' composition. We present the use of ACER for this purpose with regard to a variety of materials that may be suitable for glitch or electroacoustic composition, using ACER in several different ways to process and reproduce musical audio.
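The abstract does not give ACER's algorithm, so the following is only a hedged toy sketch of the repetition-exploiting idea it describes: frames whose mean absolute difference from an earlier stored frame falls under a similarity threshold are replaced by a back-reference, and raising the threshold trades fidelity for compression, the regime the authors found musically interesting. All names here are illustrative, not from ACER.

```python
def compress_repetitions(samples, frame, threshold):
    """Toy ACER-style pass over a sequence of sample values.

    Splits `samples` into fixed-size frames and, for each frame,
    records either None (the frame is stored) or the index of an
    earlier stored frame that is similar enough to stand in for it.
    `threshold` is the maximum mean absolute difference for a match;
    a higher threshold discards more data (lossier, glitchier).
    """
    frames = [samples[i:i + frame] for i in range(0, len(samples), frame)]
    refs = []
    for f in frames:
        match = None
        for j in range(len(refs)):
            g = frames[j]
            # Only compare against frames that were actually stored.
            if refs[j] is None and len(g) == len(f):
                diff = sum(abs(a - b) for a, b in zip(f, g)) / len(f)
                if diff <= threshold:
                    match = j
                    break
        refs.append(match)
    return frames, refs
```

A decoder would simply re-play frame `refs[i]` wherever frame `i` was dropped, which is what reintroduces the repeated, quantized structure the paper describes.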
Evaluating musical foreshadowing of videogame narrative experiences
  Marco Scirea; Yun-Gyung Cheong; Mark J. Nelson; Byung-Chull Bae
We experiment with mood-expressing, procedurally generated music for narrative foreshadowing in videogames, investigating the relationship between music and the player's experience of narrative events in a game. We designed and conducted a user study in which the game's music expresses true foreshadowing in some trials (e.g. foreboding music before a negative event) and false foreshadowing in others (e.g. happy music that does not lead to a positive event). We observed players playing the game, recorded analytics data, and had them complete a survey upon completion of the gameplay. Thirty undergraduate and graduate students participated in the study. Statistical analyses suggest that the use of musical cues for narrative foreshadowing induces a better perceived consistency between music and game narrative. Surprisingly, false foreshadowing was found to enhance the player's enjoyment.
Real-time algorithmic composition with a tabletop musical interface: a first prototype and performance
  Cárthach Ó Nuanáin; Liam O'Sullivan
Algorithmic composition is the creation of music using algorithms, or more specifically, computer programming. Real-time Algorithmic Composition seeks to extend this compositional technique to real-time scenarios like concert performances or interactive installations. As such, a more novel mode of interaction between the composer and the system is needed for effective expression and parameter control. We summarise the design and implementation of a Real-time Algorithmic Composition system that employs a tabletop musical interface for input control. Several algorithms developed for real-time performance are discussed. The application and inclusion of the tabletop interface is then outlined and evaluated.
Cognitive factors in generative music systems
  Jim Bevington; Don Knox
This research aims to inform the development of generative music algorithms with principles drawn from research into music perception and cognition. Such research has provided insights into the ways humans mentally organise musical sound and has resulted in the development of complex theories of musical expectation. Implementing these theories in generative music systems has the potential to produce music which is perceptually more meaningful for the listener. A detailed description of a prototype generative music algorithm is given. The algorithm aims to automatically create music that displays tonal and metric hierarchies in a Western tonal style, exhibits a clear grouping structure, and encourages the formation of musical expectations. The extent to which the system goals have been achieved is evaluated by objective analysis. Subjective analysis of the system output is suggested as a crucial aspect of further work.
Towards a listener model for predicting the overall listening experience
  Michael Schoeffler; Jürgen Herre
Listeners have different preferences when it comes to rating the overall listening experience while listening to music. Therefore, a listener model for predicting the overall listening experience must consider sensory, perceptual, cognitive and psychological aspects rather than solely rely on perception-based attributes (e.g. audio quality) of the music signal. In this work, a generic model framework is defined for modeling a listener while taking part in an auditory experiment. In addition, a subset of the model is used to describe algorithms for predicting the overall listening experience based on experimental results. Thereby, the results of two experiments are utilized to define a prediction algorithm for the cases of bandwidth-degradation stimuli and playback by different reproduction systems.
Facilitating the creation of natural interactions for live audiovisual performances: an authoring-by-demonstration approach
  Dionysios Marinos; Christian Geiger
In this paper an approach for creating natural interactions is discussed, which can be used to facilitate the authoring of interactive live audiovisual performances. The approach is supported by flexible, yet simple software tools and combines spatial mapping, gesture following and fuzzy logic in a way that enables the configuration and authoring of complex interactive mechanisms by demonstration. Due to the connectivity properties of the tools, the approach can be easily incorporated in almost any workflow for creating interactive installation or live performance applications, utilizing sensor and tracking data and combining it with other real-time parameters to drive the behavior of elements, which performers can interact with. This approach was used to facilitate the creation of a gestural interface for a conductor, which enabled him to control a virtual pianist during a live music performance by moving his hands in a natural manner. This gestural interface was configured by letting the conductor demonstrate the desired gestures and by feeding the resulting tracking data to the tools. The advantages and shortcomings of using such an approach are presented and discussed.
Metaphor as a focal concept in sound design
  Antti Pirhonen
A central challenge in the design of non-speech sounds is to understand the underlying conceptualisation process. In the current paper, we propose the use of metaphor theories as a framework for understanding what sound design is fundamentally all about. In the proposed framework, we handle metaphors as conceptual entities, which are the basic constituents of meaning making.
OperaBooth: an installation for intimate remote communication through music
  Steven Gelineck
This paper describes the development and evaluation of an installation that explores intimate connections between remote strangers. The installation's underlying intention is that music can be used as a universal language through which strangers can communicate. To that end, several approaches have been taken to enhance the feeling of intimacy between the users of the installation. The paper describes related work within the field of mediated intimacy and musical interaction, forming the initial goals of the system. It then describes the iterative development process, which includes two smaller prototype tests. The resulting installation consists of two large human-sized boxes, each with a hole for inserting one's head. Inside the box, users can view the face of the remote stranger. A special setup enables users to appear very close to each other while being able to look each other in the eye, for an enhanced feeling of intimacy. Finally, a face-tracking algorithm detects when users open their mouth, triggering the voice of an opera singer. Thus, strangers (who are not musically skilled) are able to explore an opera duet in a form of musical exploration and communication.
Designing sound identity: providing new communication tools for building brands "corporate sound"
  Maxime Carron; Françoise Dubois; Nicolas Misdariis; Corinne Talotte; Patrick Susini
In this paper we focus through a series of interviews on the relation between sound and brand identity in the context of musical and sound design for the industry. The interviews showed that the sound design process involves stakeholders who have different domains of expertise, which leads to difficulties in the interaction between them. As a solution, we propose a methodological framework for designing sound identity supported by two communication tools: a deck of cards allowing the different stakeholders to share a common vocabulary concerning both brand and sound concepts, and a sound charter which is a way to communicate guidelines for sound design through the use of sound identity semantic descriptors, illustrated by sound examples.
Imagined time: sculpting a temporal experience for the listener
  Justin Christensen
Starting from a philosophy of hearing rather than one of vision greatly changes how one understands the experience of the individual in the world. This paper focuses on the participatory act of listening, examining listener immersion, entrainment, habituation, expectation, and memory. These physiological components of human listening are investigated as a means to better understand the temporal listening experience of an audience. Additionally, tools for improving how one might engage with the temporal experience of a listener are also presented.
Auditory weather reports: demonstrating listener comprehension of five concurrent variables
  Jonathan H. Schuett; Riley J. Winton; Jared M. Batterman; Bruce N. Walker
Displaying multiple variables or data sets within a single sonification has been identified as a challenge for the field of auditory display research. We discuss our recent study that evaluates the usability of a sonification that contains multiple variables presented in a way that encouraged perception across multiple auditory streams. We measured listener comprehension of weather sonifications that include the variables of temperature, humidity, wind speed, wind direction, and cloud cover. Listeners could accurately identify trends in five concurrent variables presented together in a single sonification. This demonstrates that it is indeed possible to include multiple variables together within an auditory stream and thus a greater number of variables within a sonification.
Towards soundpainting gesture recognition
  Thomas Pellegrini; Patrice Guyot; Baptiste Angles; Christophe Mollaret; Christophe Mangou
In this article, we describe our recent research activities on gesture recognition for soundpainting applications. Soundpainting is a multidisciplinary live composing sign language for musicians, actors, dancers, and visual artists. Its gestures are produced by a soundpainter, who plays the role of a conductor, in order to lead a live performance. Soundpainting gestures are normalized and well defined, which makes them a very interesting case study in automatic gesture recognition. We describe a first gesture recognition system based on hidden Markov models. We also report on the creation of a pilot corpus of soundpainting RGB/depth videos. Computer-based recognition could enable many interesting applications, which are listed in the paper. These applications are not limited to live performance, in which the computer would act as a performer; it could also help to investigate the balance between improvisation and planned creation in the particular context of soundpainting.
Evaluating computer-based musical instruments from the perspective of listening
  Cornelius Poepel
This paper presents a method of evaluating computer-based musical instruments. The method is based on a paired comparison of sound files produced by the instruments to be evaluated. The same input signal, imparting performers' actions to generate musical expression, is applied to different instruments. Each output is recorded and the different recordings are compared by listeners in relation to factors relevant for musical expression. A study using this method is presented. The results show that there are significant differences between the instruments compared and that the evaluation method allows insight into specific features of the evaluated instruments. The method used in this study can be adapted to compare and evaluate new developments in the field of new interfaces for musical expression, provided the same input signal can be used and fed into the instruments.
Step and play!: space as interface in the context of location-based musical album apps
  Fernanda Dias
In this article, I intend to raise a debate on a very recent phenomenon of the music industry: the location-based musical album app. In this context, I discuss the ways in which physical public space can be interpreted as an interface. The space mediates, reshapes and adds meaning to a site-specific musical album. The act of walking becomes the input one needs to perform in order to encounter the output (music). How much of the album is heard depends on how much of these pre-designed territories the listener covers.
   I see space as an intriguing interface, in this case delivering the main content (the musical album) as well as guiding people further on their experiences, influencing their personal narrative and perception of that work of art. My argumentation follows recent theories on mobile music listening and locative and pervasive media, entangling different perspectives in order to analyze this new musical album format, which made its first appearances in 2011.
Educational audio game design: sonification of the curriculum through a role-playing scenario in the audio game 'Kronos'
  Emmanouel Rovithis; Andreas Mniestris; Andreas Floros
Audio-Games (AGs) are electronic games that feature partially or completely auditory interfaces to express the game's plot and mechanics. The required concentration on sonic information makes AGs a suitable medium not only for entertainment, but also for education on (and not limited to) music and sound studies curricula. This paper presents a novel educational AG entitled Kronos that implements a role-playing scenario to facilitate the sonification of the relevant curriculum and to create an educational platform that combines an audio-based gaming environment with a musical instrument. In that process a methodology suggested by the authors has been used. The sonic symbols assigned to create the game's narrative content will be explained and future developments will be mentioned.
An eco-structuralism approach in soundscape (data) composition
  José Alberto Gomes; Álvaro Barbosa; Rui Penha
In this paper we present the motivations, processes and results of two experimental music pieces: two approaches to how the soundscape can provide direct inspiration and raw material for the creation of music. By describing the processes and results of public presentations, we identify approaches to reinterpreting raw data in music composition. The first work is a 9-minute piece for tape, small orchestra and improvisation ensemble. Its main purpose was to capture the soundscape identity around the concert hall and transform it into a music piece. The second work is a 6-minute piece for 4-channel (quadraphonic) tape. Its purpose was to experiment with a more purist approach: creating music from data taken from a random soundscape of a random full day in an urban space, also chosen randomly. Furthermore, some considerations are offered about the artistic relevance of this approach, raising questions about the musical outcomes vs. the original sound sources.
Does the beat go on?: identifying rhythms from brain waves recorded after their auditory presentation
  Sebastian Stober; Daniel J. Cameron; Jessica A. Grahn
Music imagery information retrieval (MIIR) systems may one day be able to recognize a song just as we think of it. As one step towards such technology, we investigate whether rhythms can be identified from an electroencephalography (EEG) recording taken directly after their auditory presentation. The EEG data has been collected during a rhythm perception study in Kigali, Rwanda and comprises 12 East African and 12 Western rhythmic stimuli presented to 13 participants. Each stimulus was presented as a loop for 32 seconds followed by a break of four seconds before the next one started. Using convolutional neural networks (CNNs), we are able to recognize individual rhythms with a mean accuracy of 22.9% over all subjects by just looking at the EEG recorded during the silence between the stimuli.
The role of sound in the sensation of ownership of a pair of virtual wings in immersive VR
  Erik Sikström; Amalia de Götzen; Stefania Serafin
This paper describes an evaluation of the role of self-produced sounds in participants' sensation of ownership and control of virtual wings in an immersive virtual reality scenario where the participants were asked to complete an obstacle course flight while exposed to four different sound conditions. The experiment resulted in either no or only very small differences between the experimental conditions.
Redesigning the way we listen: curating responsive sound interfaces in transdisciplinary domains
  Morten Søndergaard
This paper is based on a research project-in-progress investigating curatorial practice as a methodology for creating responsive interfaces to sound art practices. Sound art is a transdisciplinary practice. As such, it creates new domains that may be used for redesign purposes. Not only do experiences of sound alter; the way we listen to sound is transforming as well. Thus, the paper analyses and discusses two responsive sound interfaces and claims that curating as a transdisciplinary practice may frame what the paper terms a 'domain-game', redesigning the way the audience listens to and uses sound.
Methodological approaches to the evaluation of game music systems
  Anthony Prechtl; Robin Laney; Alistair Willis; Robert Samuels
Despite an emerging interest in the application of dynamic computer music systems to computer games, currently there are no commonly accepted approaches to empirically evaluating game music systems. In this paper we pose four questions that researchers could assess in order to evaluate different aspects of a game music system. They focus on the music's effect on the game playing experience (whether the music leads to a more enjoyable experience, and whether it affects the player in the intended way during the game), and how the music itself is perceived (whether it reaches a certain aesthetic standard, and whether it accurately conveys the intended narrative). We examine each of these questions in turn, for each one establishing a theoretical background as well as reviewing and comparing relevant research methodologies in order to show how it could be addressed in practice.
A software architecture for dynamic enhancement of soundscapes in games
  Durval Pires; Valter Alves; Licinio Roque
A game soundscape often includes sounds that are triggered by the game logic according to the player's actions and to other real-time occurrences. The dynamic nature of such triggering events can lead to a composition that is not necessarily interesting at a given moment. Following principles from Acoustic Ecology, and specifically the notion of a healthy soundscape, we propose a system aiming at the moderation of sounds generated during gameplay in a way that the composition retains its communicational meaningfulness. For instance, the system aims to ensure that the soundscape does not get overcrowded by the superimposition of whatever sounds might be triggered, and that the player can actually hear the relevant stimuli. The main component of the system is a module that implements heuristics that free designers from restating accepted sound design principles, and programmers from embedding such intelligence in the game logic. Thus, designers can focus on expressing their design intents by using an API to create and characterize the sound sources to be handled by the sound engine. The use of this API also conveys a more readable expression of the game's sound design, easing communication and reuse. Hence, this proposal can be particularly relevant during fast prototyping phases, when it can constitute an expedient way to test and refine creative ideas while avoiding extensive coding and the typical complexity and cost of existing middleware, which is particularly relevant for small-budget and sound-novice practitioners. We finish by presenting results from a proof-of-concept implementation and a game remake for evaluating the proposed system.
Dynamic music and immersion in the action-adventure: an empirical investigation
  Hans-Peter Gasselseder
Aiming to immerse players in a new realm of drama experience, a growing number of video games utilize interactive, 'dynamic' music that reacts adaptively to game events. Though little is known about the perceptual processes involved, the design rationale of enhanced immersive experiences has been taken up in public discussion, including scientific accounts, despite lacking empirical validation. The present paper intends to fill this gap by hypothesizing facilitatory effects of dynamic music on attention allocation in the matching of expected and incoming expressive characteristics of concurrent stimuli. Moreover, personality constructs are investigated as mediating the decoding and sensing of experiences linked to immersion, presence, and emotion. The experiment explored experiential states of immersion and emotional valence/arousal, as well as trait music empathizing and emotional involvement, in the context of dynamic and non-dynamic music. 60 subjects answered self-report questionnaires each time after playing a 3rd-person action-adventure in one of three conditions accounting for (1) dynamic music, (2) non-dynamic music/low arousal potential and (3) non-dynamic music/high arousal potential, in this way aiming to manipulate structural-temporal alignment, emotional arousal and the resulting congruency of nondiegetic music. Shedding light on the implications of music dramaturgy within a semantic ecology, different layers of mind sets between the player, avatar, and game environment are assumed to moderate a continuous regulatory modulation of emotional response achieved by context effects of dynamic music.
Inclusive game design: audio interface in a graphical adventure game
  Per Anders Östblad; Henrik Engström; Jenny Brusk; Per Backlund; Ulf Wilhelmsson
A lot of video games on the market are inaccessible to players with visual impairments because they rely heavily on the use of graphical elements. This paper presents a project aimed at developing a point-and-click adventure game for smart phones and tablets that is equally functional and enjoyable for blind and sighted players. This will be achieved by utilizing audio to give blind players all necessary information and enjoyment without graphics. In addition to creating the game, the aim of the project is to identify design aspects that can be applied to more types of games to include more players. This paper also presents a pilot study that has been conducted on an early version of the game, and the preliminary findings are discussed.