
Proceedings of the 2013 Audio Mostly Conference: A Conference on Interaction with Sound

Fullname: Proceedings of the 8th Audio Mostly Conference: A Conference on Interaction with Sound
Note: Sound for Information and Intuition
Editors: Katarina Delsing; Mats Liljedahl
Location: Piteå, Sweden
Dates: 2013-Sep-18 to 2013-Sep-20
Publisher: ACM
Standard No: ISBN 978-1-4503-2659-9; hcibib: AM13
Papers: 20
Initial objective & subjective evaluation of a similarity-based audio compression technique
  Stuart Cunningham; Jonathan Weinel; Shaun Roberts; Vic Grout; Darryl Griffiths
In this paper, we undertake an initial evaluation of a recently developed audio compression approach: Audio Compression Exploiting Repetition (ACER). This is a novel compression method that employs dictionary-based techniques to encode repetitive musical sequences that naturally occur within musical audio. As such, it is a lossy compression technique that exploits human perception to achieve data reduction.
   To evaluate the output of the ACER approach, we conduct a pilot evaluation of ACER-coded audio, employing both objective and subjective testing to validate the approach. Results show that ACER is capable of producing compressed audio whose subjective and objective quality grades vary in line with the amount of compression desired, configured by setting a similarity threshold value. Several lessons are learned and suggestions are given as to how a larger, enhanced series of listening tests will be carried out in future as a direct result of the work presented in this paper.
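The core idea the abstract names — dictionary coding of audio frames whose similarity to an earlier frame exceeds a threshold — can be sketched generically. This is not the ACER implementation; the similarity measure (normalized correlation), the greedy matching strategy, and the threshold are all assumptions for illustration:

```python
import numpy as np

def similarity(a, b):
    """Normalized correlation between two equal-length audio frames."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def encode(frames, threshold=0.95):
    """Greedy lossy dictionary coding: a frame similar enough to an
    earlier dictionary entry is stored as a reference to that entry."""
    dictionary, stream = [], []
    for f in frames:
        match = next((i for i, d in enumerate(dictionary)
                      if similarity(f, d) >= threshold), None)
        if match is None:
            dictionary.append(f)
            stream.append(("literal", len(dictionary) - 1))
        else:
            stream.append(("ref", match))
    return dictionary, stream
```

Raising the threshold admits fewer matches (better quality, less compression); lowering it does the reverse, which mirrors the quality/compression trade-off the abstract describes.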
Gestural user interface for audio multitrack real-time stereo mixing
  Konstantinos Drossos; Andreas Floros; Konstantinos Koukoudis
Sound mixing is a well-established task applied (directly or indirectly) in many fields of music and sound production. For example, in the case of classical music orchestras, their conductors perform sound mixing by specifying the reproduction gain of specific groups of musical instruments or of the entire orchestra. Moreover, modern sound artists and performers also employ sound mixing when they compose music or improvise in real-time. In this work a system is presented that incorporates a gestural interface for real-time multitrack sound mixing. The proposed gestural sound mixing control scheme is implemented on an open hardware micro-controller board, using common sensor modules. The gestures employed are as close as possible to those used by orchestra conductors. The system's overall performance is also evaluated in terms of the achieved user experience through subjective tests.
An auditory display that assists commercial drivers in lane changing situations
  Johan Fagerlönn; Stefan Larsson; Stefan Lindberg
This paper presents a simulator study that evaluates four auditory displays to assist commercial drivers in lane changing situations. More specifically, the displays warned the drivers about vehicles in the adjacent lane. Three displays utilized different variants of graded auditory warnings (early and late signals) while one display contained a single-stage warning (late signal). For all graded warnings, a manipulation of the turn indicator sound was utilized to alert the driver. The study investigated whether the graded warnings had different effects on safety and driver acceptance compared to a single-stage warning. In addition, the study examined whether the idea of changing the turn indicator sound influenced traffic safety and initial acceptance. The results indicate that graded warnings are more effective than single-stage warnings. Manipulating the turn indicator was effective and the acceptance for the solution was high. The implications for design based on the results are presented.
Sonosemantics
  Mark Grimshaw; Tom Garner
The purpose of this position paper is to propose a new definitional framework of sound, sonosemantics. The need for this is apparent for a number of reasons. Physiological conditions such as some forms of subjective tinnitus require no sound waves for sound to be heard, yet all current mainstream definitions of sound state that sound is a sound wave or that sound requires sound waves for its perception. New research suggests a closer and directly causal identification of sound with emotion than hitherto uncovered. New technology suggests new ways to exploit the possibility of hearing sound in the absence of sound waves.
Differences in human audio localization performance between a HRTF- and a non-HRTF audio system
  Camilla H. Larsen; David S. Lauritsen; Jacob J. Larsen; Marc Pilgaard; Jacob B. Madsen
Spatial audio solutions have long been available in real-time applications, but producing spatial cues that closely approximate real-life accuracy has been computationally expensive and was often addressed with dedicated hardware. With more powerful computers this restriction is fading, and software solutions are now practical. Most current virtual environment applications, however, do not take advantage of these implementations of accurate spatial cues. This paper compares a common implementation of spatial audio (panning) with a head-related transfer function (HRTF) implementation in a study of precision, speed and navigational performance in localizing audio sources in a virtual environment. We found that a system using HRTFs is significantly better at all three performance tasks than a system using panning.
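The two systems being compared can be contrasted in a minimal sketch (not the study's implementation): conventional constant-power panning encodes only azimuth as a left/right gain difference, while a binaural renderer convolves the signal with a per-ear head-related impulse response (HRIR), which carries interaural time, level, and spectral cues:

```python
import math

def constant_power_pan(sample, azimuth_deg):
    """Constant-power stereo panning: -90 = hard left, +90 = hard right."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return sample * math.cos(theta), sample * math.sin(theta)

def binaural_render(signal, hrir_left, hrir_right):
    """Binaural rendering: convolve a mono signal with each ear's HRIR."""
    def conv(x, h):
        return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
                for n in range(len(x) + len(h) - 1)]
    return conv(signal, hrir_left), conv(signal, hrir_right)
```

Panning keeps total power constant across azimuths but offers no elevation or front/back cues, which is one reason HRTF rendering can support better localization.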
Designing auditory pointers
  Robert Harald Lorenz; Axel Berndt; Rainer Groh
Sound can be used to give orientation, draw the listener's attention in a certain direction and provide navigational cues in virtual as well as physical environments. In analogy to the concept of visual pointers we call such sounds Auditory Pointers. While previous work mainly focused on the spatial localization property of sounds, we would like to complement this by using properties of the sound itself. Properties like loudness, timbre, and pitch can be used to sonify distance and direction to a target point. In this paper, we describe an exemplary implementation of respective sound synthesis techniques and investigate the effectiveness of different properties in a user study. The findings reveal large differences between the sound parameters and give clues for functional sound design.
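Mapping distance to sound properties rather than to spatial position can be illustrated generically; the parameter ranges below (50 m range, 220 Hz base pitch, two-octave sweep, quadratic loudness ramp) are hypothetical, not the paper's mappings:

```python
def pointer_params(distance, max_distance=50.0,
                   base_pitch=220.0, octaves=2.0):
    """Map distance to a target into loudness and pitch:
    closer targets sound louder and higher-pitched."""
    proximity = max(0.0, min(1.0, 1.0 - distance / max_distance))  # 0..1
    gain = proximity ** 2                         # quadratic loudness ramp
    pitch = base_pitch * 2.0 ** (octaves * proximity)  # rises toward target
    return gain, pitch
```

A synthesizer driven by these two parameters gives the listener a non-spatial distance cue that works even over a single loudspeaker.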
'AIE-studio' -- a pragmatist aesthetic approach for procedural sound design
  Matti Luhtala; Markku Turunen; Jaakko Hakulinen; Tuuli Keskinen
This paper introduces the AIE-Studio (Audio Interfaces for Exploration), a modular dataflow patching library implemented with Pure Data. The AIE-Studio introduces new tools for procedural sound design through generative sonic and musical structures. Particular focus is on aesthetic experience. The designed modules allow versatile dataflow mapping through a matrix routing system while also enabling the sound designer to influence generative processes of music creation. In particular, the AIE-Studio was used to create generative sonic and musical material in an embodied game-like application. In this paper we present key questions driving the research, theoretical background, research approach and the main development activities.
Binaural spatialization for 3D immersive audio communication in a virtual world
  Davide A. Mauro; Rufael Mekuria; Michele Sanna
Realistic 3D audio can greatly enhance the sense of presence in a virtual environment. We introduce a framework for capturing, transmitting and rendering of 3D audio in the presence of other bandwidth-intensive streams in a 3D Tele-immersion based virtual environment. This framework presents an efficient implementation of 3D binaural spatialization based on the positions of current objects in the scene, including animated avatars and on-the-fly reconstructed humans. We present a general overview of the framework, how audio is integrated in the system and how it can exploit the positions of the objects and room geometry to render realistic reverberations using head-related transfer functions. The network streaming modules used to achieve lip-synchronization, high-quality audio frame reception, and accurate localization for binaural rendering are also presented. We highlight how large computational and networking challenges can be addressed efficiently. This represents a first step in adequate networking support for binaural 3D audio, useful for tele-presence. The subsystem is successfully integrated with a larger 3D immersive system, with state-of-the-art capturing and rendering modules for visual data.
The blindfold soundscape game: a case for participation-centered gameplay experience design and evaluation
  Durval Pires; Bárbara Furtado; Tiago Carregã; Luís Reis; Luís Lucas Pereira; Rui Craveirinha; Licínio Roque
In this paper we report on a game design exercise that focuses on the sensoriality and sensemaking participant dimensions for conceiving and evaluating gameplay experience, by framing design intentions, artifact characteristics and user participation. Through this exercise we were able to build an understanding of user participation in the soundscape constituting the gameplay scenario. By employing a goal-question-metric approach we demonstrated the viability of using the participation-centric gameplay model dimensions as a basis for the synthesis of gameplay participation indicators and metrics, and their analysis in the context of interactions with a game as soundscape.
Cuing sensorimotor skills to optimize with the use of audible stimuli in exercise
  Kelly Smith
Rhythmic sound provides timing information that can be used to improve precision in exercise. Rhythmic performance of an exercise that consists of a movement cycle of goal-directed movements can enhance the efficiency of exercise on a cardio-fitness machine. Processing of rhythmic audio signals may cue continuous performances of movement cycles using rhythmic events, such as the beginning and end of audio playback, and the period of audio pulses, to temporally constrain movement to a spatial interval. An analogy of an optimization paradigm for rhythmic control is proposed to explain how sound drives adaptive and coincident methods for self-synchronization for the purpose of achieving precision in rhythmic movement. Criteria for rhythmic control are then presented in the context of modern fitness equipment as a future prospect to interface with sound technology.
Measuring comprehension in sonification tasks that have multiple data streams
  Jonathan H. Schuett; Bruce N. Walker
When the goal of an auditory display is to provide inference or intuition to a listener, it is important for researchers and sound designers to gauge users' comprehension of the display to determine if they are, in fact, receiving the correct message. This paper discusses an approach to measuring listener comprehension in sonifications that contain multiple concurrently presented data series. We draw from situation awareness research that has developed measures of comprehension within environments or scenarios, based on our view that an auditory scene is similar to a virtual or mental representation of the listener's environment.
Visual patterns of hallucination as a basis for sonic arts composition
  Jonathan Weinel
Visual patterns of hallucination (pin-point dot patterns of light arranged in spiral or funnel structures) are often perceived in hallucinogenic experiences such as those produced by mescaline. This article discusses the use of altered states of consciousness (ASC) and visual patterns of hallucination as principles upon which to base the design of musical compositions and related audio-visual works. I provide some background information regarding visual patterns of hallucination (or 'entoptic phenomena'), with reference to studies by Klüver and Strassman. I then proceed to discuss a process for using visual patterns of hallucination as a basis for designing sonic and visual material, using a purpose-built piece of software: the Atomizer Live Patch. The implementation of this sonic material in the context of 'ASC compositions' is discussed with regards to Entoptic Phenomena, a fixed electroacoustic composition, and Tiny Jungle, an audio-visual work. These pieces form part of a larger body of work completed as part of my PhD research, where ASC was used as a principle for the design of electroacoustic music and work in related media. Through the discussion of these works I will demonstrate an approach for using ASC, and visual patterns of hallucination in particular, as a basis for the design of sonic artworks and visual music. This research therefore contributes to the field of compositional methods for electroacoustic music, while more broadly indicating approaches for creating digital artworks that reflect ASC.
A discussion of musical features for automatic music playlist generation using affective technologies
  Darryl Griffiths; Stuart Cunningham; Jonathan Weinel
This paper discusses how human emotion could be quantified using contextual and physiological information that has been gathered from a range of sensors, and how this data could then be used to automatically generate music playlists. The work is very much in progress and this paper details what has been done so far and plans for experiments and feature mapping to validate the concept in real-world scenarios.
   We begin by discussing existing affective systems that automatically generate playlists based on human emotion. We then consider the current work in audio description analysis. A system is proposed that measures human emotion based on contextual and physiological data using a range of sensors. The sensors discussed for capturing such contextual characteristics range from temperature and light to EDA (electrodermal activity) and ECG (electrocardiogram) sensors. The concluding section describes the progress achieved so far, which includes defining datasets using a conceptual design, microprocessor electronics and data acquisition using Matlab. Lastly, there is a brief discussion of future plans to develop this research.
Identifying habitual statistical features of EEG in response to fear-related stimuli in an audio-only computer video game
  Tom Garner
A better understanding of observable and quantifiable psychophysiological outputs such as electroencephalography (EEG) during computer video gameplay has significant potential to support the development of an automated, emotionally intelligent system. Integrated into a game engine, such a system could facilitate an effective biofeedback loop, accurately interpreting player emotions and adjusting gameplay parameters to respond to players' emotional states in a way that moves towards exciting ventures in affective interactivity. This paper presents a crucial step to reaching this objective by way of examining the statistical features of EEG that may relate to user experience during audio-centric gameplay. An audio-only test game ensures that game sound is the exclusive stimulus modality, with gameplay contextualisation and qualitative data collection enabling the study to focus specifically upon fear. Though requiring an unambiguous horror-game context, the results documented within this paper identify several statistical features of EEG data that could differentiate fear from calm.
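The kind of per-window statistical features the abstract refers to can be illustrated generically; this particular feature set (mean, standard deviation, Hjorth mobility) is a common choice in EEG analysis and an assumption here, not the paper's actual feature set:

```python
import statistics

def eeg_window_features(window):
    """Statistical features of one EEG window: mean, standard deviation,
    and Hjorth mobility (std of the first difference over std of the
    signal), a rough measure of dominant frequency content."""
    mean = statistics.fmean(window)
    sd = statistics.pstdev(window)
    diff = [b - a for a, b in zip(window, window[1:])]
    mobility = statistics.pstdev(diff) / sd if sd else 0.0
    return {"mean": mean, "std": sd, "mobility": mobility}
```

Computed over sliding windows and paired with event labels (e.g. fear-inducing game moments), such features form the input to the kind of statistical comparison the paper performs.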
GangKlang: designing walking experiences
  Nassrin Hajinejad; Heide-Rose Vatterrott; Barbara Grüter; Simon Bogutzky
As mobile game researchers we focus on playful experiences emerging in everyday life interactions. In this paper we present GangKlang, a particular sonic interaction design (SID) to support and facilitate the activity of walking with reference to Csikszentmihalyi's concept of flow. Tightly coupled with movement, sound becomes an element of sensorimotor cycles of hearing, proprioception and action.
Dynamic enhancement of videogame soundscapes
  Durval Pires; Licinio Roque; Valter Alves
A game soundscape often includes sounds that are triggered by the game logic according to the player's actions and to other real-time occurrences. The dynamic nature of such triggering events leads to a composition that is not necessarily interesting at a given moment.
   We propose a system aiming at the enhancement of the soundscape generated during gameplay. The main component of the system is a module that implements heuristics, which we set to follow principles from Acoustic Ecology and, specifically, the notion of healthy soundscape. In order to inform the heuristics, designers can characterize the sounds being handled by the sound engine, using an API that aims to be accessible and informative about the designer's intentions.
   We also present reflections on a trial in which a game was remade using the proposed system, which helped us assess its feasibility.
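The heuristic layer the abstract describes can be sketched generically; the field names, the "keynote" role (a background sound in Acoustic Ecology terms), and the voice limit below are hypothetical, not the system's actual API:

```python
def select_audible(sounds, max_voices=8):
    """Heuristic culling toward a 'healthy soundscape': keep the
    highest-priority sounds, but guarantee at least one 'keynote'
    background sound so the scene never loses its acoustic ground."""
    ranked = sorted(sounds, key=lambda s: s["priority"], reverse=True)
    chosen = ranked[:max_voices]
    if not any(s["role"] == "keynote" for s in chosen):
        keynotes = [s for s in ranked if s["role"] == "keynote"]
        if keynotes:
            chosen[-1:] = [keynotes[0]]  # swap weakest pick for a keynote
    return chosen
```

The designer-facing annotation step in the paper corresponds to assigning the priority and role fields here, which is what lets the heuristics reason about intent rather than raw audio.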
Bringing musicality to movement sonification: design and evaluation of an auditory swimming coach
  Gabriela Seibert; Daniel Hug
In this paper we describe a novel approach to the sonification of crawl swim movement. The design method integrates task and data analysis from a sport science perspective with subjective experience of swimmers and swimming coaches, and strongly relies on the skills of musicians in order to define the basic sonic design. We report on the design process, and on the implementation and evaluation of a first prototype.
Sonification for supporting joint attention in dyadic augmented reality-based cooperation
  Thomas Hermann; Alexander Neumann; Christian Schnier; Karola Pitsch
This paper presents a short evaluation of auditory representations for object interactions as support for cooperating users of an Augmented Reality (AR) system. Particularly head-mounted AR displays limit the field of view and thus cause users to miss relevant activities of their interaction partner, such as object interactions or deictic references that normally would be effective to establish joint attention. We start from an analysis of the differences between face-to-face interaction and interaction via the AR system, using interaction linguistic conversation analysis. From that we derive a set of features that are relevant for interaction partners to coordinate their activities. We then present five different interactive sonifications that make object manipulations of interaction partners audible and convey information about the kind of activity.
Towards coarse-scale event detection in music
  Anna M. Kruspe; Jakob Abeßer; Christian Dittmar
Over the past years, the detection of onset times of acoustic events has been investigated in various publications. However, to our knowledge, there is no research on event detection on a broader scale. In this paper, we introduce a method to automatically detect "big" events in music pieces in order to match them with events in videos. Furthermore, we discuss different application scenarios for this audio-visual matching.
Middie mercury: an ambient music generator for relaxation
  Ivan Sysoev; Ramitha D. Chitloor; Ajay Rajaram; R. Stephens Summerlin; Nicholas Davis; Bruce N. Walker
We describe a computer application for relaxation that is based on music generation following users' actions with a simulated drop of mercury. Rationale for the approach as well as architectural, algorithmic and technical details of the implementation are included. We also provide results of a user survey evaluating the quality of the usage experience.