
Proceedings of the 2011 Audio Mostly Conference: A Conference on Interaction with Sound

Fullname: Proceedings of the 6th Audio Mostly Conference: A Conference on Interaction with Sound
Editors: Licinio Roque; Valter Alves
Location: Coimbra, Portugal
Dates: 2011-Sep-07 to 2011-Sep-09
Publisher: ACM
Standard No: ISBN:; ACM DL: Table of Contents; hcibib: AM11
Papers: 18
Pages: 137
Links: Conference Website | Conference Series Home Page
Summary: Since its start in 2006, the Audio Mostly Conference has become a venue for researchers, technicians and artists exploring audio in interactive and other computer-based environments. The conference theme has varied over the years, from the initial focus on "Sound in Games" through "Interaction with Sound", "Sound and Motion", "Sound and Emotion" and "Sound and Design" to this year's (2011) theme, "Sound and Context".
    Over its editions, the Audio Mostly Conference series has fostered awareness of the unexploited potential of audio in computer-based environments and across many contexts. It has aimed to open up this area of thinking by bringing together game designers, audio experts, content creators, and technology and behavioural researchers. Through this forum, these experts can discuss developments and new potential for audio in areas such as entertainment, health and fitness, education, industrial training and serious gaming. It is also a venue for presenting sonic solutions to development and design challenges in low-resolution scenarios or environments where screens are unavailable.
    The theme of the sixth Audio Mostly Conference covers ways in which sound and music can be used to create context, in physical and virtual environments, and especially to enhance the experience of interactive applications. Sound is both an expression of an environment and a driver of change in that context. Perceiving sound as a carrier of context underlines the interest in exploring its interpretative value and the way it can affect users in context. By realizing the contextualizing potential of sound, researchers and designers can enhance users' experiences and provide richer sense-making contexts. Appreciating sound as woven into contexts can also foster holistic approaches that benefit overall design coherence.
Sound parameters for expressing geographic distance in a mobile navigation application BIBA Full-Text 1-7
  Mats Liljedahl; Stefan Lindberg
This paper presents work on finding acoustic parameters suitable for conveying a sense of difference in geographic distance through the concepts of "near", "middle" and "far". The context of use is a mobile application for navigation services. A set of acoustic parameters was selected based on how sound naturally travels through and is dispersed by the atmosphere; one parameter without a direct acoustic connection to distance was also selected. Previous work corroborates the choice of parameters in the context of the project. Results show that modulating multiple parameters simultaneously to express distance gives a more robust experience of difference in distance than modulating single parameters. The ecological parameters low-pass filter and reverb gave the test subjects the most reliable and consistent experience of difference in distance. Modulating the pitch parameter alone was found to be unreliable, while combining it with the reverb parameter gave more robust results.
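To make this kind of mapping concrete, the following Python sketch renders a test sound at an assumed "near"/"middle"/"far" preset by modulating a low-pass filter cutoff and a crude reverb mix; the numeric values are illustrative assumptions, not the parameter settings evaluated in the paper.

```python
# Illustrative sketch: map distance categories onto a low-pass cutoff and a
# reverb wet/dry mix.  All numeric values are assumptions for demonstration.
import numpy as np
from scipy.signal import butter, lfilter

DISTANCE_PRESETS = {
    "near":   {"cutoff_hz": 8000.0, "reverb_wet": 0.10},
    "middle": {"cutoff_hz": 4000.0, "reverb_wet": 0.35},
    "far":    {"cutoff_hz": 1500.0, "reverb_wet": 0.60},
}

def render_at_distance(signal, sample_rate, category):
    """Apply low-pass filtering and a crude echo-based reverb to suggest distance."""
    preset = DISTANCE_PRESETS[category]
    # More distant sounds lose high-frequency energy.
    b, a = butter(4, preset["cutoff_hz"] / (sample_rate / 2.0), btype="low")
    filtered = lfilter(b, a, signal)
    # Very rough "reverb": a few decaying echoes stand in for a real reverb.
    impulse = np.zeros(int(0.3 * sample_rate))
    impulse[0] = 1.0
    for i, delay_s in enumerate((0.03, 0.07, 0.13, 0.21)):
        impulse[int(delay_s * sample_rate)] = 0.5 ** (i + 1)
    wet = np.convolve(filtered, impulse)[: len(filtered)]
    return (1.0 - preset["reverb_wet"]) * filtered + preset["reverb_wet"] * wet

if __name__ == "__main__":
    sr = 44100
    t = np.linspace(0.0, 1.0, sr, endpoint=False)
    beep = 0.5 * np.sin(2 * np.pi * 880.0 * t)
    print(render_at_distance(beep, sr, "far").shape)
```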
RaPScoM: towards composition strategies in a rapid score music prototyping framework BIBA Full-Text 8-14
  Jakob Doppler; Julian Rubisch; Michael Jaksche; Hannes Raffaseder
Especially in low-budget and amateur score music production workflows, the triangular communication between editor, director and composer is constrained by limited resources, tight schedules and the lack of common domain knowledge. Often, sound-alike ideas and precomposed material end up as static temp tracks in the rough cut and thus limit flexibility in the composition process. Our rapid score music prototyping framework RaPScoM aims to support this workflow with a toolchain for semi-automated and customizable temp track generation, exposing a set of high-level parameters based on semantic movie annotation derived from movie clip analysis and a set of basic composition rules that can be derived from exemplary (score) music. In this paper we give an overview of the framework architecture and discuss technical details of our semantic movie annotation strategy and some core composition components.
An inspection on a deck for sound design in games BIBA Full-Text 15-22
  Valter Alves; Licinio Roque
In the context of an initiative to empower non-expert practitioners to perform sound design in games, assisted by a pattern language approach, it is helpful to have an instrument that fosters contact with the design patterns. For this purpose, we have been working on a deck of cards, which also embodies an appreciation of sound exploration as an integral part of game design. We present the current rationale for the design of the deck, showing its suitability for keeping sound design opportunities within reach in an expeditious and non-intrusive manner. We also report an inspection exercise on the deck, aiming at a first refinement through the identification of the main hindrances in terms of perception and content interpretation.
Towards a conceptual framework to integrate designerly and scientific sound design methods BIBA Full-Text 23-30
  Daniel Hug; Nicolas Misdariis
Sound design for interactive products is rapidly becoming a relevant topic in industry. Scientific research from the domains of Auditory Display (AD) and Sonic Interaction Design (SID) can play a central role in this development, but in order to make its way into market-oriented applications, several issues still need to be addressed. Building on the sound design process employed by the Sound Perception and Design (SPD) team at Ircam, and on information gathered from interviews with professional sound designers, this paper focuses on revealing typical issues encountered in the design process of both science- and design-oriented communities, in particular the development of a valid and revisable, yet innovative, design hypothesis. A second aim is to improve communication between sound and interaction designers. To address these challenges, a conceptual framework, developed using both scientific and designerly methods, is presented and evaluated through expert reviews.
A climate of fear: considerations for designing a virtual acoustic ecology of fear BIBA Full-Text 31-38
  Tom Garner; Mark Grimshaw
This paper proposes a framework that incorporates fear, acoustics, thought processing and digital game sound theory, with the potential not only to improve understanding of our relationship with fear, but also to provide a foundation for reliable and significant manipulation of the fear experience. A brief literature review provides the context for a discussion of fear and sound in virtual worlds before the framework is described; concluding remarks point to future empirical work testing and refining the framework.
Making gamers cry: mirror neurons and embodied interaction with game sound BIBA Full-Text 39-46
  Karen Collins
In this paper, I draw on an embodied cognition approach to describe how sound mediates our identification with and empathy for video game characters. This identification is discussed in terms of mirror neurons and body schema, drawing on theoretical and empirical research to explore ways in which identity is created from our embodied interaction with sound. I conclude by suggesting ways in which sound designers and composers can use this information to create more empathy and identification between players and their game characters.
Towards an open sound card: bare-bones FPGA board in context of PC-based digital audio: based on the AudioArduino open sound card system BIBA Full-Text 47-54
  Smilen Dimitrov; Stefania Serafin
The architecture of a sound card can, in simple terms, be described as an electronic board containing digital bus interface hardware and analog-to-digital (A/D) and digital-to-analog (D/A) converters; soundcard driver software on a personal computer's (PC) operating system (OS) can then control the operation of the A/D and D/A converters on board the soundcard through a particular bus interface of the PC, acting as an intermediary for high-level audio software running in the PC's OS.
   This project provides open-source software for a do-it-yourself (DIY) prototype board based on a Field-Programmable Gate Array (FPGA), which interfaces to a PC through the USB bus and demonstrates full-duplex, mono 8-bit/44.1 kHz soundcard operation. The inclusion of FPGA technology in this paper -- along with previous work on discrete-part- and microcontroller-based designs -- thus completes an overview of the architectures currently available for DIY implementations of soundcards, serving as a broad introductory tutorial to practical digital audio.
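As a rough illustration of the data rate such a design implies, the sketch below streams mono 8-bit/44.1 kHz samples over a serial-over-USB link using the pyserial library; the port name, baud rate and framing are assumptions, and this is not the project's actual driver or FPGA firmware.

```python
# Host-side sketch only: write one second of 8-bit samples and read the same
# amount back, illustrating full-duplex mono 8-bit/44.1 kHz data flow.
import numpy as np
import serial  # pyserial

SAMPLE_RATE = 44100        # one byte per sample (8-bit mono)
PORT = "/dev/ttyUSB0"      # assumed device node
BAUD = 2_000_000           # assumed link speed, well above 44100 bytes/s

def full_duplex_block(ser, out_block):
    """Send a block of 8-bit samples and read back an equally long block."""
    ser.write(out_block.astype(np.uint8).tobytes())
    return np.frombuffer(ser.read(len(out_block)), dtype=np.uint8)

if __name__ == "__main__":
    # One second of an 8-bit unsigned 440 Hz tone, centred on 128.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    tone = 127.5 + 127.0 * np.sin(2 * np.pi * 440.0 * t)
    with serial.Serial(PORT, BAUD, timeout=2.0) as ser:
        captured = full_duplex_block(ser, tone)
        print("captured", captured.shape[0], "samples")
```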
GPU-based acoustical occlusion modeling with acoustical texture maps BIBA Full-Text 55-61
  Brent Cowan; Bill Kapralos
Although the direct path between a sound source and a receiver is often occluded, sound may still reach the receiver as it diffracts ("bends") around the occluding obstacle/object. Diffraction is an elementary means of sound propagation, yet, despite its importance, it is often ignored altogether in virtual reality and gaming applications, except perhaps for trivial environments. Given the widespread use and availability of computer graphics hardware, and the graphics processing unit (GPU) in particular, GPUs have been successfully applied to other, non-graphics applications including audio processing and acoustical diffraction modeling. Here we build upon our previous work that approximates acoustical occlusion/diffraction effects in real time using the GPU. In contrast to our previous approach, the audio properties of an object are stored as a texture map, which allows the properties to vary across the surface of a model. The method is computationally efficient, allowing it to be incorporated into real-time, dynamic, and interactive virtual environments and video games where the scene is arbitrarily complex.
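The underlying data structure can be illustrated on the CPU: an "acoustical texture" whose texels hold per-surface-point absorption values, sampled bilinearly where the occluded path meets the surface. The texture values below are made up, and the paper's GPU shader implementation is not reproduced here.

```python
# CPU illustration of an acoustical texture map with bilinear sampling.
import numpy as np

# Assumed 4x4 absorption texture in [0, 1]; 1.0 = fully absorbing texel.
absorption_tex = np.array([
    [0.1, 0.1, 0.8, 0.8],
    [0.1, 0.2, 0.8, 0.9],
    [0.3, 0.4, 0.9, 0.9],
    [0.3, 0.5, 0.9, 1.0],
])

def sample_bilinear(tex, u, v):
    """Sample a 2D texture at normalised coordinates (u, v) in [0, 1]."""
    h, w = tex.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * tex[y0, x0] + fx * tex[y0, x1]
    bot = (1 - fx) * tex[y1, x0] + fx * tex[y1, x1]
    return (1 - fy) * top + fy * bot

def occluded_gain(u, v):
    """Gain applied to the occluded sound for a path hitting the surface at (u, v)."""
    return 1.0 - sample_bilinear(absorption_tex, u, v)

print(occluded_gain(0.25, 0.5))  # left half of the surface: mostly transparent
print(occluded_gain(0.90, 0.9))  # right/bottom region: strongly occluding
```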
Towards equalization of environmental sounds using auditory-based features BIBA Full-Text 62-66
  Jorge Garcia; Stefan Kersten; Jordi Janer
In this paper we describe methods to assist soundscape design, sound production and processing for interactive environments, like games and simulations. Using auditory filter banks and sound texture synthesis, we develop algorithms that can be integrated with existing audio engines and can additionally support the development of dedicated high-level audio tools aimed at content authoring or transformations based on samples. The relationship between the auditory excitation patterns and the computation algorithm is explained within the context of footstep sounds. Moreover, methods for sound texture synthesis of water streams with artificial expansion of timbre space using auditory filtering techniques are presented.
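As a minimal illustration of the auditory-filter-bank idea, the sketch below computes per-band energies of a synthetic footstep-like burst using a log-spaced band-pass filter bank; the band layout and filter order are assumptions and do not correspond to the authors' specific auditory model.

```python
# Per-band "excitation-like" energies from a simple band-pass filter bank.
import numpy as np
from scipy.signal import butter, lfilter

def bandpass_bank(signal, sample_rate, n_bands=12, f_lo=80.0, f_hi=8000.0):
    """Return per-band RMS energies for log-spaced band-pass filters."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    energies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (sample_rate / 2), hi / (sample_rate / 2)],
                      btype="bandpass")
        band = lfilter(b, a, signal)
        energies.append(np.sqrt(np.mean(band ** 2)))
    return np.array(energies)

if __name__ == "__main__":
    sr = 44100
    # Stand-in for a footstep recording: a short decaying noise burst.
    rng = np.random.default_rng(0)
    n = int(0.2 * sr)
    footstep = rng.standard_normal(n) * np.exp(-np.linspace(0, 8, n))
    print(bandpass_bank(footstep, sr))
```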
Identification of perceptual qualities in textural sounds using the repertory grid method BIBA Full-Text 67-74
  Thomas Grill; Arthur Flexer; Stuart Cunningham
This paper explores which perceptual qualities are relevant to people listening to textural sounds. Knowledge about these personal constructs should eventually lead to more intuitive interfaces for browsing large sound libraries. By conducting mixed qualitative-quantitative interviews within the repertory grid framework, ten bipolar qualities are identified. A subsequent web-based study yields measures of inter-rater agreement and mutual similarity of the perceptual qualities based on a selection of 100 textural sounds. Additionally, some initial experiments are conducted to test standard audio descriptors for their correlation with the perceptual qualities.
Eighth-notes performances: kinds of inégalité BIBA Full-Text 75-81
  Tilo Hähnel; Axel Berndt
From a technical perspective, the impression of inégalité in musical pulse mainly refers to aspects of timing, loudness, and duration. Musicians tend to model these performance parameters intuitively, and listeners seem to perceive them, to a certain degree, unconsciously.
   Expert musicians and non-experts were asked to interactively tune performance parameters in a short four-bar phrase. A recently developed performance synthesis tool furnished the technical basis for this analysis. The results give insight into the relationships between performance parameters and prompt a discussion of the role of expertise and skill in a slightly different light. Although the analysis of appropriate performance parameters is difficult, it remains essential for improving the liveliness of synthetic performances.
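A toy model of inégalité as a timing-plus-accent transformation of nominally even eighth notes is sketched below; the ratio and accent values are assumptions, and the sketch is unrelated to the authors' performance synthesis tool.

```python
# Toy inégalité (swing) model: reshape even eighth-note onsets with a
# long:short duration ratio and an alternating loudness accent.
def apply_inegalite(n_pairs, beat_s=0.5, ratio=2.0, accent_db=3.0):
    """Return (onset_time_s, accent_db) tuples for n_pairs of eighth notes.

    ratio is the duration ratio long:short within each beat; ratio=1.0 gives
    even eighths, ratio=2.0 gives a triplet-like 2:1 feel.
    """
    long_part = beat_s * ratio / (ratio + 1.0)
    events = []
    for i in range(n_pairs):
        beat_start = i * beat_s
        events.append((beat_start, accent_db))        # on-beat note, accented
        events.append((beat_start + long_part, 0.0))  # delayed off-beat note
    return events

for onset, accent in apply_inegalite(2):
    print(f"onset {onset:.3f} s  accent {accent:+.1f} dB")
```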
Artist filtering for non-western music classification BIBA Full-Text 82-87
  Anna Kruspe; Hanna Lukashevich; Jakob Abeßer
The "album effect" is a known phenomenon in musical artist and genre recognition. Classification results are often better when songs from the same album are used in the training and evaluation data sets. Supposedly, this effect is caused by the production conditions of the album, e.g. recording quality, mixing and equalization preferences, effects etc. This behavior is not representative of real world scenarios, though, and should therefore be avoided when evaluating the performance of a classifier.
   The related "artist effect" also affects the results of genre recognition. It is caused by the appearance of the same artists in the training and evaluation data sets. Artist filters have been proposed previously to remove this influence. We perform three different experiments to characterize the "artist effect" somewhat better and to analyze it in conjunction with non-western music. First, we test the effect's influence on the classification of musical pieces into their regions of origin. We then repeat this experiment using only specific sets of features (for the timbre, rhythm, and tonality domains). Finally, we perform a finer genre recognition with genres from four different world regions. The influence of the aforementioned effect is evaluated for all experiments.
Sonic perceptual crossings: a tic-tac-toe audio game BIBA Full-Text 88-94
  Andreas Floros; Nicolas-Alexander Tatlas; Stylianos Potirakis
The development of audio-only computer games imposes a number of challenges for the sound designer, as well as for the design of the human-machine interface. Modern sonification methods, including earcons and auditory icons, can be used to effectively represent data and game-environment conditions through sound. In this work we take advantage of fundamental characteristics of earcons, such as the spatialization usually employed for concurrent/parallel reproduction, to implement a tic-tac-toe audio game prototype. The proposed sonic design is transparently integrated with a novel user control/interaction mechanism that can be easily implemented on state-of-the-art mobile devices incorporating movement sensors (e.g. accelerometers and gyroscopes). The efficiency of the overall prototype design is assessed in terms of the accuracy of the employed sonification, while the playability achieved through the integration of the sonic design and the auditory user interface is assessed in real game-play conditions.
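One conceivable cell-to-earcon mapping (purely illustrative, not the paper's sonic design) assigns stereo pan to the board column and pitch to the row, so each of the nine cells gets a distinct spatial and pitch cue.

```python
# Hypothetical tic-tac-toe earcon mapping: column -> stereo pan, row -> pitch.
def cell_to_earcon(row, col, base_hz=440.0):
    """Return (pan, frequency_hz) for a board cell; row, col in {0, 1, 2}."""
    pan = {0: -1.0, 1: 0.0, 2: 1.0}[col]        # left / centre / right
    freq = base_hz * (2.0 ** (row * 4 / 12.0))  # rows a major third apart
    return pan, freq

for r in range(3):
    for c in range(3):
        pan, freq = cell_to_earcon(r, c)
        print(f"cell ({r},{c}) -> pan {pan:+.1f}, {freq:6.1f} Hz")
```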
An RSS-feed auditory aggregator using earcons BIBA Full-Text 95-100
  Athina Bikaki; Andreas Floros
In this work we present a data sonification framework based on parallel/concurrent earcon representations for monitoring stock market information in real time. The information under consideration is conveyed through the well-known Really Simple Syndication (RSS) feed mechanism and includes both text and numeric values, converted to speech and earcons using existing speech synthesis techniques and sonic design guidelines. Due to the characteristics of the considered application, particular emphasis is placed on concurrency of information representation, achieved mainly through sound source spatialization techniques and different timbre characteristics. Spatial positioning of sound sources is performed through typical binaural processing and reproduction. A number of systematic subjective assessments have shown that the overall perceptual efficiency and sonic representation accuracy fulfil the application requirements, provided that users are appropriately trained before using the proposed RSS-feed auditory aggregator.
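The sketch below illustrates the general idea with the feedparser library: poll a feed, extract a numeric value from each entry, and map it to a pitch and pan for an earcon. The feed URL, title format and mapping are assumptions, not the system described in the paper.

```python
# Hedged sketch: RSS entries -> simple earcon descriptions (frequency, pan).
import re
import feedparser

FEED_URL = "https://example.com/stocks.rss"   # placeholder URL

def entry_to_earcon(entry, pan):
    """Derive a (frequency_hz, pan) earcon from one feed entry's title."""
    match = re.search(r"(-?\d+(?:\.\d+)?)", entry.title)
    value = float(match.group(1)) if match else 0.0
    # Assumed mapping: rising values raise the pitch, falling values lower it.
    freq = 440.0 * (2.0 ** (max(min(value, 12.0), -12.0) / 12.0))
    return freq, pan

if __name__ == "__main__":
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries[:5]:
        print(entry_to_earcon(entry, pan=-0.5))
```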
How to not hit a virtual wall: aural spatial awareness for collision avoidance in virtual environments BIBA Full-Text 101-108
  Christian Afonso; Steffi Beckhaus
Compared to graphics, sound is still an underused modality for conveying information and providing users with more than just general ambience or targeted sound effects. Collision notification is one case of direct aural feedback: the moment a user hits a wall, they hear an appropriate sound, e.g. a thump. We sought to go further by using contextual spatial sound to provide collision avoidance feedback that plays continuously in the background but, unlike ambient soundscapes, reacts accurately and in real time to upcoming collision hazards. In a first experimental design, we provided directional spatial sound feedback for collision avoidance in a prototypical labyrinth environment and examined the performance and reactions of a group of test subjects who navigated through the labyrinth. Our initial design received positive reactions from the subjects, and analysis of the performance data shows first results indicating the viability of this kind of spatial sound feedback.
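A minimal sketch of such continuous proximity feedback, under assumed values rather than the study's implementation, maps the nearest wall's direction relative to the user's heading to stereo pan and its distance to loudness.

```python
# Hypothetical collision-avoidance cue: direction -> pan, proximity -> gain.
import math

def hazard_cue(wall_angle_deg, wall_distance_m, max_distance_m=3.0):
    """Return (pan, gain) for the nearest wall.

    wall_angle_deg is the wall direction relative to the view direction
    (negative = left, positive = right); beyond max_distance_m the cue is silent.
    """
    pan = max(-1.0, min(1.0, math.sin(math.radians(wall_angle_deg))))
    proximity = max(0.0, 1.0 - wall_distance_m / max_distance_m)
    gain = proximity ** 2          # emphasise imminent collisions
    return pan, gain

print(hazard_cue(-60.0, 0.5))   # wall close on the left: loud, panned left
print(hazard_cue(10.0, 2.8))    # wall far ahead-right: nearly silent
```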
Auditory feedback in an interactive rhythmic tutoring system BIBA Full-Text 109-115
  Antti Jylhä; Cumhur Erkut
We present recent developments in the design of audio-visual feedback in iPalmas, the interactive Flamenco rhythm tutor. Based on an evaluation of the original implementation, we have redesigned the interface to better support the user in learning and performing rhythmic patterns. The system measures the user's performance parameters and provides auditory feedback on the performance, with different sounds corresponding to different performance attributes. The design of these sounds is informed by several attributes derived from the evaluation. We propose informative, non-intrusive, and archetypal sounds to be used in the system.
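The kind of measurement such a tutor relies on can be sketched as follows: pair detected clap onsets with a target palmas pattern, compute timing errors, and choose a feedback category per onset. The tolerance and labels are hypothetical, not the iPalmas design.

```python
# Toy rhythmic-tutoring measurement: per-onset timing error and feedback label.
def timing_errors(target_onsets_s, played_onsets_s):
    """Pair each target onset with the nearest played onset; return errors in ms."""
    errors = []
    for target in target_onsets_s:
        nearest = min(played_onsets_s, key=lambda p: abs(p - target))
        errors.append((nearest - target) * 1000.0)
    return errors

def feedback_label(error_ms, tolerance_ms=30.0):
    """Choose a (hypothetical) feedback category for one onset."""
    if abs(error_ms) <= tolerance_ms:
        return "on time"
    return "late" if error_ms > 0 else "early"

target = [0.0, 0.5, 1.0, 1.5]
played = [0.02, 0.48, 1.09, 1.46]
for err in timing_errors(target, played):
    print(f"{err:+6.1f} ms -> {feedback_label(err)}")
```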
Designing alarm sounds for the control of a hydraulic platform BIBA Full-Text 116-121
  Wesley Hatch; Antti Pirhonen
The design of alarm sounds is a subtle yet important challenge. Our conceptions and stereotypes of what alarm sounds should sound like are usually quite entrenched, which may limit the acceptance of new alarm sounds into the domain of traditional ones. This paper presents the design approaches undertaken in a case of redesigning a set of alarm and notification sounds. An analysis of the approaches, some design decisions, and other challenges faced are presented herein, and preliminary feedback on the sounds' effectiveness is discussed.
Examining effects of acoustic feedback on perception and modification of movement patterns in on-water rowing training BIBA Full-Text 122-129
  Nina Schaffert; Klaus Mattes; Alfred O. Effenberg
This paper describes a concept for providing acoustic feedback to elite rowers during on-water training and its implementation in the training process. The ultimate aim was to improve mean boat velocity by reducing intra-cyclic interruptions in the boat acceleration. Acoustic feedback was assumed to enhance athletes' perception for modifying movement patterns and for control in technique training, because sound conveys time-critical structures subliminally; this is of crucial importance for the precision with which movements are modified to improve their execution. Advances in technology allow the design of innovative feedback systems that communicate feedback information audibly to athletes. The acoustic feedback system Sofirow was designed and field-tested with elite athletes. The device presents the boat acceleration-time trace audibly and online to athletes and coaches. The results showed a significant increase in mean boat velocity during the sections with acoustic feedback compared to the sections without. It is thus highly beneficial to implement acoustic feedback regularly in the training process of elite athletes. A behavioral dynamics approach was invoked to provide a theoretical basis for this concept.
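A toy sonification in this spirit, with an assumed pitch range and synthetic acceleration data rather than Sofirow's actual mapping, renders an acceleration-time trace as a sequence of pitch-modulated tone segments.

```python
# Toy sonification: map each acceleration sample to the pitch of a short tone.
import numpy as np

def sonify_acceleration(accel, sample_rate=44100, seg_s=0.05,
                        f_lo=220.0, f_hi=880.0):
    """Render each acceleration sample as a short tone whose pitch tracks it."""
    a = np.asarray(accel, dtype=float)
    norm = (a - a.min()) / (a.max() - a.min() + 1e-9)   # scale to 0..1
    freqs = f_lo + norm * (f_hi - f_lo)
    n = int(seg_s * sample_rate)
    t = np.arange(n) / sample_rate
    segments = [np.sin(2 * np.pi * f * t) for f in freqs]
    return np.concatenate(segments)

if __name__ == "__main__":
    # Synthetic intra-cyclic acceleration pattern over two stroke cycles.
    phase = np.linspace(0, 4 * np.pi, 40)
    accel = np.sin(phase) + 0.3 * np.sin(3 * phase)
    print(sonify_acceleration(accel).shape)
```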