[1]
SoundGuides: Adapting Continuous Auditory Feedback to Users
Late-Breaking Works: People and Contexts
/
Françoise, Jules
/
Chapuis, Olivier
/
Hanneton, Sylvain
/
Bevilacqua, Frédéric
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.2829-2836
© Copyright 2016 ACM
Summary: We introduce SoundGuides, a user-adaptable tool for auditory feedback on
movement. The system is based on an interactive machine learning approach in
which gestures and sounds are first designed jointly and learned jointly by the
system. The system can then automatically adapt the auditory feedback to any
new user, taking into account the particular way each user performs a given
gesture. SoundGuides is suitable for designing continuous auditory feedback
aimed at guiding users' movements and helping them perform a specific movement
consistently over time. Applications range from movement-based interaction
techniques to auditory-guided rehabilitation. We describe our system and report
a study that demonstrates a 'stabilizing effect' of our adaptive auditory
feedback method.
[2]
Human-Centred Machine Learning
Workshop Summaries
/
Gillies, Marco
/
Fiebrink, Rebecca
/
Tanaka, Atau
/
Garcia, Jérémie
/
Bevilacqua, Frédéric
/
Heloir, Alexis
/
Nunnari, Fabrizio
/
Mackay, Wendy
/
Amershi, Saleema
/
Lee, Bongshin
/
d'Alessandro, Nicolas
/
Tilmanne, Joëlle
/
Kulesza, Todd
/
Caramiaux, Baptiste
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.3558-3565
© Copyright 2016 ACM
Summary: Machine learning is one of the most important and successful techniques in
contemporary computer science. It involves the statistical inference of models
(such as classifiers) from data. It is often conceived in a very impersonal
way, with algorithms working autonomously on passively collected data. However,
this viewpoint hides considerable human work of tuning the algorithms,
gathering the data, and even deciding what should be modeled in the first
place. Examining machine learning from a human-centered perspective includes
explicitly recognizing this human work, as well as reframing machine learning
workflows based on situated human working practices, and exploring the
co-adaptation of humans and systems. A human-centered understanding of machine
learning in human context can lead not only to more usable machine learning
tools, but to new ways of framing learning computationally. This workshop will
bring together researchers to discuss these issues and suggest future research
questions aimed at creating a human-centered approach to machine learning.
[3]
GaussBox: Prototyping Movement Interaction with Interactive Visualizations
of Machine Learning
Interactivity Demos
/
Françoise, Jules
/
Bevilacqua, Frédéric
/
Schiphorst, Thecla
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.3667-3670
© Copyright 2016 ACM
Summary: We present GaussBox, a design support tool for prototyping movement
interaction using machine learning. In particular, we propose novel,
interactive visualizations that expose the behavior and internal values of
machine learning models rather than only their results. Such visualizations
have both pedagogical and creative potential, guiding users in the exploration,
experience, and craft of machine learning for interaction design.
[4]
Collective Sound Checks: Exploring Intertwined Sonic and Social Affordances
of Mobile Web Applications
Work-in-Progress: Poster/Demo Presentations
/
Schnell, Norbert
/
Robaszkiewicz, Sébastien
/
Bevilacqua, Frédéric
/
Schwarz, Diemo
Proceedings of the 2015 International Conference on Tangible, Embedded, and
Embodied Interaction
2015-01-15
p.685-690
© Copyright 2015 ACM
Summary: We present the Collective Sound Checks, an exploration of user scenarios
based on mobile web applications featuring motion-controlled sound that enable
groups of people to engage in spontaneous collaborative sound and music
performances. These new forms of musical expression strongly shift the focus of
design from human-computer interactions towards the emergence of
computer-mediated interactions between players based on sonic and social
affordances of
ubiquitous technologies. At this early stage, our work focuses on experimenting
with different user scenarios while observing the relationships between
different interactions and affordances.
[5]
Adaptive Gesture Recognition with Variation Estimation for Interactive
Systems
Special Issue on Activity Recognition for Interaction
/
Caramiaux, Baptiste
/
Montecchio, Nicola
/
Tanaka, Atau
/
Bevilacqua, Frédéric
ACM Transactions on Interactive Intelligent Systems
2015-01
v.4
n.4
p.18
© Copyright 2015 ACM
Summary: This article presents a gesture recognition and adaptation system for
human-computer interaction applications that goes beyond activity
classification and, as a complement to gesture labeling, characterizes how the
movement is executed. We describe a template-based recognition method that
simultaneously aligns the input gesture to the templates using a Sequential
Monte Carlo inference technique. Contrary to standard template-based methods
built on dynamic programming, such as Dynamic Time Warping, the algorithm has
an adaptation process that tracks gesture variation in real time. During
execution of the gesture, the method continuously updates the estimated
parameters and recognition results, which offers key advantages for continuous
human-machine interaction. The technique is evaluated in several different
ways: recognition and early recognition are evaluated on 2D onscreen pen
gestures; adaptation is assessed on synthetic data; and both early recognition
and adaptation are evaluated in a user study involving 3D free-space gestures.
The method is robust to noise, successfully adapts to parameter variation, and
performs recognition as well as or better than non-adapting offline
template-based methods.
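As an illustration of the alignment-plus-adaptation idea above, here is a
minimal particle-filter sketch in Python/numpy. It is a reconstruction under
stated assumptions rather than the authors' implementation: the state model (a
phase and a local speed per particle), the priors, and the noise levels are all
hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def make_particles(n):
    # state per particle: [phase within the template (0..1), local speed scale]
    return np.stack([rng.uniform(0.0, 0.05, n),         # phase prior: near start
                     rng.normal(1.0, 0.1, n)], axis=1)  # speed prior: around 1.0

def step(particles, template, obs, dt=1.0 / 200, sigma=0.1):
    """One filtering step: propagate, weight by the observation, resample."""
    n = len(particles)
    # propagate: phase advances by speed * dt, plus diffusion noise
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0, 0.005, n)
    particles[:, 1] += rng.normal(0, 0.02, n)
    particles[:, 0] = np.clip(particles[:, 0], 0.0, 1.0)
    # weight: Gaussian likelihood of the observation given the template value
    idx = (particles[:, 0] * (len(template) - 1)).astype(int)
    w = np.exp(-0.5 * ((template[idx] - obs) / sigma) ** 2)
    w /= w.sum()
    # weighted means give the continuously updated alignment and variation
    phase_est, speed_est = w @ particles
    # multinomial resampling keeps the particle set concentrated
    particles = particles[rng.choice(n, size=n, p=w)]
    return particles, phase_est, speed_est

# usage: follow a gesture performed 20% faster than the stored template
template = np.sin(np.linspace(0, np.pi, 200))
particles = make_particles(500)
for t in range(150):
    obs = np.sin(np.pi * 1.2 * t / 200)  # the sped-up performance
    particles, phase, speed = step(particles, template, obs)
print(phase, speed)  # the speed estimate drifts toward ~1.2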
[6]
Using sound in multi-touch interfaces to change materiality and touch
behavior
/
Tajadura-Jiménez, Ana
/
Liu, Bin
/
Bianchi-Berthouze, Nadia
/
Bevilacqua, Frédéric
Proceedings of the 8th Nordic Conference on Human-Computer Interaction
2014-10-26
p.199-202
© Copyright 2014 ACM
Summary: Current developments in multimodal interfaces allow us to interact with
digitally represented objects. Sadly, these representations are often poor due
to technical limitations in representing some of their sensory properties. Here
we explore the possibility of overcoming these limitations by exploiting
multisensory integration processes and propose a sound-based interaction
technique to alter the perceived materiality of a surface being touched and to
shape users' touch behavior. The latter can be seen both as a cue of, and as a
means to reinforce, the altered perception. We designed a prototype that
dynamically alters the texture-related sound feedback based on touch behavior,
as in natural surface touch interactions. A user study showed that the
frequency of the sound feedback alters texture perception (coldness and
material type) and touch behavior (velocity and pressure). We conclude by
discussing lessons learnt from this work in terms of HCI applications and
questions opened by this research.
[7]
Demos
NIME 2014: New Interfaces for Musical Expression
2014-06-30
p.25
© Copyright 2014 Authors
16-CdS: A Surface Controller for the Simultaneous Manipulation of Multiple Analog Components
+ Dominguez, Carlos
B.O.M.B. -- Beat Of Magic Box: Stand-Alone Synthesizer Using Wireless Synchronization System For Musical Session and Performance
+ Nakanishi, Yoshihito
+ Matsumura, Seiichiro
+ Arakawa, Chuichi
CloudOrch: A Portable SoundCard in the Cloud
+ Hindle, Abram
Engravings for Prepared Snare Drum, iPad, and Computer
+ Polashek, Timothy
+ Meyer, Brad
Funky Sole Music: Gait Recognition and Adaptive Mapping
+ Nymoen, Kristian
+ Song, Sichao
+ Hafting, Yngve
+ Torresen, Jim
Probabilistic Models for Designing Motion and Sound Relationships
+ Françoise, Jules
+ Schnell, Norbert
+ Borghesi, Riccardo
+ Bevilacqua, Frédéric
Radear: A Tangible Spinning Music Sequencer
+ Gabana, Daniel
+ McPherson, Andrew
Rich Contacts: Corpus-Based Convolution of Piezo-Captured Audio Gestures for Enhanced Musical Expression
+ Schwarz, Diemo
+ Harker, Alex
+ Tremblay, Pierre Alexandre
Rule-based Performative Synthesis of Sung Syllables
+ Feugère, Lionel
+ d'Alessandro, Christophe
The Well-Sequenced Synthesizer
+ Pereira, Luisa
[8]
Probabilistic Models for Designing Motion and Sound Relationships
Papers: Machine Learning Applied
/
Françoise, Jules
/
Schnell, Norbert
/
Borghesi, Riccardo
/
Bevilacqua, Frédéric
NIME 2014: New Interfaces for Musical Expression
2014-06-30
p.45
© Copyright 2014 Authors
Summary: We present a set of probabilistic models that support the design of movement
and sound relationships in interactive sonic systems. We focus on a
mapping-by-demonstration approach in which the relationships between motion and
sound are defined by a machine learning model that learns from a set of user
examples. We describe four probabilistic models with complementary
characteristics in terms of multimodality and temporality. We illustrate the
practical use of each of the four models with a prototype application for sound
control built using our Max implementation.
[9]
Moco Panel discussion on Movement and Computing
Panels
/
Bevilacqua, Frédéric
/
Alaoui, Sarah Fdili
/
Françoise, Jules
/
Pasquier, Philippe
/
Schiphorst, Thecla
NIME 2014: New Interfaces for Musical Expression
2014-06-30
p.73
© Copyright 2014 Authors
Summary: The Moco panel bridges interdisciplinary communities in movement, computing,
music and interaction. We will present the outcomes of a workshop called
MOCO 2014 (Movement and Computing), premiering at IRCAM in Paris in June 2014.
Although the primary target of MOCO is movement and computing, we address a
community that overlaps with the NIME community, sharing topics on
expressivity, embodied interaction, interactive machine learning, compositional
modeling and generative systems to name a few. We are interested in creating a
dialogue between researchers and artists involved in these communities, as well
as the larger community interested in the intersection between arts, science
and technology. The panel will include contributors to MOCO alongside
researchers and artists in the NIME community that explore the space between
sound and movement. Our goal is to share research concepts and to develop
future relationships that will be beneficial to both communities.
[10]
Vocalizing dance movement for interactive sonification of Laban Effort
Factors
Performing interactions
/
Françoise, Jules
/
Alaoui, Sarah Fdili
/
Schiphorst, Thecla
/
Bevilacqua, Frédéric
Proceedings of DIS'14: Designing Interactive Systems
2014-06-21
v.1
p.1079-1082
© Copyright 2014 ACM
Summary: We investigate the use of interactive sound feedback for dance pedagogy
based on the practice of vocalizing while moving. Our goal is to allow dancers
to access a greater range of expressive movement qualities through
vocalization. We propose a methodology for the sonification of Effort Factors,
as defined in Laban Movement Analysis, based on vocalizations performed by
movement experts. Drawing on the experiential outcomes of an exploratory
workshop, we propose a set of design guidelines that can be applied to
interactive sonification systems for learning to perform Laban Effort Factors
in a dance pedagogy context.
[11]
Fluid gesture interaction design: Applications of continuous recognition for
the design of modern gestural interfaces
/
Zamborlin, Bruno
/
Bevilacqua, Frédéric
/
Gillies, Marco
/
D'Inverno, Mark
ACM Transactions on Interactive Intelligent Systems
2014-01
v.3
n.4
p.22
© Copyright 2014 ACM
Summary: This article presents Gesture Interaction DEsigner (GIDE), an innovative
application for gesture recognition. Instead of recognizing gestures only after
they have been entirely completed, as in classic gesture recognition systems,
GIDE exploits the full potential of gestural interaction by tracking gestures
continuously and synchronously, allowing users both to control the target
application moment to moment and to receive immediate, synchronous feedback
about the system's recognition states. By this means, they quickly learn how to
interact with the system and develop better performances. Furthermore, rather
than making users learn the predefined gestures of others, GIDE allows users to
design their own gestures, making interaction more natural and allowing
applications to be tailored to users' specific needs. We demonstrate these
qualities, which combine to provide fluid gesture interaction design, through
evaluations with a range of performers and artists.
[12]
Gesture-based control of physical modeling sound synthesis: a
mapping-by-demonstration approach
Demos
/
Françoise, Jules
/
Schnell, Norbert
/
Bevilacqua, Frédéric
Proceedings of the 2013 ACM International Conference on Multimedia
2013-10-21
p.447-448
© Copyright 2013 ACM
Summary: We address the issue of mapping between gesture and sound for gesture-based
control of physical modeling sound synthesis. We propose an approach called
mapping by demonstration, allowing users to design the mapping by performing
gestures while listening to sound examples. The system is based on a multimodal
model able to learn the relationships between gestures and sounds.
[13]
A multimodal probabilistic model for gesture-based control of sound synthesis
Posters
/
Françoise, Jules
/
Schnell, Norbert
/
Bevilacqua, Frédéric
Proceedings of the 2013 ACM International Conference on Multimedia
2013-10-21
p.705-708
© Copyright 2013 ACM
Summary: In this paper, we propose a multimodal approach to creating the mapping
between gesture and sound in interactive music systems. Specifically, we
propose a multimodal HMM to jointly model the gesture and sound parameters. Our
approach is compatible with a learning method that allows users to define
gesture-sound relationships interactively. We describe an implementation of
this method for the control of physical modeling sound synthesis. Our model
shows promise for capturing expressive gesture variations while guaranteeing a
consistent relationship between gesture and sound.
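To make the multimodal idea concrete, here is a heavily simplified Python/numpy
sketch (our assumptions throughout, not the paper's code). It keeps the core of
the approach (states fitted on joint gesture-sound frames from one
demonstration, with sound parameters regenerated from gesture input via
per-state Gaussian conditionals) but drops the HMM transition model, weighting
states per frame by the gesture marginal likelihood instead.

import numpy as np

def fit_states(gesture, sound, n_states=10, reg=1e-3):
    """Evenly segment one joint demonstration; fit a Gaussian per state."""
    joint = np.hstack([gesture, sound])            # frames x (g_dim + s_dim)
    segments = np.array_split(joint, n_states)
    means = np.array([s.mean(axis=0) for s in segments])
    covs = np.array([np.cov(s, rowvar=False) + reg * np.eye(joint.shape[1])
                     for s in segments])
    return means, covs

def sound_from_gesture(means, covs, g_dim, g):
    """Per-frame regression: weight each state by the likelihood of the
    gesture under its gesture marginal, then blend the states' conditional
    means of sound given gesture."""
    w = np.empty(len(means))
    cond = np.empty((len(means), means.shape[1] - g_dim))
    for k in range(len(means)):
        mg, ms = means[k, :g_dim], means[k, g_dim:]
        Cgg = covs[k, :g_dim, :g_dim]
        Csg = covs[k, g_dim:, :g_dim]
        d = g - mg
        sol = np.linalg.solve(Cgg, d)
        w[k] = np.exp(-0.5 * d @ sol) / np.sqrt(np.linalg.det(Cgg))
        cond[k] = ms + Csg @ sol
    w /= w.sum()
    return w @ cond

# usage: a toy 1-D gesture curve drives a 1-D sound parameter
t = np.linspace(0, 1, 100)[:, None]
gesture, sound = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
means, covs = fit_states(gesture, sound)
print(sound_from_gesture(means, covs, 1, gesture[25]))  # close to 0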
[14]
Beyond recognition: using gesture variation for continuous interaction
alt.chi: Design Lessons
/
Caramiaux, Baptiste
/
Bevilacqua, Frédéric
/
Tanaka, Atau
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.2109-2118
© Copyright 2013 ACM
Summary: Gesture-based interaction is widespread in touch screen interfaces. The goal
of this paper is to tap the richness of expressive variation in gesture to
facilitate continuous interaction. We achieve this through novel techniques of
adaptation and estimation of gesture characteristics. We describe two
experiments. The first aims at understanding whether users can control certain
gestural characteristics and if that control depends on gesture vocabulary. The
second study uses a machine learning technique based on particle filtering to
simultaneously recognize and measure variation in a gesture. With this
technology, we create a gestural interface for a playful photo processing
application. From these two studies, we show that 1) multiple characteristics
can be varied independently in slower gestures (Study 1), and 2) users find
gesture-only interaction less pragmatic but more stimulating than traditional
menu-based systems (Study 2).
[15]
SIG NIME: music, technology, and human-computer interaction
SIGs
/
Bevilacqua, Frédéric
/
Fels, Sidney
/
Jensenius, Alexander R.
/
Lyons, Michael J.
/
Schnell, Norbert
/
Tanaka, Atau
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.2529-2532
© Copyright 2013 ACM
Summary: This SIG intends to investigate the ongoing dialogue between music
technology and the field of human-computer interaction. Our specific aims are
to consider major findings of musical interface research over recent years and
discuss how these might best be conveyed to CHI researchers interested but not
yet active in this area, as well as to consider how to stimulate future
collaborations between music technology and CHI research communities.
[16]
De-Mo: designing action-sound relationships with the mo interfaces
Interactivity: exploration
/
Bevilacqua, Frédéric
/
Schnell, Norbert
/
Rasamimanana, Nicolas
/
Bloit, Julien
/
Fléty, Emmanuel
/
Caramiaux, Baptiste
/
Françoise, Jules
/
Boyer, Eric
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.2907-2910
© Copyright 2013 ACM
Summary: The Modular Musical Objects (MO) are an ensemble of tangible interfaces and
software modules for creating novel musical instruments or for augmenting
objects with sound. In particular, the MOs allow for designing action-sound
relationships and behaviors based on the interaction with tangible objects or
free body movements.
Such interaction scenarios can be inspired by the affordances of particular
objects (e.g., a ball or a table), by interaction metaphors based on the
playing techniques of musical instruments, or by games. We describe specific
examples of action-sound relationships made possible by the MO software
modules, which take advantage of machine learning techniques.
[17]
Chiseling bodies: an augmented dance performance
Interactivity: exploration
/
Alaoui, Sarah Fdili
/
Jacquemin, Christian
/
Bevilacqua, Frédéric
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.2915-2918
© Copyright 2013 ACM
Summary: Chiseling Bodies is an interactive augmented dance performance in which a
dancer interacts with abstract visuals: large mass-spring systems whose dynamic
behaviors echo the dancer's movement qualities.
[18]
Movement qualities as interaction modality
Designing for the body
/
Alaoui, Sarah Fdili
/
Caramiaux, Baptiste
/
Serrano, Marcos
/
Bevilacqua, Frédéric
Proceedings of DIS'12: Designing Interactive Systems
2012-06-11
p.761-769
© Copyright 2012 ACM
Summary: In this paper, we explore the use of movement qualities as an interaction
modality. The notion of movement qualities is widely used in dance practice and
can be understood as how the movement is performed, independently of its
specific trajectory in space. We implemented our approach in the context of an
artistic installation called A light touch. This installation invites the
participant to interact with a moving light spot reacting to the hand movement
qualities. We conducted a user experiment showing that interaction based on
movement qualities tends to enhance the user experience, favoring explorative
and expressive usage.
[19]
Extracting Human Expression For Interactive Composition with the Augmented
Violin
Paper Session VII (Augmented Instruments II)
/
Kimura, Mari
/
Rasamimanana, Nicolas
/
Bevilacqua, Frédéric
/
Schnell, Norbert
/
Zamborlin, Bruno
/
Fléty, Emmanuel
NIME 2012: New Interfaces for Musical Expression
2012-05-21
p.279
Keywords: Augmented Violin, Gesture Follower, Interactive Performance
© Copyright 2012 Authors
Summary: As a 2010 Artist in Residence in Musical Research at IRCAM, Mari Kimura used
the Augmented Violin to develop new compositional approaches, and new ways of
creating interactive performances [1]. She contributed her empirical and
historical knowledge of violin bowing technique, working with the Real Time
Musical Interactions Team at IRCAM. Thanks to this residency, her ongoing
long-distance collaboration with the team since 2007 dramatically accelerated,
and led to solving several compositional and calibration issues of the Gesture
Follower (GF) [2]. Kimura was also the first artist to develop projects between
the two teams at IRCAM, using OMAX (Musical Representation Team) with GF. In
the past year, performance with the Augmented Violin has also expanded into
larger-scale interactive audio/visual projects. In this paper, we
report on the various techniques developed for the Augmented Violin and
compositions by Kimura using them, offering specific examples and scores.
[20]
The urban musical game: using sport balls as musical interfaces
Interactivity presentations
/
Rasamimanana, Nicolas
/
Bevilacqua, Frédéric
/
Bloit, Julien
/
Schnell, Norbert
/
Fléty, Emmanuel
/
Cera, Andrea
/
Petrevski, Uros
/
Frechin, Jean-Louis
Extended Abstracts of ACM CHI'12 Conference on Human Factors in Computing
Systems
2012-05-05
v.2
p.1027-1030
© Copyright 2012 ACM
Summary: We present Urban Musical Game, an installation using augmented sports balls
to manipulate and transform an interactive music environment. The interaction
is based on playing techniques, a concept borrowed from traditional musical
instruments and applied here to non-musical objects.
[21]
Gestural Embodiment of Environmental Sounds: an Experimental Study
/
Caramiaux, Baptiste
/
Susini, Patrick
/
Bianco, Tommaso
/
Bevilacqua, Frédéric
/
Houix, Olivier
/
Schnell, Norbert
/
Misdariis, Nicolas
NIME 2011: New Interfaces for Musical Expression
2011-05-30
p.144-148
Keywords: Embodiment, Environmental Sound Perception, Listening, Gesture Sound
Interaction
© Copyright 2011 Authors
Summary: In this paper we present an experimental study concerning gestural
embodiment of environmental sounds in a listening context. The presented work
is part of a project aiming at modeling movement-sound relationships, with the
end goal of proposing novel approaches for designing musical instruments and
sounding objects. The experiment is based on sound stimuli corresponding to
"causal" and "non-causal" sounds. It is divided into a performance phase and an
interview. The experiment is designed to investigate possible correlation
between the perception of the "causality" of environmental sounds and different
gesture strategies for the sound embodiment. In analogy with the perception of
the sounds' causality, we propose to distinguish gestures that "mimic" a
sound's cause and gestures that "trace" a sound's morphology following temporal
sound characteristics. Results from the interviews show that, first, our causal
sound database leads to consistent descriptions of the action at the origin of
the sound, and participants mimic this action. Second, non-causal sounds lead
to inconsistent metaphoric descriptions of the sound, and participants make
gestures following the sound's "contours". Quantitatively, the results show
that gesture variability is higher for causal sounds than for non-causal
sounds.
[22]
Sound Selection by Gestures
/
Caramiaux, Baptiste
/
Bevilacqua, Frédéric
/
Schnell, Norbert
NIME 2011: New Interfaces for Musical Expression
2011-05-30
p.329-330
Keywords: Query by Gesture, Time Series Analysis, Sonic Interaction
© Copyright 2011 Authors
Summary: This paper presents a prototypical tool for sound selection driven by
users' gestures. Sound selection by gesture is a particular case of "query by
content" in multimedia databases. Gesture-to-sound matching is based on
computing the similarity between the temporal evolution of gesture and sound
parameters. The tool provides three algorithms for matching a gesture query to
a sound target. The system lends itself to several applications in sound
design, virtual instrument design, and interactive installations.
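One plausible instantiation of such temporal matching (an assumption for
illustration; the paper presents three algorithms, not necessarily this one) is
to rank the database by dynamic time warping cost between the gesture's feature
curve and each sound's descriptor curve, as in the Python/numpy sketch below.

import numpy as np

def dtw_cost(a, b):
    """Classic O(len(a) * len(b)) DTW on 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)  # length-normalized alignment cost

def select_sound(gesture_curve, sound_curves):
    """Return the index of the best-matching sound descriptor curve."""
    costs = [dtw_cost(gesture_curve, c) for c in sound_curves]
    return int(np.argmin(costs))

# usage: a rising gesture retrieves the sound with rising loudness
rising, falling = np.linspace(0, 1, 50), np.linspace(1, 0, 60)
print(select_sound(np.linspace(0, 1, 40) ** 1.2, [falling, rising]))  # -> 1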
[23]
Playing the "MO" -- Gestural Control and Re-Embodiment of Recorded Sound and
Music
/
Schnell, Norbert
/
Bevilacqua, Frédéric
/
Rasamimanana, Nicolas
/
Bloit, Julien
/
Guédy, Fabrice
/
Fléty, Emmanuel
NIME 2011: New Interfaces for Musical Expression
2011-05-30
p.535-536
Keywords: Music, Gesture, Interface, Wireless Sensors, Gesture Recognition, Audio
Processing, Design, Interaction
© Copyright 2011 Authors
Summary: We present a set of applications realized with the MO modular wireless
motion capture device and a set of software components integrated into Max/MSP.
These applications, created in the context of artistic projects, music
pedagogy, and research, allow for the gestural re-embodiment of recorded sound
and music. They demonstrate a large variety of "playing techniques" in musical
performance, using wireless motion sensor modules in conjunction with gesture
analysis and real-time audio processing components.
[24]
Modular musical objects towards embodied control of digital music
Audio and video
/
Rasamimanana, Nicolas
/
Bevilacqua, Frédéric
/
Schnell, Norbert
/
Guédy, Fabrice
/
Fléty, Emmanuel
/
Maestracci, Côme
/
Zamborlin, Bruno
/
Frechin, Jean-Louis
/
Petrevski, Uros
Proceedings of the 5th International Conference on Tangible, Embedded, and
Embodied Interaction
2011-01-22
p.9-12
© Copyright 2011 ACM
Summary: We present an ensemble of tangible objects and software modules designed for
musical interaction and performance. The tangible interfaces form an ensemble
of connected objects communicating wirelessly. A central concept is to let
users determine the final musical function of the objects, favoring
customization, assembly, and repurposing. This might imply combining the
wireless interfaces with existing everyday objects or musical instruments.
Moreover, gesture analysis and recognition modules allow the users to define
their own action/motion for the control of sound parameters. Various sound
engines and interaction scenarios were built and experimented with. Some
examples developed in a music pedagogy context are described.
[25]
Continuous Realtime Gesture Following and Recognition
Gesture Recognition
/
Bevilacqua, Frédéric
/
Zamborlin, Bruno
/
Sypniewski, Anthony
/
Schnell, Norbert
/
Guédy, Fabrice
/
Rasamimanana, Nicolas H.
GW 2009: Gesture Workshop
2009-02-25
p.73-84
Keywords: gesture recognition; gesture following; Hidden Markov Model; music;
interactive systems
© Copyright 2009 Springer-Verlag
Summary: We present an HMM-based system for real-time gesture analysis. The system
continuously outputs parameters describing the gesture's time progression and
its likelihood. These parameters are computed by comparing the performed
gesture with stored reference gestures. The method relies on a detailed
modeling of multidimensional temporal curves. Compared to standard HMM systems,
the learning procedure is simplified by using prior knowledge, allowing the
system to learn from a single example per class. Several applications have been
developed using this system in the context of music education, music and dance
performance, and interactive installations. Typically, the estimation of time
progression allows physical gestures to be synchronized with sound files by
time-stretching/compressing audio buffers or videos.
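To make the mechanism concrete, here is a minimal Python/numpy sketch of
one-example, left-to-right following (the class name, transition probabilities,
and emission width are our assumptions, not the published system): one state
per template frame, self/next/skip transitions, and Gaussian emissions centered
on the template values. Each incoming frame updates the forward distribution;
the expected state index gives the time progression, and the normalizer an
instantaneous likelihood.

import numpy as np

class Follower:
    def __init__(self, template, sigma=0.1, trans=(0.3, 0.5, 0.2)):
        self.template = np.asarray(template, dtype=float)
        self.sigma = sigma                 # emission width
        self.trans = trans                 # P(stay), P(advance 1), P(skip 1)
        self.alpha = np.zeros(len(self.template))
        self.alpha[0] = 1.0                # start at the template's beginning

    def step(self, obs):
        """Consume one frame; return (progression in [0, 1], frame likelihood)."""
        n = len(self.alpha)
        stay, advance, skip = self.trans
        # predict: left-to-right transitions
        pred = stay * self.alpha
        pred[1:] += advance * self.alpha[:-1]
        pred[2:] += skip * self.alpha[:-2]
        # update: Gaussian emission centered on each template frame
        emit = np.exp(-0.5 * ((self.template - obs) / self.sigma) ** 2)
        self.alpha = pred * emit
        lik = self.alpha.sum()             # instantaneous likelihood
        self.alpha /= lik
        progress = (self.alpha @ np.arange(n)) / (n - 1)
        return progress, lik

# usage: follow a gesture performed at roughly half speed
template = np.sin(np.linspace(0, np.pi, 100))
follower = Follower(template)
for t in range(60):
    progress, lik = follower.step(np.sin(np.pi * t / 200))  # half-speed input
print(progress)  # roughly 0.3: the follower tracks the slow performance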