| User Interface for Browsing Geotagged Data -- Design and Evaluation | | BIBAK | Full-Text | 1-11 | |
| Erika Reponen; Jaakko Keränen; Viljakaisa Aaltonen | |||
| The surface of the Earth is getting covered with geotagged data. We describe
a mobile application and UI that combines embodied interaction and a dynamic
GUI for browsing geotagged data. We present the design process and analyze
results from a user study. The UI is based on a dynamic grid visualization that
shows geotagged content from the places around the world where the user is
pointing. It shows a continuous and interactive flow of items, including
real-time content such as live videos. The application is aimed at
entertaining and serendipitous use. The study results show that usability and
intuitiveness were improved by providing an additional, familiar view and
controls; showing transitions between view modes; and enhancing the unfamiliar
views. Also, the content grid UI was found to be a good way to browse geotagged
data. Keywords: Geotagged data; Embodied interaction; User interface; Augmented reality | |||
| Towards Multimodal, Multi-party, and Social Brain-Computer Interfacing | | BIBAK | Full-Text | 12-17 | |
| Anton Nijholt | |||
| In this paper we identify developments that have led to the current interest
from computer scientists in Brain-Computer Interfacing (BCI). Non-disabled
users have become a target group for BCI applications. Unlike patients, they
are free to move and use their hands during the interaction with an
application. Therefore, BCI should be integrated into a
multimodal approach. Games are an important research area since shortcomings of
BCI can be translated into challenges in multimodal cooperative, competitive,
social and casual games. Keywords: Brain-Computer Interfacing; Human-Computer Interaction; Multimodal
interaction; Games | |||
| Brain-Computer Interfaces: Proposal of a Paradigm to Increase Output Commands | | BIBAK | Full-Text | 18-27 | |
| Ricardo Ron-Angevin; Francisco Velasco-Álvarez; Salvador Sancha-Ros | |||
| A BCI (Brain-Computer Interface) is based on the analysis of the brain
activity recorded during certain mental activities, to control an external
device. Some of these systems are based on discrimination of different mental
tasks, matching the number of mental tasks to the number of control commands
and providing the users with one to three commands. The main objective of this
paper is to introduce the navigation paradigm proposed by the University of
Málaga (UMA-BCI) which, using only two mental states, offers the user
several navigation commands to be used to control a virtual wheelchair in a
virtual environment (VE). Likewise, this paradigm can be used to provide
different control commands for interacting with videogames. To learn to
control the system through the new paradigm, subjects undergo progressive
training based on different VEs and games. Encouraging results from several
experiments demonstrate the usability of the paradigm. Keywords: Brain-Computer Interfaces (BCI); Motor Imagery; Navigation commands; Virtual
Environment (VE); Motivation; Games | |||
| Steady State Visual Evoked Potential Based Computer Gaming -- The Maze | | BIBA | Full-Text | 28-37 | |
| Nikolay Chumerin; Nikolay V. Manyakov; Adrien Combaz; Arne Robben; Marijn van Vliet; Marc M. Van Hulle | |||
| We introduce a game, called "The Maze", as a brain-computer interface (BCI) application in which an avatar is navigated through a maze by analyzing the player's steady-state visual evoked potential (SSVEP) responses recorded with electroencephalography (EEG). The same computer screen is used for displaying the game environment and for the visual stimulation. The algorithms for EEG data processing and SSVEP detection are discussed in depth. We propose system parameter values that provide an acceptable trade-off between game control accuracy and interactivity. | |||
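(The abstract above discusses algorithms for SSVEP detection in depth; as a hedged, illustrative sketch only -- not the authors' algorithm -- a common baseline picks the attended flicker target by comparing EEG spectral power at each stimulus frequency and its second harmonic. The sampling rate, window length, and target frequencies below are assumptions.)

```python
# Illustrative SSVEP baseline, not the paper's method. Assumed: one
# occipital EEG channel, 250 Hz sampling, four flicker targets.
import numpy as np
from scipy.signal import welch

FS = 250.0                          # sampling rate in Hz (assumed)
TARGETS = [8.0, 10.0, 12.0, 15.0]   # flicker frequencies in Hz (assumed)

def band_power(freqs, psd, f0, half_width=0.5):
    """Mean PSD in a narrow band around f0."""
    mask = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return psd[mask].mean()

def detect_ssvep(eeg_window):
    """eeg_window: 1-D array, e.g. 2-3 s of samples from an occipital channel.
    Returns the index of the most likely attended target."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=len(eeg_window))
    # Score each target by power at its fundamental plus second harmonic.
    scores = [band_power(freqs, psd, f) + band_power(freqs, psd, 2 * f)
              for f in TARGETS]
    return int(np.argmax(scores))
```

Published SSVEP systems often use multi-channel detectors such as canonical correlation analysis instead, which typically outperform this single-channel power comparison.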
| Single Value Devices | | BIBA | Full-Text | 38-47 | |
| Angelika Mader; Edwin Dertien; Dennis Reidsma | |||
| We live in a world of continuous information overflow, but the quality of
information and communication is suffering. Single value devices contribute to
information and communication quality by focussing on one explicit, relevant
piece of information. The information is decoupled from a computer and
represented in an object, integrated into daily life.
The contribution of this paper is on several levels: firstly, we identify single value devices as a class; secondly, we illustrate the class through examples in a survey; thirdly, we collect characterizations of single value devices into a taxonomy. The taxonomy also provides a collection of design choices that make it easier to find new combinations or alternatives, and that facilitate the design of new, meaningful, effective and working objects. Finally, we identify and discuss a number of issues that become relevant when stepping from experimental examples to commercializable products. | |||
| A Kinect-Based Natural Interface for Quadrotor Control | | BIBAK | Full-Text | 48-56 | |
| Andrea Sanna; Fabrizio Lamberti; Gianluca Paravati; Eduardo Andres Henao Ramirez; Federico Manuri | |||
| The evolution of input device technologies has led to the identification of
the natural user interface (NUI) as the next step in human-machine
interaction, following the shift from command-line interfaces (CLI) to
graphical user interfaces (GUI). The design of user interfaces requires a
careful mapping of complex user "actions" in order to make the human-computer
interaction (HCI) more intuitive, usable, and receptive to the user's needs: in
other words, more user-friendly and, why not, fun. NUIs constitute a direct
expression of mental concepts, and the naturalness and variety of gestures,
compared with traditional interaction paradigms, can also offer unique
opportunities for new and attractive forms of human-machine interaction. In
this paper, a Kinect-based NUI is presented; in particular, the proposed NUI
is used to control the AR.Drone quadrotor. Keywords: Natural User Interface; Kinect; Quadrotor control; Interactive systems | |||
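(The abstract describes mapping complex user "actions" onto quadrotor control. The sketch below is a hypothetical illustration of one such gesture-to-command mapping, not the paper's actual scheme; the joint coordinates, gain, and dead zone are assumptions.)

```python
# Hypothetical gesture-to-command mapping for a quadrotor NUI.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # Kinect camera-space coordinates in metres (assumed)
    y: float
    z: float

def clamp(v, lo=-1.0, hi=1.0):
    return max(lo, min(hi, v))

def gesture_to_command(left_hand, right_hand, torso, gain=2.0, dead_zone=0.05):
    """Map hand offsets from the torso to normalized commands in [-1, 1].
    The dead zone suppresses small, unintentional movements."""
    def axis(value):
        return 0.0 if abs(value) < dead_zone else clamp(gain * value)

    lift = (left_hand.y + right_hand.y) / 2.0 - torso.y   # raise hands -> climb
    tilt = right_hand.y - left_hand.y                     # lean hands -> roll
    reach = torso.z - (left_hand.z + right_hand.z) / 2.0  # push forward -> pitch
    return {"roll": axis(tilt), "pitch": axis(reach),
            "throttle": axis(lift), "yaw": 0.0}  # yaw omitted in this toy sketch
```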
| Smart Material Interfaces: A Vision | | BIBAK | Full-Text | 57-62 | |
| Andrea Minuto; Dhaval Vyas; Wim Poelman; Anton Nijholt | |||
| In this paper, we introduce a vision called Smart Material Interfaces
(SMIs), which takes advantage of the latest generation of engineered materials
that have a special property termed "smart": they are capable of changing their
physical properties, such as shape, size and color, and can be controlled by
certain stimuli (light, potential difference, temperature and so on). We
describe SMIs in relation to Tangible User Interfaces (TUIs) to convey their
usefulness and provide a better understanding of SMIs. Keywords: Tangible User Interfaces; Ubiquitous Computing; Smart Material Interfaces | |||
| User-Centered Evaluation of the Virtual Binocular Interface | | BIBAK | Full-Text | 63-72 | |
| Donald Glowinski; Maurizio Mancini; Paolo Coletta; Simone Ghisio; Carlo Chiorri; Antonio Camurri; Gualtiero Volpe | |||
| This paper describes a full-body pointing interface based on mimicking
the use of binoculars, the Virtual Binocular Interface. This interface is a
component of the interactive installation "Viaggiatori di Sguardo", located at
Palazzo Ducale, Genova, Italy, and visited by more than 5,000 people so far.
This paper focuses on the evaluation of such an interface. Keywords: Virtual Reality; Interactive Museum Applications and Guides; Novel
Interaction Technologies | |||
| Does Movement Recognition Precision Affect the Player Experience in Exertion Games? | | BIBAK | Full-Text | 73-82 | |
| Jasmir Nijhar; Nadia Bianchi-Berthouze; Gemma Boguslawski | |||
| A new generation of exertion game controllers is emerging with a high level
of movement recognition precision, which can be described as the ability to
accurately discriminate between complex movements with regard to gesture
recognition and, in turn, provide better on-screen feedback. These controllers
offer the possibility to create a more realistic set of controls but they may
require more complex coordination skills. This study examines the effect of
increased movement recognition precision on the exertion gaming experience. The
results showed that increasing the level of movement recognition precision led
to higher levels of immersion. We argue that the reasons why players are more
immersed vary on the basis of their individual motivations for playing (i.e. to
'relax' or to 'achieve'). Keywords: computer games; control devices; movement recognition precision; exertion
games; immersion | |||
| Elckerlyc in Practice -- On the Integration of a BML Realizer in Real Applications | | BIBAK | Full-Text | 83-92 | |
| Dennis Reidsma; Herwin van Welbergen | |||
| Building a complete virtual human application from scratch is a daunting
task, and it makes sense to rely on existing platforms for behavior generation.
When building such an interactive application, one needs to be able to adapt
and extend the capabilities of the virtual human offered by the platform,
without having to make invasive modifications to the platform itself. This
paper describes how Elckerlyc, a novel platform for controlling a virtual
human, offers these possibilities. Keywords: Virtual Humans; Embodied Conversational Agents; Architecture; System
Integration; Customization | |||
| Evaluation of the Mobile Orchestra Explorer Paradigm | | BIBAK | Full-Text | 93-102 | |
| Donald Glowinski; Maurizio Mancini; Alberto Massari | |||
| The Mobile Orchestra Explorer paradigm enables active experience of
prerecorded music: users can navigate and express themselves in a shared
(physical or virtual) orchestra space, populated by the sections of a
prerecorded piece of music. The user moves in a room with his/her mobile phone
in his/her hand: the music performed by the orchestra sections is rendered
according to the user's position and movement. In this paper we present an
evaluation study conducted during the Festival of Science 2010 in Genova,
Italy. Forty participants interacted with the Mobile Orchestra Explorer and
filled in questionnaires about their active music listening experience. Keywords: mobile; orchestra; paradigm; evaluation; explore; active listening | |||
| As Wave Impels a Wave Active Experience of Cultural Heritage and Artistic Content | | BIBAK | Full-Text | 103-112 | |
| Francesca Cavallero; Antonio Camurri; Corrado Canepa; Nicola Ferrari; Barbara Mazzarino; Gualtiero Volpe | |||
| This paper presents the interactive installation "Come un'Onda premuta da
un'Onda" ("As Wave impels a Wave", a citation from Ovidio's "Metamorphoses" as
a metaphor of time). The installation, presented in its early version at the
Festival della Scienza 2009, introduces visitors to the rich history and
artistic content of a monumental building: a virtual walk through time. The
core idea is to support an active experience based on novel paradigms of
interaction and narration. The active experience is grounded in an
informational environment characterized by an invisible "sound scent" map. The
research is partially supported by the EU FP7 ICT I-SEARCH project. Keywords: active experience of cultural and artistic content; multimodal audiovisual
content search; Mixed Reality; museum ecology | |||
| An Intelligent Instructional Tool for Puppeteering in Virtual Shadow Puppet Play | | BIBAK | Full-Text | 113-122 | |
| Sirot Piman; Abdullah Zawawi Talib | |||
| Shadow puppet play has been a popular storytelling tradition for many
centuries in many parts of Asia. In this paper, we present an initial idea and
architecture of a software tool that allows people to experience shadow puppet
play in the virtual world. Normally, a virtual puppet show is controlled
automatically by the application. However, our tool allows the user to create
a storyline and control the puppets directly in real time with a special
device that can improve the skill of a puppeteer. This paper focuses in detail
on the design and issues of one component of the software tool, the
intelligent instructional tool for puppeteering in virtual shadow puppet play.
The results of the preliminary evaluation show that the tool helps users
effectively and yields a high degree of satisfaction among the respondents,
who include professional puppeteers and potential users. Keywords: Shadow puppet play; virtual puppet; virtual storytelling | |||
| A Tabletop Board Game Interface for Multi-user Interaction with a Storytelling System | | BIBA | Full-Text | 123-128 | |
| Thijs Alofs; Mariët Theune; Ivo Swartjes | |||
| The Interactive Storyteller is an interactive storytelling system with a multi-user tabletop interface. Our goal was to design a generic framework combining emergent narrative, where stories emerge from the actions of autonomous intelligent agents, with the social aspects of traditional board games. As a visual representation of the story world, a map is displayed on a multi-touch table. Users can interact with the story by touching an interface on the table surface with their fingers and by moving tangible objects that represent the characters. This type of interface, where multiple users are gathered around a table with equal access to the characters and the story world, offers a more social setting for interaction than most existing interfaces for AI-based interactive storytelling. | |||
| Design of an Interactive Playground Based on Traditional Children's Play | | BIBA | Full-Text | 129-138 | |
| Daniel Tetteroo; Dennis Reidsma; Betsy van Dijk; Anton Nijholt | |||
| This paper presents a novel method for interactive playground design, based on traditional children's play. This method combines the rich interaction possibilities of computer games with the physical and open-ended aspects of traditional children's games. The method is explored by the development of a prototype interactive playground, which has been implemented and evaluated over two iterations. | |||
| Designing a Museum Multi-touch Table for Children | | BIBAK | Full-Text | 139-148 | |
| Betsy van Dijk; Frans van der Sluis; Anton Nijholt | |||
| Tangible user interfaces allow children to draw on their real-world
experience with multimodal human interaction when interacting with
digital information. In this paper we describe a model for
tangible user interfaces that focuses mainly on the user experience during
interaction. This model is related to other models and used to design a
multi-touch tabletop application for a museum. We report on our first
experiences with this museum application. Keywords: tangible user interfaces; multi-touch table; tabletop; information access;
children | |||
| Automatic Recognition of Affective Body Movement in a Video Game Scenario | | BIBAK | Full-Text | 149-159 | |
| Nikolaos Savva; Nadia Bianchi-Berthouze | |||
| This study aims at recognizing the affective states of players from
non-acted, non-repeated body movements in the context of a video game scenario.
A motion capture system was used to collect the movements of the participants
while playing a Nintendo Wii tennis game. A combination of body movement
features and a machine learning technique was then used to
automatically recognize emotional states from body movements. Our system was
then tested for its ability to generalize to new participants and to new body
motion data using a sub-sampling validation technique. To train and evaluate
our system, online evaluation surveys were created using the body movements
collected from the motion capture system and human observers were recruited to
classify them into affective categories. The results showed that observer
agreement levels are above chance level and the automatic recognition system
achieved recognition rates comparable to the observers' benchmark. Keywords: Body movement; automatic emotion recognition; exertion game | |||
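(To make the abstract's pipeline concrete, here is a minimal sketch under assumed data shapes: toy per-movement features, toy observer labels, and a leave-participants-out split mirroring the generalization test described. It is not the authors' feature set or classifier.)

```python
# Toy version of the recognition pipeline described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))          # 120 movements x 8 motion features (toy)
y = rng.integers(0, 4, size=120)       # 4 affective categories from observers (toy)
groups = np.repeat(np.arange(12), 10)  # participant id per movement (toy)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Hold out whole participants, so the model is tested on unseen players.
scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=4))
print("per-fold accuracy:", scores.round(2))
```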
| Towards Mimicry Recognition during Human Interactions: Automatic Feature Selection and Representation | | BIBAK | Full-Text | 160-169 | |
| Xiaofan Sun; Anton Nijholt; Maja Pantic | |||
| During face-to-face interpersonal interaction people have a tendency to
mimic each other, that is, they change their own behaviors to adjust to the
behavior expressed by a partner. In this paper we describe how behavioral
information expressed between two interlocutors can be used to detect and
identify mimicry and improve recognition of interrelationship and affect
between them in a conversation. To automatically extract this behavioral
information and integrate it into a mimicry detection framework for
affective computing, this paper addresses the main challenge: mimicry
representation in terms of optimal behavioral feature extraction and automatic
integration. Keywords: mimicry representation; human-human interaction; human behavior analysis;
motion energy | |||
| A Playable Evolutionary Interface for Performance and Social Engagement | | BIBAK | Full-Text | 170-182 | |
| Insook Choi; Robin Bargar | |||
| An advanced interface for playable media is presented for enabling both
musical performance and multiple agents' play. A large format capacitive
sensing panel provides a surface to project visualizations of swarm simulations
as well as the sensing mechanism for introducing human players' actions to the
simulation. An evolutionary software interface is adapted to this project by
integrating swarm algorithms into playable interface functionality with
continuous auditory feedback. A methodology for using swarm agents' information
to model sound synthesis is presented. Relevant feature extraction techniques
are discussed along with design criteria for choosing them. The novel
configuration of the installation facilitates a unique interaction paradigm
that sustains social engagement, seamlessly alternating between cooperative and
competitive play modes. Keywords: evolutionary interface; agents; swarms simulation; sound model; interactive;
playable media; social engagement | |||
| Social Interaction in a Cooperative Brain-Computer Interface Game | | BIBAK | Full-Text | 183-192 | |
| Michel Obbink; Hayrettin Gürkök; Danny Plass-Oude Bos; Gido Hakvoort; Mannes Poel; Anton Nijholt | |||
| Does using a BCI influence the social interaction between people when
playing a cooperative game? By measuring the amount of speech, utterances,
instrumental gestures and empathic gestures during a cooperative game where two
participants had to reach a certain goal, and by questioning participants about
their own experience afterwards, this study attempts to provide answers to this
question. The results showed that social interaction changed when using a BCI
compared to using a mouse: there were more utterances and empathic
gestures. This indicates that the participants reacted more to the higher
difficulty of the BCI selection method. Participants also reported that they
felt they cooperated better during the use of the mouse. Keywords: brain-computer interfaces; social interaction; games; cooperation | |||
| LUCIA: An Open Source 3D Expressive Avatar for Multimodal h.m.i. | | BIBAK | Full-Text | 193-202 | |
| G. Riccardo Leone; Giulio Paci; Piero Cosi | |||
| LUCIA is an MPEG-4 facial animation system developed at ISTC-CNR. It works
on standard Facial Animation Parameters and speaks with the Italian version of
FESTIVAL TTS. To achieve an emotive/expressive talking head, LUCIA was built
from real human data physically extracted by an ELITE optic-tracking movement
analyzer. LUCIA can copy a real human being by reproducing the movements of
passive markers positioned on the face and recorded by the ELITE device, or can
be driven by an emotional XML tagged input text, thus realizing true
audio/visual emotive/expressive synthesis. Synchronization between visual and
audio data is very important in order to create the correct WAV and FAP files
needed for the animation. LUCIA's voice is based on the ISTC Italian version of
FESTIVAL-MBROLA packages, modified by means of an appropriate APML/VSML tagged
language. LUCIA is available in two different versions: an open source
framework and a "work in progress" WebGL version. Keywords: talking head; TTS; facial animation; mpeg4; 3D avatar; virtual agent;
affective computing; LUCIA; FESTIVAL | |||
| The AnimaTricks System: Animating Intelligent Agents from High-Level Goal Declarations | | BIBAK | Full-Text | 203-208 | |
| Vincenzo Lombardo; Fabrizio Nunnari; Rossana Damiano | |||
| This paper presents AnimaTricks, a system for the generation of the behavior
of an animated agent from a high-level description of its goals. The
deliberation component generates a sequence of actions given a set of goals.
The animation component, then, translates it into an animation language,
leaving to the animation engine the task of generating the actual animation.
The purpose of the system is two-fold. First, we test how deliberation can be effectively tied to the animated counterpart. Second, by generating complex animations from high-level goals, AnimaTricks supports the work of directors and animators from a pre-visualization and re-use perspective. Keywords: animation; virtual characters; multimedia production | |||
| A Framework for Designing 3D Virtual Environments | | BIBAK | Full-Text | 209-218 | |
| Salvatore Catanese; Emilio Ferrara; Giacomo Fiumara; Francesco Pagano | |||
| The process of designing and developing virtual environments can be
supported by tools and frameworks that save time on technical aspects and
allow a focus on content. In this paper we present an academic framework which
provides several levels of abstraction to ease this work. It includes
state-of-the-art components that we devised or integrated, adopting open-source
solutions in order to address specific problems. Its architecture is modular
and customizable, and the code is open source. Keywords: Virtual Environments; Games | |||
| The Mobile Orchestra Explorer | | BIBA | Full-Text | 219-220 | |
| Donald Glowinski; Maurizio Mancini; Alberto Massari | |||
| Active listening is a new concept in Human-Computer Interaction in which
novel paradigms for expressive multimodal interfaces have been developed [1],
empowering users to interact with and shape the audio content by intervening
actively in the experience. Active listening applications are implemented
using noninvasive technology and are based on natural gesture interaction [2].
The goal of this paper is to present the Mobile Orchestra Explorer application, developed in the framework of the EU Project MIROR. The application enables the user to set up the positions of virtual orchestra instruments/sections and then to explore the resulting virtual ensemble by walking through the orchestra space. | |||
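(As a toy illustration of position-dependent rendering -- an assumed mechanism, not the project's documented audio engine -- each orchestra section can be given a gain that falls off with the user's distance from the section's position in the orchestra space.)

```python
# Hypothetical distance-based mixing for an "orchestra explorer".
import math

def section_gains(user_pos, sections, rolloff=1.5, floor=0.05):
    """user_pos: (x, y) in metres; sections: {name: (x, y)}.
    Returns a gain per section in [floor, 1], loudest when standing nearby."""
    gains = {}
    for name, (sx, sy) in sections.items():
        d = math.hypot(user_pos[0] - sx, user_pos[1] - sy)
        gains[name] = max(floor, 1.0 / (1.0 + d) ** rolloff)
    return gains

print(section_gains((1.0, 2.0), {"strings": (0.0, 0.0), "brass": (4.0, 3.0)}))
```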
| Realtime Expressive Movement Detection Using the EyesWeb XMI Platform | | BIBA | Full-Text | 221-222 | |
| Maurizio Mancini; Donald Glowinski; Alberto Massari | |||
| In the last few years, one of the key issues in Human-Computer Interaction has been the design and creation of a new type of interface, able to adapt HCI to human-human communication capabilities. In this direction, the ability of computers to detect and synthesize the expressivity of human behavior is particularly relevant; that is, computers must be equipped with interfaces able to establish a sensitive interaction with the user (see [3]). | |||
| i-Theatre: Tangible Interactive Storytelling | | BIBAK | Full-Text | 223-228 | |
| Jesús Muñoz; Michele Marchesoni; Cristina Costa | |||
| Storytelling is fundamental for the cognitive and emotional development of
children. New technologies combined with playful learning can be an effective
instrument for developing narrative skills. In this paper we describe
i-Theatre, a collaborative storytelling system designed for pre-school
children: it makes it possible to use characters and scenarios drawn on
paper to create a digital story, using simple animation techniques and
recording voices and sounds. To implement it, we combined a multitouch
surface with a set of tangible objects. This choice lowered the learning
effort of a new interface, letting the child be immersed directly in
the storytelling process from the very beginning. Keywords: Storytelling; multitouch; children; education; tangible technologies;
collaboration | |||
| An Invisible Line: Remote Communication Using Expressive Behavior | | BIBA | Full-Text | 229-230 | |
| Andrea Cera; Andrew Gerzso; Corrado Canepa; Maurizio Mancini; Donald Glowinski; Simone Ghisio; Paolo Coletta; Antonio Camurri | |||
| An Invisible Line is an installation focusing on the remote communication between two human users, based on the analysis of full-body expressivity. It aims at creating shared, networked, social experiences. It is the result of a scientific and artistic collaboration between Casa Paganini -- InfoMus Lab (Genova, Italy), IRCAM (Paris, France) and the Hochschule für Musik und Theater (Hamburg, Germany). | |||
| Teaching by Means of a Technologically Augmented Environment: The Stanza Logo-Motoria | | BIBAK | Full-Text | 231-235 | |
| Serena Zanolla; Antonio Rodà; Filippo Romano; Francesco Scattolin; Gian Luca Foresti; Sergio Canazza; Corrado Canepa; Paolo Coletta; Gualtiero Volpe | |||
| The Stanza Logo-Motoria is an interactive multimodal environment, designed
to support and aid learning in Primary Schools, with particular attention to
children with Learning Disabilities. The system is permanently installed in a
classroom of the "Elisa Frinta" Primary School in Gorizia where for over a year
now, it has been used as an alternative and/or additional tool to traditional
teaching strategies; the on-going experimentation is confirming the already
excellent results previously assessed, in particular for ESL (English as a
Second Language). The Stanza Logo-Motoria, also installed for scientific
research purposes at the Engineering Information Department (DEI) of University
of Padova, has sparked the interest of teachers, students and educationalists
and makes us believe that this is but the beginning of a path, which could lead
to the introduction of technologically augmented learning in schools. Keywords: Stanza Logo-Motoria; interactive and multimodal environment; augmented
reality; augmented environment for teaching; Learning Disability | |||
| INSIDE: Intuitive Sonic Interaction Design for Education and Entertainment | | BIBAK | Full-Text | 236-239 | |
| Alain Crevoisier; Cécile Picard-Limpens | |||
| The project INSIDE -- Intuitive Sonic Interaction Design for Education and
Entertainment -- aims at offering children and adults without previous musical
experience the possibility to create sounds and make music in a very intuitive
and playful manner. We develop a concept of tangible interaction using objects
that can be placed on any conventional surface, like a table. The objects can
be fitted with meaningful icons representing various aspects and functions
related to sound and music, such as sound sources, sound modifiers, or mixers. Keywords: interface; interaction; sound; education; entertainment | |||
| My Presenting Avatar | | BIBAK | Full-Text | 240-242 | |
| Laurent Ach; Laurent Durieu; Benoit Morel; Karine Chevreau; Hugues de Mazancourt; Bernard Normier; Catherine Pelachaud; André-Marie Pez | |||
| We have developed an application that offers users the possibility to
transmit documents via a virtual agent. Keywords: Virtual agent; linguistic extraction; nonverbal behavior; animation | |||
| Interacting with Emotional Virtual Agents | | BIBAK | Full-Text | 243-245 | |
| Elisabetta Bevacqua; Florian Eyben; Dirk Heylen; Mark ter Maat; Sathish Pammi; Catherine Pelachaud; Marc Schröder; Björn Schuller; Etienne de Sevin; Martin Wöllmer | |||
| Sensitive Artificial Listener (SAL) is a multimodal dialogue system which
allows users to interact with virtual agents. Four characters with different
emotional traits engage users in emotionally coloured interactions. They not
only encourage the users to talk but also try to draw them towards
specific emotional states. Despite the agents' very limited verbal
understanding, they are able to react appropriately to the user's non-verbal
behaviour. The demonstrator shows the final version of the fully autonomous SAL
system. Keywords: Embodied Conversational Agents; human-machine interaction | |||
| Traditional Shadow Puppet Play -- The Virtual Way | | BIBAK | Full-Text | 246-248 | |
| Abdullah Zawawi Talib; Mohd Azam Osman; Kian Lam Tan; Sirot Piman | |||
| In this paper, we present a virtual shadow puppet play application that
allows real-time play of the puppet and gives the user the impression of being
a storyteller or a shadow play puppeteer. Through this tool, everybody can be a
digital puppeteer regardless of their ability to perform the traditional art. Keywords: Shadow puppet play; virtual puppet; virtual storytelling | |||
| The Attentive Machine: Be Different! | | BIBA | Full-Text | 249-251 | |
| Julien Leroy; Nicolas Riche; François Zajega; Matei Mancas; Joelle Tilmanne; Bernard Gosselin; Thierry Dutoit | |||
| We will demonstrate an intelligent Machine which is capable of choosing, within a small group of people (typically three), the one it will interact with. Depending on people's behavior, this person may change. The participants can thus compete to be chosen by the Machine. We use the Kinect sensor to capture both classical 2D video and a depth map of the participants. Video-projection and audio feedback are provided to the participants. | |||
| Towards a Dynamic Approach to the Study of Emotions Expressed by Music | | BIBAK | Full-Text | 252-259 | |
| Kim Torres-Eliard; Carolina Labbé; Didier Grandjean | |||
| The emotions expressed through music have often been investigated by asking
listeners to fill in questionnaires at the end of a given musical performance
or excerpt; only a few studies have been dedicated to understanding the
dynamics of emotions expressed by music in laboratory or social contexts.
Based on a specific model of emotions related to music, the Geneva Emotion
Music Scale (GEMS), we tested to what extent such dynamic judgments are
reliable and might be a promising avenue to better understand how listeners are
able to attribute different kinds of emotions expressed through music and how
the social contexts might influence such judgments. The results indicate high
reliability between listeners for different musical excerpts and for different
listening contexts, including concerts, i.e. a social context, and laboratory
experiments. Keywords: Emotion; music; dynamic judgment; musical expressiveness | |||
| Mutual Engagement in Social Music Making | | BIBAK | Full-Text | 260-266 | |
| Nick Bryan-Kinns | |||
| Mutual engagement occurs when people creatively spark together. In this
paper we suggest that mutual engagement is key to creating new forms of
multi-user social music systems which will capture the public's heart and
imagination. We propose a number of design features which support mutual
engagement, and a set of techniques for evaluating mutual engagement by
examining inter-person interaction. We suggest how these techniques could be
used in empirical studies, and how they might be used to inform artistic
practice to design and evaluate new forms of collaborative music making. Keywords: Design; Evaluation; Mutual Engagement; Multi-User; Social Interaction; Human
Communication | |||
| Measuring Ensemble Synchrony through Violin Performance Parameters: A Preliminary Progress Report | | BIBAK | Full-Text | 267-272 | |
| Panagiotis Papiotis; Marco Marchini; Esteban Maestre; Alfonso Perez | |||
| In this article we present our ongoing work on expressive performance
analysis for violin and string ensembles, in terms of synchronization in
intonation, timing, dynamics and articulation. Our current research objectives
are outlined, along with an overview of the methods used to achieve them;
finally, focusing on the case of intonation synchronization in violin duets,
some preliminary results and conclusions based on experimental recordings are
discussed. Keywords: violin; expressive performance; intonation; ensemble performance; bowing
gestures; motion capture | |||
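(For the intonation-synchronization case mentioned above, one plausible measure -- illustrative only, not necessarily the authors' method -- correlates the two violins' pitch-deviation series, expressed in cents from the nearest equal-tempered note.)

```python
# Sketch: intonation synchronization as correlation of pitch deviations.
import numpy as np

def cents_from_nearest_note(f0_hz):
    """Deviation of each F0 sample from the nearest equal-tempered pitch."""
    midi = 69 + 12 * np.log2(np.asarray(f0_hz) / 440.0)
    return 100.0 * (midi - np.round(midi))

def intonation_sync(f0_violin1, f0_violin2):
    """Pearson correlation of two time-aligned F0 tracks' deviation series."""
    d1 = cents_from_nearest_note(f0_violin1)
    d2 = cents_from_nearest_note(f0_violin2)
    return float(np.corrcoef(d1, d2)[0, 1])
```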
| Communication in Orchestra Playing as Measured with Granger Causality | | BIBAK | Full-Text | 273-275 | |
| Alessandro D'ausilio; Leonardo Badino; Yi Li; Sera Tokay; Laila Craighero; Rosario Canto; Yiannis Aloimonos; Luciano Fadiga | |||
| Coordinated action in music orchestra performance, driven by a
conductor, is a remarkable instance of interaction/communication. However,
rigorous testing of inter-individual coordination in an ecological scenario
poses a series of technical problems. Here we recorded violinists' and the
conductor's movement kinematics in an ecological interactive scenario. We
searched for directed influences between conductor and musicians and among
musicians by using the Granger Causality method. Our results quantitatively
show the dynamic pattern of communication among conductors and musicians.
Interestingly, we found evidence that the aesthetic appreciation of music
orchestras' performance is based on the concurrent increase of
conductor-to-musicians causal influence and reduction of musician-to-musician
information flow. Keywords: communication; action coordination; joint action; neuroscience of music;
music performance; movement kinematics; Granger causality; neuroaesthetic | |||
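(For readers unfamiliar with the method, the following is the standard textbook bivariate formulation of Granger causality, not the paper's own notation: x Granger-causes y if the past of x improves prediction of y beyond what the past of y alone achieves.)

```latex
% Restricted model: predict y_t from its own past only.
\[ y_t = \sum_{k=1}^{p} a_k\, y_{t-k} + e_t \]
% Full model: add the past of x.
\[ y_t = \sum_{k=1}^{p} b_k\, y_{t-k} + \sum_{k=1}^{p} c_k\, x_{t-k} + u_t \]
% x Granger-causes y when the full model reduces the residual variance:
\[ F_{x \to y} = \ln \frac{\operatorname{var}(e_t)}{\operatorname{var}(u_t)} > 0 . \]
```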