
GW 2011: Gesture Workshop

Fullname: GW 2011: Gesture and Sign Language in Human-Computer Interaction and Embodied Communication: 9th International Gesture Workshop Revised Selected Papers
Editors: Eleni Efthimiou; Georgios Kouroupetroglou; Stavroula-Evita Fotinea
Location: Athens, Greece
Dates: 2011-May-25 to 2011-May-27
Publisher: Springer Berlin Heidelberg
Series: Lecture Notes in Computer Science 7206
Standard No: DOI: 10.1007/978-3-642-34182-3; hcibib: GW11; ISBN: 978-3-642-34181-6 (print), 978-3-642-34182-3 (online)
Papers: 24
Pages: 273
Links: Online Proceedings | Conference Website
  1. Human Computer Interaction
  2. Cognitive Processes
  3. Notation Systems and Animation
  4. Gestures and Signs: Linguistic Analysis and Tools
  5. Gestures and Speech

Human Computer Interaction

Gestures in Assisted Living Environments BIBAKFull-Text 1-12
  Dimitra Anastasiou
This paper is concerned with multimodal assisted living environments, particularly those based on gesture interaction. Research on ambient assisted living concerns the provision of a safe, comfortable, and independent lifestyle in a domestic environment. We discuss spatial gestures and gesture recognition software and present an observational user study of gestures in the Bremen Ambient Assisted Living Lab (BAALL), a 60m² apartment suitable for the elderly and people with physical or cognitive impairments.
Keywords: assisted environment; localization; smart devices; spatial gestures
Using Wiimote for 2D and 3D Pointing Tasks: Gesture Performance Evaluation BIBAKFull-Text 13-23
  Georgios Kouroupetroglou; Alexandros Pino; Athanasios Balmpakakis; Dimitrios Chalastanis; Vasileios Golematis; Nikolaos Ioannou; Ioannis Koutsoumpas
We present two studies to comparatively evaluate the performance of gesture-based 2D and 3D pointing tasks. In both of them, a Wiimote controller and a standard mouse were used by six participants. For the 3D experiments we introduce a novel configuration analogous to the ISO 9241-9 standard methodology. We examine the pointing devices' conformance to Fitts' law and measure eight extra parameters that describe the cursor movement trajectory more accurately. For the 2D tasks using the Wiimote, throughput is 41.2% lower than with the mouse, target re-entry is almost the same, and the missed-clicks count is three times higher. For the 3D tasks using the Wiimote, throughput is 56.1% lower than with the mouse, target re-entry increases by almost 50%, and the missed-clicks count is sixteen times higher.
Keywords: Fitts' law; 3D pointing; Gesture User Interface; Wiimote
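The throughput figures above follow the ISO 9241-9 methodology, in which effective throughput combines speed with endpoint accuracy. As a point of reference, here is a minimal Python sketch of that standard computation (function and variable names are illustrative; the paper's exact per-trial pipeline may differ):

```python
import math
import statistics

def effective_throughput(distances, movement_times, endpoint_errors):
    """ISO 9241-9 style throughput (bits/s) over a block of pointing trials.

    distances       -- nominal target distance per trial
    movement_times  -- movement time per trial, in seconds
    endpoint_errors -- deviation of each selection endpoint from the target
                       centre, measured along the movement axis
    """
    # Effective width: 4.133 x the standard deviation of the endpoints,
    # i.e. the width of a target the user would hit ~96% of the time.
    w_e = 4.133 * statistics.stdev(endpoint_errors)
    tp = [math.log2(d / w_e + 1) / mt          # IDe / MT, in bits per second
          for d, mt in zip(distances, movement_times)]
    return statistics.mean(tp)
```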
Choosing and Modeling the Hand Gesture Database for a Natural User Interface BIBAKFull-Text 24-35
  Przemyslaw Glomb; Michal Romaszewski; Sebastian Opozda; Arkadiusz Sochan
This paper presents a database of natural hand gestures ('IITiS Gesture Database') recorded with motion capture devices. For the purpose of benchmarking and testing gesture interaction systems, we selected twenty-two natural hand gestures and recorded them with three different motion capture gloves, a number of participants, and several movement speeds. The methodology for the gesture selection, details of the acquisition process, and data analysis results are presented in the paper.
Keywords: human-computer interaction; gesture interfaces; reference gesture database; choosing gestures for HCI; gesture recognition; hand gesture vocabulary design; motion capture gloves
User Experience of Gesture Based Interfaces: A Comparison with Traditional Interaction Methods on Pragmatic and Hedonic Qualities BIBAKFull-Text 36-47
  Maurice H. P. H. van Beurden; Wijnand A. IJsselsteijn; Yvonne de Kort
Studies into gestural interfaces -- and interfaces in general -- typically focus on pragmatic or usability aspects (e.g., ease of use, learnability). Yet the merits of gesture-based interaction likely go beyond the purely pragmatic and impact a broader class of experiences, involving also qualities such as enjoyment, stimulation, and identification. The current study compared gesture-based interaction with device-based interaction, in terms of both their pragmatic and hedonic qualities. Two experiments were performed, one in a near-field context (mouse vs. gestures), and one in a far-field context (Wii vs. gestures). Results show that, whereas device-based interfaces generally scored higher on perceived performance, and the mouse scored higher on pragmatic quality, embodied interfaces (gesture-based interfaces, but also the Wii) scored higher in terms of hedonic quality and fun. A broader perspective on evaluating embodied interaction technologies can inform the design of such technologies and allow designers to tailor them to the appropriate application.
Keywords: Interaction technologies; gesture-based interaction; user experience; hedonic quality; pragmatic quality; user interfaces; embodied interaction
Low Cost Force-Feedback Interaction with Haptic Digital Audio Effects BIBAKFull-Text 48-56
  Alexandros Kontogeorgakopoulos; Georgios Kouroupetroglou
We present the results of an experimental study of Haptic Digital Audio Effects with and without force feedback. Using a low-cost Falcon haptic device, participants experienced two new real-time physical audio effect models that we developed under the CORDIS-ANIMA formalism. The results indicate that the haptic modality changed the user's experience significantly.
Keywords: haptics; digital audio effects; physical modeling; CORDIS-ANIMA
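CORDIS-ANIMA, the formalism behind the two effect models, describes sound-producing objects as networks of point masses connected by spring and damper interactions, computed at audio rate. A minimal sketch of one such mass-interaction oscillator follows (all parameter values are illustrative and are not the authors' effect models):

```python
import numpy as np

def mass_spring_audio(n_samples, sr=44100, mass=1.0, k=1e5, z=2.0, x0=1e-3):
    """Audio-rate simulation of one damped mass-spring oscillator, the
    elementary building block of mass-interaction formalisms such as
    CORDIS-ANIMA. Returns n_samples of the mass's position."""
    x_prev = x = x0                      # two past positions (leapfrog scheme)
    out = np.empty(n_samples)
    for n in range(n_samples):
        vel = (x - x_prev) * sr          # finite-difference velocity
        f = -k * x - z * vel             # spring force + damping force
        # Explicit second-order step: x[n+1] = 2 x[n] - x[n-1] + f / (m sr^2)
        x_prev, x = x, 2.0 * x - x_prev + f / (mass * sr * sr)
        out[n] = x
    return out                           # a slowly decaying ~50 Hz tone here
```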

Cognitive Processes

The Role of Spontaneous Gestures in Spatial Problem Solving BIBAKFull-Text 57-68
  Mingyuan Chu; Sotaro Kita
When solving spatial problems, people often spontaneously produce hand gestures. Recent research has shown that our knowledge is shaped by the interaction between our body and the environment. In this article, we review and discuss evidence on: 1) how spontaneous gesture can reveal the development of problem solving strategies when people solve spatial problems; 2) whether producing gestures can enhance spatial problem solving performance. We argue that when solving novel spatial problems, adults go through deagentivization and internalization processes, which are analogous to young children's cognitive development processes. Furthermore, gesture enhances spatial problem solving performance. The beneficial effect of gesturing can be extended to non-gesturing trials and can be generalized to a different spatial task that shares similar spatial transformation processes.
Keywords: gesture; spatial problem solving; mental rotation; cognitive development
Effects of Spectral Features of Sound on Gesture Type and Timing BIBAKFull-Text 69-80
  Mariusz Kozak; Kristian Nymoen; Rolf Inge Godøy
In this paper we present results from an experiment in which infrared motion capture technology was used to record participants' movement in synchrony with different rhythms and different sounds. The purpose was to determine the effects of the sounds' spectral and temporal features on synchronization and gesture characteristics. In particular, we focused on the correlation between sounds and three gesture features: maximum acceleration, discontinuity, and total quantity of motion. Our findings indicate that discrete, discontinuous motion resulted in better synchronization, while spectral features of sound had a significant effect on the quantity of motion.
Keywords: Gesture; Music; Synchronization; Sound envelope
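Two of the three gesture features named above can be derived directly from marker trajectories by numerical differentiation. A generic sketch follows (the paper's exact estimators, including its discontinuity measure, are not reproduced here):

```python
import numpy as np

def gesture_features(pos, sr=100.0):
    """Compute maximum acceleration and total quantity of motion from a
    motion-capture trajectory. pos: (n_frames, 3) marker positions in
    metres; sr: capture rate in Hz."""
    dt = 1.0 / sr
    vel = np.gradient(pos, dt, axis=0)        # finite-difference velocity
    acc = np.gradient(vel, dt, axis=0)        # ... and acceleration
    max_acceleration = np.linalg.norm(acc, axis=1).max()
    quantity_of_motion = np.linalg.norm(vel, axis=1).sum() * dt  # path length
    return max_acceleration, quantity_of_motion
```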
Human-Motion Saliency in Complex Scenes BIBAKFull-Text 81-92
  Fiora Pirri; Matia Pizzoli; Matei Mancas
We present a new and original method for human motion analysis and evaluation, developed on the basis of the role played by attention in the perception of human motion. Attention is particularly relevant both in multi-motion scenes and in social interactions, when it comes to selecting and discerning why and what to focus on. The first crucial role of attention concerns the saliency of human motion within a scene where other dynamics may occur. The second role, in close social interactions, is highlighted by the selectivity shown towards gesture modalities in both peripheral and central vision. Experiments for both modeling and testing have been based on a dynamic 3D gaze tracker.
Keywords: human motion; selective attention; saliency; gestures segmentation
What, Why, Where and How Do Children Think? Towards a Dynamic Model of Spatial Cognition as Action BIBAKFull-Text 93-105
  Marilyn Panayi
The Spatial Cognition in Action (SCA) model described here takes a dynamic systems approach to the biological concept of the Perception-Cognition-Action cycle. This partial, descriptive, feature-based model is theoretically developed and empirically informed by the examination of children's embodied gestures that are rooted in action. The model brings together ecological and corporeal paradigms with evidence from neurobiological and cognitive science research. Such a corporeal approach places the 'action-ready body' centre stage. A corpus of gesture repertoires from both neuro-atypical and neuro-typical children has been created from ludic interaction studies. The model is proposed as a dynamic construct for intervention, informing the planning and design of interaction technology for neuro-atypical children's pedagogy and rehabilitation.
Keywords: spatial cognition; model; embodied child gesture; action-ready body; neurobiological; atypical; future technology; intervention; pedagogy; rehabilitation; dynamic systems; emergent; perception-action cycle

Notation Systems and Animation

A Labanotation Based Ontology for Representing Dance Movement BIBAKFull-Text 106-117
  Katerina El Raheb; Yannis E. Ioannidis
In this paper, we present a Knowledge Based System for describing and storing dances that takes advantage of the expressivity of Description Logics. We propose exploiting Semantic Web technologies to represent and archive dance choreographies by developing a Dance Ontology in OWL 2. Description Logics allow us to express complex relations and inference rules for the domain of dance movement, while reasoning capabilities make it easy to derive new knowledge from existing knowledge. Furthermore, we can search within the ontology for the steps and movements of dances by writing SPARQL queries. The building blocks of the ontology and the relationships used to construct the dance model are based on the semantics of Labanotation, a widely applied system that uses symbols to denote dance choreographies.
Keywords: Semantic Web Technologies; Ontology; Description Logics; Dance Notation; Labanotation
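Once choreographies are encoded in OWL 2, searching for dances by their movements becomes a standard SPARQL task. A hedged sketch using rdflib, with an invented vocabulary (class and property names such as dance:hasPart and dance:ForwardLow are illustrative; the paper's actual ontology terms may differ):

```python
from rdflib import Graph

g = Graph()
g.parse("dance_ontology.owl")  # hypothetical export of the Dance Ontology

# Which choreographies contain a movement in the forward-low direction
# (direction + level being the core semantics of a Labanotation symbol)?
QUERY = """
PREFIX dance: <http://example.org/dance#>
SELECT DISTINCT ?choreography
WHERE {
    ?choreography a dance:Dance ;
                  dance:hasPart ?movement .
    ?movement dance:direction dance:ForwardLow .
}
"""
for row in g.query(QUERY):
    print(row.choreography)
```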
ISOcat Data Categories for Signed Language Resources BIBAKFull-Text 118-128
  Onno Crasborn; Menzo Windhouwer
As the creation of signed language resources is gaining speed world-wide, the need for standards in this field becomes more acute. This paper discusses the state of the field of signed language resources, their metadata descriptions, and annotations that are typically made. It then describes the role that ISOcat may play in this process and how it can stimulate standardisation without imposing standards. Finally, it makes some initial proposals for the thematic domain 'sign language' that was introduced in 2011.
Keywords: signed language resources; metadata; data categories; standardization
Assessing Agreement on Segmentations by Means of Staccato, the Segmentation Agreement Calculator according to Thomann BIBAFull-Text 129-138
  Andy Lücking; Sebastian Ptock; Kirsten Bergmann
Staccato, the Segmentation Agreement Calculator According to Thomann, is a software tool for assessing the degree of agreement of multiple segmentations of some time-related data (e.g., gesture phases or sign language constituents). The software implements an assessment procedure developed by Bruno Thomann and will be made publicly available. The article discusses the rationale of the agreement assessment procedure and points to future extensions of Staccato.
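Thomann's procedure itself is not reproduced here, but the problem Staccato addresses is easy to state: how much do two annotators' segment boundaries coincide? For intuition only, here is a naive temporal-overlap baseline (explicitly not Thomann's method), assuming non-overlapping segments per annotator:

```python
def overlap_agreement(seg_a, seg_b):
    """Dice-style agreement between two segmentations of the same recording:
    twice the total matched overlap divided by the total annotated duration.
    Segments are (start, end) tuples in seconds."""
    overlap = sum(
        max(0.0, min(ae, be) - max(as_, bs))
        for as_, ae in seg_a
        for bs, be in seg_b
    )
    total = sum(e - s for s, e in seg_a) + sum(e - s for s, e in seg_b)
    return 2.0 * overlap / total if total else 1.0

# Two annotators' gesture-phase boundaries for the same clip:
print(overlap_agreement([(0.0, 1.2), (2.0, 3.0)],
                        [(0.1, 1.0), (2.1, 3.2)]))   # ~0.86
```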
How Do Iconic Gestures Convey Visuo-Spatial Information? Bringing Together Empirical, Theoretical, and Simulation Studies BIBAFull-Text 139-150
  Hannes Rieser; Kirsten Bergmann; Stefan Kopp
We investigate the question of how co-speech iconic gestures are used to convey visuo-spatial information in an interdisciplinary way, starting with a corpus-based empirical and theoretical perspective on how a typology of gesture form and a partial ontology of gesture meaning are related. The results provide the basis for a computational modeling approach that allows us to simulate the production of speaker-specific gesture forms to be realized with virtual agents. An evaluation of our simulation results and our methodology shows that the model is able to successfully approximate human use of iconic gestures and, moreover, that gestural behavior can improve how humans rate a virtual agent in terms of eloquence, competence, human-likeness, or likeability.
Thumb Modelling for the Generation of Sign Language BIBAKFull-Text 151-160
  Maxime Delorme; Michael Filhol; Annelies Braffort
We present a simple kinematic model of the thumb for the animation of virtual characters. The animation follows a purely kinematic approach and thus requires very precise limits on the rotations of the thumb to be realistic. The thumb is made opposable by adding two bones that simulate the carpo-metacarpal complex. The bones are laid out to form a virtual axis of rotation allowing the thumb to move into the opposed position. The model is evaluated by generating 22 static Sign Language handshapes and comparing the results to previous work in animation.
Keywords: Sign Language Synthesis; Skeleton Modelling; Inverse Kinematics; Thumb Model
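The core geometric operation behind such a 'virtual axis' of opposition is a rotation of joint positions about an arbitrary axis. A self-contained illustration using Rodrigues' formula (the rest pose and axis values are hypothetical, not the paper's bone layout or joint limits):

```python
import numpy as np

def rotate_about_axis(point, axis, angle):
    """Rodrigues' rotation of a 3D point about an arbitrary axis."""
    k = axis / np.linalg.norm(axis)
    return (point * np.cos(angle)
            + np.cross(k, point) * np.sin(angle)
            + k * np.dot(k, point) * (1.0 - np.cos(angle)))

# Swing a thumb-tip position 60 degrees towards opposition about a skewed
# carpo-metacarpal axis (both vectors are made-up example values, in metres):
tip = np.array([0.03, 0.00, 0.09])
cmc_axis = np.array([0.7, 0.2, 0.7])
print(rotate_about_axis(tip, cmc_axis, np.deg2rad(60.0)))
```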

Gestures and Signs: Linguistic Analysis and Tools

Toward a Motor Theory of Sign Language Perception BIBAKFull-Text 161-172
  Sylvie Gibet; Pierre-François Marteau; Kyle Duarte
Research on signed languages still strongly dissociates linguistic issues related to phonological and phonetic aspects from gesture studies aimed at recognition and synthesis. This paper focuses on the interweaving of motion and meaning for the analysis, synthesis, and evaluation of sign language gestures. We discuss the relevance and interest of a motor theory of perception in sign language communication. According to this theory, linguistic knowledge is mapped onto sensory-motor processes, and we propose a methodology based on the principle of a synthesis-by-analysis approach, guided by an evaluation process that aims to validate some hypotheses and concepts of this theory. Examples from existing studies illustrate the different concepts and provide avenues for future work.
Keywords: motor theory; sign language
Analysis and Description of Blinking in French Sign Language for Automatic Generation BIBAKFull-Text 173-182
  Annelies Braffort; Emilie Chételat-Pelé
The present paper tackles the description of blinking within the context of automatic generation of Sign Languages (SLs). Blinking is rarely taken into account in SL processing systems even though several studies underline its importance. Our purpose is to improve knowledge of blinking so as to be able to generate blinks automatically. We present the methodology we used for this purpose and the results we obtained. We list the main categories we have identified in our corpus and present in more detail an excerpt of our results corresponding to the most frequent category, i.e., segmentation.
Keywords: Sign Language; Non-Manual Gestures; Blinking
Grammar/Prosody Modelling in Greek Sign Language: Towards the Definition of Built-In Sign Synthesis Rules BIBAKFull-Text 183-193
  Athanasia-Lida Dimou; Theodore Goulas; Eleni Efthimiou
The aim of the present work is to discuss a limited set of issues concerning the grammar modelling of Greek Sign Language (GSL), within the framework of improving the naturalness, and more specifically the grammaticality, of synthetic GSL signing. This preliminary study addresses linguistic issues relating to specific grammar structures and their associated prosodic expressive markers through experimental implementation of the respective rules within a sign synthesis support environment that was initially developed to create lexical resources for GSL synthesis.
Keywords: Greek Sign Language; sign synthesis; sign language grammar/prosodic modelling
Combining Two Synchronisation Methods in a Linguistic Model to Describe Sign Language BIBAKFull-Text 194-203
  Michael Filhol
The context is Sign Language modelling for synthesis, with 3D virtual signers as output. Sign languages convey multi-linear information and hence allow for many synchronisation patterns between the articulators of the body. Current models usually cover at best only one type of these patterns. Building on the recent description model Zebedee, we introduce the Azalee extension, designed to enable the description of any type of synchronisation in Sign Language.
Keywords: Sign Language modelling; synchronisation; AZee
Sign Segmentation Using Dynamics and Hand Configuration for Semi-automatic Annotation of Sign Language Corpora BIBAKFull-Text 204-215
  Matilde Gonzalez; Christophe Collet
This paper addresses the problem of sign language video annotation. Nowadays sign language segmentation is performed manually, which is time-consuming, error-prone, and not reproducible. In this paper we provide an automatic approach to segmenting signs. We use a particle-filter-based approach to track the hands and head. Motion features are used to classify segments performed with one or two hands and to detect events. Events detected in the middle of a sign are removed on the basis of hand shape features; hand shape is characterized using similarity measurements. Evaluation has shown the performance and limitations of the proposed approach.
Keywords: Sign language; sign segmentation; automatic annotation
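For readers unfamiliar with the tracking machinery, the bootstrap particle filter the paper builds on can be sketched in a few lines (a generic 2D position tracker with a random-walk motion model and Gaussian observation likelihood, not the authors' exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, observation, motion_std=5.0, obs_std=10.0):
    """One predict-weight-resample cycle for 2D hand/head tracking.
    particles: (N, 2) candidate positions in pixels; observation: the
    measured (x, y) position, e.g. a skin-colour blob centre."""
    n = len(particles)
    # 1. Predict: diffuse particles under a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2. Weight: Gaussian likelihood of the observation given each particle.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / obs_std ** 2)
    weights /= weights.sum()
    # 3. Resample: draw particles in proportion to their weights.
    return particles[rng.choice(n, size=n, p=weights)]
```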

Gestures and Speech

Integration of Gesture and Verbal Language: A Formal Semantics Approach BIBAKFull-Text 216-227
  Gianluca Giorgolo
The paper presents a formal framework to model the fusion of gesture meaning with the meaning of the co-occurring verbal fragment. The framework is based on the formalization of two simple concepts, intersectivity and iconicity, which form the core of most descriptive accounts of the interaction of the two modalities. The formalization is presented as an extension of a well-known framework for the analysis of meaning in natural language. We claim that a proper formalization of these two concepts is sufficient to provide a principled explanation of gestures accompanying different types of linguistic expressions. The formalization also aims at providing a general mechanism (iconicity) by which the meaning of gestures is extracted from their formal appearance.
Keywords: gesture language integration; formal gesture semantics; applied spatial logics
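The intersectivity half of the proposal can be glossed in one line: the referent of a speech-plus-gesture ensemble must both satisfy the verbal predicate and resemble the gesture. A schematic rendering (the paper's actual type-logical machinery is considerably richer):

```latex
% Schematic intersective composition of a verbal fragment and an iconic
% gesture G: the joint denotation is the intersection of the two, i.e. the
% set of entities satisfying the verbal predicate that G iconically depicts.
\[
  [\![\,\mathrm{speech} + G\,]\!]
  \;=\;
  [\![\,\mathrm{speech}\,]\!] \cap [\![\,G\,]\!]
  \;=\;
  \{\, x \mid \mathrm{speech}(x) \wedge \mathrm{iconic}(G, x) \,\}
\]
```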
Generating Co-speech Gestures for the Humanoid Robot NAO through BML BIBAKFull-Text 228-237
  Quoc Anh Le; Catherine Pelachaud
We extend and develop an existing virtual agent system to generate communicative gestures for different embodiments (i.e. virtual or physical agents). This paper presents our ongoing work on an implementation of this system for the NAO humanoid robot. From a specification of multi-modal behaviors encoded with the behavior markup language, BML, the system synchronizes and realizes the verbal and nonverbal behaviors on the robot.
Keywords: Conversational humanoid robot; expressive gestures; gesture-speech production and synchronization; Human-Robot Interaction; NAO; GRETA; FML; BML; SAIBA
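BML scripts are plain XML in which behaviors carry sync-point references to one another. A minimal sketch of the kind of input such a system consumes, generated with Python's standard library (element and attribute names follow BML 1.0 conventions, but the exact vocabulary the GRETA/NAO pipeline accepts is an assumption here):

```python
import xml.etree.ElementTree as ET

# Speech with an embedded sync point, plus a gesture whose stroke is
# anchored to that point, so the arm movement peaks on the word it accompanies.
bml = ET.Element("bml", {
    "id": "bml1",
    "xmlns": "http://www.bml-initiative.org/bml/bml-1.0",
})
speech = ET.SubElement(bml, "speech", {"id": "s1"})
text = ET.SubElement(speech, "text")
text.text = "Hello, "
ET.SubElement(text, "sync", {"id": "tm1"}).tail = "I am a robot."
ET.SubElement(bml, "gesture", {"id": "g1", "lexeme": "GREET",
                               "stroke": "s1:tm1"})
print(ET.tostring(bml, encoding="unicode"))
```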
Interaction between Speech and Gesture: Strategies for Pointing to Distant Objects BIBAKFull-Text 238-249
  Thies Pfeiffer
Referring to objects using multimodal deictic expressions is an important form of communication. This work addresses the question of how pragmatic factors affect the distribution of content between the modalities of speech and gesture. This is done by analyzing a study of deictic pointing gestures to objects under two conditions: with and without speech. The relevant pragmatic factor was the distance to the referent object. As one main result, two strategies were identified that participants used to adapt their gestures to the condition. This knowledge can be used, e.g., to improve the naturalness of pointing gestures employed by embodied conversational agents.
Keywords: object deixis; pointing; multimodal expressions
Making Space for Interaction: Architects Design Dialogues BIBAKFull-Text 250-261
  Claude P. R. Heath; Patrick G. T. Healey
This exploratory research takes a set of theoretical concepts as the basis for testing a visualisation of body-centric gesture space: (1) Kendon's transactional segments, (2) the manubrium as a central anatomical marker for bodily movement, and (3) physical reach space. Using these, a 3D model of gesture space has been designed to be applied to empirical data from architects' design meetings, articulating the role of gesture space overlaps within the interaction.
   Multi-dimensional drawing techniques have resulted in detailed visualisations of these overlaps. Illustrations show that dialogue contributions can be mapped to distinct locations in the changing shared spaces, creating a spatial framework for the analysis and visualisation of the multi-dimensional topology of the interaction. This paper discusses a case study where this type of modelling can be applied empirically, indexing speech and gesture to the drawing subspaces of a group of architects.
Keywords: Gesture space; interactional topologies; spatial indexing; spatial resources; visuo-spatial deixis
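One simple way to quantify the gesture-space overlaps the study visualises is to approximate each participant's reach space as a sphere centred at the manubrium and compute the intersection volume. This is a crude geometric proxy only (the paper's visualisations are far richer):

```python
import numpy as np

def reach_overlap(c1, c2, r1, r2):
    """Intersection volume of two spherical reach spaces centred at the
    participants' manubria (c1, c2: 3D points; r1, r2: reach radii)."""
    d = float(np.linalg.norm(np.asarray(c1) - np.asarray(c2)))
    if d >= r1 + r2:                      # spheres do not touch
        return 0.0
    if d <= abs(r1 - r2):                 # one sphere inside the other
        r = min(r1, r2)
        return 4.0 / 3.0 * np.pi * r ** 3
    # Standard sphere-sphere lens volume.
    return (np.pi * (r1 + r2 - d) ** 2
            * (d ** 2 + 2 * d * (r1 + r2) - 3 * (r1 - r2) ** 2)
            / (12.0 * d))

# Two seated architects ~0.9 m apart, each with ~0.8 m reach:
print(reach_overlap([0, 0, 0], [0.9, 0, 0], 0.8, 0.8))
```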
Iconic Gestures in Face-to-Face TV Interviews BIBAKFull-Text 262-273
  Maria Koutsombogera; Harris Papageorgiou
This paper presents a study of iconic gestures as attested in a corpus of Greek face-to-face television interviews. The communicative significance of iconic gestures situated in an interactional context is examined with regard to their semantics as well as the syntactic properties of the accompanying speech. Iconic gestures are classified according to their semantic equivalents and are further linked to the phrasal units of the words co-occurring with them, in order to provide evidence about the actual syntactic structures that induce them. The findings support the communicative power of iconic gestures and suggest a framework for their interpretation based on the interplay of semantic and syntactic cues.
Keywords: iconic gestures; face-to-face interviews; multimodal interaction; semantic equivalents; syntactic structures