
GW 2007: Gesture Workshop

Fullname: GW 2007: Gesture-Based Human-Computer Interaction and Simulation: 7th International Gesture Workshop, Revised Selected Papers
Editors: Miguel Sales Dias; Sylvie Gibet; Marcelo M. Wanderley; Rafael Bastos
Location: Lisbon, Portugal
Dates: 2007-May-23 to 2007-May-25
Publisher: Springer Berlin Heidelberg 2009
Series: Lecture Notes in Computer Science 5085
Standard No: DOI: 10.1007/978-3-540-92865-2; hcibib: GW07; ISBN: 978-3-540-92864-5 (print), 978-3-540-92865-2 (online)
Papers: 30
Pages: 281
Links: Online Proceedings | Conference Website
  1. Analysis and Synthesis of Gesture
  2. Theoretical Aspects of Gestural Communication and Interaction
  3. Vision-Based Gesture Recognition
  4. Sign Language Processing
  5. Gesturing with Tangible Interfaces and in Virtual and Augmented Reality
  6. Gesture for Music and Performing Arts
  7. Gesture for Therapy and Rehabilitation
  8. Gesture in Mobile Computing and Usability Studies

Analysis and Synthesis of Gesture

Gesture Recognition Based on Elastic Deformation Energies BIBAKFull-Text 1-12
  Radu-Daniel Vatavu; Laurent Grisoni; Stefan Gheorghe Pentiuc
We present a method for recognizing gesture motions based on elastic deformable shapes and curvature templates. Gestures are modeled using a spline curve representation enhanced with elastic properties: the entire spline, or any of its parts, may stretch or bend. The energy required to transform a gesture into a given template provides an estimate of the similarity between the two. We demonstrate the results of our gesture classifier with a video-based acquisition approach.
Keywords: gesture recognition; elastic matching; deformation energies; video; gesture acquisition
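As a rough illustration of the idea in this abstract, the sketch below scores a candidate gesture against templates by a simple stretching-plus-bending deformation cost on resampled curves. It is a minimal reading of the abstract, not the authors' spline formulation; all names and weights are illustrative.

```python
# Minimal sketch of energy-based gesture matching: gestures are resampled
# curves; "deformation energy" is a stretching + bending cost between a
# candidate and a template.
import numpy as np

def resample(points, n=64):
    """Resample a 2D gesture path to n points, uniformly by arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(t, s, points[:, i]) for i in range(2)])

def deformation_energy(a, b, alpha=1.0, beta=1.0):
    """Stretching energy (first differences) + bending energy (second differences)."""
    stretch = np.sum((np.diff(a, axis=0) - np.diff(b, axis=0)) ** 2)
    bend = np.sum((np.diff(a, 2, axis=0) - np.diff(b, 2, axis=0)) ** 2)
    return alpha * stretch + beta * bend

def classify(gesture, templates):
    """Return the label of the template that is cheapest to deform into."""
    g = resample(gesture)
    return min(templates, key=lambda lbl: deformation_energy(g, resample(templates[lbl])))
```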
Approximation of Curvature and Velocity for Gesture Segmentation and Synthesis BIBAFull-Text 13-23
  Sylvie Gibet; Pierre-Francois Marteau
This paper describes a new approach to analyzing hand gestures, based on an experimental approximation of the shape and kinematics of compressed arm trajectories. The motivation for such a model is, on the one hand, the reduction of gesture data and, on the other, the possibility of segmenting gestures into meaningful units, yielding an analysis tool for gesture coding and synthesis. We show that the inverse of the distance between adaptive samples, and the velocity estimated at these points, are respectively correlated with the instantaneous curvature and tangential velocity computed directly from motion capture data. Based on these correlation results, we propose a new way to automatically segment hand gestures. We also show that this approach can be applied in a global analysis/synthesis framework, useful for the automatic animation of virtual characters performing sign language gestures.
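A minimal sketch of the kind of kinematic analysis the abstract describes, assuming standard definitions of tangential velocity and curvature; the paper's adaptive-sampling scheme is replaced here by direct differentiation and a simple low-speed segmentation rule.

```python
# Compute tangential velocity and curvature from sampled motion-capture
# data, then cut the gesture at low-speed (high-curvature) points.
import numpy as np

def kinematics(traj, dt=1.0 / 120.0):
    """Tangential velocity and curvature of a sampled 3D trajectory (T, 3)."""
    v = np.gradient(traj, dt, axis=0)             # first derivative
    a = np.gradient(v, dt, axis=0)                # second derivative
    speed = np.linalg.norm(v, axis=1)
    cross = np.cross(v, a)
    curvature = np.linalg.norm(cross, axis=1) / np.maximum(speed ** 3, 1e-9)
    return speed, curvature

def segment_at_minima(speed, rel_thresh=0.2):
    """Cut the gesture where speed drops below a fraction of its mean."""
    low = speed < rel_thresh * speed.mean()
    # boundaries are the rising/falling edges of the low-speed mask
    edges = np.flatnonzero(np.diff(low.astype(int)) != 0) + 1
    return edges
```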
Motion Primitives and Probabilistic Edit Distance for Action Recognition BIBAFull-Text 24-35
  Preben Fihl; Michael B. Holte; Thomas B. Moeslund
The number of potential applications has made automatic recognition of human actions a very active research area. Different approaches have been followed, based on trajectories through some state space. In this paper we also model an action as a trajectory through a state space, but we represent the action as a sequence of temporally isolated instances, denoted primitives. These primitives are each defined by four features extracted from motion images. The primitives are recognized in each frame by a trained classifier, resulting in a sequence of primitives. From this sequence we recognize different temporal actions using a probabilistic Edit Distance method. The method is tested on different actions with and without noise, and the results show recognition rates of 88.7% and 85.5%, respectively.
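The following sketch shows one plausible form of a probabilistic edit distance over per-frame primitive posteriors; the cost model is an assumption on my part, not the paper's exact formulation.

```python
# Match an observed primitive sequence against an action template with an
# edit distance whose substitution cost reflects classifier confidence.
import numpy as np

def prob_edit_distance(observed_probs, template, ins_del_cost=1.0):
    """observed_probs: (T, K) per-frame posteriors over K primitives.
    template: list of primitive indices defining the action.
    Substituting template symbol s at frame t costs 1 - P(s | frame t)."""
    T, _ = observed_probs.shape
    M = len(template)
    D = np.zeros((T + 1, M + 1))
    D[:, 0] = np.arange(T + 1) * ins_del_cost
    D[0, :] = np.arange(M + 1) * ins_del_cost
    for t in range(1, T + 1):
        for m in range(1, M + 1):
            sub = 1.0 - observed_probs[t - 1, template[m - 1]]
            D[t, m] = min(D[t - 1, m - 1] + sub,          # substitution
                          D[t - 1, m] + ins_del_cost,      # insertion
                          D[t, m - 1] + ins_del_cost)      # deletion
    return D[T, M]
```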

Theoretical Aspects of Gestural Communication and Interaction

On the Parametrization of Clapping BIBAFull-Text 36-47
  Herwin van Welbergen; Zsófia Ruttkay
For a Reactive Virtual Trainer (RVT), subtle timing and lifelikeness of motion are of primary importance. To allow for reactivity, movement adaptation, such as a change of tempo, is necessary. In this paper we investigate the relation between movement tempo, its synchronization with verbal counting, time distribution, amplitude, and left-right symmetry of a clapping movement. We analyze motion capture data of two subjects performing a clapping exercise, both freely and timed by a metronome.
   Our findings are compared with results from existing gesture research and biomechanical models. We found that, for our subjects, verbal counting adheres to the phonological synchrony rule. A linear relationship between movement path length and tempo was found. The symmetry between the left and the right hand can be described by a biomechanical model of two coupled oscillators.
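The "two coupled oscillators" finding is commonly formalized with the Haken-Kelso-Bunz (HKB) relative-phase equation; the toy integration below, with illustrative coefficients, shows the characteristic loss of anti-phase stability as the coupling ratio drops (as it does at high tempo). This is a standard model, not necessarily the exact one fitted in the paper.

```python
# HKB relative-phase dynamics: phi settles near 0 (in-phase) or pi
# (anti-phase); anti-phase loses stability when b/a falls below 1/4.
import numpy as np

def hkb_relative_phase(phi0, a=1.0, b=0.5, d_omega=0.0, dt=0.01, steps=5000):
    """Integrate dphi/dt = d_omega - a*sin(phi) - 2*b*sin(2*phi)."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (d_omega - a * np.sin(phi) - 2 * b * np.sin(2 * phi))
    return np.mod(phi, 2 * np.pi)

# b/a shrinks at high tempo: started near anti-phase, the system then
# falls into in-phase coordination.
print(hkb_relative_phase(phi0=3.0, b=0.5))   # stays near pi (anti-phase stable)
print(hkb_relative_phase(phi0=3.0, b=0.1))   # drifts toward 0 (in-phase)
```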
Improving the Believability of Virtual Characters Using Qualitative Gesture Analysis BIBAKFull-Text 48-56
  Barbara Mazzarino; Manuel Peinado; Ronan Boulic; Gualtiero Volpe; Marcelo M. Wanderley
This paper describes preliminary results of a research performed in the framework of the Enactive project (EU IST NoE Enactive). The aim of this research is to improve believability of a virtual character using qualitative analysis of gesture. Using techniques developed for human gesture analysis, we show it is possible to extract high-level motion features from reconstructed motion and to compare them with the same features extracted from the corresponding real motions. Moreover this method allows us to evaluate whether the virtual character conveys the same high level expressive content as the real motion does, and makes it possible to compare different rendering techniques in order to assess which one better maintains such information.
Keywords: Gesture Analysis; Believability; Expressive Motion Content
A Method for Selection of Optimal Hand Gesture Vocabularies BIBAKFull-Text 57-68
  Helman Stern; Juan P. Wachs; Yael Edan
This work presents an analytical approach to designing a gesture vocabulary (GV) using multiple objectives covering psycho-physiological and gesture-recognition factors. Previous works dealt only with the selection of hand gesture vocabularies using rule-based or ad-hoc methods; our analytical formulation answers a need identified by that research. A meta-heuristic approach is taken by decomposing the problem into two sub-problems: (i) finding the subsets of gestures that meet a minimal accuracy requirement, and (ii) matching gestures to commands to maximize the human-factors objective. The result is a set of solutions from which a Pareto-optimal subset is selected. An example solution from the Pareto set is exhibited using prioritized objectives.
Keywords: hand gesture vocabulary design; multiobjective optimization; fuzzy c-means; feature selection; gesture interfaces; hand gesture recognition; human-computer interaction
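The final selection step, picking a Pareto-optimal subset, can be sketched as a non-domination filter over candidate vocabularies; the scores below are illustrative placeholders, not data from the paper.

```python
# Keep the candidate vocabularies that no other candidate dominates on
# both objectives (recognition accuracy and a human-factors score).

def pareto_front(candidates):
    """candidates: list of (name, accuracy, human_factors), both maximized.
    Returns the candidates not dominated by any other."""
    front = []
    for name, acc, hf in candidates:
        dominated = any(a >= acc and h >= hf and (a > acc or h > hf)
                        for _, a, h in candidates)
        if not dominated:
            front.append((name, acc, hf))
    return front

vocabs = [("GV1", 0.95, 0.60), ("GV2", 0.90, 0.80), ("GV3", 0.88, 0.75)]
print(pareto_front(vocabs))   # GV3 is dominated by GV2 and drops out
```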

Vision-Based Gesture Recognition

Person-Independent 3D Sign Language Recognition BIBAFull-Text 69-80
  Jeroen Lichtenauer; Gineke A. ten Holt; Marcel J. T. Reinders; Emile A. Hendriks
In this paper, we present a person-independent 3D system for judging the correctness of a sign. The system is camera-based, using computer vision techniques to track the hand and extract features. 3D coordinates of the hands and other features are calculated from stereo images. The features are then modeled statistically, and automatic feature selection is used to build the classifiers. Each classifier is meant to judge the correctness of one sign. We tested our approach using a 120-sign vocabulary and 75 different signers. Overall, a true positive rate of 96.5% at a false positive rate of 3.5% is achieved. The system's performance in a real-world setting largely agreed with human expert judgement.
Skin Color Profile Capture for Scale and Rotation Invariant Hand Gesture Recognition BIBAKFull-Text 81-92
  Rafael Bastos; Miguel Sales Dias
This paper presents a new approach to real-time scale and rotation invariant hand pose detection, which is based on a technique for computing the best hand skin color segmentation map. This segmentation map, a vector entity referred to as a "skin profile", is used during an online hand gesture calibration stage to enable correct classification of skin regions. Subsequently, we construct efficient and reliable scale and rotation invariant hand pose gesture descriptors, by introducing an innovative technique, referred to as "oriented gesture descriptors". Finally, hand pose gesture recognition is computed using a template matching technique which is luminance invariant.
Keywords: Gesture Tracking; Skin Color Tracking; Image Processing; Feature Matching; Scale and Rotation Invariant Template
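One standard way to obtain the luminance invariance mentioned in the abstract is zero-mean normalized cross-correlation; the sketch below uses that, though the authors' exact matching technique may differ.

```python
# Luminance-invariant template matching via zero-mean normalized
# cross-correlation (NCC), searched brute-force over the image.
import numpy as np

def ncc(patch, template):
    """Zero-mean NCC in [-1, 1]; unaffected by gain/offset lighting changes."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom > 1e-12 else 0.0

def best_match(image, template):
    """Return the (y, x) of the best-scoring template location."""
    H, W = template.shape
    scores = {(y, x): ncc(image[y:y + H, x:x + W], template)
              for y in range(image.shape[0] - H + 1)
              for x in range(image.shape[1] - W + 1)}
    return max(scores, key=scores.get)
```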
Robust Tracking for Processing of Videos of Communication's Gestures BIBAKFull-Text 93-101
  Frédérick Gianni; Christophe Collet; Patrice Dalle
This paper presents an image processing method used in a mono-vision system to study semiotic gestures. We present a robust method for tracking the hands and face of a person performing gestural communication, including sign language communication. A skin model is used to compute the observation density as a skin-colour distribution in the image. Three particle filter trackers are implemented, with re-sampling and annealed update steps to increase their robustness to occlusion and to high acceleration variations of body parts. Evaluations of the trackers with and without these enhancements show the improvement they bring.
Keywords: Hands and Head Tracking; Skin Colour Segmentation; Particle Filter; Sign Languages; Video Analysis
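A bare-bones single-region particle filter in the spirit of the trackers described above, assuming a per-pixel skin-probability image as the observation model; the random-walk motion model is an assumption, and the annealed update step is omitted for brevity.

```python
# One predict / weight / resample cycle of a skin-colour particle filter.
import numpy as np

rng = np.random.default_rng(0)

def skin_likelihood(positions, skin_prob_map):
    """Look up a per-pixel skin probability image at particle positions."""
    h, w = skin_prob_map.shape
    xy = np.clip(positions.astype(int), 0, [w - 1, h - 1])
    return skin_prob_map[xy[:, 1], xy[:, 0]] + 1e-12

def step(particles, skin_prob_map, motion_std=8.0):
    """particles: (N, 2) positions. Returns resampled particles and the
    weighted-mean position estimate."""
    # predict: random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # weight by the skin-colour observation density
    weights = skin_likelihood(particles, skin_prob_map)
    weights /= weights.sum()
    estimate = weights @ particles
    # resample (multinomial here; systematic resampling reduces variance)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate
```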
Representation of Human Postures for Vision-Based Gesture Recognition in Real-Time BIBAKFull-Text 102-107
  Antoni Jaume-i-Capó; Javier Varona; Francisco J. Perales López
Using computer vision to sense and perceive the user and his or her actions in a Human-Computer Interaction context is often referred to as a Vision-Based Interface. In this paper, we present a Vision-Based Interface driven by user gestures. Prior to recognition, the user's movements are obtained through a real-time vision-based motion capture system, which estimates the user's 3D body joint positions in real time. By means of an appropriate representation of limb orientations based on temporal histograms, we present a gesture recognition scheme that also works in real time. This scheme has been tested by controlling a classic computer videogame, showing excellent performance in on-line classification; its computational simplicity also makes a real-time learning phase possible.
Keywords: Vision-Based Gesture Recognition
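A minimal sketch of the temporal-histogram representation suggested by the abstract: per-frame limb orientations quantized into angular bins and accumulated over a sliding window. Bin count and window length are illustrative assumptions.

```python
# Build one normalized orientation histogram per frame over a trailing
# window of limb-orientation angles.
import numpy as np

def orientation_histograms(angles, n_bins=12, window=30):
    """angles: (T,) limb orientation in radians, in [-pi, pi]."""
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(angles, bins) - 1, 0, n_bins - 1)
    hists = np.zeros((len(angles), n_bins))
    for t in range(len(angles)):
        window_idx = idx[max(0, t - window + 1):t + 1]
        hists[t] = np.bincount(window_idx, minlength=n_bins)
        hists[t] /= hists[t].sum()
    return hists
```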
Enhancing a Sign Language Translation System with Vision-Based Features BIBAFull-Text 108-113
  Philippe Dreuw; Daniel Stein; Hermann Ney
In automatic sign language translation, one of the main problems is the usage of spatial information in sign language and its proper representation and translation, e.g. the handling of spatial reference points in the signing space. Such locations are encoded at static points in signing space as spatial references for motion events.
   We present a new approach starting from a large-vocabulary speech recognition system which is able to recognize sentences of continuous sign language speaker-independently. The manual features obtained from the tracking are passed to the statistical machine translation system to improve its accuracy. On a publicly available benchmark database, we achieve competitive recognition performance and can similarly improve the translation performance by integrating the tracking features.

Sign Language Processing

Generating Data for Signer Adaptation BIBAKFull-Text 114-121
  Chunli Wang; Xilin Chen; Wen Gao
In sign language recognition (SLR), one problem is signer adaptation. Unlike spoken language, sign language has a large number of "phonemes", so it is not convenient to collect enough data to adapt the system to a new signer. A method of signer adaptation with little data for continuous-density hidden Markov models (HMMs) is presented. Firstly, the hand shapes, positions, and orientations that compose all sign words are extracted with a clustering algorithm and regarded as basic units. Based on a small number of sign words that include these basic units, adaptation data for all sign words are generated. Statistics are gathered from the generated data and used to calculate a linear regression-based transformation for the mean vectors. To verify the effectiveness of the proposed method, experiments are carried out on a vocabulary of 350 sign words in Chinese Sign Language (CSL). All basic units of hand shape, position, and orientation are found. With these units, we generate adaptation data for the 350 sign words. Experimental results demonstrate that the proposed method performs similarly to using the original samples of the 350 sign words as adaptation data.
Keywords: Sign Language Recognition; Adaptation; MLLR
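The "linear regression-based transformation for the mean vectors" is the MLLR idea; the toy version below fits mu' = A @ mu + b by unweighted least squares over mean pairs. Real MLLR weights the regression by state occupancy statistics, which is omitted here.

```python
# MLLR-style adaptation of Gaussian mean vectors: estimate an affine
# transform from (old mean, adaptation mean) pairs and apply it.
import numpy as np

def estimate_mllr(mus, adapted_mus):
    """mus, adapted_mus: (N, d) arrays of mean vectors before/after.
    Solve for W = [A | b] minimizing ||adapted - [mu; 1] @ W||^2."""
    N, d = mus.shape
    X = np.hstack([mus, np.ones((N, 1))])                # (N, d+1) extended means
    W, *_ = np.linalg.lstsq(X, adapted_mus, rcond=None)  # (d+1, d)
    return W.T                                           # (d, d+1)

def apply_mllr(W, mu):
    """Adapted mean: A @ mu + b, packed as W @ [mu; 1]."""
    return W @ np.append(mu, 1.0)
```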
A Qualitative and Quantitative Characterisation of Style in Sign Language Gestures BIBAFull-Text 122-133
  Alexis Heloir; Sylvie Gibet
This paper addresses the identification and representation of the variations induced by style for the synthesis of realistic and convincing expressive sign language gesture sequences. A qualitative and quantitative comparison of styled gesture sequences is made. This comparison leads to the identification of temporal, spatial, and structural processes that are described in a theoretical model of sign language phonology. Insights raised by this study are then considered in the more general framework of gesture synthesis in order to enhance existing gesture specification systems.
Sequential Belief-Based Fusion of Manual and Non-manual Information for Recognizing Isolated Signs BIBAKFull-Text 134-144
  Oya Aran; Thomas Burger; Alice Caplier; Lale Akarun
This work aims to recognize signs that have both manual and non-manual components by providing a sequential belief-based fusion mechanism. We propose a methodology based on belief functions for fusing extracted manual and non-manual features in a sequential two-step approach. Belief functions based on the likelihoods of the hidden Markov models are used to decide whether there is uncertainty in the decision of the first step and to identify the uncertainty clusters. Only if there is uncertainty do we proceed to the second step, which utilizes the non-manual features within the identified cluster.
Keywords: Sign language recognition; hand gestures; head gestures; non-manual signals; hidden Markov models; belief functions
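A simplified sketch of the sequential two-step decision: the margin test standing in for the belief-function uncertainty detection is my assumption, not the paper's construction.

```python
# Step 1 decides on manual HMM likelihoods alone; if the top candidates
# are too close, step 2 re-ranks that uncertainty cluster by non-manual cues.

def two_step_decision(manual_loglik, nonmanual_loglik, margin=1.0):
    """manual_loglik, nonmanual_loglik: dict sign -> HMM log-likelihood."""
    ranked = sorted(manual_loglik, key=manual_loglik.get, reverse=True)
    best, second = ranked[0], ranked[1]
    if manual_loglik[best] - manual_loglik[second] >= margin:
        return best                      # confident: stop after step 1
    cluster = [s for s in ranked
               if manual_loglik[best] - manual_loglik[s] < margin]
    return max(cluster, key=nonmanual_loglik.get)
```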
Gesture Modelling for Linguistic Purposes BIBAFull-Text 145-150
  Guillaume J.-L. Olivrin
The study of sign languages attempts to create a coherent model that describes, within a linguistic framework, the expressive nature of signs conveyed in gestures. 3D gesture modelling offers precise annotation and representation of linguistic constructs and can become an entry mechanism for sign languages. This paper presents the requirements for building an input method editor for sign languages and initial experiments using signing-avatar input interfaces. The system currently saves and annotates 3D gestures on humanoid models with linguistic labels. Results show that the annotating prototype can in turn be used to ease and guide the task of 3D gesture modelling.

Gesturing with Tangible Interfaces and in Virtual and Augmented Reality

Automatic Classification of Expressive Hand Gestures on Tangible Acoustic Interfaces According to Laban's Theory of Effort BIBAKFull-Text 151-162
  Antonio Camurri; Corrado Canepa; Simone Ghisio; Gualtiero Volpe
Tangible Acoustic Interfaces (TAIs) exploit the propagation of sound in physical objects in order to localize touch positions and to analyse the user's gestures on the object. Designing and developing TAIs consists of exploring how physical objects, augmented surfaces, and spaces can be transformed into tangible-acoustic embodiments of natural, seamless, unrestricted interfaces. Our research focuses on Expressive TAIs, i.e., TAIs able to process expressive user gestures and provide users with natural multimodal interfaces that fully exploit expressive, emotional content. This paper presents a concrete example of the analysis of expressive gesture in TAIs: hand gestures on a TAI surface are classified according to the Space and Time dimensions of Rudolf Laban's Theory of Effort. The research started in the EU-IST Project TAI-CHI (Tangible Acoustic Interfaces for Computer-Human Interaction) and continues in the EU-ICT Project SAME (Sound and Music for Everyone, Everyday, Everywhere, Every way, www.sameproject.eu). Expressive gesture analysis and multimodal and cross-modal processing are achieved in the new EyesWeb XMI open platform (available at www.eyesweb.org) by means of a new version of the EyesWeb Expressive Gesture Processing Library.
Keywords: expressive gesture; tangible acoustic interfaces; natural interfaces; multimodal interactive systems; multimodal analysis of expressive movement
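As a rough illustration, a hand trajectory on the surface can be rated on Laban's Space dimension via a directness index and on the Time dimension via how impulsive its velocity profile is; the features and thresholds below are assumptions on my part, not the authors' classifier.

```python
# Rate a touch trajectory on Laban's Space (direct/indirect) and Time
# (sudden/sustained) effort dimensions from simple kinematic features.
import numpy as np

def laban_space_time(traj, dt=0.01):
    """traj: (T, 2) touch positions on the surface over time."""
    steps = np.diff(traj, axis=0)
    path_len = np.linalg.norm(steps, axis=1).sum()
    straight = np.linalg.norm(traj[-1] - traj[0])
    directness = straight / max(path_len, 1e-9)             # 1 = direct
    speed = np.linalg.norm(steps, axis=1) / dt
    impulsiveness = speed.max() / max(speed.mean(), 1e-9)   # high = sudden
    space = "direct" if directness > 0.8 else "indirect"
    time = "sudden" if impulsiveness > 2.5 else "sustained"
    return space, time
```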
Implementing Distinctive Behavior for Conversational Agents BIBAFull-Text 163-174
  Maurizio Mancini; Catherine Pelachaud
We aim to define conversational agents that exhibit distinctive behavior. To this end, we provide a small set of parameters for defining behavior profiles and leave to the system the task of animating the agents. Our approach is to manipulate the behavior tendencies of the agents depending on their communicative intention and emotional state. In this paper we define the concepts of Baseline and Dynamicline. The Baseline of an agent is a set of fixed parameters that represent the agent's personalized behavior, while the Dynamicline is a set of parameters derived from both the Baseline and the current communicative intention and emotional state.
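A minimal data-structure sketch of the Baseline/Dynamicline distinction; the parameter names (a subset of common expressivity dimensions) and the additive modulation rule are illustrative assumptions, not the paper's definitions.

```python
# Fixed Baseline per agent; momentary Dynamicline derived from it plus a
# modulation driven by communicative intention / emotional state.
from dataclasses import dataclass

@dataclass
class Baseline:
    """Fixed per-agent behaviour tendencies, each in [-1, 1]."""
    spatial_extent: float
    temporal_extent: float
    power: float
    fluidity: float

def dynamicline(base: Baseline, modulation: dict) -> dict:
    """Derive the momentary Dynamicline from the Baseline and a modulation."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return {k: clamp(getattr(base, k) + modulation.get(k, 0.0))
            for k in vars(base)}

calm_agent = Baseline(spatial_extent=-0.4, temporal_extent=-0.2,
                      power=-0.5, fluidity=0.6)
print(dynamicline(calm_agent, {"power": 0.8}))   # e.g. an emphatic intention
```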
Using Hand Gesture and Speech in a Multimodal Augmented Reality Environment BIBAKFull-Text 175-180
  Miguel Sales Dias; Rafael Bastos; João Fernandes; João Tavares; Pedro Santos
In this work we describe a 3D authoring tool that takes advantage of multimodal interfaces such as gesture and speech. The tool supports real-time Augmented Reality aimed at aiding the tasks of interior architects and designers, and is intended as an alternative to traditional techniques. The main benefit of a multimodal augmented-reality system is that it provides a more transparent, flexible, efficient, and expressive means of human-computer interaction.
Keywords: Gesture Tracking; Augmented Reality; 3D Authoring Tool; Speech; Multimodal interfaces
A Virtual Reality-Based Framework for Experiments on Perception of Manual Gestures BIBAFull-Text 181-186
  Sebastian Ullrich; Jakob Valvoda; Marc Wolter; Gisela Fehrmann; Isa Werth; Ludwig Jaeger; Torsten Kuhlen
This work contributes an integrated and flexible approach to sign language processing in virtual environments that allows for interactive experimental evaluations with high ecological validity. Initial steps deal with real-time tracking and processing of manual gestures. Motion data are stereoscopically rendered in immersive virtual environments with varying spatial and representational configurations. Besides flexibility, the most important aspect is the seamless integration with VR-based neuropsychological experiment software. Ongoing studies facilitated by this system contribute to the understanding of the cognition of sign language. The system benefits experimenters because its controlled, immersive three-dimensional environment enables experiments with visual depth perception that cannot be achieved with video presentations.
Processing Iconic Gestures in a Multimodal Virtual Construction Environment BIBAKFull-Text 187-192
  Christian Fröhlich; Peter Biermann; Marc Erich Latoschik; Ipke Wachsmuth
In this paper we describe how coverbal iconic gestures can be used to express shape-related references to objects in a Virtual Construction Environment. Shape information is represented using Imagistic Description Trees (IDTs), an extended semantic representation which includes relational information (as well as numerical data) about the objects' spatial features. The IDTs are generated online according to the trajectory of the user's hand movements when the system is instructed to select an existing object or to create a new one. A tight integration of the semantic information into the objects' data structures allows this information to be accessed via so-called semantic entities, which serve as interfaces during the multimodal analysis and integration process.
Keywords: Gesture Representation; Iconic Gestures; Virtual Construction; Multimodal Human-Computer Interaction
Analysis of Emotional Gestures for the Generation of Expressive Copying Behaviour in an Embodied Agent BIBAKFull-Text 193-198
  Ginevra Castellano; Maurizio Mancini
This paper presents a system capable of acquiring input from a video camera, processing information related to the expressivity of human movement and generating expressive copying behaviour of an Embodied Agent. We model a bi-directional communication between user and agent based on real-time analysis of movement expressivity and generation of expressive copying behaviour: while the user is moving, the agent responds with a gesture that exhibits the same expressive characteristics. An evaluation study based on a perceptual experiment with participants showed the effectiveness of the designed interaction.
Keywords: gesture expressivity; emotion; embodied agent
Gestures to Intuitively Control Large Displays BIBAFull-Text 199-204
  Wim Fikkert; Paul E. van der Vet; Han Rauwerda; Timo M. Breit; Anton Nijholt
Large displays are highly suited to supporting discussions in empirical science: they can present project results on a large digital surface to feed the discussion. This paper describes our approach to closely involving multidisciplinary omics scientists in the design of intuitive display control through hand gestures. The interface is based on a gesture repertoire; we describe how this repertoire was designed based on observations of, and scripted task experiments with, omics scientists.

Gesture for Music and Performing Arts

Geometry and Effort in Gestural Renderings of Musical Sound BIBAKFull-Text 205-215
  Rolf Inge Godøy
As may be seen at concerts and in various everyday listening situations, people often make spontaneous gestures when listening to music. We believe these gestures are interesting to study because they may reveal important features of musical experience. In particular, hand movements may give us information on what features are perceived as salient by listeners. Based on various current ideas on embodied cognition, the aim of this paper is to argue that gestures are integral to music perception, and to present research in support of this. A conceptual model of separating geometry and effort is presented in order to better understand the variety of music-related gestures we may observe, leading up to some ideas on how to apply this conceptual model in present and future research.
Keywords: Music; perception; gestures; geometry; effort; perception-action; key-postures; movement
String Bowing Gestures at Varying Bow Stroke Frequencies: A Case Study BIBAFull-Text 216-226
  Nicolas H. Rasamimanana; Delphine Bernardin; Marcelo M. Wanderley; Frédéric Bevilacqua
The understanding of different bowing strategies can provide key concepts for the modelling of music performance. We report here an exploratory study of bowing gestures for a viola player and a violin player in the case of bow strokes performed at different frequencies. Bow and arm movements as well as bow pressure on the strings were measured with a 3D optical motion capture system and a custom pressure sensor, respectively. As bow stroke frequency, defined as the inverse of the time between two strokes, increased, players used different bowing movements, as indicated by measurements of bow velocity and arm joint angles. First, bow velocity profiles shift abruptly from a rectangular shape to a sinusoidal shape. Second, while bow velocity is sinusoidal, an additional change is observed: the relative phase of the wrist and elbow shifts from out-of-phase to in-phase at the highest frequencies, indicating a possible change in the players' coordinative pattern. Finally, we discuss the fact that only small differences are found in the sound, while significant changes occur in the velocity/acceleration profiles.
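The out-of-phase to in-phase shift of wrist and elbow can be quantified as a continuous relative phase; the Hilbert-transform computation below is a common choice for this, not necessarily the paper's method.

```python
# Instantaneous relative phase between two joint-angle time series.
import numpy as np
from scipy.signal import hilbert

def relative_phase(angle_a, angle_b):
    """Phase difference (radians) of two zero-meaned signals.
    Values near 0 indicate in-phase motion, near +/-pi anti-phase."""
    pa = np.angle(hilbert(angle_a - angle_a.mean()))
    pb = np.angle(hilbert(angle_b - angle_b.mean()))
    return np.angle(np.exp(1j * (pa - pb)))   # wrap to (-pi, pi]

t = np.linspace(0, 4, 400)
wrist = np.sin(2 * np.pi * 2 * t)
elbow = np.sin(2 * np.pi * 2 * t + np.pi)    # anti-phase example
print(np.abs(relative_phase(wrist, elbow)).mean())   # close to pi
```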
Gesture Control of Sound Spatialization for Live Musical Performance BIBAFull-Text 227-238
  Mark T. Marshall; Joseph Malloch; Marcelo M. Wanderley
This paper presents the development of methods for gesture control of sound spatialization. It provides a comparison of seven popular software spatialization systems from a control point of view, and examines human-factors issues relevant to gesture control. An effort is made to reconcile these two design- and parameter-spaces, and draw useful conclusions regarding likely successful mapping strategies. Lastly, examples are given using several different gesture-tracking and motion capture systems controlling various parameters of the spatialization system.
Validation of an Algorithm for Segmentation of Full-Body Movement Sequences by Perception: A Pilot Experiment BIBAKFull-Text 239-244
  Donald Glowinski; Antonio Camurri; Carlo Chiorri; Barbara Mazzarino; Gualtiero Volpe
This paper presents a pilot experiment on the perceptual validation by human subjects of a motion segmentation algorithm, i.e., an algorithm for automatically segmenting a motion sequence (e.g., a dance fragment) into a collection of pause and motion phases. Perceptual validation of motion and gesture analysis algorithms is an important issue in the development of multimodal interactive systems in which human full-body movement and expressive gesture are a major input channel. The experiment discussed is part of broader research at the DIST-InfoMus Lab investigating the non-verbal mechanisms of communication that involve human movement and gesture as primary conveyors of expressive emotional content.
Keywords: expressive gesture; motion segmentation; motion feature
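A bare-bones version of the kind of pause/motion segmentation being validated: thresholding a motion-energy signal with hysteresis and a minimum phase duration. All thresholds below are illustrative assumptions, not the algorithm under evaluation.

```python
# Segment a quantity-of-motion signal into pause and motion phases.
import numpy as np

def segment_motion(energy, on=0.15, off=0.05, min_len=10):
    """energy: (T,) non-negative quantity-of-motion signal.
    Returns a boolean array: True where a motion phase is detected."""
    moving = np.zeros(len(energy), dtype=bool)
    state = False
    for t, e in enumerate(energy):
        state = e > on if not state else e > off   # hysteresis
        moving[t] = state
    # drop motion phases shorter than min_len frames
    edges = np.flatnonzero(np.diff(np.r_[0, moving.astype(int), 0]))
    for start, end in zip(edges[::2], edges[1::2]):
        if end - start < min_len:
            moving[start:end] = False
    return moving
```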

Gesture for Therapy and Rehabilitation

Signs Workshop: The Importance of Natural Gestures in the Promotion of Early Communication Skills of Children with Developmental Disabilities BIBAKFull-Text 245-254
  Ana Margarida P. Almeida; Teresa Condeço; Fernando Ramos; Álvaro Sousa; Luísa Cotrim; Sofia Macedo; Miguel Palha
This article emphasises the importance of natural gestures and describes the framework and the development process of the "Signs Workshop" CD-ROM, which is a multimedia application for the promotion of early communication skills of children with developmental disabilities. Signs Workshop CD-ROM was created in the scope of Down's Comm Project, which was financed by the Calouste Gulbenkian Foundation, and is the result of a partnership between UNICA (Communication and Arts Research Unit of the University of Aveiro) and the Portuguese Down Syndrome Association (APPT21/Differences).
Keywords: language and communication skills; augmented communication systems; total communication (simultaneous use of signs and language); multimedia production
The Ergonomic Analysis of the Workplace of Physically Disabled Individuals BIBAKFull-Text 255-260
  Matthieu Aubry; Frédéric Julliard; Sylvie Gibet
This paper presents a new gesture-based approach to the ergonomic evaluation of the workplaces of the physically handicapped. After a brief overview of tools using interactive simulation to perform ergonomic analysis, we describe the requirements for performing ergonomic analyses for the disabled. We then propose a framework unifying the synthesis and analysis of motions and integrating a model of disabilities based on constraints. Finally, we present preliminary results from the implementation of a constraint-enabled kinematic controller.
Keywords: Ergonomics; Interactive simulation; Gesture; Inverse kinematics
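A toy 2D sketch of what a "constraint-enabled kinematic controller" could look like: CCD inverse kinematics in which each joint's range can be narrowed to model a motion impairment. Entirely illustrative; the paper's controller and constraint model are not detailed in the abstract.

```python
# Planar CCD inverse kinematics with per-joint angle limits.
import numpy as np

def fk(lengths, angles):
    """Forward kinematics of a planar chain; returns joint positions."""
    pts, pos, heading = [np.zeros(2)], np.zeros(2), 0.0
    for L, a in zip(lengths, angles):
        heading += a
        pos = pos + L * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pos)
    return np.array(pts)

def ccd_ik(lengths, angles, target, limits, iters=50):
    """Cyclic Coordinate Descent with per-joint (lo, hi) angle limits."""
    angles = np.array(angles, dtype=float)
    for _ in range(iters):
        for j in reversed(range(len(angles))):
            pts = fk(lengths, angles)
            to_end, to_tgt = pts[-1] - pts[j], target - pts[j]
            delta = (np.arctan2(to_tgt[1], to_tgt[0])
                     - np.arctan2(to_end[1], to_end[0]))
            lo, hi = limits[j]
            angles[j] = np.clip(angles[j] + delta, lo, hi)  # impairment model
    return angles

# e.g. a restricted elbow: second joint limited to [0, 0.5] rad
arm = ccd_ik([1.0, 1.0], [0.1, 0.1], np.array([1.2, 0.8]),
             limits=[(-3.0, 3.0), (0.0, 0.5)])
```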

Gesture in Mobile Computing and Usability Studies

Mnemonical Body Shortcuts for Interacting with Mobile Devices BIBAKFull-Text 261-271
  Tiago João Vieira Guerreiro; Ricardo Gamboa; Joaquim A. Jorge
Mobile devices' user interfaces share similarities with the traditional interfaces of desktop computers, which are highly problematic when used in mobile contexts. Gesture recognition appears as an important means of providing suitable on-the-move usability in mobile interaction. We present a body-space-based approach to improving mobile device interaction and on-the-move performance. The human body is presented as a rich repository of meaningful relations that are always available to interact with. Body-based gestures allow the user to interact naturally with mobile devices, with no movement limitations. Preliminary studies using RFID technology validated the mnemonical body shortcuts concept as a new mobile interaction mechanism. Finally, inertial sensing prototypes were developed and evaluated, proving suitable and efficient for mobile interaction and achieving a good recognition rate.
Keywords: Gestures; Mnemonics; Shortcuts; RFID; Accelerometer; Mobile
The Effects of the Gesture Viewpoint on the Students' Memory of Words and Stories BIBAKFull-Text 272-281
  Giorgio Merola
The goal of this work is to estimate the effects of a teacher's iconic gestures on students' memory of words and short stories. Some evidence suggests that iconic gestures help the listener, but it is unclear which elements make gestures more or less useful. Following McNeill's observation that children produce many more Character Viewpoint gestures than Observer Viewpoint ones, we hypothesize that they also understand and remember better the words accompanied by such gestures. The results of two experimental studies showed that iconic gestures helped students remember words and tales better, and that younger students performed better in memory tasks when their teacher used Character Viewpoint gestures.
Keywords: iconic; Character Viewpoint; effects; memory; students