HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,245,907
director@hcibib.org
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: VazquezAlvarez_Y* Results: 11 Sorted by: Date
e-Seesaw: A Tangible, Ludic, Parent-child, Awareness System
Late-Breaking Works: Games & Playful Interaction
Sun, Yingze / Aylett, Matthew P. / Vazquez-Alvarez, Yolanda
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.1821-1827
Summary: In modern China, the pace of life is quickening and working pressure is rising, often straining families and family interaction. Twenty-three pairs of working parents and their children were asked what they saw as their main communication challenges and how they currently used communication technology to stay in touch. The mobile phone was the dominant form of communication, despite being poorly rated by children as a way of enhancing a sense of connection and love. Parents and children were presented with a series of design probes to investigate how current communication technology might be supported or enhanced with a tangible and playful awareness system. One of the designs, the e-Seesaw, was selected and evaluated in lab and home settings. Participant reaction was positive, with the design provoking a novel perspective on remote parent-child interaction and allowing even very young children to both initiate and control communication.

Interactive Radio: A New Platform for Calm Computing
WIP Theme: Ubicomp, Robots and Wearables
Aylett, Matthew P. / Vazquez-Alvarez, Yolanda / Baillie, Lynne
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.2085-2090
Summary: Interactive radio is proposed as a platform for Weiser's calm computing vision. An evaluation of CereProc's MyMyRadio is presented as a case study to highlight the potential and challenges of an interactive radio approach: the difficulty of transitioning between passive and active modes of interaction, and the challenge of designing such services. The evaluation showed: 1) A higher workload for MyMyRadio for active tasks compared to default applications (e.g. Facebook app); 2) No significant difference in workload for passive tasks (e.g. listening to audio rendered RSS updates vs Browser app); 3) A higher workload when listening to music within MyMyRadio vs iTunes; and 4) A preference for RSS feed content compared to content from social media. We conclude by discussing the potential of interactive radio as a platform for pervasive eyes-free services.

Face-Based Automatic Personality Perception
Posters 3
Al Moubayed, Noura / Vazquez-Alvarez, Yolanda / McKay, Alex / Vinciarelli, Alessandro
Proceedings of the 2014 ACM International Conference on Multimedia 2014-11-03 p.1153-1156
Summary: Automatic Personality Perception is the task of automatically predicting the personality traits people attribute to others. This work presents experiments where such a task is performed by mapping facial appearance into the Big-Five personality traits, namely Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. The experiments are performed over the pictures of the FERET corpus, originally collected for biometrics purposes, for a total of 829 individuals. The results show that it is possible to automatically predict whether a person is perceived to be above or below the median with an accuracy close to 70 percent (depending on the trait).
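The above-/below-median formulation described in this abstract reduces to a binary classification per trait. The sketch below illustrates that framing only; the feature extraction and the nearest-centroid classifier are hypothetical stand-ins, not the authors' pipeline:

```python
import numpy as np

# Hypothetical sketch: binarize trait ratings at the median, then
# classify face-feature vectors with a simple nearest-centroid rule.
TRAITS = ["Openness", "Conscientiousness", "Extraversion",
          "Agreeableness", "Neuroticism"]

def binarize(scores):
    """Label each rating 1 if above the median, else 0."""
    scores = np.asarray(scores, dtype=float)
    return (scores > np.median(scores)).astype(int)

def nearest_centroid_predict(X_train, y_train, X_test):
    """Assign each test vector the label of the closer class centroid."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train)
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    d0 = np.linalg.norm(X_test - c0, axis=1)
    d1 = np.linalg.norm(X_test - c1, axis=1)
    return (d1 < d0).astype(int)
```

In this framing, chance accuracy is 50 percent, so the reported ~70 percent reflects a real, if modest, signal in facial appearance.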

None of a CHInd: relationship counselling for HCI and speech technology
alt.chi: limits and futures
Aylett, Matthew P. / Kristensson, Per Ola / Whittaker, Steve / Vazquez-Alvarez, Yolanda
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.749-760
Summary: It's an old story. A relationship built on promises turns to bitterness and recriminations. But speech technology has changed: Yes, we know we hurt you, we know things didn't turn out the way we hoped, but can't we put the past behind us? We need you, we need design. And you? You need us. How can you fulfill a dream of pervasive technology without us? So let's look at what went wrong. Let's see how we can fix this thing. For the sake of little Siri, she needs a family. She needs to grow into more than a piece of PR, and maybe, if we could only work out our differences, just maybe, think of the magic we might make together.

Multilevel auditory displays for mobile eyes-free location-based interaction
Works-in-progress
Vazquez-Alvarez, Yolanda / Aylett, Matthew P. / Brewster, Stephen A. / von Jungenfeld, Rocio / Virolainen, Antti
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.1567-1572
Summary: This paper explores the use of multilevel auditory displays to enable eyes-free mobile interaction with location-based information in a conceptual art exhibition space. Multilevel auditory displays enable user interaction with concentrated areas of information. However, it is necessary to consider how to present the auditory streams without overloading the user. We present an initial study in which a top-level exocentric sonification layer was used to advertise information present in a gallery-like space. Then, in a secondary interactive layer, three different conditions were evaluated that varied in the presentation (sequential versus simultaneous) and spatialisation (non-spatialised versus egocentric spatialisation) of multiple auditory sources. Results show that 1) participants spent significantly more time interacting with spatialised displays, 2) there was no evidence that a switch from an exocentric to an egocentric display increased workload or lowered satisfaction, and 3) there was no evidence that simultaneous presentation of spatialised Earcons in the secondary display increased workload.

Shaking the dead: multimodal location based experiences for un-stewarded archaeological sites
Location-based interaction
McGookin, David / Vazquez-Alvarez, Yolanda / Brewster, Stephen / Bergstrom-Lehtovirta, Joanna
Proceedings of the 7th Nordic Conference on Human-Computer Interaction 2012-10-14 p.199-208
Summary: We consider how visits to un-stewarded historical and archaeological sites -- those that are unstaffed and have few visible archaeological remains -- can be augmented with multimodal interaction to create more engaging experiences. We developed and evaluated a mobile application that allowed multimodal exploration of a rural Roman fort. Sixteen primary school children used the application to explore the fort. Issues, including the influence of visual remains, were identified and compared with findings from a second study with eight users at a separate site. From these, we determined key design implications around the importance of physical space, group work and interaction with the auditory data.

The effect of clothing on thermal feedback perception
Poster session
Halvey, Martin / Wilson, Graham / Vazquez-Alvarez, Yolanda / Brewster, Stephen A. / Hughes, Stephen A.
Proceedings of the 2011 International Conference on Multimodal Interfaces 2011-11-14 p.217-220
Summary: Thermal feedback is a new area of research in HCI. To date, studies investigating thermal feedback for interaction have focused on virtual reality, abstract uses of thermal output or on use in highly controlled lab settings. This paper is one of the first to look at how environmental factors, in our case clothing, might affect user perception of thermal feedback and therefore usability of thermal feedback. We present a study into how well users perceive hot and cold stimuli on the hand, thigh and waist. Evaluations were carried out with cotton and nylon between the thermal stimulators and the skin. Results showed that the presence of clothing requires higher intensity thermal changes for detection but that these changes are more comfortable than direct stimulation on skin.

"Can we work this out?": an evaluation of remote collaborative interaction in a mobile shared environment
Interacting off-screen, on-site and remote
Trendafilov, Dari / Vazquez-Alvarez, Yolanda / Lemmelä, Saija / Murray-Smith, Roderick
Proceedings of the 13th Conference on Human-computer interaction with mobile devices and services 2011-08-30 p.499-502
Summary: We describe a novel dynamic method for collaborative virtual environments designed for mobile devices and evaluated in a mobile context. Participants interacted in pairs, remotely and through touch while walking, in three different feedback conditions: 1) visual, 2) audio-tactile, 3) spatial audio-tactile. Results showed the visual baseline system provided higher shared awareness, higher efficiency and a strong learning effect. However, although considerably more challenging, the eyes-free systems still allowed participants to build joint awareness in remote collaborative environments, particularly the spatial audio one. These results help us better understand the potential of different feedback mechanisms in the design of future mobile collaborative environments.

Eyes-free multitasking: the effect of cognitive load on mobile spatial audio interfaces
Mobile issues
Vazquez-Alvarez, Yolanda / Brewster, Stephen A.
Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011-05-07 v.1 p.2173-2176
Summary: As mobile devices increase in functionality, users perform more tasks when on the move. Spatial audio interfaces offer a solution for eyes-free interaction. However, such interfaces face a number of challenges when supporting multiple and simultaneous tasks, namely: 1) interference amongst multiple audio streams, and 2) the constraints of cognitive load. We present a comparative study of spatial audio techniques evaluated in a divided- and selective-attention task. A podcast was used for high cognitive load (divided-attention) and classical music for low cognitive load (selective-attention), while interacting with an audio menu. Results showed that spatial audio techniques were preferred when cognitive load was kept low, while a baseline technique using an interruptible single audio stream was significantly less preferred. Conversely, when cognitive load was increased the preferences reversed. Thus, given an appropriate task structure, spatial techniques offer a means of designing effective audio interfaces to support eyes-free mobile multitasking.

Designing spatial audio interfaces for mobile devices: supporting multitasking and context information
Doctoral consortium
Vazquez-Alvarez, Yolanda
Proceedings of the 12th Conference on Human-computer interaction with mobile devices and services 2010-09-07 p.481-482
Keywords: 3D audio, audio interfaces, context information, mobile devices, multitasking
Summary: Audio interfaces are becoming more important due to the increasing functionality of today's mobile devices. As a result, more complex audio-driven eyes-free interactions are required when mobile. The aim of my work is to evaluate 3D audio techniques used to implement auditory displays that support multitasking and access to context information in interactive mobile environments.

Investigating background & foreground interactions using spatial audio cues
Spotlight on work in progress session 1
Vazquez-Alvarez, Yolanda / Brewster, Stephen
Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009-04-04 v.2 p.3823-3828
Keywords: 3d audio, audio cues, background and foreground interactions, evaluation, multiple audio streams
Summary: Audio is a key feedback mechanism in eyes-free and mobile computer interaction. Spatial audio, which allows us to localize a sound source in a 3D space, can offer a means of altering focus between audio streams as well as increasing the richness and differentiation of audio cues. However, the implementation of spatial audio on mobile phones is a recent development. Therefore, a calibration of this new technology is a requirement for any further spatial audio research. In this paper we report an evaluation of the spatial audio capabilities supported on a Nokia N95 8GB mobile phone. Participants were able to significantly discriminate between five audio sources on the frontal horizontal plane. Results also highlighted possible subject variation caused by earedness and handedness. We then introduce the concept of audio minimization and describe work in progress using the Nokia N95's 3D audio capability to implement and evaluate audio minimization in an eyes-free mobile environment.
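Several of the spatial-audio entries above (the N95 calibration study, the eyes-free multitasking work) place multiple sound sources at azimuths on the frontal horizontal plane. As a rough illustration only — the N95 work used the phone's built-in 3D audio capability, not this code — such placement can be approximated with constant-power stereo panning:

```python
import numpy as np

def pan(mono, azimuth_deg):
    """Constant-power pan of a mono signal: azimuth -90 (hard left) to +90 (hard right)."""
    theta = np.radians((azimuth_deg + 90.0) / 2.0)  # map -90..+90 onto 0..90 degrees
    # cos^2 + sin^2 = 1, so perceived loudness stays constant across azimuths
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono], axis=-1)

def mix(sources):
    """Sum (signal, azimuth) pairs into one stereo stream,
    e.g. five sources at -90, -45, 0, +45 and +90 degrees."""
    return sum(pan(sig, az) for sig, az in sources)
```

This sketch ignores interaural time differences and HRTF filtering, which real 3D audio engines use to make sources localisable rather than merely panned.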