HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,811,930
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Aylett_M* Results: 12 Sorted by: Date
The Smartphone: A Lacanian Stain, A Tech Killer, and an Embodiment of Radical Individualism alt.chi: Confronting Power in HCI / Aylett, Matthew P. / Lawson, Shaun Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.501-511
ACM Digital Library Link
Summary: YAFR (Yet Another Futile Rant) presents the smartphone: an unstoppable piece of technology generated from a perfect storm of commercial, technological, social and psychological factors. We begin by misquoting Steve Jobs and by being unfairly rude about the HCI community. We then consider the smartphone's ability to kill off competing technology and to undermine collectivism. We argue that its role as a Lacanian stain, an exploitative tool, and a means of concentrating power into the hands of the few makes it a technology that will rival the personal automobile in its effect on modern society.

e-Seesaw: A Tangible, Ludic, Parent-child, Awareness System Late-Breaking Works: Games & Playful Interaction / Sun, Yingze / Aylett, Matthew P. / Vazquez-Alvarez, Yolanda Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.1821-1827
ACM Digital Library Link
Summary: In modern China, the pace of life is becoming faster and working pressure is increasing, often straining families and family interaction. Twenty-three pairs of working parents and their children were asked what they saw as their main communication challenges and how they currently used communication technology to stay in touch. The mobile phone was the dominant form of communication despite being poorly rated by children as a way of enhancing a sense of connection and love. Parents and children were presented with a series of design probes to investigate how current communication technology might be supported or enhanced with a tangible and playful awareness system. One of the designs, the e-Seesaw, was selected and evaluated in lab and home settings. Participant reaction was positive, with the design provoking a novel perspective on remote parent-child interaction that allows even very young children to both initiate and control communication.

My Life On Film Workshop Summaries / Aylett, Matthew P. / Thomas, Lisa / Green, David P. / Shamma, David A. / Briggs, Pam / Kerrigan, Finola Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3379-3386
ACM Digital Library Link
Summary: Social media has begun to migrate from a predominantly text-based medium, through photography, into cinematography and edited video. Film is a vital medium through which we not only capture our world, but also seek to understand it. This workshop explores an emerging area of research within the CHI community that focuses on applying filmic techniques in two ways: 1) to automatically interpret personal data and to allow users to interact with personal data, and 2) to explore film as a vehicle for the personal curation of digital identity. This multidisciplinary, one-day workshop will bring social scientists, cinematography experts, ethnographers, semantic and graphics engineers together with general HCI practitioners to explore and evaluate individual and community representations on film, new ways of translating traditional social media data into film, the engineering challenges of automatically rendering filmic media, and the critical role such automatic and semi-automatic systems can play in persuasion, understanding, and empowerment.

Designing Speech and Multimodal Interactions for Mobile, Wearable, and Pervasive Applications Workshop Summaries / Munteanu, Cosmin / Irani, Pourang / Oviatt, Sharon / Aylett, Matthew / Penn, Gerald / Pan, Shimei / Sharma, Nikhil / Rudzicz, Frank / Gomez, Randy / Nakamura, Keisuke / Nakadai, Kazuhiro Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3612-3619
ACM Digital Library Link
Summary: Traditional interfaces are continuously being replaced by mobile, wearable, or pervasive interfaces. Yet when it comes to the input and output modalities enabling our interactions, we have yet to fully embrace some of the most natural forms of communication and information processing that humans possess: speech, language, gestures, thoughts. Very little HCI attention has been dedicated to designing and developing spoken language and multimodal interaction techniques, especially for mobile and wearable devices. In addition to the enormous, recent engineering progress in processing such modalities, there is now sufficient evidence that many real-life applications do not require 100% accuracy of processing multimodal input to be useful, particularly if such modalities complement each other. This multidisciplinary, two-day workshop will bring together interaction designers, usability researchers, and general HCI practitioners to analyze the opportunities and directions to take in designing more natural interactions with mobile and wearable devices, and to look at how we can leverage recent advances in speech and multimodal processing.

Don't Say Yes, Say Yes: Interacting with Synthetic Speech Using Tonetable Interactivity Demos / Aylett, Matthew P. / Pullin, Graham / Braude, David A. / Potard, Blaise / Hennig, Shannon / Ferreira, Marilia Antunes Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3643-3646
ACM Digital Library Link
Summary: This demo is not about what you say but how you say it. Using a tangible system, Tonetable, we explore the shades of meaning carried by the same word said in many different ways. The same word or phrase is synthesised using the Intel Edison with different expressive techniques. Tonetable allows participants to play these different tokens and select the manner they should be synthesised for different contexts. Adopting the visual language of mid-century modernism, the system provokes participants to think deeply about how they might want to say yes, oh really, or I see. Designed with the very serious objective of supporting expressive personalisation of AAC devices, but with the ability to produce a playful and amusing experience, Tonetable will change the way you think about speech synthesis and what yes really means.

Generating Narratives from Personal Digital Data: Using Sentiment, Themes, and Named Entities to Construct Stories Demonstrations / Farrow, Elaine / Dickinson, Thomas / Aylett, Matthew P. Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part IV 2015-09-14 v.4 p.473-477
Keywords: Social media; Narrative; Triptych; Multi-media
Link to Digital Content at Springer
Summary: As the quantity and variety of personal digital data shared on social media continues to grow, how can users make sense of it? There is growing interest among HCI researchers in using narrative techniques to support interpretation and understanding. This work describes our prototype application, ReelOut, which uses narrative techniques to allow users to understand their data as more than just a database. The online service extracts data from multiple social media sources and augments it with semantic information such as sentiment, themes, and named entities. The interactive editor automatically constructs a story by using unit selection to fit data units to a simple narrative structure. It allows the user to change the story interactively by rejecting certain units or selecting a new narrative target. Finally, images from the story can be exported as a video clip or a collage.

The Broken Dream of Pervasive Sentient Ambient Calm Invisible Ubiquitous Computing alt.chi: Augmentation / Aylett, Matthew P. / Quigley, Aaron J. Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.425-435
ACM Digital Library Link
Summary: We dreamt of technology becoming invisible, for our wants and needs to be primary and the tools we use for making them a reality to become like a genie: a snap of the fingers and ta daa, everything is realised. What went wrong? Was this always an impossible dream? How did we end up with this fetishised obsession with mobile phones? How did we end up with technology tearing apart our sense of experience and replacing it with 'Likes'? No one meant this to happen, not even US Corporates; they just wanted to own us, not diminish our sense of existing and interacting within the real world. In this paper we consider how tools took over, and how the dream of ubiquitous (or whatever it's called) computing was destroyed. We rally rebellious forces and consider how we might fight back, and whether we should even bother trying.

Generating Narratives from Personal Digital Data: Triptychs WIP Theme: Social Computing / Aylett, Matthew P. / Farrow, Elaine / Pschetz, Larissa / Dickinson, Thomas Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.1875-1880
ACM Digital Library Link
Summary: The need for users to make sense of their growing mass of personal digital data presents a challenge to Design and HCI researchers. There is a growing interest in using narrative techniques to support the interpretation and understanding of such data. In this early study we explore methods of selecting images from personal Instagram accounts in the form of a triptych (a sequence of three images) in order to create a sense of narrative. We present a brief description of the algorithms behind image selection, evaluate how effective they are in creating a sense of narrative, and discuss the wider implications of our work. Results show that semantic tagging, a dynamic programming algorithm, and a simple narrative structure produced triptychs which were significantly more story-like, with a significantly more coherent order, than a random selection, or a neutral sequence of images.

Interactive Radio: A New Platform for Calm Computing WIP Theme: Ubicomp, Robots and Wearables / Aylett, Matthew P. / Vazquez-Alvarez, Yolanda / Baillie, Lynne Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.2085-2090
ACM Digital Library Link
Summary: Interactive radio is proposed as a platform for Weiser's calm computing vision. An evaluation of CereProc's MyMyRadio is presented as a case study to highlight the potential and challenges of an interactive radio approach: the difficulty of transitioning between passive and active modes of interaction, and the challenge of designing such services. The evaluation showed: 1) A higher workload for MyMyRadio for active tasks compared to default applications (e.g. Facebook app); 2) No significant difference in workload for passive tasks (e.g. listening to audio rendered RSS updates vs Browser app); 3) A higher workload when listening to music within MyMyRadio vs iTunes; and 4) A preference for RSS feed content compared to content from social media. We conclude by discussing the potential of interactive radio as a platform for pervasive eyes-free services.

Designing speech and language interactions Workshop summaries / Munteanu, Cosmin / Jones, Matt / Whittaker, Steve / Oviatt, Sharon / Aylett, Matthew / Penn, Gerald / Brewster, Stephen / d'Alessandro, Nicolas Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.75-78
ACM Digital Library Link
Summary: Speech and natural language remain our most natural forms of interaction, yet the HCI community has been very timid about focusing its attention on designing and developing spoken language interaction techniques. While significant effort is spent and progress made in speech recognition, synthesis, and natural language processing, there is now sufficient evidence that many real-life applications using speech technologies do not require 100% accuracy to be useful. This is particularly true if such systems are designed with complementary modalities that better support their users or enhance the systems' usability. Engaging the CHI community now is timely -- many recent commercial applications, especially in the mobile space, are already tapping the increased interest in and need for natural user interfaces (NUIs) by enabling speech interaction in their products. This multidisciplinary, one-day workshop will bring together interaction designers, usability researchers, and general HCI practitioners to analyze the opportunities and directions to take in designing more natural interactions based on spoken language, and to look at how we can leverage recent advances in speech processing in order to gain widespread acceptance of speech and natural language interaction.

None of a CHInd: relationship counselling for HCI and speech technology alt.chi: limits and futures / Aylett, Matthew P. / Kristensson, Per Ola / Whittaker, Steve / Vazquez-Alvarez, Yolanda Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.749-760
ACM Digital Library Link
Summary: It's an old story. A relationship built on promises turns to bitterness and recriminations. But speech technology has changed: Yes, we know we hurt you, we know things didn't turn out the way we hoped, but can't we put the past behind us? We need you, we need design. And you? You need us. How can you fulfill a dream of pervasive technology without us? So let's look at what went wrong. Let's see how we can fix this thing. For the sake of little Siri, she needs a family. She needs to grow into more than a piece of PR, and maybe, if we could only work out our differences, just maybe, think of the magic we might make together.

Multilevel auditory displays for mobile eyes-free location-based interaction Works-in-progress / Vazquez-Alvarez, Yolanda / Aylett, Matthew P. / Brewster, Stephen A. / von Jungenfeld, Rocio / Virolainen, Antti Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.1567-1572
ACM Digital Library Link
Summary: This paper explores the use of multilevel auditory displays to enable eyes-free mobile interaction with location-based information in a conceptual art exhibition space. Multilevel auditory displays enable user interaction with concentrated areas of information. However, it is necessary to consider how to present the auditory streams without overloading the user. We present an initial study in which a top-level exocentric sonification layer was used to advertise information present in a gallery-like space. Then, in a secondary interactive layer, three different conditions were evaluated that varied in the presentation (sequential versus simultaneous) and spatialisation (non-spatialised versus egocentric spatialisation) of multiple auditory sources. Results show that 1) participants spent significantly more time interacting with spatialised displays, 2) there was no evidence that a switch from an exocentric to an egocentric display increased workload or lowered satisfaction, and 3) there was no evidence that simultaneous presentation of spatialised Earcons in the secondary display increased workload.