Designing Visual Complexity for Dual-screen Media
Visual Design Principles for Unconventional Displays
/
Neate, Timothy
/
Evans, Michael
/
Jones, Matt
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.475-486
© Copyright 2016 ACM
Summary: So many people are now using handheld second screens whilst watching TV that
application developers and broadcasters are designing companion applications:
second-screen content that accompanies a TV programme. The nature of such
dual-screen use cases inherently causes attention to be split, somewhat
unpredictably. Dual-screen complexity, a clear factor in this attention split,
is largely unexplored by the literature and will have an unknown (and likely
negative) impact on user experience (UX). Therefore, we use empirical
techniques to investigate the objective and subjective effect of dual-screen
visual complexity on attention distribution in a companion content scenario.
Our sequence of studies culminates in the deployment of a companion application
prototype that supports adjustment of complexity (by either content curator or
viewer) to allow convergence on optimum experience. Our findings assist the
effective design of dual-screen content, informing content providers how to
manage dual-screen complexity for enhanced UX through a more blended,
complementary dual-screen experience.
What you Sculpt is What you Get: Modeling Physical Interactive Devices with
Clay and 3D Printed Widgets
Prototyping for Fabrication, 3D Designing, Modelling & Printing
/
Jones, Michael D.
/
Seppi, Kevin
/
Olsen, Dan R.
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.876-886
© Copyright 2016 ACM
Summary: We present a method for fabricating prototypes of interactive computing
devices from clay sculptures without requiring the designer to be skilled in
CAD software. The method creates a "what you sculpt is what you get" process
that mimics the "what you see is what you get" processes used in interface
design for 2D screens. Our approach uses clay for modeling the basic shape of
the device around 3D printed representations, which we call "blanks", of
physical interaction widgets such as buttons, sliders, knobs and other
electronics. Each blank includes 4 fiducial markers uniquely arranged on a
visible surface. After scanning the sculpture, these fiducial markers allow our
software to identify widget types and locations in the scanned model. The
software then converts the scan into a printable prototype by positioning
mounting surfaces, openings for the controls and a splitting plane for
assembly. Because the blanks fit in the sculpted shape, they will reliably fit
in the interactive prototype. Creating an interactive prototype requires about
30 minutes of human effort for sculpting and, after scanning, a single button
click to complete the process.
Emergeables: Deformable Displays for Continuous Eyes-Free Mobile Interaction
Shape Changing Displays
/
Robinson, Simon
/
Coutrix, Céline
/
Pearson, Jennifer
/
Rosso, Juan
/
Torquato, Matheus Fernandes
/
Nigay, Laurence
/
Jones, Matt
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.3793-3805
© Copyright 2016 ACM
Summary: In this paper we present the concept of Emergeables -- mobile surfaces that
can deform or 'morph' to provide fully-actuated, tangible controls. Our goal in
this work is to provide the flexibility of graphical touchscreens, coupled with
the affordance and tactile benefits offered by physical widgets. In contrast to
previous research in the area of deformable displays, our work focuses on
continuous controls (e.g., dials or sliders), and strives for fully-dynamic
positioning, providing versatile widgets that can change shape and location
depending on the user's needs. We describe the design and implementation of two
prototype emergeables built to demonstrate the concept, and present an in-depth
evaluation that compares both with a touchscreen alternative. The results show
the strong potential of emergeables for on-demand, eyes-free control of
continuous parameters, particularly when comparing the accuracy and usability
of a high-resolution emergeable to a standard GUI approach. We conclude with a
discussion of the level of resolution that is necessary for future emergeables,
and suggest how high-resolution versions might be achieved.
Mobile UX: Breaking the Glass to Richer User Experiences
Course Overviews
/
Robinson, Simon
/
Jones, Matt
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.969-972
© Copyright 2016 ACM
Summary: Apps are changing the world. If you work for a bank, an airline, an art
gallery or even a local coffee shop, you'll probably have helped create an
app to connect and transact with your customers and visitors. As users, we
consume these bite-sized chunks of digital goodness voraciously, with some
estimates putting total app downloads to date at over 100 billion. People find
apps effective, satisfying and enjoyable: meeting their needs, filling dead
time, solving their problems. So, why are we organising a course that argues
for some new thinking? We celebrate the success that is apps, services and the
ecology of mobile devices; but, we want to ask the question: what do the
current approaches to mobile interaction overlook? Is there more to user
experience than can be expressed through today's heads-down, glass-blunted and
me-centred reality? We have both had the great fortune to work and
collaborate with research labs, practitioners and industry. The aim of this
course is to connect the great app innovation that is out there with the sorts
of alternative thinking that have been brewing in university and industry labs
for several years. It seems obvious how things should develop in the mobile
market: more apps, better screens, longer battery life, faster and faster
networks, drawing us more and more towards the tempting pool that leads us to
digital worlds that offer so much. We want to help undermine this certainty by
challenging attendees to step back and look at alternative perspectives;
changing the future but starting now.
Mobile Phone Access to a Sign Language Dictionary
Poster Session 1
/
Jones, Michael D.
/
Hamilton, Harley
/
Petmecky, James
Seventeenth International ACM SIGACCESS Conference on Computers and
Accessibility
2015-10-26
p.331-332
© Copyright 2015 ACM
Summary: We have built a functional prototype of a mobile phone app that allows
children who are deaf to look up American Sign Language (ASL) definitions of
printed English words using the camera on the mobile phone. In the United
States, 90% of children who are deaf are born to parents who are not deaf and
who do not know sign language [3]. In many cases, this means that the child
will not be exposed to fluent sign language in the home and this can delay the
child's acquisition of both their first signed language and a secondary written
language [1]. Another consequence is that outside of school the child may not
have easy access to people or services that can translate written English words
into ASL signs. We have developed a prototype phone app that allows children
who are deaf and their parents to look up ASL definitions of English words in
printed books. The user aims the phone camera at the printed text, takes a
picture and then clicks on a word to access the ASL definition. Our next steps
are to explore the idea with children who are deaf and their parents, develop
design guidelines for sign language dictionary apps, build the app using those
guidelines and then to test the app with children who are deaf and their
hearing parents.
Activity classification at a higher level: what to do after the classifier
does its best?
Activity recognition I
/
Younes, Rabih
/
Martin, Thomas L.
/
Jones, Mark
Proceedings of the 2015 International Symposium on Wearable Computers
2015-09-07
p.83-86
© Copyright 2015 ACM
Summary: Research in activity classification has focused on the sensors, the
classification techniques and the machine learning algorithms used in the
classifier. In this work, we study a higher level of activity classification.
We present two methods that take the final observations of a classifier and
improve them. The first uses hidden Markov models to define a probabilistic
model that can improve classification accuracy. The second is a novel method
we developed that uses probabilistic models along with matching costs to
improve accuracy. Testing showed that both proposed methods significantly
increase classification accuracy, and that both can run in real time.
A Novel 3D Wheelchair Simulation System for Training Young Children with
Severe Motor Impairments
Children in HCI
/
Fu, Jicheng
/
Garien, Cole
/
Smith, Sean
/
Zeng, Wenxi
/
Jones, Maria
HCI International 2015: 17th International Conference on HCI: Posters'
Extended Abstracts, Part I
2015-08-02
v.4
p.366-371
Keywords: Artificial intelligence; A*; Gaming technology; Power wheelchair; Secondary
impairment; Severe motor impairment; Simulation
© Copyright 2015 Springer International Publishing Switzerland
Summary: Young children with severe motor impairments face a higher risk of secondary
impairments in the development of social, cognitive, and motor skills, owing to
the lack of independent mobility. Although power wheelchairs are typical tools
for providing independent mobility, the steep learning curve, safety concerns,
and high cost may prevent children aged 2-5 years from using them. We have
developed a 3D wheelchair simulation system using gaming technologies for these
young children to learn fundamental wheelchair driving skills in a safe,
affordable, and entertaining environment. Depending on the skill level, the
simulation system offers different options ranging from automatic control
(i.e., the artificial intelligence (AI) module fully controls the wheelchair) to
manual control (i.e., human users are fully responsible for controlling the
wheelchair). Optimized AI algorithms were developed to make the simulation
system easy and efficient to use. We have conducted experiments to evaluate the
simulation system. The results demonstrate that the simulation system shows
promise in overcoming the limitations associated with real wheelchairs while
providing a safe, affordable, and exciting environment to train young
children.
Designing attention for multi-screen TV experiences
Work-in-progress (posters)
/
Neate, Timothy
/
Jones, Matt
/
Evans, Michael
Proceedings of the 2015 British Human Computer Interaction Conference
2015-07-13
p.285-286
© Copyright 2015 Authors
Summary: In this Work-In-Progress we discuss our work on designing attention for
multi-screen TV experiences. We first briefly describe the current trends, and
then progress to touch on two investigations we have conducted. In the first
study we look at current viewing habits, paying particular attention to how we
deal with attention overload when viewing secondary devices while watching
television. Then, we go on to describe work we have conducted into
investigating how we may orchestrate attention between displays. We conclude by
discussing our work's current trajectory, and then go on to state what it could
mean for broadcasters and those who wish to design applications for
multi-display TV experiences.
It's About Time: Smartwatches as Public Displays
Smartwatch Interaction
/
Pearson, Jennifer
/
Robinson, Simon
/
Jones, Matt
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.1257-1266
© Copyright 2015 ACM
Summary: Current uses of smartwatches are focused solely around the wearer's content,
viewed by the wearer alone. When worn on a wrist, however, watches are often
visible to many other people, making it easy to quickly glance at their
displays. We explore the possibility of extending smartwatch interactions to
turn personal wearables into more public displays. We begin opening up this
area by investigating fundamental aspects of this interaction form, such as the
social acceptability and noticeability of looking at someone else's watch, as
well as the likelihood of a watch face being visible to others. We then sketch
out interaction dimensions as a design space, evaluating each aspect via a
web-based study and a deployment of three potential designs. We conclude with a
discussion of the findings, implications of the approach and ways in which
designers in this space can approach public wrist-worn wearables.
Mediating Attention for Second Screen Companion Content
HCI at Home
/
Neate, Timothy
/
Jones, Matt
/
Evans, Michael
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.3103-3106
© Copyright 2015 ACM
Summary: There is increasing interest in providing content to users on secondary
devices while they watch TV. This material, termed companion content, can be
anything from textual information, to interactive quiz games. It can be
delivered throughout a broadcast and often directly relates to specific scenes
in a show. This new scenario has exposed a challenging design space for
creators of both the content and the enabling technology. A key question when
introducing content on a secondary device is how much it detracts from, or
enhances, the show the user is currently engaged with. To examine this, we
investigated methods for mediating attention from the TV and onto a secondary
device. By examining a typical use case we have been able to gain new insights
into how best to design additional stimuli to alert users to companion content
from both a broadcasting, and an HCI perspective.
Director: A Remote Guidance Mechanism
WIP Theme: Novel Interfaces and Interaction Techniques
/
Betsworth, Liam
/
Jones, Matt
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.1735-1740
© Copyright 2015 ACM
Summary: When using a mobile device as a navigation aid, we are used to receiving
computer-generated routes and directions. Remote guidance, however, remains an
underexplored design space in mobile interaction design. In this paper, we
introduce Director, a novel, remote guidance mechanism for the positioning of
people in outdoor spaces using mobile devices. We conducted a study to test our
novel positioning technique, testing its guiding accuracy and effect on
Preferred Walking Speed (PWS). Our results suggest that Director offers users a
fun and playful experience, and that our novel guidance technique provides
highly accurate remote positioning.
Growth, Change and Decay: Plants and Interaction Possibilities
WIP Theme: Trust, Privacy and Emotions
/
Steer, Cameron
/
Robinson, Simon
/
Jones, Matt
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.2037-2042
© Copyright 2015 ACM
Summary: Our work explores using plants as an interaction material to extend and
disrupt existing notions of HCI. We focus in particular on how the affordances
and properties of plants can be utilised for enhanced physical and emotional
interaction between people and computers, with our core motive being to find
methods of enriching user engagement. Moreover, we investigate whether plants
could offer a new dimension of interaction and emotional attachment to computer
interfaces. We conducted a study to observe people's interactions with
prototype plant-based systems, and also interviewed them about future usage of
plants in HCI. Our early findings indicate that using a plant-based interface
triggered emotive connections, making interactions more enjoyable. In this
work-in-progress, we discuss the results of this study, and consider the future
potential for using plants as an interaction medium.
PaperChains: Dynamic Sketch+Voice Annotations
Annotation Systems and Approaches
/
Pearson, Jennifer
/
Robinson, Simon
/
Jones, Matt
Proceedings of ACM CSCW 2015 Conference on Computer-Supported Cooperative
Work and Social Computing
2015-02-28
v.1
p.383-392
© Copyright 2015 ACM
Summary: In this paper we present a novel interface for collaborative creation of
evolving audio-visual documents. PaperChains allows users to sketch on paper
and then augment with digital audio, allowing both the physical and digital
objects to evolve simultaneously over time. The technique we have developed
focuses on affordability and accessibility in its design, using standard
cameraphones and telephone connections, which allow it to be used in regions
where literacy, technological experience and data connections cannot
necessarily be taken for granted. The main use-case that we focus on in this
paper is for collaborative storytelling, an area which has been well studied
and previously proven to be of value in resource constrained environments. To
investigate the relevance of the approach in these contexts, we undertook two
usability evaluations in India and South Africa. Results from these
investigations indicate users' ability to both create and interpret stories
using the software, as well as demonstrating high overall usability and
enjoyment. We end with a discussion of the implications of our design and
opportunities for use in other contexts.
Extraction of Encumbered Anthropometric Measures from Whole-Body Scan Data
Human Performance Modeling: HP7 -- Sensors, Biometrics, and Behavior
/
Jones, Monica L. H.
/
Lamb, Matthew
/
Shih, Jen M. V.
/
Sy, Lois A.
/
Keefe, Allan A.
Proceedings of the Human Factors and Ergonomics Society 2014 Annual Meeting
2014-10-27
p.934-938
doi 10.1177/1541931214581196
© Copyright 2014 HFES
Summary: Accurate capture of encumbered anthropometry is critical to ensure that the
analysis and design of military platforms and workspaces account for the
additional space required for clothing and personal protective equipment (PPE). To examine the effect
of encumbrance on spatial claim, a method was developed to obtain
scan-extracted measures from detailed whole-body shape data. This analysis
focused on comparing cross-sectional measures extracted from 3D scan data with
measurements of the same participants obtained by traditional 1D techniques,
while donning different levels of clothing and equipment.
Head mounted displays and deaf children: Facilitating Sign Language in
Challenging Learning Environments
Thursday short papers
/
Jones, Michael
/
Lawler, M. Jeannette
/
Hintz, Eric
/
Bench, Nathan
/
Mangrubang, Fred
/
Trullender, Mallory
Proceedings of ACM IDC'14: Interaction Design and Children
2014-06-17
p.317-320
© Copyright 2014 ACM
Summary: Head-mounted displays (HMDs) are evaluated as a tool to facilitate
student-teacher interaction in sign language. Deaf or hard-of-hearing children
who communicate in sign language receive all instruction visually. In normal
deaf educational settings the child must split visual attention between signed
narration and visual aids. Settings in which visual aids are distributed over a
large visual area are particularly difficult. Sign language displayed in HMDs
may allow a deaf child to keep the signed narration in sight, even when not
looking directly at the person signing. Children from the community who
communicate primarily in American Sign Language (ASL) participated in two
phases of a study designed to evaluate the comfort and utility of viewing ASL
in an HMD.
A billion signposts: repurposing barcodes for indoor navigation
Novel approaches to navigation
/
Robinson, Simon
/
Pearson, Jennifer S.
/
Jones, Matt
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.639-642
© Copyright 2014 ACM
Summary: Barcodes are all around us -- on books, groceries and other products -- but
these everyday markers are typically used for a single focused purpose. In this
paper we explore the concept of "piggybacking" on ubiquitous markers to
facilitate indoor navigation. Our initial probe -- BookMark -- allows library
visitors to scan any nearby book to provide a custom map to the location of a
desired item. In contrast to previous indoor navigation systems, our approach
repurposes existing markers on physical items that are already in the
navigation space, meaning that no additional infrastructure is required. We
evaluated the BookMark probe in a large university library, showing its
potential with real library users. In addition, we illustrate how the general
technique shows further potential in other similar barcode-rich environments.
AudioCanvas: internet-free interactive audio photos
Lost and found in translation
/
Robinson, Simon
/
Pearson, Jennifer S.
/
Jones, Matt
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.3735-3738
© Copyright 2014 ACM
Summary: In this paper we present a novel interaction technique that helps to make
textual information more accessible to those with low or no textual literacy
skills. AudioCanvas allows cameraphone users to interact directly with their
own photos of printed media to receive audio feedback or narration. The use of
a remote telephone-based service also allows our design to be used over a
standard phone line, removing the need for data connections, which can be
problematic in developing regions. We show the value of the technique via user
evaluations in both a rural Indian village and a South African township.
Designing speech and language interactions
Workshop summaries
/
Munteanu, Cosmin
/
Jones, Matt
/
Whittaker, Steve
/
Oviatt, Sharon
/
Aylett, Matthew
/
Penn, Gerald
/
Brewster, Stephen
/
d'Alessandro, Nicolas
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.2
p.75-78
© Copyright 2014 ACM
Summary: Speech and natural language remain our most natural forms of interaction;
yet the HCI community have been very timid about focusing their attention on
designing and developing spoken language interaction techniques. While
significant efforts are spent and progress made in speech recognition,
synthesis, and natural language processing, there is now sufficient evidence
that many real-life applications using speech technologies do not require 100%
accuracy to be useful. This is particularly true if such systems are designed
with complementary modalities that better support their users or enhance the
systems' usability. Engaging the CHI community now is timely -- many recent
commercial applications, especially in the mobile space, are already tapping
the increased interest in and need for natural user interfaces (NUIs) by
enabling speech interaction in their products. This multidisciplinary, one-day
workshop will bring together interaction designers, usability researchers, and
general HCI practitioners to analyze the opportunities and directions to take
in designing more natural interactions based on spoken language, and to look at
how we can leverage recent advances in speech processing in order to gain
widespread acceptance of speech and natural language interaction.
In memory of Gary Marsden
Departments
/
Jones, Matt
/
Rogers, Yvonne
interactions
2014-03
v.21
n.2
p.6-7
© Copyright 2014 ACM
Doing innovation in the wild
New spaces for design
/
Crabtree, A.
/
Chamberlain, A.
/
Davies, M.
/
Glover, K.
/
Reeves, S.
/
Rodden, T.
/
Tolmie, P.
/
Jones, Matt
Proceedings of CHItaly '13: ACM SIGCHI Italian Chapter International
Conference on Computer-Human Interaction
2013-09-16
p.25
© Copyright 2013 ACM
Summary: Doing research 'in the wild' is becoming an increasingly popular approach
towards developing innovative computing systems and applications. This paper
reflects upon a research project conducted in the wild, and key aspects of the
work involved in making the project work, to examine current tropes about the
approach. It suggests that doing research in the wild is rather more
complicated than is reflected in current understandings, and that even greater
involvement of ethnographers, computer scientists, software engineers and other
disciplines operating within systems design is needed if innovation is to be
effectively driven within and by real world contexts of use.
Shedding light on retail environments
Full Papers
/
Harwood, Tracy
/
Jones, Martin
/
Carreras, Ashley
Proceedings of 2013 Eye Tracking South Africa
2013-08-29
p.2-7
© Copyright 2013 ACM
Summary: This paper presents an overview of research into consumer responses to
lighting within retail stores using mobile eye-tracking. It begins with a brief
review of pertinent literature in relation to lighting and visual attention.
The study is small scale and experimental, using 3 scenarios with different
lighting patterns on a visual merchandising unit. Tobii Mobile™ glasses
were used to provide naturalistic visual attention data of consumer responses
to the unit. Eye-tracking data was content analysed by time interval, lighting
scenario and position of focal attention on the unit. The data was subsequently
analysed using repeated-measures ANOVA to assess correlations. Findings
highlight methodological implications as well as the roles of lighting and
position of products. Future research directions are discussed.
ACQR: acoustic quick response codes for content sharing on low end phones
with no internet connectivity
Developing world
/
Pearson, Jennifer
/
Robinson, Simon
/
Jones, Matt
/
Nanavati, Amit
/
Rajput, Nitendra
Proceedings of 2013 Conference on Human-computer interaction with mobile
devices and services
2013-08-27
p.308-317
© Copyright 2013 ACM
Summary: In this paper we introduce Acoustic Quick Response codes to facilitate
sharing between Interactive Voice Response (IVR) service users. IVRs are
telephone-based, and similar to the world wide web in many aspects, but
currently lack support for content sharing. Our approach uses 'audio codes' to
let people share their call positions, and allows callers to hold their normal
(low-end) handsets together to synchronise. The technique uses remote
generation and recognition of audio codes to ensure that sharing is possible on
any type of phone without the need for textual literacy or an internet
connection. We begin by exploring existing user needs for sharing, then
evaluate the technical robustness of our audio-based design. We demonstrate the
value of the approach for voice service users over several separate studies --
including an eight-month extended field deployment -- then conclude with a
discussion of future possibilities for such scenarios.
Placebooks: Participation, Community, Design, and Ubiquitous Data
Aggregation 'In the Wild'
Personalized Information and Interaction
/
Chamberlain, Alan
/
Crabtree, Andy
/
Davies, Mark
/
Glover, Kevin
/
Reeves, Stuart
/
Tolmie, Peter
/
Jones, Matt
HIMI 2013: Human Interface and the Management of Information, Part I:
Information and Interaction Design
2013-07-21
v.1
p.411-420
Keywords: collaborative work; Community computing; Electronic publishing;
Participatory design; Quality of life and lifestyle
© Copyright 2013 Springer-Verlag
Summary: This paper outlines and describes the development of a multi-media data
aggregation system called Placebooks. Placebooks was developed as a ubiquitous
toolkit aimed at allowing people in rural areas to create and share digital
books that contained a variety of media, such as maps, text, videos, audio and
images. Placebooks consists of two parts: 1) a web-based editor and viewer, and
2) an Android app that allows the user to download and view books. In
particular, the app allows the user to cache content, thereby negating the need
for 3G networks in rural areas where there is little-to-no 3G coverage. Both
the web-based tools and the app were produced in the English and Welsh
languages. The system was developed through working with local communities
using participatory approaches: working 'in the wild'. Placebooks is currently
being used by a Welsh Assembly Government project called the People's
Collection of Wales / Casgliad y Werin.
Owl pellets and head-mounted displays: a demonstration of visual interaction
for children who communicate in a sign language
Demos
/
Jones, Michael
/
Lawler, Jeannette
/
Hintz, Eric
/
Bench, Nathan
/
Mangrubang, Fred
Proceedings of ACM IDC'13: Interaction Design and Children
2013-06-24
p.535-538
© Copyright 2013 ACM
Summary: This demonstration will provoke discussion of the role of head mounted
displays (HMD) in Deaf science education for children. The demonstration mimics
the classroom laboratory experience of children who are Deaf or
hard-of-hearing. When these children dissect an owl pellet, their teachers
cannot stand behind them and offer instruction over their shoulder while the
student looks at the specimen. A teacher can sign for the student but this
requires the student to switch visual attention back and forth between the
specimen and the signing teacher. This can be difficult if the specimen is on a
table and the teacher is standing nearby. HMDs allow students to put the
signing teacher and the laboratory specimen in close visual proximity.
Participants in our demonstration will be given an owl pellet study kit and no
verbal instruction. Participants will be asked to use visual aids to identify
bones in the pellet. Some participants will view the visual aids on a poster
placed behind them. Others will view visual aids in an HMD.
Give and take: audio gift giving to support research practices
Evaluation and design methods
/
Thom, Emma
/
Jones, Matt
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.235-240
© Copyright 2013 ACM
Summary: In this paper, we introduce and explore continuing research based around the
Audio Gift system. Audio Gift uses hand-only gestures along with haptic
feedback to capture and share audio notes from a discussion. The aim is to
enhance, in a subtle way, the capture of key points during a research
discussion. In addition to describing the prototype, we present observations
and findings of an exploratory field study with archaeologists. These findings
highlight the value and challenges of Audio Gift.