Crowd-Designed Motivation: Motivational Messages for Exercise Adherence
Based on Behavior Change Theory
Behavioral Change
/
de Vries, Roelof A. J.
/
Truong, Khiet P.
/
Kwint, Sigrid
/
Drossaert, Constance H. C.
/
Evers, Vanessa
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.297-308
© Copyright 2016 ACM
Summary: Developing motivational technology to support long-term behavior change is a
challenge. A solution is to incorporate insights from behavior change theory
and design technology to tailor to individual users. We carried out two studies
to investigate whether the processes of change, from the Transtheoretical
Model, can be effectively represented by motivational text messages. We
crowdsourced peer-designed text messages and coded them into categories based
on the processes of change. We evaluated whether people perceived messages
tailored to their stage of change as motivating. We found that crowdsourcing is
an effective method to design motivational messages. Our results indicate that
different messages are perceived as motivating depending on the stage of
behavior change a person is in. However, while motivational messages related to
later stages of change were perceived as motivational for those stages, the
motivational messages related to earlier stages of change were not. This
indicates that a person's stage of change may not be the (only) key factor that
determines behavior change. More individual factors need to be considered to
design effective motivational technology.
SoQr: sonically quantifying the content level inside containers
Novel sensing techniques
/
Fan, Mingming
/
Truong, Khai N.
Proceedings of the 2015 International Conference on Ubiquitous Computing
2015-09-07
p.3-14
© Copyright 2015 ACM
Summary: In this paper, we present SoQr, a sensor that can be attached to an external
surface of a household item to estimate the amount of content inside it. The
sensor consists of a speaker and a microphone. It outputs a short duration sine
wave probing sound to excite a container and its content, and then records the
container's impulse response. SoQr then extracts Mean Mel-Frequency Cepstral
Coefficients from impulse response recordings of a container with different
content levels and learns a support vector machine classifier. Results from a
10-fold cross validation of the prediction models on 19 common household items
demonstrate that SoQr can correctly estimate the content level for these
products with an average overall F-Measure above 0.96. We then further
evaluated SoQr's robustness in different usage scenarios to gain an
understanding of how the system performs and specific challenges that might
arise when users interact with these products and the sensor.
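The sensing pipeline the abstract describes can be sketched roughly as follows. To keep the sketch dependency-light it substitutes coarse log-band energies for true mel-frequency cepstral coefficients and a nearest-centroid classifier for the paper's support vector machine, so the frame size, band count, and classifier are all illustrative assumptions:

```python
import numpy as np

def mean_spectral_features(recording, frame=1024, n_bands=13):
    """Average log-power in coarse frequency bands over all frames of an
    impulse-response recording -- a simplified stand-in for mean MFCCs."""
    feats = []
    for i in range(0, len(recording) - frame + 1, frame):
        windowed = recording[i:i + frame] * np.hanning(frame)
        power = np.abs(np.fft.rfft(windowed)) ** 2
        bands = np.array_split(power, n_bands)  # coarse, mel-like band grouping
        feats.append(np.log(np.array([b.mean() for b in bands]) + 1e-12))
    return np.mean(feats, axis=0)

def train_centroids(examples):
    """examples: {content_level: [feature_vector, ...]} -> per-level centroids.
    (SoQr trains an SVM; nearest-centroid keeps this sketch self-contained.)"""
    return {level: np.mean(vecs, axis=0) for level, vecs in examples.items()}

def predict(centroids, feature):
    """Assign the content level whose centroid is nearest in feature space."""
    return min(centroids, key=lambda lvl: np.linalg.norm(centroids[lvl] - feature))
```

In use, each recording would be the container's response to the probing sine sweep, captured at several known content levels per item during training.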
Filteryedping: A Dwell-Free Eye Typing Technique
Interactivity
/
Pedrosa, Diogo
/
Pimentel, Maria da Graça
/
Truong, Khai N.
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.303-306
© Copyright 2015 ACM
Summary: The ability to type using eye gaze only is extremely important for
individuals with a severe motor disability. To eye type, the user currently
must sequentially gaze at letters in a virtual keyboard and dwell on each
desired letter for a specific amount of time to input that key. Dwell-based eye
typing has two possible drawbacks: unwanted input if the dwell threshold is too
short or slow typing rates if the threshold is long. We demonstrate an eye
typing technique, which does not require the user to dwell on the letters that
she wants to input. Our method automatically filters out unwanted letters from
the sequence of letters gazed at while typing a word. It ranks candidate words
based on their length and frequency and presents them to the user for
confirmation. Spell correction and support for typing words not in the corpus
are also included.
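The filtering-and-ranking idea can be sketched as follows: a candidate word is kept if its letters appear, in order, somewhere in the sequence of letters the user gazed at, and candidates are then ranked by length and corpus frequency. The abstract does not specify the exact weighting, so the tie-breaking below is an assumption:

```python
def is_subsequence(word, gazed):
    """True if word's letters appear in order within the gazed-letter sequence."""
    it = iter(gazed)
    return all(ch in it for ch in word)

def rank_candidates(gazed, lexicon):
    """lexicon: {word: corpus frequency}. Longer words rank first (they explain
    more of the gaze path); frequency breaks ties -- an assumed weighting."""
    candidates = [w for w in lexicon if is_subsequence(w, gazed)]
    return sorted(candidates, key=lambda w: (-len(w), -lexicon[w]))
```

For example, the gaze sequence "cvgatse" (with stray letters v and s picked up in passing) yields "gate", "case", and "cat" as candidates, ranked for user confirmation.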
Filteryedping: Design Challenges and User Performance of Dwell-Free Eye
Typing
/
Pedrosa, Diogo
/
Pimentel, Maria Da Graça
/
Wright, Amy
/
Truong, Khai N.
ACM Transactions on Accessible Computing
2015-03
v.6
n.1
p.3
© Copyright 2015 ACM
Summary: The ability to use the movements of the eyes to write is extremely important
for individuals with a severe motor disability. With eye typing, a virtual
keyboard is shown on the screen and the user enters text by gazing at the
intended keys one at a time. With dwell-based eye typing, a key is selected by
continuously gazing at it for a specific amount of time. However, this approach
has two possible drawbacks: unwanted selections and slow typing rates. In this
study, we propose a dwell-free eye typing technique that filters out
unintentionally selected letters from the sequence of letters looked at by the
user. It ranks possible words based on their length and frequency of use and
suggests them to the user. We evaluated Filteryedping with a series of
experiments. First, we recruited participants without disabilities to compare
it with another potential dwell-free technique and with a dwell-based eye
typing interface. The results indicate it is a fast technique that allows an
average of 15.95 words per minute after 100 minutes of typing. Then, we improved the
technique through iterative design and evaluation with individuals who have
severe motor disabilities. This phase helped to identify and create parameters
that allow the technique to be adapted to different users.
Public restroom detection on mobile phone via active probing
Contextual awareness on mobile devices
/
Fan, Mingming
/
Adams, Alexander Travis
/
Truong, Khai N.
Proceedings of the 2014 International Symposium on Wearable Computers
2014-09-13
v.1
p.27-34
© Copyright 2014 ACM
Summary: Although there are clear benefits to automatic image capture services by
wearable devices, image capture sometimes happens in sensitive spaces where
camera use is not appropriate. In this paper, we tackle this problem by
focusing on detecting when the user of a wearable device is located in a
specific type of private space -- the public restroom -- so that the image
capture can be disabled. We present an infrastructure-independent method that
uses just the microphone and the speaker on a commodity mobile phone. Our
method actively probes the environment by playing a 0.1-second sine wave sweep
sound and then analyzes the impulse response (IR) by extracting MFCC features.
These features are then used to train an SVM model. Our evaluation results show
that we can train a general restroom model which is able to recognize new
restrooms. We demonstrate that this approach works on different phone hardware.
Furthermore, the volume levels, occupancy and presence of other sounds do not
affect recognition in significant ways. We discuss three types of errors that
the prediction model has and evaluate two proposed smoothing algorithms for
improving recognition.
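A generic way to smooth noisy per-probe predictions, in the spirit of the smoothing algorithms the paper evaluates (whose exact definitions may differ from this sketch), is a sliding-window majority vote:

```python
from collections import Counter

def smooth_predictions(preds, window=5):
    """Replace each per-probe prediction with the majority label in a
    centered window -- a simple smoother; the paper's two algorithms
    may differ from this illustrative version."""
    half = window // 2
    smoothed = []
    for i in range(len(preds)):
        lo, hi = max(0, i - half), min(len(preds), i + half + 1)
        smoothed.append(Counter(preds[lo:hi]).most_common(1)[0][0])
    return smoothed
```

An isolated misclassification (e.g. one "other" probe in a run of "restroom" probes) is voted away by its neighbors.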
Slide to X: unlocking the potential of smartphone unlocking
Crowdsourcing
/
Truong, Khai N.
/
Shihipar, Thariq
/
Wigdor, Daniel J.
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.3635-3644
© Copyright 2014 ACM
Summary: Unlock gestures are performed by billions of users across the world multiple
times a day. Beyond preventing accidental input on mobile devices, they
currently serve little to no other purpose. In this paper, we explore how
replacing the regular unlock screen with one that asks the user to perform a
simple, optional task, can benefit a wealth of application domains, including
data collection, personal-health metrics collection, and human intelligence
tasks. We evaluate this concept, which we refer to as Slide to X. Further, we
show that people are willing to perform microtasks presented through this
interface and continue to do so throughout the day while they visit different
locations as part of their daily routines. We then discuss how to implement
this concept and demonstrate three applications.
Disambiguation of imprecise input with one-dimensional rotational text entry
/
Walmsley, William S.
/
Snelgrove, W. Xavier
/
Truong, Khai N.
ACM Transactions on Computer-Human Interaction
2014-02
v.21
n.1
p.4
© Copyright 2014 ACM
Summary: We introduce a distinction between disambiguation supporting continuous
versus discrete ambiguous text entry. With continuous ambiguous text entry
methods, letter selections are treated as ambiguous due to expected imprecision
rather than due to discretized letter groupings. We investigate the simple case
of a one-dimensional character layout to demonstrate the potential of
techniques designed for imprecise entry. Our rotation-based sight-free
technique, Rotext, maps device orientation to a layout optimized for
disambiguation, motor efficiency, and learnability. We also present an audio
feedback system for efficient selection of disambiguated word candidates and
explore the role that time spent acknowledging word-level feedback plays in
text entry performance. Through a user study, we show that despite missing on
average by 2.46-2.92 character positions, with the aid of a maximum a
posteriori (MAP) disambiguation algorithm, users can average a sight-free entry
speed of 12.6 wpm with 98.9% accuracy within 13 sessions (4.3 hours). In a
second study, expert users are found to reach 21 wpm with 99.6% accuracy after
session 20 (6.7 hours) and continue to improve, with individual
phrases entered at up to 37 wpm. A final study revisits the learnability of the
optimized layout. Our modeling of ultimate performance indicates maximum
overall sight-free entry speeds of 29.0 wpm with audio feedback, or 40.7 wpm if
an expert user could operate without relying on audio feedback.
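A maximum a posteriori decoder over noisy one-dimensional selections, such as the one Rotext uses, can be sketched as follows. The Gaussian noise model, the sigma value, and the example layout are illustrative assumptions, not the paper's exact parameters:

```python
import math

def map_decode(observed, layout, lexicon, sigma=2.5):
    """Return argmax_w P(w) * prod_i N(obs_i | layout[w[i]], sigma):
    a MAP decoder over noisy 1-D character positions.

    observed: list of selected positions (floats, possibly off-target)
    layout:   {letter: position on the one-dimensional layout}
    lexicon:  {word: corpus frequency}, used as an unnormalised prior"""
    def log_posterior(word):
        if len(word) != len(observed):
            return float("-inf")
        lp = math.log(lexicon[word])  # word-frequency prior
        for obs, ch in zip(observed, word):
            lp += -((obs - layout[ch]) ** 2) / (2 * sigma ** 2)  # Gaussian miss model
        return lp
    return max(lexicon, key=log_posterior)
```

Even when each selection misses its target by a couple of positions, the likelihood term pulls the decoder toward the word whose letter positions best explain the observations, with frequency breaking near-ties.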
Uncovering information needs for independent spatial learning for users who
are visually impaired
Papers
/
Banovic, Nikola
/
Franz, Rachel L.
/
Truong, Khai N.
/
Mankoff, Jennifer
/
Dey, Anind K.
Fifteenth Annual ACM SIGACCESS Conference on Assistive Technologies
2013-10-21
p.24
© Copyright 2013 ACM
Summary: Sighted individuals often develop significant knowledge about their
environment through what they can visually observe. In contrast, individuals
who are visually impaired mostly acquire such knowledge about their environment
through information that is explicitly relayed to them. This paper examines the
practices that visually impaired individuals use to learn about their
environments and the associated challenges. In the first of our two studies, we
uncover four types of information needed to master and navigate the
environment. We detail how individuals' context impacts their ability to learn
this information, and outline requirements for independent spatial learning. In
a second study, we explore how individuals learn about places and activities in
their environment. Our findings show that users not only learn information to
satisfy their immediate needs, but also to enable future opportunities --
something existing technologies do not fully support. From these findings, we
discuss future research and design opportunities to assist the visually
impaired in independent spatial learning.
"We are not in the loop": resource wastage and conservation attitude of
employees in Indian workplace
Sustainability II
/
Jain, Mohit
/
Agrawal, Ankit
/
Ghai, Sunil K.
/
Truong, Khai N.
/
Seetharam, Deva P.
Proceedings of the 2013 International Joint Conference on Pervasive and
Ubiquitous Computing
2013-09-08
v.1
p.687-696
© Copyright 2013 ACM
Summary: Though rapid depletion of natural resources has become a global problem,
most of the solutions developed to address it are based on studies done in the
developed world. Moreover, the commercial sector is among the primary consumers
of resources, yet research work has been mostly limited to residential users.
We present a study exploring employees' perceptions, beliefs, and attitudes
toward environmental sustainability at workplaces in a developing region. To
obtain broader context, we also conducted a focus group with facility team
members. Our study highlights that, in spite of strong motivations to conserve,
employees' conservation actions are limited by a lack of control, knowledge,
and responsibility. We identify new opportunities for
design such as designing location specific buildings, removing inefficient
choices, and building communal spaces, to facilitate conservation at
workplaces.
Effects of hand drift while typing on touchscreens
Input 1: pens and consistency
/
Li, Frank Chun Yat
/
Findlater, Leah
/
Truong, Khai N.
Proceedings of the 2013 Conference on Graphics Interface
2013-05-29
p.95-98
© Copyright 2013 Authors
Summary: On a touchscreen keyboard, it can be difficult to continuously type without
frequently looking at the keys. One factor contributing to this difficulty is
called hand drift, where a user's hands gradually misalign with the touchscreen
keyboard due to limited tactile feedback. Although intuitive, there remains a
lack of empirical data to describe the effect of hand drift. A formal
understanding of it can provide insights for improving soft keyboards. To
formally quantify the degree (magnitude and direction) of hand drift, we
conducted a 3-session study with 13 participants. We measured hand drift with
two typing interfaces: a visible conventional keyboard and an invisible
adaptive keyboard. To expose drift patterns, both keyboards used relaxed letter
disambiguation to allow for unconstrained movement. Findings show that hand
drift occurred in both interfaces, at an average rate of 0.25 mm/min on the
conventional keyboard and 1.32 mm/min on the adaptive keyboard. Participants
were also more likely to drift up and/or left instead of down or right.
Paratyping: A Contextualized Method of Inquiry for Understanding Perceptions
of Mobile and Ubiquitous Computing Technologies
/
Hayes, Gillian R.
/
Truong, Khai N.
Human-Computer Interaction
2013-05-01
v.28
n.3
p.265-286
© Copyright 2013 Taylor and Francis Group, LLC
Summary: In this article, we describe the origins, use, and efficacy of a
contextualized method for evaluating mobile and ubiquitous computing systems.
This technique, which we called paratyping, is based on experience prototyping
and event-contingent experience sampling and allows researchers to survey
people in real-life situations without the need for costly and sometimes
untenable deployment evaluations. We used this tool to probe the perceptions of
the conversation partners of users of the Personal Audio Loop, a memory aid
with the potential for substantial privacy implications. Based on that
experience, we refined and adapted the approach to evaluate SenseCam, a
wearable, automatic picture-taking device, across multiple geographic
locations. We describe the benefits, challenges, and methodological
considerations that emerged during our use of the paratyping method across
these two studies. We describe how this method blends some of the benefits of
survey-based research with more contextualized methods, focusing on
trustworthiness of the method in terms of generating scientific knowledge. In
particular, this method is a good fit for studying certain classes of mobile
and ubiquitous computing applications but can be applied to many types of
applications.
"Welcome!": social and psychological predictors of volunteer socializers in
online communities
Wikipedia supported cooperative work
/
Hsieh, Gary
/
Hou, Youyang
/
Chen, Ian
/
Truong, Khai N.
Proceedings of ACM CSCW'13 Conference on Computer-Supported Cooperative Work
2013-02-23
v.1
p.827-838
© Copyright 2013 ACM
Summary: Volunteer socializers are members of a community who voluntarily help
newcomers become familiar with the popular practices and attitudes of the
community. In this paper, we explore the social and psychological predictors of
volunteer socializers on Reddit, an online social news-sharing community.
Through a survey of over 1000 Reddit users, we found that social identity,
prosocial orientation, and generalized reciprocity are all predictors of
socializers in the community. Interestingly, a user's tenure with the online
community has a quadratic effect on volunteer socialization behaviors -- new
and long-time members are both more likely to help newcomers than those in
between. We conclude with design implications for motivating users to help
newcomers.
Evaluating the Effect of Phrase Set in Hindi Text Entry
Text Comprehensibility
/
Jain, Mohit
/
Tekchandani, Khushboo
/
Truong, Khai N.
Proceedings of IFIP INTERACT'13: Human-Computer Interaction-4
2013
v.4
p.195-202
Keywords: Hindi; Text Input; Phrase Set
© Copyright 2013 IFIP
Summary: Recently, many different Indic text entry mechanisms have been proposed and
evaluated. Whereas the use of a common phrase set across text entry research
may help to produce results that generalize across studies, previous Indic text
entry evaluations have used a variety of different phrase sets. In this
paper, we develop and evaluate three different types of Hindi phrase sets that
have been previously used in the literature -- Hindi films, a grade VII
textbook and a translated version of MacKenzie and Soukoreff's phrases -- to
study effects of their characteristics on performance. No statistical
difference was found in novice user performance due to the different phrase
sets. However, based on participant feedback, we report that consideration
should be taken with regards to phrase length, frequency, understandability,
and memorability in the design and selection of text-entry phrases.
Escape-Keyboard: A Sight-Free One-Handed Text Entry Method for Mobile
Touch-screen Devices
/
Banovic, Nikola
/
Yatani, Koji
/
Truong, Khai N.
International Journal of Mobile Human Computer Interaction
2013
v.5
n.3
p.42-61
© Copyright 2013 IGI Global
Summary: Mobile text entry methods traditionally have been designed with the
assumption that users can devote full visual and mental attention on the
device, though this is not always possible. The authors present their iterative
design and evaluation of Escape-Keyboard, a sight-free text entry method for
mobile touch-screen devices. Escape-Keyboard allows the user to type letters
with one hand by pressing the thumb on different areas of the screen and
performing a flick gesture. The authors then examine the performance of
Escape-Keyboard in a study that included 16 sessions in which participants
typed in sighted and sight-free conditions. Qualitative results from this study
highlight the importance of reducing the mental load of using Escape-Keyboard
to improve user performance over time. The authors thus also explore features
to mitigate this learnability issue. Finally, the authors investigate the upper
bound on the sight-free performance with Escape-Keyboard by performing
theoretical analysis of the expert peak performance.
BodyScope: a wearable acoustic sensor for activity recognition
Sensing on and with people
/
Yatani, Koji
/
Truong, Khai N.
Proceedings of the 2012 International Conference on Ubiquitous Computing
2012-09-05
p.341-350
© Copyright 2012 ACM
Summary: Accurate activity recognition enables the development of a variety of
ubiquitous computing applications, such as context-aware systems, lifelogging,
and personal health systems. Wearable sensing technologies can be used to
gather data for activity recognition without requiring sensors to be installed
in the infrastructure. However, the user may need to wear multiple sensors for
accurate recognition of a larger number of different activities. We developed a
wearable acoustic sensor, called BodyScope, to record the sounds produced in
the user's throat area and classify them into user activities, such as eating,
drinking, speaking, laughing, and coughing. The F-measure of the Support Vector
Machine classification of 12 activities using only our BodyScope sensor was
79.5%. We also conducted a small-scale in-the-wild study, and found that
BodyScope was able to identify four activities (eating, drinking, speaking, and
laughing) at 71.5% accuracy.
uSmell: a gas sensor system to classify odors in natural, uncontrolled
environments
Posters
/
Hirano, Sen H.
/
Truong, Khai N.
/
Hayes, Gillian R.
Proceedings of the 2012 International Conference on Ubiquitous Computing
2012-09-05
p.657-658
© Copyright 2012 ACM
Summary: Smell can be used to infer quite a bit of context about environments.
Previous research primarily has shown that gas sensors can be used to
discriminate accurately between odors when used in testing chambers. However,
potential real-world applications require these sensors to perform an analysis
in uncontrolled environments, which can be challenging. In this poster, we
present our gas sensor system, called uSmell, to address these challenges. This
system has the potential to improve context-aware applications, such as
lifelogging and assisted living.
CrossingGuard: exploring information content in navigation aids for visually
impaired pedestrians
Supporting visually impaired users
/
Guy, Richard
/
Truong, Khai
Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems
2012-05-05
v.1
p.405-414
© Copyright 2012 ACM
Summary: Visually impaired pedestrians experience unique challenges when navigating
an urban environment because many cues about orientation and traffic patterns
are difficult to ascertain without the use of vision. Technological aids such
as customized GPS navigation tools offer the chance to augment visually
impaired pedestrians' sensory information with a richer depiction of an
environment, but care must be taken to balance the need for more information
with other demands on the senses. In this paper, we focus on the information
needs of visually impaired pedestrians at intersections, which present a
specific cause of stress when navigating in unfamiliar locations. We present a
navigation application prototype called CrossingGuard that provides rich
information to a user such as details about intersection geometry that are not
available to visually impaired pedestrians today. A user study comparing
content-rich information to a baseline condition shows that content-rich
information raises the level of comfort that visually impaired pedestrians feel
at unfamiliar intersections. In addition, we discuss the categories of
information that are most useful. Finally, we introduce a micro-task approach
to gather intersection data via Street View annotations that achieves 85.5%
accuracy over the 9 categories of information used by CrossingGuard.
SpaceSense: representing geographical information to visually impaired
people using spatial tactile feedback
Supporting visually impaired users
/
Yatani, Koji
/
Banovic, Nikola
/
Truong, Khai
Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems
2012-05-05
v.1
p.415-424
© Copyright 2012 ACM
Summary: Learning an environment can be challenging for people with visual
impairments. Braille maps allow their users to understand the spatial
relationship between a set of places. However, physical Braille maps are often
costly, may not always cover an area of interest with sufficient detail, and
might not present up-to-date information. We built a handheld system for
representing geographical information called SpaceSense, which includes custom
spatial tactile feedback hardware: multiple vibration motors attached to
different locations on a mobile touch-screen device. It offers high-level
information about the distance and direction towards a destination and
bookmarked places through vibrotactile feedback to help the user maintain the
spatial relationships between these points. SpaceSense also adapts a
summarization technique for online user reviews of public and commercial
venues. Our user study shows that participants could build and maintain the
spatial relationships between places on a map more accurately with SpaceSense
compared to a system without spatial tactile feedback. They pointed
specifically to having spatial tactile feedback as the contributing factor in
successfully building and maintaining their mental map.
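The core mapping from a destination's bearing to one of several vibration motors can be sketched as follows; the motor count and even spacing around the device are assumptions for illustration, not SpaceSense's exact hardware layout:

```python
def motor_for_bearing(bearing_deg, n_motors=4):
    """Map a destination bearing (degrees clockwise from the top of the
    device) to the nearest of n vibration motors spaced evenly around the
    device edge. Four evenly spaced motors are an illustrative assumption."""
    sector = 360.0 / n_motors
    return int(((bearing_deg % 360) + sector / 2) // sector) % n_motors
```

With four motors, bearings near 0, 90, 180, and 270 degrees activate the top, right, bottom, and left motors respectively, giving the user a coarse vibrotactile sense of direction toward the destination.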
Evaluating the implicit acquisition of second language vocabulary using a
live wallpaper
Promoting educational opportunity
/
Dearman, David
/
Truong, Khai
Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems
2012-05-05
v.1
p.1391-1400
© Copyright 2012 ACM
Summary: An essential aspect of learning a second language is the acquisition of
vocabulary. However, acquiring vocabulary is often a protracted process that
requires repeated and spaced exposure, which can be difficult to accommodate
given the busyness of daily living. In this paper, we explore if a learner can
implicitly acquire second language vocabulary through her explicit interactions
with her mobile phone (e.g., navigating multiple home screens) using an
interface we developed called Vocabulary Wallpaper. In addition, we examine
whether the type of vocabulary this technique exposes to the learner,
contextually relevant or contextually independent, influences the learner's
rate of vocabulary acquisition. The results of our study show that participants
were able to use Vocabulary Wallpaper to increase the amount of second language
vocabulary they could recognize and recall, and that their rate of acquisition
was significantly greater when presented with contextually relevant vocabulary
than with contextually independent vocabulary.
Determining the orientation of proximate mobile devices using their back
facing camera
Phone fun: extending mobile interaction
/
Dearman, David
/
Guy, Richard
/
Truong, Khai
Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems
2012-05-05
v.1
p.2231-2234
© Copyright 2012 ACM
Summary: Proximate mobile devices that are aware of their orientation relative to one
another can support novel and natural forms of interaction. In this paper, we
present a method to determine the relative orientation of proximate mobile
devices using only the backside camera. We implemented this method as a service
called Orienteer, which provides a mobile device with the orientation of other
proximate mobile devices. We demonstrate that orientation information can be
used to enable novel and natural interactions by developing two applications
that allow the user to push content in the direction of another device to share
it and point the device toward another to filter content based on the device's
owner. An informal evaluation revealed that interactions built upon orientation
information can be natural and compelling to users, but developers and
designers need to carefully consider how orientation should be applied
effectively.
Investigating effects of visual and tactile feedback on spatial coordination
in collaborative handheld systems
Media production
/
Yatani, Koji
/
Gergle, Darren
/
Truong, Khai
Proceedings of ACM CSCW'12 Conference on Computer-Supported Cooperative Work
2012-02-11
v.1
p.661-670
© Copyright 2012 ACM
Summary: Mobile and handheld devices have become platforms to support remote
collaboration. But, their small form-factor may impact the effectiveness of the
visual feedback channel often used to help users maintain an awareness of their
partner's activities during synchronous collaborative tasks. We investigated
how visual and tactile feedback affects collaboration on mobile devices, with
emphasis on spatial coordination in a shared workspace. From two user studies,
our results highlight different benefits of each feedback channel in
collaborative handheld systems. Visual feedback can provide precise spatial
information for collaborators, but degrades collaboration when the feedback is
occluded, and sometimes can distract the user's attention. Spatial tactile
feedback can reduce the overload of information in visual space and gently
guides the user's attention to an area of interest. Our results also show that
visual and tactile feedback can complement each other, and systems using both
feedback channels can support better spatial coordination than systems using
only one form of feedback.
An examination of how households share and coordinate the completion of
errands
Family life
/
Sohn, Timothy
/
Lee, Lorikeet
/
Zhang, Stephanie
/
Dearman, David
/
Truong, Khai
Proceedings of ACM CSCW'12 Conference on Computer-Supported Cooperative Work
2012-02-11
v.1
p.729-738
© Copyright 2012 ACM
Summary: People often complete tasks and to-dos not only for themselves but also for
others in their household. In this work, we examine how household members share
and accomplish errands both individually and together. We conducted a
three-week diary study with eight households to understand the types of errands
that family members and roommates share with each other. We explore their
motivations for offering and requesting help to complete their errands and the
variety of methods for doing so. Our findings reveal when participants face
challenges completing their errands and how household members
request and receive help. We learned that the cooperative performance of
errands is typically dependent on household members' location, availability,
and capability. Using these findings, we discuss design opportunities for
cooperative errands sharing systems that can assist households.
Design of unimanual multi-finger pie menu interaction
Interaction techniques on and above the surface
/
Banovic, Nikola
/
Li, Frank Chun Yat
/
Dearman, David
/
Yatani, Koji
/
Truong, Khai N.
Proceedings of the 2011 ACM International Conference on Interactive
Tabletops and Surfaces
2011-11-13
p.120-129
© Copyright 2011 ACM
Summary: Context menus, most commonly the right click menu, are a traditional method
of interaction when using a keyboard and mouse. Context menus make a subset of
commands in the application quickly available to the user. However, on tabletop
touchscreen computers, context menus have all but disappeared. In this paper,
we investigate how to design context menus for efficient unimanual multi-touch
use. We investigate the limitations of the arm, wrist, and fingers and how
they relate to human performance in multi-target selection tasks on a
multi-touch surface. We show that selecting targets with multiple fingers simultaneously
improves the performance of target selection compared to traditional single
finger selection, but also increases errors. Informed by these results, we
present our own context menu design for horizontal tabletop surfaces.
The 1line keyboard: a QWERTY layout in a single line
Mobile
/
Li, Frank Chun Yat
/
Guy, Richard T.
/
Yatani, Koji
/
Truong, Khai N.
Proceedings of the 2011 ACM Symposium on User Interface Software and
Technology
2011-10-16
v.1
p.461-470
© Copyright 2011 ACM
Summary: Current soft QWERTY keyboards often consume a large portion of the screen
space on portable touchscreens. This space consumption can diminish the overall
user experience on these devices. In this paper, we present the 1Line keyboard,
a soft QWERTY keyboard that is 140 pixels tall (in landscape mode), or 40% of
the height of the native iPad QWERTY keyboard. Our keyboard condenses the three
rows of keys in the normal QWERTY layout into a single line with eight keys.
The sizing of the eight keys is based on users' mental layout of a QWERTY
keyboard on an iPad. The system disambiguates the word the user types based on
the sequence of keys pressed. The user can use flick gestures to perform
backspace and enter, and tap on the bezel below the keyboard to input a space.
Through an evaluation, we show that participants are able to quickly learn how
to use the 1Line keyboard and type at a rate of over 30 WPM after just five
20-minute typing sessions. Using a keystroke level model, we predict the peak
expert text entry rate with the 1Line keyboard to be 66-68 WPM.
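Collapsing the three QWERTY rows into eight keys makes most key sequences ambiguous, so the keyboard must recover the intended word from the sequence of keys pressed. A minimal sketch, using a hypothetical column-wise letter grouping (the paper's actual key assignment may differ):

```python
# Illustrative grouping: each key holds one vertical slice of QWERTY,
# with the rightmost letters pooled on the eighth key. This is an
# assumption, not the 1Line keyboard's published layout.
GROUPS = ["qaz", "wsx", "edc", "rfv", "tgb", "yhn", "ujm", "iopkl"]
KEY_OF = {ch: i for i, group in enumerate(GROUPS) for ch in group}

def key_sequence(word):
    """The sequence of key indices a word produces on the collapsed layout."""
    return tuple(KEY_OF[ch] for ch in word)

def disambiguate(keys, lexicon):
    """Return lexicon words whose key sequence matches, most frequent first.
    lexicon: {word: corpus frequency}."""
    matches = [w for w in lexicon if key_sequence(w) == tuple(keys)]
    return sorted(matches, key=lexicon.get, reverse=True)
```

Under this grouping, "can" and "dan" share the key sequence (2, 0, 5); frequency ordering puts the more likely word first for the user to accept.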
Review spotlight: a user interface for summarizing user-generated reviews
using adjective-noun word pairs
Search & stuff
/
Yatani, Koji
/
Novati, Michael
/
Trusty, Andrew
/
Truong, Khai N.
Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems
2011-05-07
v.1
p.1541-1550
© Copyright 2011 ACM
Summary: Many people read online reviews written by other users to learn more about a
product or venue. However, the overwhelming amount of user-generated reviews
and variance in length, detail and quality across the reviews make it difficult
to glean useful information. In this paper, we present the iterative design of
our system, called Review Spotlight. It provides a brief overview of reviews
using adjective-noun word pairs, and allows the user to quickly explore the
reviews in greater detail. Through a laboratory user study which required
participants to perform decision making tasks, we showed that participants
could form detailed impressions about restaurants and decide between two
options significantly faster with Review Spotlight than with traditional review
webpages.
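Extracting adjective-noun word pairs from review text can be sketched as follows. A real system would use a part-of-speech tagger to identify adjectives and nouns; the small fixed adjective list here is an assumption that keeps the sketch dependency-free:

```python
from collections import Counter

# Tiny illustrative adjective list; Review Spotlight itself would rely on
# a real part-of-speech tagger rather than a hand-picked set.
ADJECTIVES = {"great", "friendly", "slow", "delicious", "noisy"}

def adjective_noun_pairs(reviews):
    """Count adjacent (adjective, following-word) pairs across reviews and
    return them most frequent first -- the word-pair summary idea."""
    pairs = Counter()
    for text in reviews:
        words = text.lower().split()
        for first, second in zip(words, words[1:]):
            if first in ADJECTIVES:
                pairs[(first, second)] += 1
    return pairs.most_common()
```

Run over a venue's reviews, the top pairs (e.g. "great food", "slow service") form the at-a-glance overview from which the user can drill into the underlying reviews.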