| Introduction to the Special Issue on Aging and Information Technology | | BIBAK | Full-Text | 1 | |
| Sara J. Czaja; Peter Gregor; Vicki L. Hanson | |||
| This article provides an introduction to the Special Issue on Aging. Keywords: Aging, cognitive aging, instruction, menu design, older adults, pen
interfaces, quality of life technology, spoken dialog systems, user privacy
preferences, video modeling, voice interfaces | |||
| Being Old Doesn't Mean Acting Old: How Older Users Interact with Spoken Dialog Systems | | BIBAK | Full-Text | 2 | |
| Maria Wolters; Kallirroi Georgila; Johanna D. Moore; Sarah E. MacPherson | |||
| Most studies on adapting voice interfaces to older users work top-down by
comparing the interaction behavior of older and younger users. In contrast, we
present a bottom-up approach. A statistical cluster analysis of 447 appointment
scheduling dialogs between 50 older and younger users and 9 simulated spoken
dialog systems revealed two main user groups, a "social" group and a "factual"
group. "Factual" users adapted quickly to the systems and interacted
efficiently with them. "Social" users, on the other hand, were more likely to
treat the system like a human, and did not adapt their interaction style. While
almost all "social" users were older, over a third of all older users belonged
in the "factual" group. Cognitive abilities and gender did not predict group
membership. We conclude that spoken dialog systems should adapt to users based
on observed behavior, not on age. Keywords: Aging, clustering, cognitive aging, spoken dialog systems, voice interfaces | |||
| Exploring Methods to Improve Pen-Based Menu Selection for Younger and Older Adults | | BIBAK | Full-Text | 3 | |
| Karyn Moffatt; Joanna McGrenere | |||
| Tablet PCs are gaining popularity, but many individuals still struggle with
pen-based interaction. In a previous baseline study, we examined the types of
difficulties younger and older adults encounter when using pen-based input. The
research reported in this article seeks to address one of these errors, namely,
missing just below. This error occurs in a menu selection task when a user's
selection pattern is downwardly shifted, such that the top edge of the menu
item below the target is selected relatively often, while the corresponding top
edge of the target itself is seldom selected. We developed two approaches for
addressing missing just below errors: reassigning selections along the top edge
and deactivating them. In a laboratory evaluation, only the deactivated edge
approach showed promise overall. Further analysis of our data revealed that
individual differences played a large role in our results and identified a new
source of selection difficulty. Specifically, we observed two error-prone
groups of users: the low hitters, who, like participants in the baseline study,
made missing just below errors, and the high hitters, who, in contrast, had
difficulty with errors on the item above. All but one of the older participants
fell into one of these error-prone groups, reinforcing that older users do need
better support for selecting menu items with a pen. Preliminary analysis of the
performance data suggests both of our approaches were beneficial for the low
hitters, but that additional techniques are needed to meet the needs of the
high hitters and to address the challenge of supporting both groups in a single
interface. Keywords: Pen-based target acquisition, aging, interaction techniques, menu design,
older users | |||
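The deactivated-edge approach described above can be sketched in a few lines: taps landing in a small dead zone along the top edge of a menu item are ignored rather than selecting the item below the intended target. The item height and dead-zone size here are assumptions for illustration, not the study's parameters.

```python
# Sketch of a "deactivated edge" menu-selection rule: ignore taps that land
# within a thin strip along each menu item's top edge, so downward-shifted
# taps do not select the item below the target. Geometry values are assumed.

ITEM_HEIGHT = 30  # px per menu item (assumed)
DEAD_ZONE = 4     # px deactivated along each item's top edge (assumed)

def resolve_selection(tap_y, n_items):
    """Return the index of the selected menu item, or None if the tap falls
    outside the menu or in a deactivated top-edge region (tap again)."""
    if not 0 <= tap_y < n_items * ITEM_HEIGHT:
        return None                       # outside the menu entirely
    item = int(tap_y // ITEM_HEIGHT)
    offset = tap_y - item * ITEM_HEIGHT   # distance below the item's top edge
    if item > 0 and offset < DEAD_ZONE:
        return None                       # deactivated edge: ignore the tap
    return item
```

The first item keeps its full height, since there is no item above it to protect; only interior top edges are deactivated.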
| Video Modeling for Training Older Adults to Use New Technologies | | BIBAK | Full-Text | 4 | |
| Doreen Struve; Hartmut Wandke | |||
| As technology increasingly permeates society, everyone must interact with
technical systems. Older adults often encounter difficulties when interacting
with complex, demanding systems in their daily lives. One approach to enabling
older adults to use new technologies safely and efficiently is to provide
training programs. In this article we report on a promising training strategy
that uses video modeling in conjunction with other instructional methods to
enhance learning.
Both cognitive and socio-motivational aspects are addressed. We assessed
whether guided error training in video modeling improves learning outcomes for
a Ticket Vending Machine (TVM). To investigate whether the training method
might also benefit younger adults, we compared 40 younger and 40 older adult
learners, each completing either guided error training or error-free training.
Younger and older participants made fewer mistakes in guided error training,
but no differences occurred in task completion times. Moreover, self-efficacy
increased with training for both age groups, but no significant differences
were found for the training condition. Analysis of knowledge gains showed a
significant benefit of guided error training in structural knowledge. Overall,
the results showed that guided error training may enhance learning for younger
and older adults who are learning to use technology. Keywords: Instruction, guided error training, older adults, self-efficacy, technology
use, video modeling | |||
| Disability, Age, and Informational Privacy Attitudes in Quality of Life Technology Applications: Results from a National Web Survey | | BIBAK | Full-Text | 5 | |
| Scott Beach; Richard Schulz; Julie Downs; Judith Matthews; Bruce Barron; Katherine Seelman | |||
| Technology aimed at enhancing function and enabling independent living among
older and disabled adults is a growing field of research. Privacy concerns are
a potential barrier to adoption of such technology. Using data from a national
Web survey (n=1,518), we focus on perceived acceptability of sharing
information about toileting, taking medications, moving about the home,
cognitive ability, driving behavior, and vital signs with five targets: family,
healthcare providers, insurance companies, researchers, and government. We also
examine acceptability of recording the behaviors using three methods: video
with sound, video without sound, and sensors. Results show that sharing or
recording information about toileting behavior; sharing information with the
government and insurance companies; and recording the information using video
were least acceptable. Respondents who reported current disability were
significantly more accepting of sharing and recording of information than
nondisabled adults, controlling for demographic variables, general technology
attitudes, and assistive device use. Results for age were less consistent,
although older respondents tended to be more accepting than younger
respondents. The study provides empirical evidence from a large national sample
of the implicit trade-offs between privacy and the potential for improved
health among older and disabled adults in quality of life technology
applications. Keywords: User privacy preferences, quality of life technology | |||
| Guest Editorial | | BIB | Full-Text | 7 | |
| Armando Barreto; Torsten Felzer | |||
| A3: HCI Coding Guideline for Research Using Video Annotation to Assess Behavior of Nonverbal Subjects with Computer-Based Intervention | | BIBAK | Full-Text | 8 | |
| Joshua Hailpern; Karrie Karahalios; James Halle; Laura Dethorne; Mary-Kelsey Coletto | |||
| HCI studies assessing nonverbal individuals (especially those who do not
communicate through traditional linguistic means: spoken, written, or sign) are
a daunting undertaking. Without the use of directed tasks, interviews,
questionnaires, or question-answer sessions, researchers must rely fully upon
observation of behavior, and the categorization and quantification of the
participant's actions. This problem is compounded further by the lack of
metrics to quantify the behavior of nonverbal subjects in computer-based
intervention contexts. We present a set of dependent variables called A3
(pronounced A-Cubed) or Annotation for ASD Analysis, to assess the behavior of
this demographic of users, specifically focusing on engagement and
vocalization. This paper demonstrates how theory from multiple disciplines can
be brought together to create a set of dependent variables, and demonstrates
these variables in an experimental context. Through an
examination of the existing literature, and a detailed analysis of the current
state of computer vision and speech detection, we present how computer
automation may be integrated with the A3 guidelines to reduce coding time and
potentially increase accuracy. We conclude by presenting how and where these
variables can be used in multiple research areas and with varied target
populations. Keywords: ASD, Autism, Kappa, annotation, audio feedback, coding, guideline,
intervention, nonverbal, point-by-point agreement, reliability, video,
visualization | |||
| A Linguistically Motivated Model for Speed and Pausing in Animations of American Sign Language | | BIBAK | Full-Text | 9 | |
| Matt Huenerfauth | |||
| Many deaf adults in the United States have difficulty reading written
English text; computer animations of American Sign Language (ASL) can improve
these individuals' access to information, communication, and services. Planning
and scripting the movements of a virtual character's arms and body to perform a
grammatically correct and understandable ASL sentence is a difficult task, and
the timing subtleties of the animation can be particularly challenging. After
examining the psycholinguistics literature on the speed and timing of ASL, we
have designed software to calculate realistic timing of the movements in ASL
animations. We have built algorithms to calculate the time-duration of signs
and the location/length of pauses during an ASL animation. To determine whether
our software can improve the quality of ASL animations, we conducted a study in
which native ASL signers evaluated the ASL animations processed by our
algorithms. We have found that: (1) adding linguistically motivated pauses and
variations in sign-durations improved signers' performance on a comprehension
task and (2) these animations were rated as more understandable by ASL signers. Keywords: American Sign Language, accessibility technology for the deaf, animation,
evaluation, natural language generation | |||
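The timing calculation described above can be sketched generically: give each sign a base duration scaled by signing rate, and insert pauses at syntactic boundaries, with longer pauses at stronger boundaries. The base durations, boundary strengths, and pause lengths below are invented values, not the authors' linguistically derived model.

```python
# Minimal sketch (not the paper's actual model) of scheduling an ASL
# animation: sign durations scale with signing rate, and pauses are
# inserted after signs at syntactic boundaries. All numbers are assumed.

BASE_PAUSE_MS = {1: 150, 2: 400}  # boundary strength -> pause length (assumed)

def schedule(signs, rate=1.0):
    """signs: list of (gloss, base_duration_ms, boundary_strength_after).
    Returns ([(gloss, start_ms, end_ms), ...], total_duration_ms)."""
    timeline, t = [], 0.0
    for gloss, dur, boundary in signs:
        end = t + dur / rate                       # faster rate -> shorter sign
        timeline.append((gloss, t, end))
        t = end + BASE_PAUSE_MS.get(boundary, 0) / rate  # pause at boundaries
    return timeline, t
```

For example, a three-sign sentence with a weak boundary after the second sign gets a short pause there and a longer sentence-final pause.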
| The Development and Evaluation of Performance-Based Functional Assessment: A Methodology for the Measurement of Physical Capabilities | | BIBAK | Full-Text | 10 | |
| Kathleen J. Price; Andrew Sears | |||
| Understanding and describing the physical capabilities of users with motor
impairments is a significant challenge for accessibility researchers and system
designers alike. Current practice is to use descriptors such as medical
diagnoses to represent a person's physical capabilities. This solution is not
adequate due to similarities in functional capabilities between diagnoses as
well as differences in capabilities within a diagnosis. An alternative is user
self-reporting or observation by another person, but these solutions can be
problematic because they rely on individual interpretations of capabilities and
may introduce unwanted bias. The current research focuses on defining an
objective, quantifiable, repeatable, and efficient methodology for assessing an
individual's physical capabilities in relation to use of information
technologies. Thirty-one users with a range of physical capabilities
participated in the evaluation of the proposed performance-based functional
assessment methodology. Building on the current standard for such assessments,
multiple observers provided independent assessments that served as the gold
standard for comparison. Promising metrics produced through the
performance-based assessment were identified through comparisons with these
observer evaluations. Predictive models were then generated via regression and
correlation analysis. The models were validated using a three-fold validation
process. Results from this initial research are encouraging, with the resulting
models explaining up to 92% of the variance in user capabilities. Directions
for future research are discussed. Keywords: Functional assessment, HCI, accessibility, physical capabilities | |||
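The three-fold validation step mentioned above can be sketched generically. The univariate linear model and the fold-splitting scheme below are illustrative stand-ins; the paper's actual predictors and regression models are not reproduced here.

```python
# Generic sketch of three-fold validation of a predictive model: split the
# data into 3 folds, fit on 2, score R^2 on the held-out fold, and average.
# Uses a simple ordinary-least-squares line as the stand-in model.
import statistics

def fit_line(xs, ys):
    """Ordinary least-squares slope/intercept for y = a*x + b."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return a, my - a * mx

def three_fold_r2(xs, ys):
    """Average held-out R^2 over three interleaved folds."""
    n = len(xs)
    folds = [range(i, n, 3) for i in range(3)]
    scores = []
    for held in folds:
        train = [i for i in range(n) if i not in held]
        a, b = fit_line([xs[i] for i in train], [ys[i] for i in train])
        my = statistics.fmean(ys[i] for i in held)
        ss_res = sum((ys[i] - (a * xs[i] + b)) ** 2 for i in held)
        ss_tot = sum((ys[i] - my) ** 2 for i in held)
        scores.append(1 - ss_res / ss_tot)  # R^2 on the held-out fold
    return statistics.fmean(scores)
```

On perfectly linear data the averaged held-out R^2 is 1.0; reported variance-explained figures such as the 92% above correspond to an averaged score of 0.92 under this kind of procedure.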
| Exploring Visual and Motor Accessibility in Navigating a Virtual World | | BIBAK | Full-Text | 11 | |
| Shari Trewin; Mark Laff; Vicki Hanson; Anna Cavender | |||
| For many millions of users, 3D virtual worlds provide an engaging, immersive
experience heightened by a synergistic combination of visual realism with
dynamic control of the user's movement within the virtual world. For
individuals with visual or dexterity impairments, however, one or both of those
synergistic elements are impacted, reducing the usability and therefore the
utility of the 3D virtual world. This article considers what features are
necessary to make virtual worlds usable by such individuals. Empirical work has
been based on a multiplayer 3D virtual world game called PowerUp, to which we
have built in an extensive set of accessibility features. These features
include in-world navigation and orientation tools, font customization,
self-voicing text-to-speech output, key remapping options, and keyboard-only
and mouse-only navigation. Through empirical work with legally blind teenagers
and adults with cerebral palsy, these features have been refined and validated.
Whereas accessibility support for users with visual impairment often revolves
around keyboard navigation, these studies emphasized the need to support visual
aspects of pointing device actions too. Other notable findings include use of
speech to supplement sound effects for novice users, and, for those with
cerebral palsy, a general preference to use a pointing device to look around
the world, rather than keys or on-screen buttons. The PowerUp accessibility
features provide a core level of accessibility for the user groups studied. Keywords: 3D, accessibility, audio interfaces, cerebral palsy, input, virtual worlds | |||
| Universal Design of Auditory Graphs: A Comparison of Sonification Mappings for Visually Impaired and Sighted Listeners | | BIBAK | Full-Text | 12 | |
| B. N. Walker; L. M. Mauney | |||
| Determining patterns in data is an important and often difficult task for
scientists and students. Unfortunately, graphing and analysis software is
typically inaccessible to users with vision impairment. Using sound
to represent data (i.e., sonification or auditory graphs) can make data
analysis more accessible; however, there are few guidelines for designing such
displays for maximum effectiveness. One crucial yet understudied design issue
is exactly how changes in data (e.g., temperature) are mapped onto changes in
sound (e.g., pitch), and how this may depend on the specific user. In this
study, magnitude estimation was used to determine preferred data-to-display
mappings, polarities, and psychophysical scaling functions relating data values
to underlying acoustic parameters (frequency, tempo, or modulation index) for
blind and visually impaired listeners. The resulting polarities and scaling
functions are compared to previous results with sighted participants. There was
general agreement about polarities obtained with the two listener populations,
with some notable exceptions. There was also evidence for strong similarities
regarding the magnitudes of the slopes of the scaling functions, again with
some notable differences. For maximum effectiveness, sonification software
designers will need to consider carefully their intended users' vision
abilities. Practical implications and limitations are discussed. Keywords: Magnitude estimation, auditory display, visually impaired | |||
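A data-to-display mapping of the kind studied above can be sketched as a power-law (Stevens-type) scaling function with a selectable polarity. The frequency range and unit exponent below are assumptions for illustration, not the fitted values reported in the study.

```python
# Sketch of a sonification mapping: a data value (e.g., temperature) is
# normalized, passed through a power-law scaling function, optionally
# inverted for negative polarity, and mapped to a frequency range.
# Range and exponent are assumed, not the paper's fitted parameters.

def data_to_frequency(x, x_min, x_max, f_min=200.0, f_max=1000.0,
                      exponent=1.0, positive_polarity=True):
    """Map x in [x_min, x_max] to a frequency in [f_min, f_max] Hz.
    positive_polarity=False inverts the mapping (higher data -> lower pitch)."""
    u = (x - x_min) / (x_max - x_min)  # normalize to [0, 1]
    u = u ** exponent                  # psychophysical scaling function
    if not positive_polarity:
        u = 1.0 - u                    # negative polarity
    return f_min + u * (f_max - f_min)
```

The study's point is that the preferred polarity and the slope of the scaling function can differ between blind and sighted listeners, so these parameters should be chosen per user population rather than hard-coded.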
| Computer Usage by Children with Down Syndrome: Challenges and Future Research | | BIBAK | Full-Text | 13 | |
| Jinjuan Feng; Jonathan Lazar; Libby Kumin; Ant Ozok | |||
| Children with Down syndrome, like neurotypical children, are growing up with
extensive exposure to computer technology. Computers and computer-related
devices have the potential to help these children in education, career
development, and independent living. Our understanding of computer usage by
this population is quite limited. Most of the software, games, and Web sites
that children with Down syndrome interact with are designed without
consideration of their special needs, making the applications less effective or
completely inaccessible. We conducted a large-scale survey that collected
computer usage information from the parents of approximately six hundred
children with Down syndrome. This article reports the text responses collected
in the survey and is intended as a step towards understanding the difficulties
children with Down syndrome experience while using computers. The relationship
between age and the specific types of difficulties, as well as related design
challenges, are also reported. A number of potential research directions
and hypotheses are identified for future studies. Due to limitations in survey
methodology, the findings need to be further validated through
hypothesis-driven, empirical studies. Keywords: Down syndrome, children, computer use, human-computer interaction | |||
| ITHACA: An Open Source Framework for Building Component-Based Augmentative and Alternative Communication Applications | | BIBAK | Full-Text | 14 | |
| Alexandros Pino; Georgios Kouroupetroglou | |||
| In response to the disabled community's long struggle to gain access to
adaptable, modular, multilingual, affordable, and sustainable Augmentative and
Alternative Communication (AAC) products, we propose the ITHACA framework: a
software environment for building component-based AAC applications, grounded
in the Design for All principles and a hybrid -- community and commercial --
Open Source development model. ITHACA addresses developers, vendors, and
people who use AAC. We introduce a new
viewpoint on the AAC product design-develop-distribute lifecycle, and a novel
way to search-select-modify-maintain the AAC aid. ITHACA provides programmers
with a set of tools and reusable Open Source code for building AAC software
components. It also enables AAC product vendors to assemble sophisticated
applications from independently prefabricated free or commercial software
parts available on the Web. Furthermore, it provides
people who use AAC with a variety of compatible AAC software products which
incorporate multimodal, user-tailored interfaces that can fulfill their
changing needs. The ITHACA architecture and the proposed fusion of past and
current approaches, trends and technologies are explained. ITHACA has been
successfully applied by implementing a family of AAC products, based on
interchangeable components. Several ready to use ITHACA-based components,
including on-screen keyboards, Text-to-Speech, symbol selection sets,
e-chatting, emailing, and scanning-based input, as well as four complete
communication aids addressing different use cases have been developed. This
demonstration showed good acceptance of the ITHACA applications and
substantial improvement in end users' communication skills. Developers'
experience working on ITHACA's Open Source projects was also evaluated
positively. More
importantly, the potential contribution of the component-based framework and
Open Source development model combination to the AAC community emerged. Keywords: Augmentative and alternative communication, component, design for all,
framework, open source | |||
| Towards A Universally Usable Human Interaction Proof: Evaluation of Task Completion Strategies | | BIBAK | Full-Text | 15 | |
| Graig Sauer; Jonathan Lazar; Harry Hochheiser; Jinjuan Feng | |||
| The need for security features to stop spam and bots has prompted research
aimed at developing human interaction proofs (HIPs) that are both secure and
easy to use. The primarily visual techniques used in these HIP tools present
difficulties for users with visual impairments. This article reports on the
development of Human-Interaction Proof, Universally Usable (HIPUU), a new
approach to human-interaction proofs based on identification of a series of
sound/image pairs. Simultaneous presentation of a single, unified task in two
alternative modalities provides multiple paths to successful task completion.
We present two alternative task completion strategies, based on differing input
strategies (menu-based vs. free text entry). Empirical results from studies
involving both blind and sighted users validate both the usability and
accessibility of these differing strategies, with blind users achieving
successful task completion rates above 90%. The strengths of the alternate task
completion strategies are discussed, along with possible approaches for
improving the robustness of HIPUU. Keywords: CAPTCHA, HIP, blind users, security, universal usability | |||
| Assessing Fit of Nontraditional Assistive Technologies | | BIBAK | Full-Text | 16 | |
| Adriane B. Randolph; Melody M. Moore Jackson | |||
| There are a variety of brain-based interface methods that depend on
measuring small changes in brain signals or properties. These methods have
typically been used for nontraditional assistive technology applications.
Nontraditional assistive technology is generally targeted at users with
severe motor disabilities which may last long-term due to illness or injury or
short-term due to situational disabilities. Control of a nontraditional
assistive technology can vary widely across users depending upon many factors
ranging from health to experience. Unfortunately, there is no systematic method
for assessing usability of nontraditional assistive technologies to achieve the
best control. Current trial-and-error methods of accommodating users waste
valuable time and resources, as users sometimes have diminishing abilities or
suffer from terminal illnesses. This work describes a
methodology for objectively measuring an individual's ability to control a
specific nontraditional assistive technology, thus expediting the
technology-fit process. Keywords: Assistive technology, brain-based interfaces, brain-computer interface,
direct-brain interface, functional near-infrared, galvanic skin response,
individual characteristics, user profiles | |||