
ACM Transactions on Accessible Computing 3

Editors: Andrew Sears; Vicki L. Hanson
Standard No.: ISSN 1936-7228
Links: Journal Home Page | ACM Digital Library | Table of Contents
  1. TACCESS 2010-09 Volume 3 Issue 1
  2. TACCESS 2010-10 Volume 3 Issue 2
  3. TACCESS 2011-04 Volume 3 Issue 3
  4. TACCESS 2011-04 Volume 3 Issue 4

TACCESS 2010-09 Volume 3 Issue 1

Multi-Layered Interfaces to Improve Older Adults' Initial Learnability of Mobile Applications
  Rock Leung; Leah Findlater; Joanna McGrenere; Peter Graf; Justine Yang
Mobile computing devices can offer older adults (ages 65+) support in their daily lives, but older adults often find such devices difficult to learn and use. One potential design approach to improve the learnability of mobile devices is a Multi-Layered (ML) interface, where novice users start with a reduced-functionality interface layer that only allows them to perform basic tasks, before progressing to a more complex interface layer when they are comfortable. We studied the effects of a ML interface on older adults' performance in learning tasks on a mobile device. We conducted a controlled experiment with 16 older (ages 65-81) and 16 younger participants (ages 21-36), who performed tasks on either a 2-layer or a nonlayered (control) address book application, implemented on a commercial smart phone. We found that the ML interface's Reduced-Functionality layer, compared to the control's Full-Functionality layer, better helped users to master a set of basic tasks and to retain that ability 30 minutes later. When users transitioned from the Reduced-Functionality to the Full-Functionality interface layer, their performance on the previously learned tasks was negatively affected, but no negative impact was found on learning new, advanced tasks. Overall, the ML interface provided greater benefit for older participants than for younger participants in terms of task completion time during initial learning, perceived complexity, and preference. We discuss how the ML interface approach is suitable for improving the learnability of mobile applications, particularly for older adults.
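The two-layer mechanism described in the abstract can be sketched as a simple feature-gating class. The class name, task names, and layer-switching API below are hypothetical illustrations of the design pattern, not the authors' implementation.

```python
# Sketch of a Multi-Layered (ML) interface: novices see only a reduced set of
# basic tasks, and advanced tasks appear only after moving to the full layer.

class LayeredAddressBook:
    """An address book whose visible functionality depends on the active layer."""

    BASIC_TASKS = {"add_contact", "call_contact", "delete_contact"}
    ADVANCED_TASKS = {"assign_ringtone", "set_speed_dial", "export_contacts"}

    def __init__(self):
        # Novice users start in the reduced-functionality layer.
        self.layer = "reduced"

    def available_tasks(self):
        """Return the set of tasks exposed by the current layer."""
        if self.layer == "reduced":
            return set(self.BASIC_TASKS)
        return self.BASIC_TASKS | self.ADVANCED_TASKS

    def advance_layer(self):
        """Move to the full-functionality layer once the user is comfortable."""
        self.layer = "full"
```

The key design point the study tests is that the reduced layer hides, rather than merely de-emphasizes, the advanced functionality during initial learning.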
Accurate and Accessible Motion-Capture Glove Calibration for Sign Language Data Collection
  Matt Huenerfauth; Pengfei Lu
Motion-capture recordings of sign language are used in research on automatic recognition of sign language or generation of sign language animations, which have accessibility applications for deaf users with low levels of written-language literacy. Motion-capture gloves are used to record the wearer's handshape. Unfortunately, they require a time-consuming and inexact calibration process each time they are worn. This article describes the design and evaluation of a new calibration protocol for motion-capture gloves, which is designed to make the process more efficient and to be accessible for participants who are deaf and use American Sign Language (ASL). The protocol was evaluated experimentally; deaf ASL signers wore the gloves, were calibrated (using the new protocol and using a calibration routine provided by the glove manufacturer), and were asked to perform sequences of ASL handshapes. Five native ASL signers rated the correctness and understandability of the collected handshape data. In an additional evaluation, ASL signers were asked to perform ASL stories while wearing the gloves and a motion-capture bodysuit (in some cases our new calibration protocol was used, in other cases, the standard protocol). Later, twelve native ASL signers watched animations produced from this motion-capture data and answered comprehension questions about the stories. In both evaluation studies, the new protocol received significantly higher scores than the standard calibration. The protocol has been made freely available online, and it includes directions for the researcher, images and videos of how participants move their hands during the process, and directions for participants (as ASL videos and English text).
Investigating Grid-Based Navigation: The Impact of Physical Disability
  Shaojian Zhu; Jinjuan Feng; Andrew Sears
Hands-free speech-based technology can be a useful alternative for individuals who find traditional input devices, such as keyboard and mouse, difficult to use. Various speech-based navigation techniques have been examined, and several are available in commercial software applications. Among these alternatives, grid-based navigation has demonstrated both potential and limitations. In this article, we discuss an empirical study that assessed the efficacy of two enhancements to grid-based navigation: magnification and fine-tuning. The magnification capability enlarges the selected region when it becomes sufficiently small, making it easier to see the target and cursor. The fine-tuning capability allows users to move the cursor short distances to position the cursor over the target. The study involved one group of participants with physical disabilities, an age-matched group of participants without disabilities, and a third group that included young adults without disabilities. The results confirm that both magnification and fine-tuning significantly improved the participants' performance when selecting targets, especially small targets. Providing either, or both, of the proposed enhancements substantially reduced the gaps in performance due to disability and age. The results will inform the design of speech-based target selection mechanisms, allowing users to select targets faster while making fewer errors.
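The grid-based mechanism with the two studied enhancements can be sketched as follows. The 3x3 grid, row-major cell numbering, magnification threshold, and fine-tuning step size are hypothetical choices that illustrate the technique, not the study software.

```python
# Grid-based speech navigation: the user speaks a cell number to shrink the
# active region, magnification kicks in when the region gets small, and
# fine-tuning nudges the cursor the last few pixels onto the target.

def refine(region, cell, rows=3, cols=3):
    """Shrink `region` (x, y, w, h) to the spoken grid cell (1..rows*cols, row-major)."""
    x, y, w, h = region
    r, c = divmod(cell - 1, cols)
    return (x + c * w / cols, y + r * h / rows, w / cols, h / rows)

def navigate(screen, cells, magnify_below=100):
    """Apply successive spoken cell choices; report whether magnification engaged."""
    region, magnified = screen, False
    for cell in cells:
        region = refine(region, cell)
        if min(region[2], region[3]) < magnify_below:
            magnified = True  # region is drawn enlarged so target and cursor stay visible
    return region, magnified

def fine_tune(point, direction, step=5):
    """Nudge the cursor a short, fixed distance to land on the target."""
    dx, dy = {"left": (-step, 0), "right": (step, 0),
              "up": (0, -step), "down": (0, step)}[direction]
    return (point[0] + dx, point[1] + dy)
```

For example, speaking "five" twice on an 810x810 screen narrows the region to a 90x90 square at the center, small enough to trigger magnification under the threshold assumed here.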

TACCESS 2010-10 Volume 3 Issue 2

Guest Editorial ASSETS 2009
  Kathleen F. McCoy
Exploratory Analysis of Collaborative Web Accessibility Improvement
  Daisuke Sato; Hironobu Takagi; Masatomo Kobayashi; Shinya Kawanaka; Chieko Asakawa
The Web is becoming a platform for daily activities and is expanding the opportunities for collaboration among people all over the world. The effects of these innovations are seen not only in major Web services such as wikis and social networking services but also in accessibility services. Collaborative accessibility improvement has great potential to make the Web more adaptive. Screen reader users, developers, site owners, and any Web volunteers who want to help the users are invited into the activities to improve accessibility in a timely manner. The Social Accessibility Project is an experimental service for a new needs-driven improvement model based on collaborative metadata authoring technologies. In 20 months, about 19,000 pieces of metadata were created for more than 3,000 Web pages through collaboration based on 355 requests submitted from users. We encountered many challenges as we sought to create a new mainstream approach and created distinctive features in new user interfaces to address some of these challenges. Although the new features increased user participation, serious issues remain. The productivity of the volunteers exceeded our expectations, but we found large and important problems in the users' lack of awareness of their own accessibility problems. This is a critical problem for sustaining the active use of the service, because about 70% of the improvement starts with a request from a user. Helping users with visual impairments understand the actual issues is a crucial and challenging topic, and will lead to improved accessibility. We first introduce examples of collaboration, analyze several kinds of statistics on the activities of the users and volunteers of the pilot service, and then discuss our findings and challenges. Five future foci are considered: site-wide metadata authoring, encouraging active participation by users, quality management for the created metadata, metadata for dynamic HTML applications, and collaborations with site owners.
Orienting Kinesthetically: A Haptic Handheld Wayfinder for People with Visual Impairments
  Tomohiro Amemiya; Hisashi Sugiyama
Orientation and position information are vital for people with visual impairments if they are to avoid obstacles and hazards while walking around. We develop and evaluate a haptic direction indicator that delivers directional information in real time through kinesthetic cues. The indicator uses a novel kinesthetic perception method called the pseudo-attraction force technique, which employs the nonlinear relationship between perceived and physical acceleration to generate a force sensation. In an experiment, we find that the haptic direction indicator allowed people with visual impairments to walk safely along a predefined route at their usual walking pace without any previous training, independent of the existence of auditory information. The findings indicate that the haptic direction indicator is effective at delivering simple navigational information, and is a suitable substitute for and/or enhancement to conventional wayfinding methods.
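The pseudo-attraction force technique described above exploits the nonlinear relationship between perceived and physical acceleration: a brief, strong pull in the target direction followed by a longer, weaker return stroke feels like a net tug even though the motion averages out. The timing and amplitude constants below are illustrative assumptions, not the device's actual parameters.

```python
# Sketch of an asymmetric acceleration profile of the kind the
# pseudo-attraction force technique uses: the strong phase dominates
# perception, while the impulse over the whole cycle sums to zero.

def asymmetric_pulse(strong_ms=10, weak_ms=40, strong_amp=8.0):
    """Return a per-millisecond acceleration trace whose total impulse is zero."""
    # Choose the weak-phase amplitude so the two phases cancel exactly.
    weak_amp = strong_amp * strong_ms / weak_ms
    return [strong_amp] * strong_ms + [-weak_amp] * weak_ms

pulse = asymmetric_pulse()
net_impulse = sum(pulse)  # zero: the mass swings back, but the pull is what is felt
```

Repeating such pulses in the direction of the next waypoint is what lets a handheld device convey a continuous directional "tug" without any external anchor.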
Usability of a Multimodal Video Game to Improve Navigation Skills for Blind Children
  Jaime Sánchez; Mauricio Saenz; Jose Miguel Garrido
This work presents an evaluative study on the usability of a haptic device together with a sound-based video game for the development and use of orientation and mobility (O&M) skills in closed, unfamiliar spaces by blind, school-aged children. A usability evaluation was implemented for a haptic device especially designed for this study (Digital Clock Carpet) and a 3D video game (MOVA3D) in order to determine the degree to which the user accepted the device, and the level of the user's satisfaction regarding their interaction with these products for O&M purposes. In addition, a cognitive evaluation was administered. The results show that both the haptic device and the video game are usable, accepted, and considered pleasant to use by blind children, and that both are ready to be used for cognitive learning purposes. Results from the cognitive study demonstrated significant gains in the tempo-spatial orientation skills of blind children when navigating in unfamiliar spaces.
Multimodal Presentation of Two-Dimensional Charts: An Investigation Using Open Office XML and Microsoft Excel
  Iyad Abu Doush; Enrico Pontelli; Tran Cao Son; Dominic Simon; Ou Ma
Several solutions based on aural and haptic feedback have been developed to enable access to complex online and digital information content for people with visual impairment. Nevertheless, several components of widely used software applications remain beyond the reach of traditional screen readers and Braille displays. This article investigates the nonvisual accessibility issues associated with the graphing component of Microsoft Excel and proposes a novel approach and system. The goal is to provide flexible multimodal presentation schemes that help visually impaired users comprehend the most commonly used two-dimensional business charts, demonstrated within the familiar context of Excel charts. The methodology identifies the need for three distinct strategies of user interaction with a chart: exploratory, guided, and summarization. These strategies have been implemented using a multimodal approach that combines aural cues, speech commentaries, and three-dimensional haptic feedback, in a system that uses the Novint Falcon haptic device and is integrated as a plug-in for Microsoft Excel. The prototype implementation and preliminary studies suggest that the multimodality can be effectively realized, and that users prefer to intertwine these strategies to gain an understanding of chart content.
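Of the three interaction strategies the abstract names, the summarization strategy is the simplest to illustrate: instead of rendering every data point, the system speaks an overview of a series. The statistics chosen and the sentence wording below are hypothetical, not the paper's actual speech output.

```python
# Sketch of a summarization strategy for a 2-D business chart: condense a
# data series into one spoken sentence rather than sonifying each point.

def summarize_series(name, values):
    """Return a one-sentence spoken summary of a chart series."""
    lo, hi = min(values), max(values)
    trend = "rising" if values[-1] > values[0] else "falling or flat"
    return (f"{name}: {len(values)} points, minimum {lo}, "
            f"maximum {hi}, overall {trend}.")
```

In a guided or exploratory strategy, by contrast, the haptic device would steer (or let the user steer) the cursor along the plotted values point by point.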

TACCESS 2011-04 Volume 3 Issue 3

Ability-Based Design: Concept, Principles and Examples
  Jacob O. Wobbrock; Shaun K. Kane; Krzysztof Z. Gajos; Susumu Harada; Jon Froehlich
Current approaches to accessible computing share a common goal of making technology accessible to users with disabilities. Perhaps because of this goal, they may also share a tendency to centralize disability rather than ability. We present a refinement to these approaches called ability-based design that consists of focusing on ability throughout the design process in an effort to create systems that leverage the full range of human potential. Just as user-centered design shifted the focus of interactive system design from systems to users, ability-based design attempts to shift the focus of accessible design from disability to ability. Although prior approaches to accessible computing may consider users' abilities to some extent, ability-based design makes ability its central focus. We offer seven ability-based design principles and describe the projects that inspired their formulation. We also present a research agenda for ability-based design.
Spindex (Speech Index) Improves Auditory Menu Acceptance and Navigation Performance
  Myounghoon Jeon; Bruce N. Walker
Users interact with mobile devices through menus, which can include many items. Auditory menus have the potential to make those devices more accessible to a wide range of users. However, auditory menus are a relatively new concept, and there are few guidelines that describe how to design them. In this paper, we detail how visual menu concepts may be applied to auditory menus in order to help develop design guidelines. Specifically, we examine how to optimize the designs of a new contextual cue, called "spindex" (i.e., speech index). We developed and evaluated various design alternatives for spindex and iteratively refined the design with sighted users and visually impaired users. As a result, the "attenuated" spindex was the best in terms of preference as well as performance, across user groups. Nevertheless, sighted and visually impaired participants showed slightly different responses and feedback. Results are discussed in terms of acoustical theory, practical display design, and assistive technology design.
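The spindex concept can be sketched as a speed-dependent choice of utterance: during fast scrolling the menu speaks only a brief index cue for each item's initial letter, and falls back to full text-to-speech when the user slows down. The speed threshold and cue format here are hypothetical; the study compared several cue designs and found the attenuated variant best.

```python
# Sketch of a spindex-style auditory menu cue selector.

def auditory_cue(item, scroll_speed, fast_threshold=3.0):
    """Return the utterance for a menu item at the given scroll speed (items/sec)."""
    if scroll_speed > fast_threshold:
        return item[0].upper()  # brief spindex cue: just the initial letter
    return item                 # slow scrolling: speak the full item name

menu = ["Alarm", "Bluetooth", "Brightness", "Calendar"]
fast_cues = [auditory_cue(m, 5.0) for m in menu]  # ["A", "B", "B", "C"]
slow_cues = [auditory_cue(m, 1.0) for m in menu]  # full item names
```

The "attenuated" variant the study favored additionally lowers the volume of repeated same-letter cues, which the sketch above does not model.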
Health Problem Solving by Older Persons Using a Complex Government Web Site: Analysis and Implications for Web Design
  Joseph Sharit; Mario A. Hernandez; Sankaran N. Nair; Thomas Kuhn; Sara J. Czaja
A large number of health-related Web sites currently exist that offer consumers a wealth of information that can be used to enhance the quality of their lives. Much less attention has been given to Web sites that can support complex health-related problem solving, as opposed to more general information search activities, of user populations such as older adults. In this article, we expand on a prior usability study that examined the performance of 112 older adults who were asked to solve two problems using the U.S. government's Medicare.gov Web site. The indications from that study were that older adults had difficulty carrying out these problem-solving tasks.
   This article illustrates, in the context of a case study, the use of a structured methodology for obtaining insights into Web site design issues that could make it difficult for healthcare consumers such as older adults to solve health-related problems. Initially, a number of Web design guidelines that have been developed for older users are presented. The argument is made that such checklist-type guidelines, though essential, are difficult to apply to complex Web-based problem-solving activities. Following a review of research in the area of Web-based health information seeking and problem-solving by older adults, the description and implementation of a methodology for aiding designers in anticipating cognitive demands that older users might confront during their problem-solving activities are presented. Detailed analysis of task performance is then presented to demonstrate that very few of the study participants were able to successfully negotiate the solution to the problem. The use of the methodology for identifying a number of user-Web site interaction issues and for proposing recommendations particularly relevant to older users, and ultimately for enhancing the accessibility of health Web sites, is highlighted. Finally, a detailed framework is presented that is intended for guiding designers in the application of this methodology.

TACCESS 2011-04 Volume 3 Issue 4

Evaluation of Haptic HTML Mappings Derived from a Novel Methodology
  Ravi Kuber; Wai Yu; M. Sile O'Modhrain
As levels of awareness surrounding accessibility increase, designers often look towards using nonvisual technologies to make existing graphical interfaces (e.g. Web pages) more inclusive. As existing haptic design guidance is not targeted to the specific needs of blind Web users, inappropriate touchable representations of graphical objects may be developed and integrated with Web interfaces, thereby reducing the quality of the browsing experience. This paper describes the evaluation of haptic HTML mappings that were developed using a participatory design-based technique, and presented using the Logitech WingMan force-feedback mouse. Findings have shown that participants were able to identify objects presented haptically, and develop a structural representation of layout from exploring content. Participants were able to perform a range of Web-based tasks that were previously found to be difficult for some blind users when using a screen reader alone. The haptic HTML mappings are presented, along with recommendations for their application derived from a validation study. The design guidance presented offers a standard reference tool for Web designers wanting to develop an accessible browsing application, using the benefits offered by a force-feedback mouse.
Identifying Behavioral Strategies of Visually Impaired Users to Improve Access to Web Content
  Darren Lunn; Simon Harper; Sean Bechhofer
The World Wide Web is a predominantly visual medium for presenting and disseminating information. As such, visually impaired users, who access content through audio interaction, are hindered as the Web is not designed with their needs in mind. To compensate for this, visually impaired users develop behavioral strategies to cope when access to the content becomes challenging. While tools exist to aid visually impaired users in accessing the Web, they tend to focus on adapting content to meet the needs of the device rather than the user. Therefore, to further improve Web access, an understanding of the behavioral strategies users employ is required. To achieve this, studies of eleven visually impaired Web users were conducted. The data from these sessions were analyzed to develop a framework for identifying strategies that users may employ when they face difficulties accessing the content. Using data from twenty visually impaired users obtained from an independent study, the framework was validated and shown to be flexible and accurate enough to be applicable to multiple data sources. An analysis of the coping strategies identified from the framework revealed six abstract patterns of coping. These patterns were used as the basis for developing behavior-driven transcoding that transformed static Web documents into interactive content by allowing users to navigate between key elements of the page through a consistent set of key presses. Results obtained from a user evaluation of the transcoding support the use of behavior-driven transcoding as a mechanism for improving access to Web content when compared to existing transcoding techniques. This result allows the coping strategies framework to be used as a foundation for further understanding of the strategies visually impaired users employ on Web sites and the transformations required to allow the Web to be accessible to those users.
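The key-press navigation that the behavior-driven transcoding provides can be sketched as a cursor over the key elements of a page. The element labels and the particular key bindings below are hypothetical illustrations of the mechanism, not the system's actual bindings.

```python
# Sketch of consistent key-press navigation between the key elements of a
# transcoded page: one key moves forward, one moves back, clamped at the ends.

class KeyElementNavigator:
    def __init__(self, elements):
        self.elements = elements  # key page elements, in document order
        self.index = 0            # start at the first key element

    def press(self, key):
        """'j' moves to the next key element, 'k' to the previous one."""
        if key == "j":
            self.index = min(self.index + 1, len(self.elements) - 1)
        elif key == "k":
            self.index = max(self.index - 1, 0)
        return self.elements[self.index]
```

The point of the transcoding is that this same small key set works on every page, so a screen-reader user does not have to rediscover a page's structure before navigating it.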