
Ninth International ACM SIGACCESS Conference on Assistive Technologies

Fullname: Ninth International ACM SIGACCESS Conference on Assistive Technologies
Editors: Enrico Pontelli; Shari Trewin
Location: Tempe, Arizona, United States
Dates: 2007-Oct-15 to 2007-Oct-17
Publisher: ACM
Standard No: ISBN 1-59593-573-8, 978-1-59593-573-1; ACM Order Number 444070
Papers: 53
Pages: 268
  1. Keynote address
  2. Direct manipulation
  3. Web accessibility
  4. Non-visual presentation of information
  5. Teaching and learning
  6. Older and younger
  7. Personal technologies
  8. Supporting communication
  9. Posters and demonstrations
  10. Student research competition

Keynote address

Brain-computer interfaces (BCIs) for communication and control (pp. 1-2)
  Jonathan R. Wolpaw
Keywords: augmentative communication and control, brain-computer interfaces, neuro-muscular disorders

Direct manipulation

A comparison of area pointing and goal crossing for people with and without motor impairments (pp. 3-10)
  Jacob O. Wobbrock; Krzysztof Z. Gajos
Prior work has highlighted the challenges faced by people with motor impairments when trying to acquire on-screen targets using a mouse or trackball. Two reasons for this are the difficulty of positioning the mouse cursor within a confined area, and the challenge of accurately executing a click. We hypothesize that both of these difficulties with area pointing may be alleviated in a different target acquisition paradigm called "goal crossing." In goal crossing, users do not acquire a confined area, but instead pass over a target line. Although goal crossing has been studied for able-bodied users, its suitability for people with motor impairments is unknown. We present a study of 16 people, 8 of whom had motor impairments, using mice and trackballs to do area pointing and goal crossing. Our results indicate that Fitts' law models both techniques for both user groups. Furthermore, although throughput for able-bodied users was higher for area pointing than for goal crossing (4.72 vs. 3.61 bits/s), the opposite was true for users with motor impairments (2.34 vs. 2.88 bits/s), suggesting that goal crossing may be viable for them. However, error rates were higher for goal crossing than for area pointing under a strict definition of crossing errors (6.23% vs. 1.94%). Subjective results indicate a preference for goal crossing among motor-impaired users. This work provides the empirical foundation from which to pursue the design of crossing-based interfaces as accessible alternatives to pointing-based interfaces.
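   The throughput figures above follow the usual convention of Fitts' law studies: throughput is the task's index of difficulty divided by the observed movement time. A standard formulation (the common Shannon form, not necessarily the exact variant used in this paper) is

      ID = \log_2\left(\frac{A}{W} + 1\right) \text{ bits}, \qquad TP = \frac{ID}{MT} \text{ bits/s}

   where A is the distance to the target, W is the target width (for crossing, the length of the goal line), and MT is the movement time; many studies substitute an effective width W_e derived from the spread of observed selection endpoints. Under this convention, a throughput of 4.72 bits/s means a task with ID = 3 bits is completed in roughly 0.64 s.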
Slipping and drifting: using older users to uncover pen-based target acquisition difficulties (pp. 11-18)
  Karyn A. Moffatt; Joanna McGrenere
This paper presents the results of a study to gather information on the underlying causes of pen-based target acquisition difficulty. In order to observe both simple and complex interaction, two tasks (menu and Fitts' tapping) were used. Thirty-six participants across three age groups (18-54, 54-69, and 70-85) were included to draw out both general shortcomings of targeting, and those difficulties unique to older users. Three primary sources of target acquisition difficulty were identified: slipping off the target, drifting unexpectedly from one menu to the next, and missing a menu selection by selecting the top edge of the item below. Based on these difficulties, we then evolved several designs for improving pen-based target acquisition. An additional finding was that including older users as participants allowed us to uncover pen-interaction deficiencies that we would likely have missed otherwise.
Barrier pointing: using physical edges to assist target acquisition on mobile device touch screens (pp. 19-26)
  Jon Froehlich; Jacob O. Wobbrock; Shaun K. Kane
Mobile phones and personal digital assistants (PDAs) are incredibly popular pervasive technologies. Many of these devices contain touch screens, which can present problems for users with motor impairments due to small targets and their reliance on tapping for target acquisition. In order to select a target, users must tap on the screen, an action which requires the precise motion of flying into a target and lifting without slipping. In this paper, we propose a new technique for target acquisition called barrier pointing, which leverages the elevated physical edges surrounding the screen to improve pointing accuracy. After designing a series of barrier pointing techniques, we conducted an initial study with 9 able-bodied users and 9 users with motor impairments in order to discover the parameters that make barrier pointing successful. From this data, we offer an in-depth analysis of the performance of two motor-impaired users for whom barrier pointing was especially beneficial. We show the importance of providing physical stability by allowing the stylus to press against the screen and its physical edge. We offer other design insights and lessons learned that can inform future attempts at leveraging the physical properties of mobile devices to improve accessibility.
VoiceDraw: a hands-free voice-driven drawing application for people with motor impairments (pp. 27-34)
  Susumu Harada; Jacob O. Wobbrock; James A. Landay
We present VoiceDraw, a voice-driven drawing application for people with motor impairments that provides a way to generate free-form drawings without needing manual interaction. VoiceDraw was designed and built to investigate the potential of the human voice as a modality to bring fluid, continuous direct manipulation interaction to users who lack the use of their hands. VoiceDraw also allows us to study the issues surrounding the design of a user interface optimized for non-speech voice-based interaction. We describe the features of the VoiceDraw application, our design process, including our user-centered design sessions with a 'voice painter', and offer lessons learned that could inform future voice-based design efforts. In particular, we offer insights for mapping human voice to continuous control.

Web accessibility

Automatic accessibility transcoding for Flash content (pp. 35-42)
  Daisuke Sato; Hisashi Miyashita; Hironobu Takagi; Chieko Asakawa
It is not surprising that rich Internet content, such as Flash and DHTML, is some of the most pervasive content because of its visual attractiveness to the sighted majority. Such visually rich content has been causing severe accessibility problems, especially for people with visual disabilities. For Flash content, the kinds of accessibility information necessary for screen readers are not usually provided in the existing content. A typical example of such missing data is the lack of alternative text for buttons, hypertext links, widget roles, and so on. One of the major reasons is that the current accessibility framework of Flash content imposes a burden on content authors to make their content accessible. As a result, adding support for accessibility tends to be neglected, and screen reader users are left out of the richer Internet experiences.
   Therefore, we decided to develop an automatic accessibility transcoding system for Flash content to allow users to access a wider range of existing content, and to reduce the workload for content authors by using an automatic repair algorithm. It works as a client-side transcoding system based on the internal object model inside the Flash content. It adds and repairs accessibility information for existing Flash content, so screen readers can present more accessible information to users. Our experiment using the pilot system showed that 55% of the missing alternative texts for buttons in the tested websites could be added automatically.
SAMBA: a semi-automatic method for measuring barriers of accessibility (pp. 43-50)
  Giorgio Brajnik; Raffaella Lomuscio
Although they play an important role in any assessment procedure, web accessibility metrics are not yet well developed and studied. In addition, most metrics are geared towards conformance, and therefore are not well suited to answering whether a web site has critical barriers with respect to some user group.
   The paper addresses several open issues: how can accessibility be measured other than by conformance to certain guidelines? How can a metric merge results produced by accessibility evaluation tools and by expert reviewers, and does it consider the error rates of the tools? How can a metric also take into account the severity of accessibility barriers? Can a metric tell us whether a web site is more accessible for certain user groups than for others?
   The paper presents a new methodology and associated metric for measuring accessibility that efficiently combines expert reviews with automatic evaluation of web pages. Examples and data drawn from tests performed on 1500 web pages are also presented.
WebinSitu: a comparative analysis of blind and sighted browsing behavior (pp. 51-58)
  Jeffrey P. Bigham; Anna C. Cavender; Jeremy T. Brudvik; Jacob O. Wobbrock; Richard E. Ladner
Web browsing is inefficient for blind web users because of persistent accessibility problems, but the extent of these problems and their practical effects from the perspective of the user have not been sufficiently examined. We conducted a study in situ to investigate the accessibility of the web as experienced by web users. This remote study used an advanced web proxy that leverages AJAX technology to record both the pages viewed and the actions taken by users on the web pages that they visited. Our study was conducted remotely over the period of one week, and our participants used the assistive technology and software to which they were already accustomed and had already configured according to preference. These advantages allowed us to aggregate observations of many users and to explore the practical effects on and coping strategies employed by our blind participants. Our study reflects web accessibility from the perspective of web users and describes quantitative differences in the browsing behavior of blind and sighted web users.
Effects of sampling methods on web accessibility evaluations (pp. 59-66)
  Giorgio Brajnik; Andrea Mulas; Claudia Pitton
Except for trivial cases, any accessibility evaluation has to be based on some method for selecting the pages to be analyzed. But this selection process may bias the evaluation. Up to now, not much is known about the available selection methods, or about their effectiveness and efficiency.
   The paper addresses the following open issues: how to define the quality of the selection process, which processes are better than others, how to measure their difference in quality, which factors may affect quality (type of assessment, size of the page pool, structural features of the web site).
   These issues are investigated through an experimental evaluation of thirteen sampling methods applied to 32000 web pages. While some of the conclusions are not surprising (for example, that sample size affects accuracy), others were not expected at all (for example, that a minimal sample size can achieve a high accuracy level under certain circumstances).

Non-visual presentation of information

Improving accessibility to statistical graphs: the iGraph-Lite system (pp. 67-74)
  Leo Ferres; Petro Verkhogliad; Gitte Lindgaard; Louis Boucher; Antoine Chretien; Martin Lachance
Information is often presented in graphical form. Unfortunately, current assistive technologies such as screen readers are not well-equipped to handle these representations. To provide accessibility to graphs published in "The Daily" (Statistics Canada's main dissemination venue), we have developed iGraph-Lite, a system that provides short verbal descriptions of the information depicted in graphs and a way to also interact with this information.
Automated tactile graphics translation: in the field (pp. 75-82)
  Chandrika Jayant; Matt Renzelmann; Dana Wen; Satria Krisnandi; Richard Ladner; Dan Comden
We address the practical problem of automating the process of translating figures from mathematics, science, and engineering textbooks to a tactile form suitable for blind students. The Tactile Graphics Assistant (TGA) and its accompanying workflow are described. Components of the TGA that identify text and replace it with Braille use machine learning, computational geometry, and optimization algorithms. We followed through with the ideas in our 2005 paper by creating a more detailed workflow, translating actual images, and analyzing the translation time. Our experience in translating more than 2,300 figures from 4 textbooks demonstrates that figures can be translated in ten minutes or less of human time on average. We describe our experience with training tactile graphics specialists to use the new TGA technology.
Haptic comparison of size (relative magnitude) in blind and sighted people (pp. 83-90)
  Sarah A. Douglas; Shasta Willson
Applications for blind users often involve the mapping of information such as size (magnitude) from one sensory domain (vision) onto another (sound or touch). For example, visual perception of length can be estimated directly by touch, or encoded to pitch or even vibration. Applications for blind users will benefit from fundamental research into human perception of computer-generated substitutions for vision. In this paper we present the results of a haptics-only experiment with the PHANToM that measures human performance (time and accuracy) in judging relative magnitude with computer-generated haptic properties. Magnitude was represented by either physical length (displacement), or vibration varied by frequency or amplitude. Eleven blind and eleven blindfolded sighted individuals participated. Displacement tasks were 50% slower than vibration conditions for all participants. Accuracy for displacement and for vibration varied by amplitude was equivalent. Vibration varied by frequency was significantly less accurate, although we are cautious about the reliability of those results. Blind participants took 50% longer than sighted participants, with equivalent accuracy. Sightedness had no effect on performance with respect to the type of display. No other interaction effects were found. These results suggest that vibration varied by amplitude provides a faster and equally accurate display of magnitude compared with the traditional displacement approach, and that this coding benefits visually disabled and sighted individuals equally well.
aiBrowser for multimedia: introducing multimedia content accessibility for visually impaired users (pp. 91-98)
  Hisashi Miyashita; Daisuke Sato; Hironobu Takagi; Chieko Asakawa
Multimedia content with Rich Internet Applications using Dynamic HTML (DHTML) and Adobe Flash is now becoming popular in various websites. However, visually impaired users cannot deal with such content due to audio interference with the speech from screen readers and intricate structures strongly optimized for sighted users.
   We have been developing an Accessibility Internet Browser for Multimedia (aiBrowser) to address these problems. The browser has two novel features: non-visual multimedia audio controls and alternative user interfaces using external metadata. First, by using the aiBrowser, users can directly control the audio from the embedded media with fixed shortcut keys. This allows blind users to increase or decrease the media volume, and to pause or stop the media, in order to handle conflicts between the audio of the media and the speech from the screen reader. Second, the aiBrowser can provide an alternative simplified user interface suitable for screen readers by using external metadata, which can even be applied to dynamic content such as DHTML and Flash.
   In this paper, we discuss accessibility problems with multimedia content due to streaming media and the dynamic changes in such content, and explain how the aiBrowser addresses these problems by describing non-visual multimedia audio controls and external metadata-based alternative user interfaces. The evaluation of the aiBrowser was conducted by comparing it to JAWS, one of the most popular screen readers, on three well known multimedia-content-intensive websites.
   The evaluation showed that the aiBrowser made content that was inaccessible with JAWS relatively accessible by using the multimedia audio controls and the alternative interfaces with metadata, which included alternative text, heading information, and so on. The aiBrowser also drastically reduced the keystrokes needed for navigation, which implies improved non-visual usability.

Teaching and learning

Photonote evaluation: aiding students with disabilities in a lecture environment (pp. 99-106)
  Gregory Hughes; Peter Robinson
Visual material presented in lectures can be enhanced for students with disabilities by using high-resolution digital-still cameras. The Photonote system uses a digital-still camera to capture visual information, a digital-video camera to capture a lecturer and a second digital-video camera to capture a sign-language interpreter, if necessary. The visual information is enhanced using computer-vision algorithms and presented alongside the recorded video and audio to provide an accurate representation of a lecture which can be used by students with disabilities for review purposes. This paper presents the Photonote system and a user study evaluating its effectiveness at aiding students with disabilities in a lecture environment.
Dual educational electronic textbooks: the Starlight platform (pp. 107-114)
  Dimitris Grammenos; Anthony Savidis; Yannis Georgalis; Themistoklis Bourdenas; Constantine Stephanidis
This paper presents a novel software platform for developing and interacting with multimodal interactive electronic textbooks that provide a Dual User Interface, i.e., an interface concurrently accessible by visually impaired and sighted persons. The platform, named Starlight, comprises two sub-systems: (a) the "Writer", facilitating the authoring of electronic textbooks, encompassing various categories of interactive exercises (Q&A, multiple choice, fill in the blanks, etc.); and (b) the "Reader", enabling multimodal interaction with the created electronic textbooks, supporting various features like searching, book-marking, replay of sentences / paragraphs, user annotations / comments, activity recording, and context-sensitive help. An iterative, user-centered design process was adopted, involving students and educators from the very early stages, resulting in the creation of eight textbooks for primary and high school that are currently available in the Greek market. The paper discusses the competitive features of the Dual User Interface and of the supplied functionality compared to existing accessible electronic books. It also consolidates the key design findings, elaborating on prominent design issues, design rationale, and respective solutions, highlighting strengths and weaknesses, and outlining directions for future work.
A software model to support collaborative mathematical work between braille and sighted users (pp. 115-122)
  Dominique Archambault; Bernhard Stöger; Mario Batusic; Claudia Fahrengruber; Klaus Miesenberger
In this paper we describe a software model that we have developed within the framework of the MaWEn project (Mathematical Working Environment). Based on the MathML standard, this model enables collaboration between sighted people and users of Braille. It allows for synchronisation of Braille and graphical views of scientific contents as well as offering improved navigational functions for Braille users, in both reading and editing modes. The UMCL (Universal Maths Conversion Library) is used to support various national Braille Mathematical notations. After presenting the model, its implementation in MaWEn prototypes is described.
Techniques to assist in developing accessibility engineers (pp. 123-130)
  Jim A. Carter; David W. Fourney
This paper describes techniques used in a recent computer science course designed to develop accessibility engineers. It provides sufficient detail for other instructors to replicate the highly successful experience that resulted. It also discusses a number of results of the course that act as indicators of its success.

Older and younger

Providing good memory cues for people with episodic memory impairment (pp. 131-138)
  Matthew L. Lee; Anind K. Dey
Alzheimer's disease impairs episodic memory, subtly and progressively robbing people of their ability to remember their recent experiences. In this paper, we describe two studies that lead to a better understanding of how caregivers use cues to support episodic memory impairment and what types of cues are best for supporting recollection. We also show how good memory cues differ between people with and without episodic memory impairment. We discuss how this improved understanding impacts the design of life-logging technologies that automatically capture and extract the best memory cues, to assist overburdened caregivers and people with episodic memory impairment in supporting recollection of episodic memory.
Data visualisation and data mining technology for supporting care for older people (pp. 139-146)
  Nubia M. Gil; Nicolas A. Hine; John L. Arnott; Julienne Hanson; Richard G. Curry; Telmo Amaral; Dorota Osipovic
The overall purpose of the research discussed here is the enhancement of home-based care by revealing individual patterns in the life of a person, through modelling of the "busyness" of activity in their dwelling, so that care can be better tailored to their needs and changing circumstances. The use of data mining and on-line analytical processing (OLAP) is potentially interesting in this context because of the possibility of exploring, detecting and predicting changes in the level of activity of people's movement that may reflect change in well-being. An investigation is presented here into the use of data mining and visualisation to illustrate activity from sensor data from a trial project run in a domestic context.
An operantly conditioned looking task for assessing infant auditory processing ability (pp. 147-154)
  Jason Nawyn; Cynthia Roesler; Teresa Realpe-Bonilla; Naseem Choudhury; April A. Benasich
In this paper, we describe the design and evaluation of a gaze-driven interface for the assessment of rapid auditory processing abilities in infants aged 4 to 6 months. A cross-modal operant conditioning procedure is used to reinforce anticipatory eye movements in response to changes in a continuous auditory stream. Using this procedure, we hope to develop a clinical tool that will enable early identification of individuals at risk for language-based learning impairments. Some of the unique opportunities and challenges inherent to designing for infant-computer interaction are discussed.

Personal technologies

Using participatory activities with seniors to critique, build, and evaluate mobile phones (pp. 155-162)
  Michael Massimi; Ronald M. Baecker; Michael Wu
Mobile phones can provide a number of benefits to older people. However, most mobile phone designs and form factors are targeted at younger people and middle-aged adults. To inform the design of mobile phones for seniors, we ran several participatory activities where seniors critiqued current mobile phones, chose important applications, and built their own imagined mobile phone system. We prototyped this system on a real mobile phone and evaluated the seniors' performance through user tests and a real-world deployment. We found that our participants wanted more than simple phone functions, and instead wanted a variety of application areas. While they were able to learn to use the software with little difficulty, hardware design made completing some tasks frustrating or difficult. Based on our experience with our participants, we offer considerations for the community about how to design mobile devices for seniors and how to engage them in participatory activities.
Variable frame rate for low power mobile sign language communication (pp. 163-170)
  Neva Cherniavsky; Anna C. Cavender; Richard E. Ladner; Eve A. Riskin
The MobileASL project aims to increase accessibility by enabling Deaf people to communicate over video cell phones in their native language, American Sign Language (ASL). Real-time video over cell phones can be a computationally intensive task that quickly drains the battery, rendering the cell phone useless. Properties of conversational sign language allow us to save power and bits: namely, lower frame rates are possible when one person is not signing due to turn-taking, and signing can potentially employ a lower frame rate than fingerspelling. We conduct a user study with native signers to examine the intelligibility of varying the frame rate based on activity in the video. We then describe several methods for automatically determining the activity of signing or not signing from the video stream in real-time. Our results show that varying the frame rate during turn-taking is a good way to save power without sacrificing intelligibility, and that automatic activity analysis is feasible.
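   As a rough sketch of the rate switching described above (illustrative only: the actual frame rates, features, and classifier used in the study are not reproduced here, and the threshold and rates below are invented), a frame-differencing activity detector might drive the encoder like this:

      # Hypothetical sketch: drop the frame rate when the sender is not
      # signing (turn-taking) and restore it when motion resumes.
      def is_signing(prev_frame, cur_frame, thresh=12.0):
          """Crude activity detector: mean absolute pixel difference
          between consecutive grayscale frames given as 2-D lists."""
          total, n = 0.0, 0
          for row_p, row_c in zip(prev_frame, cur_frame):
              for p, c in zip(row_p, row_c):
                  total += abs(p - c)
                  n += 1
          return total / n > thresh

      def choose_fps(signing, fps_signing=10, fps_idle=1):
          """Variable frame rate: full rate while signing, minimal rate
          while just listening, saving power and bits."""
          return fps_signing if signing else fps_idle

      prev = [[0, 0], [0, 0]]
      cur = [[30, 0], [0, 30]]
      print(choose_fps(is_signing(prev, cur)))  # motion detected -> 10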
Observing Sara: a case study of a blind person's interactions with technology (pp. 171-178)
  Kristen Shinohara; Josh Tenenberg
While software is increasingly being improved to enhance access and use, software interfaces nonetheless often create barriers for people who are blind. In response, the blind computer user develops workarounds, strategies to overcome the constraints of a physical and social world engineered for the sighted. This paper describes an interview and observational study of a blind college student interacting with various technologies within her home. Structured around Blythe, Monk and Park's Technology Biographies, these experience-centered sessions focus not only on technology function, but on the relationship of function to the meanings and values that this student attributes to technology use in different settings. Studying a single user across a range of devices and tasks provides a broader and more nuanced understanding of the contexts and causes of task failure, and of the workarounds employed, than is possible with a more narrowly focused usability study. Themes revealed across a range of tasks include the importance of technologies not "marking" the user as blind within a predominantly sighted social world, of supporting user independence through portability and user control, and of allowing user "resets" and brute-force fallbacks in the face of persistent task failure.
Understanding mobile phone requirements for young adults with cognitive disabilities (pp. 179-186)
  Melissa Dawe
Mobile phones have transformed the way we communicate with friends and family, coordinate our daily activities, and organize our lives. For families with children with cognitive disabilities there is widespread hope, though not always fulfilled, that personal technologies -- particularly mobile phones -- can bring a dramatic increase in their children's level of safety, independence, and social connectedness. In this research, we conducted semi-structured interviews with five families to understand the current patterns of remote communication among young adults with cognitive disabilities and their parental caregivers, and the role that remote communication played in increasing independence and safety. While some of the young adults used mobile phones and some did not, we identified common themes in requirements, patterns of use, and desires for an accessible mobile-phone based remote communication system. Requirements include the need for a simplified navigation menu with fewer options and a rugged handset and charger input. Families used mobile phones for safety check-ins and help getting un-stuck. While parents desired increased social involvement for their children, they observed that their children did not often chat with friends on the phone.

Supporting communication

The design and field evaluation of PhotoTalk: a digital image communication application for people with aphasia (pp. 187-194)
  Meghan Allen; Joanna McGrenere; Barbara Purves
PhotoTalk is an application for a mobile device that allows people with aphasia to capture and manage digital photographs to support face-to-face communication. Unlike any other augmentative and alternative communication device for people with aphasia, PhotoTalk focuses solely on image capture and organization and is designed to be used independently. Our project used a streamlined process with 3 phases: (1) a rapid participatory design and development phase with two speech-language pathologists acting as representative users, (2) an informal usability study with 5 aphasic participants, which caught usability problems and provided preliminary feedback on the usefulness of PhotoTalk, and (3) a 1-month field evaluation with 2 aphasic participants, which showed that both used it regularly and fairly independently, although not always for its intended communicative purpose. Our field study demonstrated PhotoTalk's promise in terms of its usability and usefulness in real-life situations.
Corpus studies in word prediction (pp. 195-202)
  Keith Trnka; Kathleen F. McCoy
Word prediction can be used to enhance the communication rate of people with disabilities who use Augmentative and Alternative Communication (AAC) devices. We use statistical methods in a word prediction system, which are trained on a corpus, and then measure the efficacy of the resulting system by calculating the theoretical keystroke savings on some held out data. Ideally training and testing should be done on a large corpus of AAC text covering a variety of topics, but no such corpus exists. We discuss training and testing on a wide variety of corpora meant to approximate text from AAC users. We show that training on a combination of in-domain data with out-of-domain data is often more beneficial than either data set alone and that advanced language modeling such as topic modeling is portable even when applied to very different text.
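   The keystroke savings measure referred to above is conventionally defined as follows (the standard formulation in the word-prediction literature, not a quotation from this paper):

      KS = \frac{k_{\text{letters}} - k_{\text{pred}}}{k_{\text{letters}}} \times 100\%

   where k_letters is the number of keystrokes needed to type the text letter by letter and k_pred is the number needed with prediction enabled. For example, completing the 9-letter word "wonderful" after typing three letters plus one list-selection keystroke (4 keystrokes in total) yields KS = (9 - 4) / 9, or about 56%.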
SIBYLLE: a system for alternative communication adapting to the context and its user (pp. 203-210)
  Tonio Wandmacher; Jean-Yves Antoine; Franck Poirier
In this paper, we describe the latest version of SIBYLLE, an AAC system that permits persons suffering from severe physical disabilities to enter text with any computer application and also to compose messages to be read out by a speech synthesis module. The system consists of a virtual keyboard comprising a set of keypads which allow entering characters or full words by a single-switch selection process. It also comprises a sophisticated word prediction component which dynamically calculates the most appropriate words for a given context. This component is auto-adaptive, i.e. it learns on every text the user has entered. It thus adapts its predictions to the user's language and the current topic of communication as well. So far the system works for French, German and English. Earlier versions of SIBYLLE have been used since 2001 in the Kerpape rehabilitation center (Brittany, France).
Evaluating American Sign Language generation through the participation of native ASL signers (pp. 211-218)
  Matt Huenerfauth; Liming Zhao; Erdan Gu; Jan Allbeck
We discuss important factors in the design of evaluation studies for systems that generate animations of American Sign Language (ASL) sentences. In particular, we outline how some cultural and linguistic characteristics of members of the American Deaf community must be taken into account so as to ensure the accuracy of evaluations involving these users. Finally, we describe our implementation and user-based evaluation (by native ASL signers) of a prototype ASL generator to produce sentences containing classifier predicates, frequent and complex spatial phenomena that previous ASL generators have not produced.

Posters and demonstrations

Evaluation of onscreen precompensation algorithms for computer users with visual aberrations (pp. 219-220)
  Miguel Alonso, Jr.; Armando Barreto; Julie A. Jacko; Malek Adjouadi
In this paper, we present statistical results from testing our precompensation algorithms with 20 human subjects. A factorial experiment was designed and tested to evaluate the significance that the method, icon size, and subject group have on the ability of users to identify icons. These results reinforce software and "artificial eye" test findings indicating that the methods used for precompensation provide a significant increase in retinal image quality for users that have visual aberrations present in the optical systems of their eyes. Significant interactions were also found that reveal circumstances under which the method may perform best.
Improving the outcomes of students with cognitive and learning disabilities: phase I development for a web accessibility tool (pp. 221-222)
  Aaron Andersen; Cyndi Rowland
The authors will present empirically-derived draft technical specifications for a suite of tools to evaluate the cognitive load of a Web page. By adding this suite to the existing open-source WAVE (Web Accessibility Versatile Evaluator), it will be possible to increase the awareness of Web developers to the needs of users with cognitive disabilities. Participants are encouraged to comment on the draft specifications as part of an overall research methodology to include stakeholders at all levels of the development of the tool.
Feedback-based evaluation tool for web accessibility (pp. 223-224)
  Daisuke Asai; Masahiro Watanabe; Yoko Asano
This paper proposes a feedback-based evaluation tool for Web accessibility. The tool collects users' assessments of its evaluation results as feedback, and its evaluation rules are improved based on those assessments. Using this framework, the evaluation tool becomes more precise and extracts more accessibility problems than existing tools. We implemented the proposed evaluation tool as a web service and conducted a field test, which confirmed the feasibility of collecting users' assessments and the quality of those assessments.
WebAnywhere: a screen reader on-the-go (pp. 225-226)
  Jeffrey P. Bigham; Craig M. Prince
People often use computers other than their own to browse the web, but blind web users are limited in where they access the web because they require specialized, expensive programs for access. WebAnywhere is a web-based, self-voicing browser that enables blind web users to access the web from almost any computer that can produce sound. The system runs entirely in standard web browsers and requires no additional software to be installed. The system could serve as a convenient, low-cost solution for both web developers targeting accessible design and end users unable to afford a full screen reader. This demonstration will offer visitors the opportunity to try WebAnywhere and learn more about it.
Simulation to predict performance of assistive interfaces (pp. 227-228)
  Pradipta Biswas; Peter Robinson
Computers offer valuable assistance to people with physical disabilities. However, designing human-computer interfaces for these users is complicated. The range of abilities is more diverse than for able-bodied users, which makes analytical modelling harder. Practical user trials are also difficult and time-consuming. We have developed a simulator to help with the evaluation of assistive interfaces. It can predict the likely interaction patterns when undertaking a task using a variety of input devices, and estimate the time to complete the task in the presence of different disabilities and for different levels of skill.
Accessible spaces: navigating through a marked environment with a camera phone (pp. 229-230)
  Kee-Yip Chan; Roberto Manduchi; James Coughlan
We demonstrate a system designed to assist a visually impaired individual while moving in an unfamiliar environment. Small and economical color markers are placed in key locations, possibly in the vicinity of other signs (bar codes or text). The user can detect these markers by means of a cell phone equipped with a camera. Our demonstration highlights a number of novel features, including: improved acoustic interfaces; estimation of the distance to the marker, which is communicated to the user via text-to-speech (TTS); increased robustness via rotation invariance, which makes the system easier to use for users with reduced dexterity.
A novel wayfinding system based on geo-coded QR codes for individuals with cognitive impairments (pp. 231-232)
  Yao-Jen Chang; Shih-Kai Tsai; Yao-Sheng Chang; Tsen-Yung Wang
In this paper, we present a prototype wayfinding system based on geo-coded QR codes for individuals with cognitive impairments. The design draws upon cognitive models of spatial navigation and consists of wayfinding devices and a tracking system. Compared to the sensor-network approach, it is easy to deploy because of its low cost and short time frame. The prototype was tested on campus routes, with a rehabilitation-trained job coach overseeing the process. The results show the prototype is user-friendly and promising, with high reliability.
Performance analysis of an integrated eye gaze tracking / electromyogram cursor control system (pp. 233-234)
  Craig A. Chin; Armando Barreto; Gualberto Cremades; Malek Adjouadi
Eye Gaze Tracking (EGT) systems allow individuals with motor disabilities to quickly move a screen cursor on a PC. However, there are limitations in the steadiness and the accuracy of the cursor control and clicking capabilities they provide. On the other hand, a cursor control system developed by our group, which steps the cursor up, down, left or right in response to voluntary contractions of specific facial muscles, provides steady and precise, albeit slow, cursor control, along with a reliable clicking mechanism. This system identifies muscle contractions by digitally processing the Electromyogram (EMG) signals generated by the facial muscles. Based on the complementary nature of the strengths of these two cursor control modalities, we have developed an integrated EGT/EMG system in an attempt to consolidate the advantages of both input modalities. We have compared the selection accuracy and speed of an EGT-only cursor control implementation, our integrated EGT/EMG cursor control system, and a standard handheld mouse in point-and-click trials.
EasyVoice: integrating voice synthesis with Skype (pp. 235-236)
  Paulo A. Condado; Fernando G. Lobo
This paper presents EasyVoice, a system that integrates voice synthesis with Skype. EasyVoice allows a person with voice disabilities to talk with another person located anywhere in the world, removing an important obstacle that affects these people during a phone or VoIP-based conversation.
Adoption and configuration of assistive technologies: a semiotic engineering perspective (pp. 237-238)
  Katherine Deibel
This paper discusses semiotic engineering (a design methodology) and its potential for addressing issues concerning the adoption and configuration of assistive technologies.
Consolidating computer operation and wheelchair control (pp. 239-240)
  Torsten Felzer; Rainer Nordmann
The HAnds-free Mouse COntrol System HaMCoS is a software kit designed for persons with severe physical disabilities, allowing its users to fully operate a PC using only intentional contractions of a single dedicated muscle. The system's theoretical concept was originally developed for a wheelchair control system, a revised and enhanced version of which -- comprising a stand-alone realization plus a software-based simulator -- has been introduced just recently. Since it is very likely that someone who needs the kind of assistance offered by HaMCoS is also obliged to use a wheelchair, the idea of interfacing the simulator with the computer tool suggested itself. This demo presents the outcome of that idea. It will be shown how both systems work individually and how they interact with each other.
Equipping designers by simulating the effects of visual and hearing impairments (pp. 241-242)
  Joy Goodman-Deane; Patrick M. Langdon; P. John Clarkson; Nicholas H. M. Caldwell; Ahmed M. Sarhan
This paper describes a software tool that demonstrates the effects of common vision and hearing impairments on image and sound files. This helps designers to understand and empathize with the difficulties that less able users might experience when using their products and interfaces. A range of impairments and severity levels are simulated and guidance is provided on how to address some of the issues raised. The latest version of this tool will be available for demonstration at Assets.
A demonstration of a conversationally guided smart wheelchair (pp. 243-244)
  Beth A. Hockey; David P. Miller
There is a substantial population of wheelchair users who do not have the motor capabilities needed to efficiently operate a power wheelchair on their own. Various interfaces have been devised, including some simple voice-controlled chairs that can understand simple commands. However, such systems are awkward and slow to use. This demonstration shows operation of a smart wheelchair through a spoken conversational interface. By using a more capable dialogue, rather than a simple command paradigm, the chair can leverage the user's perceptual capabilities in order to process natural, high-level commands such as "take me to the desk", which initiates a conversation with the chair to determine which desk and -- if it is not immediately detected by the chair's sensors -- where the desk is located.
Developing usable CAPTCHAs for blind users (pp. 245-246)
  Jonathan Holman; Jonathan Lazar; Jinjuan Heidi Feng; John D'Arcy
CAPTCHAs are widely used by websites for security and privacy purposes. However, traditional text-based CAPTCHAs are not suitable for individuals with visual impairments. We proposed and developed a new form of CAPTCHA that combines both visual and audio information to allow easy access by users with visual impairments. A preliminary evaluation suggests strong potential for the new form of CAPTCHA for both blind and sighted users.
Demo of VJ-Voicebot: control of robotic arm with the Vocal Joystick (pp. 247-248)
  Brandi House; Jon Malkin; Jeff Bilmes
We explore the use of the Vocal Joystick (VJ) for robotic limb control using a small-scale robotic arm. The purpose of this research is to allow individuals with motor disabilities to obtain greater independence by continuously controlling a robotic arm with their voice. The VJ-Voicebot relies only on continuous and discrete non-verbal vocal sounds to interact with objects in its environment. The demonstration will allow users to experience real-time control of a 5 degrees-of-freedom (DOF) robotic arm using sounds produced by their own vocal system.
Improving accessibility of HTML documents by generating image-tags in a proxy (pp. 249-250)
  Daniel Keysers; Marius Renn; Thomas M. Breuel
The widespread use of images without ALT tags on webpages reduces accessibility for the visually impaired. We present a system that automatically adds ALT tags based on an analysis of image contents.
Older women and digital TV: a case study (pp. 251-252)
  Sri Kurniawan
This case study investigates the deeper-seated issues that make digital television (DTV) services unappealing to older women, using Delphi interviews with a panel of three older women who are expert DTV users. The panel suggested that cost, content appropriateness, operation complexity and available assistance were the main drivers of adoption or rejection of DTV services. The panel also stated that television was not their main source of entertainment, serving rather to provide news, fill up time and exercise the mind.
Finger dance: a sound game for blind people (pp. 253-254)
  Daniel Miller; Aaron Parecki; Sarah A. Douglas
The usual approach to developing video games for people with visual impairment is sensory substitution, in which elements of the visual display are replaced with auditory and/or haptic displays. Our approach differs. The purpose of the Finger Dance project is to research and develop accessible solutions for games that are inherently audio: musical rhythm-action games such as Dance Dance Revolution. However, these games still rely on visual cues that instruct the user on how to play along with musical rhythms. Finger Dance is an original audio-based rhythm-action game that we developed specifically for visually impaired people, working with both blind and sighted gamers in a human-centered development approach. Players are able to play the game on their own and are enthusiastic about it. This paper discusses the game's design, development process and user studies.
Comparing speaker-dependent and speaker-adaptive acoustic models for recognizing dysarthric speech (pp. 255-256)
  Frank Rudzicz
Acoustic modeling of dysarthric speech is complicated by its increased intra- and inter-speaker variability. The accuracies of speaker-dependent and speaker-adaptive models are compared for this task, with the latter prevailing across varying levels of speaker intelligibility.
MathPlayer v2.1: web-based math accessibility (pp. 257-258)
  Neil Soiffer
MathPlayer is a plug-in for Microsoft Internet Explorer (IE) and Adobe Acrobat/Reader that renders MathML visually. It also contains a number of features that make mathematical expressions accessible to people with print disabilities. MathPlayer integrates with many screen readers, including JAWS and Window-Eyes. MathPlayer also works with a number of TextHELP!'s learning-disability products.
Humming control interface for hand-held devices (pp. 259-260)
  Sook Young Won; Dong-In Lee; Julius Smith
This paper describes a control-by-humming interface in which a Bluetooth-connected insertion earphone/microphone remotely controls a small portable system such as a modern assistive device, cell phone, etc. A pitch-detection algorithm converts a subvocal hum input signal into pitch contours that are segmented into discrete "notes" and then grouped to form control commands. These commands cause transitions among operational states. An example application is given for hands-free control of a simplified (six-state) cell phone and music player system. Performance of the interface is discussed and future improvements are outlined.
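   As a rough illustration of the pipeline this abstract describes, the sketch below segments a frame-by-frame pitch contour into discrete notes and maps a note pair to a command. It is a hypothetical simplification: the paper's actual pitch-detection algorithm, segmentation thresholds, and command grammar are not reproduced here, and every name and value below is invented.

      # Hypothetical sketch of the "pitch contour -> notes -> command" stages.
      from statistics import median

      def segment_notes(pitch_hz, min_frames=5):
          """Group a frame-by-frame pitch contour (0.0 = unvoiced) into
          discrete notes, each summarized by its median pitch."""
          notes, run = [], []
          for f in list(pitch_hz) + [0.0]:  # trailing 0 flushes the last run
              if f > 0:
                  run.append(f)
              else:
                  if len(run) >= min_frames:
                      notes.append(median(run))
                  run = []
          return notes

      def to_command(notes):
          """Map a note sequence to an operational-state transition by
          overall pitch direction (a stand-in for the real grammar)."""
          if len(notes) < 2:
              return "select"
          return "volume_up" if notes[-1] > notes[0] else "volume_down"

      contour = [220.0] * 8 + [0.0] * 3 + [330.0] * 8
      print(to_command(segment_notes(contour)))  # rising hum -> volume_up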

Student research competition

Semantic & syntactic context-aware text entry methods (pp. 261-262)
  Jun Gong
Entering text using small devices has always been a serious problem for motor- or visually-impaired users. Despite their inherent problem of word ambiguity, dictionary-based predictive disambiguation (DBPD) text entry methods, such as T9, have proven to be very efficient in terms of the number of required keystrokes, and are thus suitable for users with physical difficulties. Common DBPD methods use only word frequency to resolve the cases in which ambiguous keystroke sequences are encountered. This paper proposes a new method, which also utilizes the semantic and syntactic context of the preceding text to help disambiguate the user's desired words, thereby further reducing the number of keystrokes required from physically challenged users.
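   For context, the frequency-only DBPD baseline that this work extends can be sketched as follows (an illustrative T9-style reconstruction with a made-up toy lexicon, not the proposed context-aware method):

      # Classic frequency-only disambiguation: list the dictionary words
      # matching an ambiguous digit sequence, most frequent first.
      KEYMAP = {c: d for d, letters in {
          '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
          '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}.items()
          for c in letters}

      LEXICON = {'good': 312, 'home': 287, 'gone': 144, 'hood': 31}  # toy counts

      def encode(word):
          return ''.join(KEYMAP[c] for c in word)

      def disambiguate(digits):
          matches = [w for w in LEXICON if encode(w) == digits]
          return sorted(matches, key=LEXICON.get, reverse=True)

      print(disambiguate('4663'))  # ['good', 'home', 'gone', 'hood']

   The proposed method would instead rank such candidate lists using semantic and syntactic context from the preceding text in addition to frequency, so that the intended word more often appears first.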
SADIe: exposing implicit information to improve accessibility (pp. 263-264)
  Darren Lunn
The World Wide Web (Web) is a visually complex, multimedia system that can be inaccessible to people with visual impairments. SADIe addresses this problem by using Semantic Web technologies to explicate implicit visual structures through a combination of an upper and a lower ontology. This is then used to apply accurate transcoding to a range of Websites. To gauge the effectiveness of SADIe's transcoding capability, a user was presented with a series of Web pages, a sample of which had been adapted using SADIe. The results showed that answers to fact-based questions could be obtained more quickly when information was exposed via SADIe's transcoding. The data obtained during the experiment was analysed with a randomization test, showing that the results were statistically significant for a single user.
Information overload in non-visual web transaction: context analysis spells relief (pp. 265-266)
  Jalal Mahmud
Visually disabled individuals use screen readers to browse the Web. The sequential processing of screen readers makes Web browsing time-consuming and strenuous. The problem is further exacerbated when conducting Web transactions (e.g. buying books, paying utility bills, etc.), which typically involve a number of steps spanning several pages. Thus browsing becomes fatigue-inducing and causes significant information overload. But usually one needs only small segments of Web pages to complete a Web transaction. Identifying and presenting such segments from Web pages can reduce information overload. An interesting idea is to use the context surrounding a link to identify relevant information on the next Web page. I describe how context analysis coupled with Web content analysis can identify relevant content segments. Preliminary results based on my system incorporating this idea show a lot of promise in combating the information overload problem encountered by visually disabled individuals when they do transactions over the Web.
WADER: a novel wayfinding system with deviation recovery for individuals with cognitive impairments (pp. 267-268)
  Shih-Kai Tsai
Difficulties in wayfinding hamper the quality of life of many individuals with cognitive impairments who are otherwise physically mobile. For example, an adult with a mental disorder may want to lead a more independent life and be capable of getting trained and keeping employed, but may experience difficulty in using public transportation to and from the workplace. Remaining oriented in indoor spaces may also pose a challenge, for example in an office building, a shopping mall, or a hospital, where GPS devices fail to work due to scarce coverage of satellite signals. In addition, the state of the art in displaying positions on navigational interfaces has not taken into consideration the needs of people with mental disabilities. A novel QR-code wayfinding system is presented in this research to increase work and life independence for cognitively impaired patients such as people with traumatic brain injury, cerebral palsy, mental retardation, schizophrenia, and Alzheimer's disease.