
Tenth Annual ACM SIGACCESS Conference on Assistive Technologies

Fullname: The 10th International ACM SIGACCESS Conference on Computers and Accessibility
Editors: Simon Harper; Armando Barreto
Location: Halifax, Nova Scotia, Canada
Dates: 2008-Oct-13 to 2008-Oct-15
Publisher: ACM
Standard No: ISBN 1-59593-976-8, 978-1-59593-976-0; ACM Order Number 444080; ACM DL: Table of Contents; hcibib: ASSETS08
Papers: 73
Pages: 320
Links: Conference Home Page
  1. Keynote talk
  2. Cognition and memory (I)
  3. Cognition and memory (II)
  4. Older adults
  5. Vision
  6. Audio interaction
  7. Accessibility studies
  8. Web accessibility
  9. Games and gaming
  10. Collaborative accessibility
  11. Motor function
  12. Posters and system demonstrations
  13. Microsoft student research competition

Keynote talk

Designing technology to aid cognition BIBAFull-Text 1-2
  Ron Baecker
We present a framework for technological aids for cognition intended primarily for individuals with cognitive impairments and seniors experiencing cognitive decline. We illustrate the framework with concrete research projects and near-term challenges.

Cognition and memory (I)

Software and technologies designed for people with autism: what do users want? BIBAFull-Text 3-10
  Cynthia Putnam; Lorna Chong
Software developers, designers and researchers have been looking to technology for solutions to help and educate people with autism for over two decades. There are many examples of seemingly successful technology-based products and prototypes, yet very little is known about how well these solutions are currently integrated into the lives of children and adults with autism and their families. This paper reports results from an anonymous online survey intended as a first step toward understanding software and technology use. Additionally, the data were analyzed to aid the creation of future technology-based products for people with autism that are not just effective, but that also meet important user goals and align with their interests and strengths. Major findings included: (1) relatively few respondents (25%) had any experience with software or technology designed for people with cognitive disabilities; (2) when asked an open-ended question about what they desire in technology design, respondents reported three major goals (social skills, academic skills, and organization skills), along with many suggestions for improvements to software and hardware design; and (3) technology was reported as both a major strength and interest for people with autism.
A3: a coding guideline for HCI+autism research using video annotation BIBAFull-Text 11-18
  Joshua Hailpern; Karrie Karahalios; James Halle; Laura DeThorne; Mary-Kelsey Coletto
Due to the profile of strengths and weaknesses indicative of autism spectrum disorders (ASD), technology may play a key role in ameliorating communication difficulties with this population. This paper documents coding guidelines established through cross-disciplinary work focused on facilitating communication development in children with ASD using computerized feedback. The guidelines, referred to as A3 (pronounced A-Cubed) or Annotation for ASD Analysis, define and operationalize a set of dependent variables coded via video annotation. Inter-rater reliability data are also presented from a study currently in-progress, as well as related discussion to help guide future work in this area. The design of the A3 methodology is well-suited for the examination and evaluation of the behavior of low-functioning subjects with ASD who interact with technology.
Technology for just-in-time in-situ learning of facial affect for persons diagnosed with an autism spectrum disorder BIBAFull-Text 19-26
  Miriam Madsen; Rana el Kaliouby; Matthew Goodwin; Rosalind Picard
Many first-hand accounts from individuals diagnosed with autism spectrum disorders (ASD) highlight the challenges inherent in processing high-speed, complex, and unpredictable social information such as facial expressions in real-time. In this paper, we describe a new technology aimed at helping people capture, analyze, and reflect on a set of social-emotional signals communicated by facial and head movements in live social interaction that occurs with their everyday social companions. We describe our development of a new combination of hardware using a miniature camera connected to an ultramobile PC together with custom software developed to track, capture, interpret, and intuitively present various interpretations of the facial-head movements (e.g., presenting that there is a high probability the person looks "confused"). This paper describes this new technology together with the results of a series of pilot studies conducted with adolescents diagnosed with ASD who used the technology in their peer-group setting and contributed to its development via their feedback.

Cognition and memory (II)

A context aware handheld wayfinding system for individuals with cognitive impairments BIBAFull-Text 27-34
  Yao-Jen Chang; Shih-Kai Tsai; Tsen-Yung Wang
A challenge for individuals with cognitive impairments in wayfinding is how to remain oriented, recall routines, and travel in unfamiliar areas while relying on limited cognitive capacity. Guided by a psychological model of spatial navigation and the requirements of rehabilitation professionals, a novel wayfinding system is presented with the aim of increasing workplace and life independence for people with conditions such as traumatic brain injury, cerebral palsy, mental retardation, schizophrenia, Down syndrome, and Alzheimer's disease. This paper describes an approach to providing distributed cognition support of travel guidance for persons with cognitive disabilities. The unique strength of the system is its ability to provide unique-to-the-user prompts that are triggered by context. As this population is very sensitive to issues of abstraction (e.g., icons) and presents the designer with the need to tailor prompts to a 'universe of one', the system uses images specific to each user and context. The key to the approach is to spread the context awareness across the system: context is flagged by QR-code tags, and the appropriate response is evoked by displaying path-guidance images indexed by the intersection of the specific end-user and the context ID embedded in the tags. By separating the context trigger from the pictorial response, responses can be updated independently of the rest of the installed system, and a single QR-code tag can trigger multiple responses in the PDA depending on the end user and their specific path. A prototype was built and tested in field experiments with real patients. The experimental results show that the human-computer interface is user-friendly and that the wayfinding capabilities are reliable.
Computer usage by young individuals with down syndrome: an exploratory study BIBAFull-Text 35-42
  J. Feng; J. Lazar; L. Kumin; A. Ozok
In this paper, we discuss the results of an online survey that investigates how children and young adults with Down syndrome use computers and computer-related devices. The survey responses cover 561 individuals with Down syndrome between the ages of four and 21. The survey results suggest that the majority of children and young adults with Down syndrome can use the mouse to interact with computers, which requires spatial, cognitive, and fine motor skills that were previously believed to be quite challenging for individuals with Down syndrome. The results also show great difficulty with text entry using keyboards. Young individuals with Down syndrome use a variety of computer applications and computer-related devices, and computers and computer-related devices play important roles in the lives of individuals with Down syndrome. There appears to be great potential in computer-related education and training to broaden existing career opportunities for individuals with Down syndrome, warranting further research on this topic.
Understanding pointing problems in real world computing environments BIBAFull-Text 43-50
  Amy Hurst; Jennifer Mankoff; Scott E. Hudson
Understanding how pointing performance varies in real world computer use and over time can provide valuable insight about how systems should accommodate changes in pointing behavior. Unfortunately, pointing data from individuals with pointing problems is rarely studied during real world use. Instead, it is most frequently evaluated in a laboratory where it is easier to collect and evaluate data. We developed a technique to collect and analyze real world pointing performance which we used to investigate the variance in performance of six individuals with a range of pointing abilities. Features of pointing performance we analyzed include metrics such as movement trajectories, clicking, and double clicking. These individuals exhibited high variance during both supervised and unsupervised (or real world) computer use across multiple login sessions. The high variance found within each participant highlights the potential inaccuracy of judging performance based on a single laboratory session.

Older adults

Hover or tap?: supporting pen-based menu navigation for older adults BIBAFull-Text 51-58
  Karyn Moffatt; Sandra Yuen; Joanna McGrenere
Tablet PCs are gaining popularity, but many users, particularly older ones, still struggle with pen-based interaction. One type of error, drifting, occurs when users accidentally hover over an adjacent menu, causing the menu in focus to close and the adjacent one to open. In this paper, we propose two approaches to address drifting. The first, tap, requires an explicit tap to switch menus, and thus eliminates the possibility of a drift. The second, glide, uses a distance threshold to delay switching, and thereby reduces the likelihood of a drift. We performed a comparative evaluation of our approaches against a control interface. Tap was effective at reducing drifts for both groups, but it was only popular among older users. Glide surprisingly did not show any performance improvement. Additional research is needed to determine whether the negative findings for glide are a result of the particular threshold used, or reflect a fundamental flaw in the glide approach.
Technology devices for older adults to aid self management of chronic health conditions BIBAFull-Text 59-66
  Amritpal Singh Bhachu; Nicolas Hine; John Arnott
The overall purpose of this study is the enhancement of devices and visualisations used by older adults as part of a telecare system for the self-management of health conditions. The opinions and feelings towards devices that could be used as part of a telecare system were gathered from a range of older people. This was done through the use of technology evaluation workshops, and the subsequent analysis of the collected data using grounded theory and thematic coding methodologies. Presenting healthcare data to an elderly person with chronic health issues may be an appropriate way to help that person better manage their condition, if the data can be understood.
How older and younger adults differ in their approach to problem solving on a complex website BIBAFull-Text 67-72
  Peter G. Fairweather
Older adults differ from younger ones in the ways they experience the World Wide Web. For example, they tend to move from page to page more slowly, take more time to complete tasks, make more repeated visits to pages, and take more time to select link targets than their younger counterparts. These differences are consistent with the physical and cognitive declines associated with aging. The picture that emerges has older adults doing the same sorts of things with websites as younger adults, although less efficiently, less accurately and more slowly. This paper questions that view. We present new findings that show that, to accomplish their purposes, older adults may systematically undertake different activities and use different parts of websites than younger adults. We examined how a group of adults 18 to 73 years of age moved through a complex website seeking to solve a specific problem. We found that the users exhibited strong age-related tendencies to follow particular paths and visit particular zones while in pursuit of a common goal. We also assessed how experience with the web may mediate these tendencies. We conclude the paper with a discussion of the implications of the finding that users' characteristics not only affect how they navigate but what activities they undertake along the way.

Vision

Slide rule: making mobile touch screens accessible to blind people using multi-touch interaction techniques BIBAFull-Text 73-80
  Shaun K. Kane; Jeffrey P. Bigham; Jacob O. Wobbrock
Recent advances in touch screen technology have increased the prevalence of touch screens and have prompted a wave of new touch screen-based devices. However, touch screens are still largely inaccessible to blind users, who must adopt error-prone compensatory strategies to use them or find accessible alternatives. This inaccessibility is due to interaction techniques that require the user to visually locate objects on the screen. To address this problem, we introduce Slide Rule, a set of audio-based multi-touch interaction techniques that enable blind users to access touch screen applications. We describe the design of Slide Rule, our interaction techniques, and a user study in which 10 blind people used Slide Rule and a button-based Pocket PC screen reader. Results show that Slide Rule was significantly faster than the button-based system, and was preferred by 7 of 10 users. However, users made more errors when using Slide Rule than when using the more familiar button-based system.
Note-taker: enabling students who are legally blind to take notes in class BIBAFull-Text 81-88
  David Hayden; Dirk Colbry; John A. Black, Jr.; Sethuraman Panchanathan
The act of note-taking is a key component of learning in secondary and post-secondary classrooms. Students who take notes retain information from classroom lectures better, even if they never refer to those notes afterward. However, students who are legally blind, and who wish to take notes in their classrooms are at a disadvantage. Simply equipping classrooms with lecture recording systems does not substitute for note taking, since it does not actively engage the student in note-taking during the lecture. In this paper we detail the problems encountered by one math and computer science student who is legally blind, and we present our proposed solution: the CUbiC Note-Taker, which is a highly portable device that requires no prior classroom setup, and does not require lecturers to adapt their presentations. We also present results from two case studies of the Note-Taker, totaling more than 200 hours of in-class use.
Refreshable tactile graphics applied to schoolbook illustrations for students with visual impairment BIBAFull-Text 89-96
  Grégory Petit; Aude Dufresne; Vincent Levesque; Vincent Hayward; Nicole Trudeau
This article presents research on making schoolbook illustrations accessible to students with visual impairment. The MaskGen system was developed to interactively transpose illustrations from schoolbooks into tactile graphics. A methodology was designed to transpose the graphics and prepare them to be displayed on the STReSS2, a refreshable tactile device. We experimented with different combinations of tactile rendering and audio feedback to find a model that children with visual impairment could use. We experimented with three scientific graphics (a diagram, a bar chart, and a map) with forty participants: twenty sighted adults, ten adults with visual impairment, and ten children with visual impairment. Results show that the participants with visual impairment liked the tactile graphics and could use them to explore illustrations and answer questions about their content.

Audio interaction

Constructing relational diagrams in audio: the multiple perspective hierarchical approach BIBAFull-Text 97-104
  Oussama Metatla; Nick Bryan-Kinns; Tony Stockman
Although research on non-visual access to visually represented information is steadily growing, very little work has investigated how such forms of representation could be constructed through non-visual means. We discuss in this paper our approach for providing audio access to relational diagrams using multiple perspective hierarchies, and describe the design of two interaction strategies for constructing and manipulating such diagrams through this approach. A comparative study that we conducted with sighted users showed that a non-guided strategy allowed for significantly faster interaction times, and that both strategies supported similar levels of diagram comprehension. Overall, the reported study revealed that using multiple perspective hierarchies to structure the information encoded in a relational diagram enabled users to construct and manipulate such information through an audio-only interface, and that combining aspects of the guided and non-guided strategies could support greater usability.
Advanced auditory menus: design and evaluation of auditory scroll bars BIBAFull-Text 105-112
  Pavani Yalla; Bruce N. Walker
Auditory menus have the potential to make devices that use visual menus accessible to a wide range of users. Visually impaired users could especially benefit from the auditory feedback received during menu navigation. However, auditory menus are a relatively new concept, and there are very few guidelines that describe how to design them. This paper details how visual menu concepts may be applied to auditory menus in order to help develop design guidelines. Specifically, this set of studies examined possible ways of designing an auditory scrollbar for an auditory menu. The following different auditory scrollbar designs were evaluated: single-tone, double-tone, alphabetical grouping, and proportional grouping. Three different evaluations were conducted to determine the best design. The first two evaluations were conducted with sighted users, and the last evaluation was conducted with visually impaired users. The results suggest that pitch polarity does not matter, and proportional grouping is the best of the auditory scrollbar designs evaluated here.

Accessibility studies

A comparative test of web accessibility evaluation methods BIBAFull-Text 113-120
  Giorgio Brajnik
Accessibility auditors have to choose a method when evaluating accessibility: expert review (a.k.a. conformance testing), user testing, subjective evaluations, and barrier walkthrough are some possibilities. However, little is known to date about their relative strengths and weaknesses. Furthermore, what happened with usability evaluation methods is likely to repeat itself for accessibility: there is uncertainty not only about the pros and cons of the methods, but also about the criteria used to compare them and the metrics used to measure those criteria. After a quick review and description of methods, the paper illustrates a comparative test of two web accessibility evaluation methods: conformance testing and barrier walkthrough. The comparison aims at determining the merits of barrier walkthrough, using conformance testing as a control condition. A comparison framework is outlined, followed by the description of a laboratory experiment with 12 subjects (novice accessibility evaluators), and its results. Significant differences were found in terms of correctness, one of the several metrics used to compare the methods. Reliability also appears to be different.
Investigating sighted users' browsing behaviour to assist web accessibility BIBAFull-Text 121-128
  Eleni Michailidou; Simon Harper; Sean Bechhofer
The rapid advancement of World Wide Web (Web) technology and constant need for attractive Websites produce pages that hinder visually impaired users. We assert that understanding how sighted users browse Web pages can provide important information that will enhance Web Accessibility, especially for visually impaired users. We present an eye tracking study where sighted users' browsing behaviour on nine Web pages was investigated to determine how the page's visual clutter is related to sighted users' browsing patterns. The results show that salient elements attract users' attention first, users spend more time on the main content of the page and users tend to fixate on the first three or four items on the menu lists. Common gaze patterns begin at the salient elements of the page, move to the main content, header, right column and left column of the page and finish at the footer area. We argue that the results should be used as the initial step for proposing guidelines that assist in designing and transforming Web pages for an easier and faster access for visually impaired users.
Evaluation of a psycholinguistically motivated timing model for animations of american sign language BIBAFull-Text 129-136
  Matt Huenerfauth
Using results in the psycholinguistics literature on the speed and timing of American Sign Language (ASL), we built algorithms to calculate the time-duration of signs and the location/length of pauses during an ASL animation. We conducted a study in which native ASL signers evaluated the ASL animations processed by our algorithms, and we found that: (1) adding linguistically motivated pauses and variations in sign-durations improved signers' performance on a comprehension task and (2) these animations were rated as more understandable by ASL signers.

Web accessibility

A user evaluation of the SADIe transcoder BIBAFull-Text 137-144
  Darren Lunn; Sean Bechhofer; Simon Harper
The World Wide Web (Web) is a visually complex, dynamic, multimedia system that can be inaccessible to people with visual impairments. SADIe addresses this problem by using Semantic Web technologies to explicate implicit visual structures through a combination of an upper and lower ontology. This is then used to apply transcoding to a range of Websites. This paper describes a user evaluation that was performed using the SADIe system. Four users were presented with a series of Web pages, some having been adapted using SADIe's transcoding functionality and others remaining in their original state. The results of the evaluation showed that answers to a fact-based question could be found more quickly when the information on the page was exposed via SADIe's transcoding. The data obtained during the experiment were analysed and shown to be statistically significant. This suggests that the transcoding techniques offered by SADIe can assist visually impaired users in accessing content on the Web.
What's new?: making web page updates accessible BIBAFull-Text 145-152
  Yevgen Borodin; Jeffrey P. Bigham; Rohit Raman; I. V. Ramakrishnan
Web applications facilitated by technologies such as JavaScript, DHTML, AJAX, and Flash use a considerable amount of dynamic web content that is either inaccessible or unusable by blind people. Server-side changes to web content cause whole-page refreshes even when only small sections of the page have changed, forcing blind web users to search linearly through the page to find the new content. The connecting theme is the need to quickly and unobtrusively identify the segments of a web page that have changed and notify the user of them. In this paper we propose Dynamo, a system designed to unify different types of dynamic content and make dynamic content accessible to blind web users. Dynamo treats web page updates uniformly, and its methods encompass both web updates enabled through dynamic content and scripting, and updates resulting from static page refreshes, form submissions, and template-based web sites. From an algorithmic and interaction perspective, Dynamo detects underlying changes and provides users with a single, intuitive interface for reviewing the changes that have occurred. We report on the quantitative and qualitative results of an evaluation conducted with blind users. These results suggest that Dynamo makes access to dynamic content faster, and that blind web users prefer it to existing interfaces.
Accessibility commons: a metadata infrastructure for web accessibility BIBAFull-Text 153-160
  Shinya Kawanaka; Yevgen Borodin; Jeffrey P. Bigham; Darren Lunn; Hironobu Takagi; Chieko Asakawa
Research projects, assistive technology, and individuals all create metadata in order to improve Web accessibility for visually impaired users. However, since these projects are disconnected from one another, this metadata is isolated in separate tools, stored in disparate repositories, and represented in incompatible formats. Web accessibility could be greatly improved if these individual contributions were merged. An integration method will serve as the bridge between future academic research projects and end users, enabling new technologies to reach end users more quickly. Therefore we introduce Accessibility Commons, a common infrastructure to integrate, store, and share metadata designed to improve Web accessibility. We explore existing tools to show how the metadata that they produce could be integrated into this common infrastructure, we present the design decisions made in order to help ensure that our common repository will remain relevant in the future as new metadata is developed, and we discuss how the common infrastructure component facilitates our broader social approach to improving accessibility.

Games and gaming

Sudoku access: a sudoku game for people with motor disabilities BIBAFull-Text 161-168
  Stéphane Norte; Fernando G. Lobo
Educational games are a beneficial activity that motivates a large number of students in our society. Unfortunately, people with disabilities have reduced opportunities when using computer games. We have created a new Sudoku game, called Sudoku Access, for people with motor impairments. This special interface allows the game to be controlled either by voice or by a single switch. We conducted a user study of Sudoku Access showing that people can play the game quickly and accurately. With this special Sudoku puzzle we can help more people get involved in computer games and help develop logical thinking and concentration in students. Our research aims at building enabling technologies that increase individuals' functional independence in a game environment.
Blind hero: enabling guitar hero for the visually impaired BIBAFull-Text 169-176
  Bei Yuan; Eelke Folmer
Very few video games have been designed or adapted to allow people with vision impairment to play. Music/rhythm games, however, are particularly suitable for these players, as they are perfectly capable of perceiving audio signals. Guitar Hero is a popular rhythm game, yet it is not accessible to the visually impaired as it relies on visual stimuli. This paper explores replacing visual stimuli with haptic stimuli as a viable strategy for making games accessible. We developed a glove that transforms visual information into haptic feedback using small pager motors attached to the tip of each finger. This allows a blind player to play Guitar Hero. Several tests have been conducted, and despite minor changes to the gameplay, visually impaired players are able to play the game successfully and enjoy the challenge the game provides. The results of the study also give valuable insights into how to make mainstream games blind-accessible.
PowerUp: an accessible virtual world BIBAFull-Text 177-184
  Shari Trewin; Vicki L. Hanson; Mark R. Laff; Anna Cavender
PowerUp is a multi-player virtual world educational game with a broad set of accessibility features built in. This paper considers what features are necessary to make virtual worlds usable by individuals with a range of perceptual, physical, and cognitive disabilities. The accessibility features were included in the PowerUp game and validated, to date, with blind and partially sighted users. These features include in-world navigation and orientation tools, font customization, self-voicing text-to-speech output, and keyboard-only and mouse-only navigation. We discuss user requirements gathering, the validation study, and further work needed.

Collaborative accessibility

RouteCheckr: personalized multicriteria routing for mobility impaired pedestrians BIBAFull-Text 185-192
  Thorsten Völkel; Gerhard Weber
Mobility impaired people use a variety of assistive technologies to navigate independently in everyday life. Although several technical approaches to navigation systems exist, many drawbacks remain due to a lack of geospatial resolution, inadequate geographical data, and missing adaptation of routes to a multitude of user-specific criteria. We developed RouteCheckr, a client/server system for collaborative multimodal annotation of geographical data and personalized routing of mobility impaired pedestrians. The construction of algorithms supporting multiple bipolar criteria is described, applied to route calculation, and demonstrated on our university's campus. To satisfy individual requirements, user profiles are incorporated, enabling adaptivity over heterogeneous user groups while preserving privacy. Finally, a general architecture for RouteCheckr is presented and simulation results are analyzed.
Social accessibility: achieving accessibility through collaborative metadata authoring BIBAFull-Text 193-200
  Hironobu Takagi; Shinya Kawanaka; Masatomo Kobayashi; Takashi Itoh; Chieko Asakawa
Web content is under the control of site owners, and therefore the site owners have the responsibility to make their content accessible. This is a basic assumption of Web accessibility. Users who want access to inaccessible content must ask the site owners for help. However, the process is slow, and too often the need is mooted before the content becomes accessible. Social Accessibility is an approach to drastically reduce the burden on site owners and to shorten the time to provide accessible Web content by allowing volunteers worldwide to 'renovate' any webpage on the Internet. Users encountering Web access problems anywhere at any time will be able to immediately report the problems to a social computing service. Volunteers can be quickly notified, and they can easily respond by creating and publishing the requested accessibility metadata, which also helps any other users who encounter the same problems. Site owners can learn about the methods for future accessibility renovations from the volunteers' external metadata. Two key technologies enable this process: the external metadata that allows volunteers to annotate existing Web content, and the social computing service that supports the collaborative renovations. In this paper, we first review previous approaches, and then propose the Social Accessibility approach. The scenario, implementation, and results of a pilot service are introduced, followed by a discussion of future directions.
Hunting for headings: sighted labeling vs. automatic classification of headings BIBAFull-Text 201-208
  Jeremy T. Brudvik; Jeffrey P. Bigham; Anna C. Cavender; Richard E. Ladner
Proper use of headings in web pages can make navigation more efficient for blind web users by indicating semantic divisions in the page. Unfortunately, many web pages do not use proper HTML markup (h1-h6 tags) to indicate headings, instead using visual styling to create them, making headings indistinguishable from other page text for blind users. In a user study in which sighted participants labeled headings on a set of web pages, participants often did not agree on which elements of the page should be labeled as headings, suggesting why headings are not used properly on the web today. To address this problem, we have created a system called HeadingHunter that predicts whether web page text semantically functions as a heading by examining visual features of the text as rendered in a web browser. Its performance in labeling headings compares favorably with both a manually classified set of heading examples and the combined results of the sighted labelers in our study. The resulting system illustrates a general methodology of creating simple scripts operating over visual features that can be directly included in existing tools.

Motor function

Text entry for mobile devices and users with severe motor impairments: handiglyph, a primitive shapes based onscreen keyboard BIBAFull-Text 209-216
  Mohammed Belatar; Franck Poirier
In recent work, we developed a text input method based on analogy with capital Latin characters and on the decomposition of characters into basic shapes. It was designed to be universal and to allow typing text with only one keystroke per character. In this paper we present a new implementation of this method for users with motor impairments. It uses the same principles combined with some common techniques from the Augmentative and Alternative Communication (AAC) domain. The new solution has been tested with a person with Locked-In Syndrome and the results are promising.
Performance-based functional assessment: an algorithm for measuring physical capabilities BIBAFull-Text 217-224
  Kathleen J. Price; Andrew Sears
The description of users with motor limitations is a significant dilemma for accessibility researchers and system designers alike. Current practice is to use descriptors such as medical diagnoses to represent a person's physical capabilities. This solution is not adequate due to similarities in functional capabilities between diagnoses as well as differences in capabilities within a diagnosis. An alternative is user self-reporting or observation by another person. These solutions are also problematic because they rely on individual interpretation of capabilities. The current research focuses on defining an objective, quantitative and repeatable methodology for assessing a person's physical capabilities in relation to use of computer technology. Results from this initial study are encouraging, including the development of a model which accounts for up to 85% of the variance in user capabilities.
Laser pointers and a touch screen: intuitive interfaces for autonomous mobile manipulation for the motor impaired BIBAFull-Text 225-232
  Young Sang Choi; Cressel D. Anderson; Jonathan D. Glass; Charles C. Kemp
El-E ("Ellie") is a prototype assistive robot designed to help people with severe motor impairments manipulate everyday objects. When given a 3D location, El-E can autonomously approach the location and pick up a nearby object. Based on interviews of patients with amyotrophic lateral sclerosis (ALS), we have developed and tested three distinct interfaces that enable a user to provide a 3D location to El-E and thereby select an object to be manipulated: an ear-mounted laser pointer, a hand-held laser pointer, and a touch screen interface. Within this paper, we present the results from a user study comparing these three user interfaces with a total of 134 trials involving eight patients with varying levels of impairment recruited from the Emory ALS Clinic. During this study, participants used the three interfaces to select everyday objects to be approached, grasped, and lifted off of the ground.
   The three interfaces enabled motor impaired users to command a robot to pick up an object with a 94.8% success rate overall after less than 10 minutes of learning to use each interface. On average, users selected objects 69% more quickly with the laser pointer interfaces than with the touch screen interface. We also found substantial variation in user preference. With respect to the Revised ALS Functional Rating Scale (ALSFRS-R), users with greater upper-limb mobility tended to prefer the hand-held laser pointer, while those with less upper-limb mobility tended to prefer the ear-mounted laser pointer. Despite the extra efficiency of the laser pointer interfaces, three patients preferred the touch screen interface, which has unique potential for manipulating remote objects out of the user's line of sight. In summary, these results indicate that robots can enhance accessibility by supporting multiple interfaces. Furthermore, this work demonstrates that the communication of 3D locations during human-robot interaction can serve as a powerful abstraction barrier that supports distinct interfaces to assistive robots while using identical, underlying robotic functionality.

Posters and system demonstrations

Speech technology in real world environment: early results from a long term study BIBAFull-Text 233-234
  Jinjuan Feng; Shaojian Zhu; Ruimin Hu; Andrew Sears
Existing knowledge on how people use speech-based technologies in realistic settings is limited. We are conducting a longitudinal field study, spanning six months, to investigate how users with no physical impairments and users with upper body physical impairments use speech technologies when interacting with computers in their home environment. Digital data logs, time diaries, and interviews are being used to record the types of applications used, frequency of use of each application, and difficulties experienced as well as subjective data regarding the usage experience. While confirming many expectations, initial results have provided several unexpected insights including a preference to use speech for navigation instead of dictation tasks, and the use of speech technology for programming and games.
Evaluation of spatial abilities within a 2D auditory platform game BIBAFull-Text 235-236
  Mike A. Oren; Chris Harding; Terri Bonebright
In this paper, we compare the mental maps created by sighted participants with the maps created by participants with visual impairments during the study of an auditory platform game. The game uses audio cues to convey a 2D side-scrolling view of the spatial layout of the game world. We studied three groups with 9 participants each: a control group of sighted participants playing an audio-visual version of the game, sighted participants playing the audio-only version of the game, and participants with visual impairments playing the audio-only version of the game. Immediately after playing two of the game's levels, we transferred the participants' verbal descriptions of the level's layout onto paper maps and gave each map a "mapping score" that expressed how closely a map matched the actual layout of the game level. The results suggest that in this setting, all participants were able to create maps with roughly the same accuracy no matter which version of the game they played (audio-visual or audio-only) and independent of the participant's level of visual ability (sighted persons or persons with visual impairments).
Identifying pedagogical, technological and organizational barriers in virtual learning environments BIBAFull-Text 237-238
  Maria Galofré; Julià Minguillón
Virtual learning environments are online spaces where learners interact with other learners, teachers, resources and the environment itself. Although technology is meant to enhance the learning process, important issues regarding pedagogical and organizational aspects must be addressed. In this paper we review the barriers detected in a virtual university that uses the Internet exclusively as its channel of communication, with no face-to-face requirements except those related to final evaluation.
Potential of mobile social networks as assistive technology: a case study in supported employment for people with severe mental illness BIBAFull-Text 239-240
  Yao-Jen Chang; Hung-Huan Liu; Shu-Min Peng; Tsen-Yung Wang
In this paper, we first propose a general architecture for mobile social networks in which individuals form a virtual community, conversing and connecting with one another over mobile communication. Second, we implement mobile social networks in collaboration with a supported employment program set up for people with severe mental illness.
Shoptalk: toward independent shopping by people with visual impairments BIBAFull-Text 241-242
  Vladimir Kulyukin; John Nicholson; Daniel Coster
ShopTalk, a proof-of-concept system designed to assist individuals with visual impairments with finding shelved products in grocery stores, is built on the assumption that simple verbal route directions and layout descriptions can be used to leverage the O&M skills of independent visually impaired travelers to enable them to navigate the store and retrieve shelved products. This paper introduces ShopTalk and summarizes experiments performed in a real-world supermarket.
Low-cost accelerometry-based posture monitoring system for stroke survivors BIBAFull-Text 243-244
  Sonia Arteaga; Jessica Chevalier; Andrew Coile; Andrew William Hill; Serdar Sali; Sangheeta Sudhakhrisnan; Sri H. Kurniawan
This paper reports a low-cost autonomous wearable accelerometry-based posture monitoring system for stroke survivors. The hardware part of the system consists of monitoring devices, each of which comprises a three-axis accelerometer plus a beeper, an LED light and a vibrator that provide redundant modes of inappropriate-posture warnings intended to trigger self-correction. The inappropriate posture data are stored in an EEPROM. The software part of the system downloads, analyzes and presents the data in graphical format, enabling a carer or therapist to see at a glance the durations, frequency and locations of inappropriate postures.
Evaluation of an intelligent e-tool for deaf children: extended abstract BIBAFull-Text 245-246
  Rosella Gennari; Ornella Mich
LODE is a web tool for deaf children that aims at stimulating global reasoning on written e-stories. This paper reports on an initial prototype application of LODE. First, we motivate the need for an e-tool such as LODE for deaf children; then we report on the assessment of the prototype with expert users and two deaf children, laying the groundwork for the final evaluation and development of LODE.
Suitable representations of hyperlinks for deaf persons: an eye-tracking study BIBAFull-Text 247-248
  Miki Namatame; Muneo Kitajima
This paper reports an eye-tracking experiment conducted to compare alternative representations of directories typically shown on web pages in search of a best representation for deaf persons. The experiment simulated a directory-based information search task to understand how it is performed when directories are represented in text, labeled-pictograms, or unlabeled-pictograms.
   Twenty-one deaf and 21 hearing participants were asked to select one of 27 directories represented in one of the three alternative formats for each of 38 queries. The results demonstrated that only with the labeled-pictogram representation did the hearing group and the deaf group perform equally well in terms of the eye movement measures.
MySpeechWeb: software to facilitate the construction and deployment of speech applications on the web BIBAFull-Text 249-250
  Richard A. Frost; Ali Karaki; David A. Dufour; Josh Greig; Rahmatullah Hafiz; Yue Shi; Shawn Daichendt; Shahriar Chandon; Justin Barolak; Randy J. Fortier
Few voice-in/voice-out applications are available on the web. One problem appears to be the lack of appropriate open-source tools. More speech applications would increase the functionality of the web for people with visual, cognitive, and motor disabilities. Our research group has developed open-source tools for the creation and deployment of speech applications by non-expert as well as expert users, and an open-source software platform to deploy those applications on the web. In addition, a suite of exemplar speech applications, together with documentation, has been built to facilitate the creation and deployment of similar applications by others.
Multimodal vision glove for touchscreens BIBAFull-Text 251-252
  Muhanad S. Manshad; Ahmad S. Manshad
This paper presents the development of an inexpensive haptic glove enabling people with visual impairments to picture and interact with basic algebra graphs through multiple points of interaction on touchscreens. The glove sends vibrations to each finger representing a direction which the user must follow to reach a graph on a grid. Through repeated movement, a person will reach, trace, and visualize the graph. Evaluations by thirteen students with visual impairments indicated that visualizing a graph, using multiple points of interaction with haptic feedback, is much faster than single-point-interaction with audio feedback.
Creating and evaluating a video vocabulary for communicating verbs for different age groups BIBAFull-Text 253-254
  Xiaojuan Ma; Perry R. Cook
Icons and digital images used in augmentative and alternative communication (AAC) are not very effective at illustrating verbs, especially for people with cognitive degeneration or impairment. Realistic videos offer possible advantages for conveying verbs, as verified in our studies with young and older adults comparing single images, multiple images, animations, and video clips. Videos are especially effective for verbs that show concrete movements or actions. Based on our studies, we propose rules for filming video representations of verbs, exploring possible visual cues and other factors that may affect people's perception and interpretation.
Tactile chart generation tool BIBAFull-Text 255-256
  Cagatay Goncu; Kim Marriott
We have implemented a Java application that automatically generates tactile bar and pie charts from data values given in a formatted text file. The tool is designed to semi-automate the construction of tactile versions of bar and pie charts in educational material. The tool provides a wide variety of layout styles. While the tool allows the user to fine tune the layout, the generated SVG diagram can also be modified in a standard diagram editor.
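The abstract above describes generating SVG bar charts automatically from data values in a text file. As a rough illustration of that pipeline (the tool itself is in Java; the layout constants and label handling here are assumptions), a minimal SVG bar-chart generator might look like:

```python
# Illustrative sketch only, not the authors' tool: emit a minimal SVG bar
# chart from (label, value) pairs, as a tactile-chart generator might do
# before the chart is embossed. Layout constants are assumptions.

def bar_chart_svg(data, bar_width=40, gap=20, height=200):
    max_val = max(v for _, v in data)
    bars = []
    for i, (label, value) in enumerate(data):
        h = int(height * value / max_val)      # scale bars to the tallest value
        x = i * (bar_width + gap)
        bars.append(f'<rect x="{x}" y="{height - h}" '
                    f'width="{bar_width}" height="{h}">'
                    f'<title>{label}</title></rect>')
    width = len(data) * (bar_width + gap)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">' + "".join(bars) + '</svg>')

svg = bar_chart_svg([("A", 3), ("B", 6), ("C", 4)])
```

Emitting SVG, as the paper's tool does, has the advantage that the generated chart can still be fine-tuned in a standard diagram editor.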
Brain-controlled finite state machine for wheelchair navigation BIBAFull-Text 257-258
  Amir Teymourian; Thorsten Lueth; Axel Graeser; Torsten Felzer; Rainer Nordmann
This proposal is about a brain-controlled electrically powered wheelchair. The system comprises a brain-computer interface based on steady-state visual evoked potentials and a processing unit relying on a finite state machine (FSM). Results of first simulation experiments comparing two different FSMs are presented.
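The abstract above pairs a brain-computer interface with a finite state machine as the processing unit. A minimal sketch of such an FSM follows; the state names, commands, and transition table are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch: a finite state machine mapping discrete BCI
# commands to wheelchair navigation states. States, commands, and
# transitions below are assumptions for illustration.

TRANSITIONS = {
    ("stopped", "go"):         "forward",
    ("forward", "left"):       "turning_left",
    ("forward", "right"):      "turning_right",
    ("forward", "stop"):       "stopped",
    ("turning_left", "go"):    "forward",
    ("turning_right", "go"):   "forward",
    ("turning_left", "stop"):  "stopped",
    ("turning_right", "stop"): "stopped",
}

def step(state, command):
    """Return the next state; unrecognized commands leave the state unchanged."""
    return TRANSITIONS.get((state, command), state)

state = "stopped"
for cmd in ["go", "left", "go", "stop"]:
    state = step(state, cmd)
print(state)  # stopped
```

An FSM of this kind is attractive for BCI control because the small, fixed command set matches the limited number of distinguishable stimuli in a steady-state visual evoked potential interface.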
A web-based braille translation for digital music scores BIBAFull-Text 259-260
  Toshiyuki Gotoh; Reiko Minamikawa-Tachino; Naoyoshi Tamura
Experiences using a hands-free interface BIBAFull-Text 261-262
  Cristina Manresa-Yee; Javier Varona; Francisco J. Perales; Francesca Negre; Joan Jordi Muntaner
Hands-free interfaces may be the best choice for Human-Computer Interaction (HCI) for people with physical disabilities who cannot use traditional input devices. Once a first prototype has been developed in the laboratory, taking into account design and usability requirements, it is real users who ultimately determine whether an interface is useful. We therefore evaluated our interface with users with cerebral palsy and multiple sclerosis over the course of a 9-month project. This paper presents a vision-based user interface designed to achieve computer accessibility, together with the validation and evaluation of its human-computer interaction issues such as usability and accessibility.
Designing an assistive device for the blind based on object localization and augmented auditory reality BIBAFull-Text 263-264
  Florian Dramas; Bernard Oriola; Brian G. Katz; Simon J. Thorpe; Christophe Jouffrais
There are many electronic devices for the visually impaired but few actually get used on a daily basis. This is due in part to the fact that many devices often fail to address the real needs of the users. In this study, we begin with a review of the existing literature followed by a survey of 54 blind people which suggests that one particular function could be particularly useful in a new device, namely, the ability to localize objects. We have tested the possibility of using a sound rendering system to indicate a particular spatial location, and propose to couple this with a biologically inspired image processing system that can locate visual patterns that correspond to particular objects and places. We believe that such a system can address one of the major problems faced by the visually handicapped, namely their difficulty in localizing specific objects.
Naming practice on an open platform for people with aphasia BIBAFull-Text 265-266
  Chris Benjamin; Jesse Harris; Alex Moncrief; Gail Ramsberger; Clayton Lewis
Banga is a software system that uses the Android open source software platform for mobile phones to support word finding practice, a form of therapy for people with aphasia. By connecting to a Web site, Banga provides for remote monitoring and management of the therapy, making it easier for patients to participate in treatment, and for clinicians to deliver it.
Stakeholder involvement in the design and development of a domestic well-being indicator system BIBAFull-Text 267-268
  Nubia M. Gil; Nicolas A. Hine; John L. Arnott
Older people living in their own dwelling may be at increasing risk of loss of well-being and quality of life as a result of changes in health or personal circumstances. Processing the information gathered from sensors in the home and presenting it in the form of functional and practical interfaces to users might help decision-making before a critical situation arises in the older adult's life. The objectives of this study were to understand how to include various stakeholders in the design process and how to design intuitive and self-explanatory interfaces for older people and carers.
Pupil diameter measurements: untapped potential to enhance computer interaction for eye tracker users? BIBAFull-Text 269-270
  Armando Barreto; Ying Gao; Malek Adjouadi
In this paper, we propose the possibility of using real-time pupil diameter measurements, which are provided by most Eye Gaze Tracking (EGT) systems utilized for computer input by many individuals with severe motor impairments, to obtain an on-line estimation of the quality of the interaction with the computer, as perceived by the user. Increased computational power available to process, in real-time, pupil diameter data could, in the future, enable an assessment of the level of frustration (stress) that an EGT user may experience when the quality of the interaction deteriorates. This would be a critical first step in the direction of enabling EGT systems to initiate a re-calibration process, under those circumstances.
Experiment and evaluation of the RAMPE interactive auditive information system for the mobility of blind people in public transport BIBAFull-Text 271-272
  Olivier Venard; Geneviève Baudoin; Gérard Uzan
This paper presents the design and experimentation of the RAMPE system, intended to assist and inform visually disabled/impaired people so that they can increase their mobility and autonomy in public transport. The system is intended to equip bus or tramway stops or to be installed at connection links. It is based either on a remote control (RC) or on a smart hand-held WiFi-enabled device, such as a Personal Digital Assistant (PDA), that communicates with fixed equipment in stations to retrieve traveler information.
   The system has been evaluated by Visually Disabled People (VDP) in a real public transport environment in the city of Lyon, France. Ergonomics data collected during the experiment were analyzed, showing the usability of such devices by VDP and hence also by Visually Impaired People (VIP).
Accessible lectures: moving towards automatic speech recognition models based on human methods BIBAFull-Text 273-274
  Miltiades Papadopoulos; Elaine Pearson
The traditional lecture remains the most common method of teaching and while it is the most convenient from a delivery point of view, it is the least flexible and accessible. This paper responds to the challenge of meeting the needs and access requirements of students with disabilities by urging further adaptations in the learning environment. The aim of this work is to explore the way speech recognition technology can be employed in the University classroom to make lectures more flexible and accessible. The concluding section explores the concept of an ASR model, based on principles derived from studies of human methods of recognition, in order to increase their performance and efficiency.
Effective simulations to support academics in inclusive online learning design BIBAFull-Text 275-276
  George Papadopoulos; Elaine Pearson; Steve Green
Accessibility simulations can give an understanding of the effect a disability may have on the way students access online materials. This paper briefly describes the evaluation of a prototype set of accessibility simulations. The purpose of the prototype was to establish the specification for a second, revised version, which would incorporate the simulation into a learning activity that could be used in Continuing Professional Development (CPD) training for academics. The cognitive overload simulation -- part of the second application -- has been developed and is subsequently described in detail. In conclusion, this paper discusses planned evaluation of this and similar simulations as an awareness raising tool in workshops for academic staff in Higher Education.
TTY phone: direct, equal emergency access for the deaf BIBAFull-Text 277-278
  Zahoor Zafrulla; John Etherton; Thad Starner
Seeking to enable direct and equal access for the Deaf to emergency call centers, we analyze the current state of the emergency phone system in the United States and elsewhere in the world. Leveraging teletypewriter (TTY) technology mandated by the Americans with Disabilities Act of 1990 to be installed in all emergency call centers in the United States, we developed software that emulates a TTY on a smart phone. We present an Instant Messaging style interface for mobile phones that uses the existing emergency infrastructure and allows Deaf users to communicate directly with emergency operators.
Creating an automatic question answering text skimming system for non-visual readers BIBAFull-Text 279-280
  Debra Yarrington; Kathleen McCoy
This poster describes an approach to creating a system that will allow blind, low vision, dyslexic, and other non-visual readers to skim through documents for answers to questions. The ultimate goal is a system that gives the users information similar to that which a sighted individual obtains while skimming through a document for an answer to a question. To create this system, we will be incorporating open domain question answering (QA) techniques, word clustering techniques, and data gathered from visual readers skimming through documents. Ultimately the system will take a question and a text document and return a list of rated links that suggest where the answers within a document are likely to be located. This poster focuses on results of data gathered from individuals skimming through documents.
American sign language vocabulary: computer aided instruction for non-signers BIBAFull-Text 281-282
  Valerie Henderson-Summet; Kimberly Weaver; Tracy L. Westeyn; Thad E. Starner
In this paper we present the results of a study designed to evaluate computer-based methods of learning American Sign Language (ASL). We describe a method comprising an initial instruction session along with receptive and generative language tests administered after a week-long retention interval. We show strong correlations (r=.62, r=.57) between the initial session's instruction and the receptive and generative levels of vocabulary signing. Based on the results of our experiment, we establish a baseline for further exploration of ASL vocabulary acquisition and identify further paths for language-based instruction.
Assisting mobility of the disabled using space-identifying ubiquitous infrastructure BIBAFull-Text 283-284
  Masahiro Bessho; Shinsuke Kobayashi; Noboru Koshizuka; Ken Sakamura
Achieving mobility assistance for individuals with different disabilities requires recognizing places at different granularities defined by human interest, which is hard to accomplish with traditional GPS-based approaches. To meet this requirement, we propose a space-identifying ubiquitous infrastructure, designed on the concept of universal design, that is capable of supporting services aware of places of human interest using ubiquitous computing technologies. On this infrastructure, two case studies were performed in an existing civil space: a pedestrian navigation service for individuals with different needs, and a mobility assistance service for the visually impaired. Through these case studies, the possibilities of the proposed infrastructure are discussed.
Enabling the legally blind in classroom note-taking BIBAFull-Text 285-286
  David Hayden; Dirk Colbry; John A., Jr. Black; Sethuraman Panchanathan
Classroom note-taking has been shown to be beneficial, even if the student never reviews his/her own notes. Students that are legally blind are thus at a disadvantage because they face significant barriers to note-taking in the classroom. This presentation demonstrates a working prototype of the CUbiC Note Taker, which is a highly portable device that allows students who are legally blind to take their own notes in class without any special in-classroom accommodations, and without requiring lecturers to adapt their presentation in any way.
Context aware documenting for aphasics BIBAFull-Text 287-288
  Thomas de Wolf; Dirk W. H. Gooren; Jean-Bernard O. S. Martens
In this paper, we describe two projects aimed at designing a tool for documenting and reporting everyday experiences by aphasics. Proxy feedback on design concepts was used to select concepts for further in-depth prototyping. Both final prototypes use automated image capturing and provide a means for browsing the collected images. One design is mobile and separates picture taking from picture viewing, while the other design is a larger static object that integrates image input and output.
An alternative information web for visually impaired users in developing countries BIBAFull-Text 289-290
  Nitendra Rajput; Sheetal Agarwal; Arun Kumar; Amit Anil Nanavati
Websites on the World Wide Web are primarily meant for visual consumption. Moreover, the wide variety of visual controls makes it harder to interpret websites with screen readers. This problem of accessing information and services on the web escalates even further for the visually impaired in developing regions, since most are either semi-literate/illiterate or cannot afford computers and high-end phones with screen-reading capability.
   In this paper, we present an alternate platform -- the World Wide Telecom Web (WWTW). WWTW is already being successfully deployed as a network of VoiceSites that can be created and accessed by a voice interaction over an ordinary phone. WWTW presents a whole new set of opportunities for delivering information and services to the visually impaired. We present a preliminary user study that leads us towards the belief that the Telecom Web can be the mainstream Web for blind users.
Computer vision-based clear path guidance for blind wheelchair users BIBAFull-Text 291-292
  Volodymyr Ivanchenko; James Coughlan; William Gerrey; Huiying Shen
We describe a system for guiding blind and visually impaired wheelchair users along a clear path that uses computer vision to sense the presence of obstacles or other terrain features and warn the user accordingly. Since multiple terrain features can be distributed anywhere on the ground, and their locations relative to a moving wheelchair are continually changing, it is challenging to communicate this wealth of spatial information in a way that is rapidly comprehensible to the user. The main contribution of our system is the development of a novel user interface that allows the user to interrogate the environment by sweeping a standard (unmodified) white cane back and forth: the system continuously tracks the cane location and sounds an alert if a terrain feature is detected in the direction the cane is pointing. Experiments are described demonstrating the feasibility of the approach.
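The interface described above sounds an alert when a terrain feature lies in the direction the tracked cane points. A tiny sketch of that decision rule follows; the angular representation and the tolerance value are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of the alerting rule: warn only when a detected
# terrain feature is near the direction the cane currently points.
# Angles are in degrees; the 10-degree tolerance is an assumption.

def cane_alert(cane_angle, feature_angles, tolerance=10.0):
    """Return True if any feature lies within `tolerance` degrees of the cane."""
    for a in feature_angles:
        diff = abs((a - cane_angle + 180) % 360 - 180)  # shortest angular distance
        if diff <= tolerance:
            return True
    return False

print(cane_alert(30.0, [35.0, 120.0]))  # True: a feature 5 degrees off the cane axis
print(cane_alert(30.0, [120.0]))        # False: nothing near the pointed direction
```

Filtering feedback by the cane's pointing direction is what keeps the wealth of spatial information comprehensible: the user sweeps to interrogate the scene rather than being flooded with alerts.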
AmbienNet: an intelligent environment to support people with disabilities and elderly people BIBAFull-Text 293-294
  Julio Abascal; Borja Bonail; Alvaro Marco; Roberto Casas; José Luis Sevillano
AmbienNet is an ongoing project aiming to demonstrate the viability of accessible intelligent environments to support people with disabilities and elderly people living autonomously. Based on the Ambient Intelligence paradigm, the project studies in depth that paradigm's advantages and disadvantages for people with sensory, physical or cognitive restrictions. To this end, diverse supporting technologies and applications have been designed in order to test their accessibility, usability and validity. After introducing the objectives and findings of the project, this paper presents and discusses a number of preliminary results.
The flote: an instrument for people with limited mobility BIBAFull-Text 295-296
  Amal Dar Aziz; Chris Warren; Hayden Bursk; Sean Follmer
The Flote is a wind instrument designed for people with limited mobility. Past work in this area has failed to deliver the musical expressiveness expected of an instrument while maintaining the low cost required for wide adoption. Using only head movement and breath control, both calibrated to match the player's abilities, the Flote is an avenue for creative expression and an enjoyable form of physical therapy. The software is available as a free download at http://www.theflote.com, and the hardware can be easily built by anyone with minimal familiarity with circuits.
The accessible aquarium: identifying and evaluating salient creature features for sonification BIBAFull-Text 297-298
  Anandi Pendse; Michael Pate; Bruce N. Walker
Informal learning environments (e.g., aquaria, zoos, science centers) are often inaccessible to the visually impaired. Sonification can make such environments more accessible while also adding to the experience of sighted visitors. The goal of this study was to determine the salient features of moving creatures in the sort of dynamic display typically found in such environments, and to evaluate the efficacy of sonification in improving the experience of viewing such displays for sighted research participants.
A demonstration of phototacs: a simple image-based phone dialing interface for people with cognitive or visual impairments BIBAFull-Text 299-300
  Michael J. Astrauskas; John A., Jr. Black; Sethuraman Panchanathan
Modern cell phones are becoming increasingly complex, as more and more features are added to every new generation. These complex user interfaces can make even placing a call difficult for those with disabilities. This presentation demonstrates a fully-functional version of PhotoTacs, an easy-to-use image-based phone book, designed specifically for those with cognitive disabilities and visual impairments.

Microsoft student research competition

CATPro: context-aware task prompting system with multiple modalities for individuals with cognitive impairments BIBAFull-Text 301-302
  Wan Chih Chang
Difficulties in executing daily living tasks hamper the quality of life of many individuals with cognitive impairments who are otherwise physically mobile. For example, an adult with mental disorder may want to lead a more independent life and be capable of getting trained and keeping employed, but may experience difficulty in using public transportation to and from the workplace. Task prompting systems for the cognitively impaired have been developed for skill training of activities of daily living. Recently, supported employment programs targeted for people transitioning from institutional to community care have created more demand of cognitive aids to increase their workplace independence.
   One of the key research issues in task prompting is the timing of the prompts. In more precise terms, researchers face the challenge of deciding when, where, and how prompts are delivered to users. By bringing context awareness to handheld prompting devices, reducing the cognitive load on users, and eliminating the need for shadow teams acting as "wizards", people with cognitive impairments can receive prompts in an easier and more comfortable way.
   In collaboration with NGOs dedicated to supported employment programs, this research presents CATPro, a novel Context-Aware Task Prompting system designed to increase the work and life independence of cognitively impaired individuals, such as people with traumatic brain injury, cerebral palsy, intellectual disability, schizophrenia, dementia, and Alzheimer's disease.
Input to the mobile web is situationally-impaired BIBAFull-Text 303-304
  Tianyi Chen
Our work investigates the common problems experienced by mobile Web users and disabled desktop users. Leveraging research between these two domains is important because, where common problems exist, available solutions can be migrated from one domain to the other. As a proof of concept, we conducted a study with mobile Web users that replicated a previous experiment investigating the keyboard and mouse errors of motor-impaired desktop users. Results confirm that the two domains share similar problems with regard to typing and pointing. Following the same methodology, we will next investigate the problems shared by mobile Web users and disabled desktop users for output.
A camera phone based currency reader for the visually impaired BIBAFull-Text 305-306
  Xu Liu
In this paper we present a camera phone-based currency reader for the visually impaired that can identify the value of U.S. paper currency. Currently, U.S. paper currency can only be identified visually, and this situation will continue for the foreseeable future. Our solution harnesses the imaging and computational power of camera phones to read these bills. Because it is impractical for the visually impaired to capture a high-quality image, our currency reader performs real-time processing on each captured frame as the camera approaches the bill. We developed efficient background subtraction and perspective correction algorithms and trained our currency reader within an efficient AdaBoost framework. Our currency reader processes 10 frames per second and achieves a false positive rate of approximately 1/10,000. Major smart phone platforms, including Symbian and Windows Mobile, are supported.
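The abstract above mentions training the reader within an AdaBoost framework. As a purely illustrative sketch (not the authors' phone implementation; the per-frame feature vectors here are made up, standing in for real image features after background subtraction and perspective correction), a minimal AdaBoost ensemble of decision stumps can be written as:

```python
import math

def stump_predict(stump, x):
    # stump = (feature_index, threshold, polarity)
    i, t, p = stump
    return p if x[i] >= t else -p

def train_adaboost(X, y, rounds=10):
    """Train a weighted ensemble of one-feature decision stumps."""
    n = len(X)
    w = [1.0 / n] * n                      # per-sample weights
    ensemble = []
    for _ in range(rounds):
        # Exhaustively pick the stump with the lowest weighted error.
        best = None
        for i in range(len(X[0])):
            for t in sorted({x[i] for x in X}):
                for p in (1, -1):
                    err = sum(wj for xj, yj, wj in zip(X, y, w)
                              if stump_predict((i, t, p), xj) != yj)
                    if best is None or err < best[0]:
                        best = (err, (i, t, p))
        err, stump = best
        err = max(err, 1e-10)              # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, stump))
        # Re-weight: misclassified samples gain weight for the next round.
        w = [wj * math.exp(-alpha * yj * stump_predict(stump, xj))
             for xj, yj, wj in zip(X, y, w)]
        z = sum(w)
        w = [wj / z for wj in w]
    return ensemble

def classify(ensemble, x):
    score = sum(alpha * stump_predict(stump, x) for alpha, stump in ensemble)
    return 1 if score >= 0 else -1

# Toy frames: feature[0] might stand in for a "bill-likeness" response.
X = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]]
y = [1, 1, -1, -1]            # +1 = bill visible, -1 = background
model = train_adaboost(X, y, rounds=3)
```

In a frame-by-frame setting like the one described, one common way to push the false-positive rate down is to accept a reading only when the ensemble score clears a conservative threshold across several consecutive frames.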
Automation of repetitive web browsing tasks with voice-enabled macros BIBAFull-Text 307-308
  Yevgen Borodin
Non-visual aural web browsing remains inefficient as compared to regular browsing with visual modalities. This paper proposes an approach for automation of repetitive browsing tasks by using personalized macros, which are easy to record and replay with a speech-enabled interface. The prototype system is implemented in the framework of the HearSay non-visual web browser.
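The record-and-replay idea above can be sketched generically; this is an illustrative toy, not HearSay's implementation, and the action and target names are hypothetical:

```python
class MacroRecorder:
    """Record a named sequence of browsing actions and replay it later."""

    def __init__(self):
        self.macros = {}        # name -> list of (action, target) pairs
        self._recording = None

    def start(self, name):
        self._recording = name
        self.macros[name] = []

    def log_action(self, action, target):
        # Called by the browser for every user action while recording.
        if self._recording is not None:
            self.macros[self._recording].append((action, target))

    def stop(self):
        self._recording = None

    def replay(self, name, perform):
        # `perform` is the browser callback that executes one action.
        for action, target in self.macros.get(name, []):
            perform(action, target)

# Hypothetical usage: record checking a webmail inbox once.
rec = MacroRecorder()
rec.start("check mail")
rec.log_action("click", "#inbox-link")
rec.log_action("read", "#first-message")
rec.stop()
```

A speech front end would then map a spoken command such as "check mail" to `rec.replay("check mail", browser.perform)`, executing the saved steps without further input.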
Lessons from an evaluation of a domestic well-being indicator system BIBAFull-Text 309-310
  Nubia M. Gil
The objective of this study was to find out whether a domestic well-being indicator system might support the dialogue of care between an older person and his or her carer. Ten evaluation sessions, each with three parts, were run with ten older people and ten carers. Participants' attitudes, feelings, perceptions, and preferences were compared, and both qualitative and quantitative data were collected. In conclusion, the prototype could be used as a tool to improve the dialogue of care between the older person and the carer.
Analysis of speech properties of neurotypicals and individuals diagnosed with autism and Down syndrome BIBAFull-Text 311-312
  Mohammed E. Hoque
Many individuals diagnosed with autism or Down syndrome have difficulties producing intelligible speech. Systematic analysis of their voice parameters could lead to a better understanding of the specific challenges they face in achieving proper speech production. In this study, 100 minutes of speech data from natural conversations between neurotypicals and individuals diagnosed with autism or Down syndrome were used. Analysis of their voice parameters yielded new findings across a variety of speech parameters. An immediate extension of this work would be to customize this technology to let participants visualize and control their speech parameters in real time and receive live feedback.
TAIG: textually accessible information graphics BIBAFull-Text 313-314
  Seniz Demir
Information graphics (such as bar charts and line graphs) are an important component of many documents. Unfortunately, these representations present serious access challenges for individuals with visual impairments. This paper describes our ongoing research on the TAIG system, part of a larger system whose long-term goal is to enable visually impaired users to gain access to the content of information graphics and thereby benefit from these valuable resources. TAIG first provides the user with a brief textual summary of the graphic, with the inferred overall message as its core content, and then responds to follow-up questions that may request further detail about the graphic.
Adapting word prediction to subject matter without topic-labeled data BIBAFull-Text 315-316
  Keith Trnka
Word prediction helps to increase communication rate when using Augmentative and Alternative Communication devices. Basic prediction systems offer predictions that are topically inappropriate for the context, so we adapt the predictions to the topic of discourse. However, previous work has relied on texts grouped into topics by humans. In contrast, we avoid this restriction by treating each document as its own topic. The results are comparable to those obtained with human-labeled topics, and the method is applicable to unlabeled text.
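The document-as-topic idea can be illustrated with a small sketch: weight each training document by its similarity to the words typed so far, then rank candidate completions by similarity-weighted frequency. This is an illustrative toy under those assumptions, not the paper's actual model:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def predict(context_words, documents, prefix="", k=3):
    """Rank completions of `prefix` by similarity-weighted frequency.

    Each document acts as its own 'topic': its word counts contribute
    in proportion to how similar it is to the current context.
    """
    ctx = Counter(context_words)
    scores = Counter()
    for doc in documents:
        bag = Counter(doc)
        sim = cosine(ctx, bag)
        for word, count in bag.items():
            scores[word] += sim * count
    candidates = [(w, s) for w, s in scores.items()
                  if w.startswith(prefix) and w not in context_words]
    candidates.sort(key=lambda ws: -ws[1])
    return [w for w, _ in candidates[:k]]

# Toy corpus: each list is one document, serving as its own topic.
docs = [["the", "cat", "sat", "on", "the", "mat"],
        ["stock", "market", "prices", "rose"],
        ["cat", "chased", "the", "mouse"]]
```

With the context `["the", "cat"]` and the typed prefix `"m"`, the cat-themed documents dominate the weighting, so `predict` ranks "mat" and "mouse" ahead of the topically unrelated "market".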
Phototacs: an image-based cell phone interface BIBAFull-Text 317-318
  Michael J. Astrauskas
As new features have been added to cellular phones, their user interfaces have become increasingly complex, making it difficult for people with cognitive or visual impairments to use them. This paper describes the development and preliminary testing of a simplified, intuitive image-based cell phone interface.