
Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies

Fullname: The 12th International ACM SIGACCESS Conference on Computers and Accessibility
Editors: Armando Barreto; Vicki L. Hanson
Location: Orlando, Florida
Dates: 2010-Oct-25 to 2010-Oct-27
Publisher: ACM
Standard No: ISBN 1-60558-881-4, 978-1-60558-881-0; ACM DL: Table of Contents; hcibib: ASSETS10
Papers: 81
Pages: 332
Links: Conference Home Page
  1. Keynote address
  2. Considering accessibility
  3. Evaluating accessibility
  4. Accessibility research in the wild
  5. Non-visual access
  6. Sign language
  7. Accessible education
  8. Communication
  9. Mobility
  10. Approaches to therapy
  11. Posters and Demonstrations
  12. ACM student research competition

Keynote address

The future of assistive technologies: a time of promise and apprehension BIBAFull-Text 1-2
  Albert M. Cook
Continual advances in the capabilities of both assistive and mainstream technologies offer great potential for meeting the needs of people with disabilities. However, there are significant potential obstacles to actually realizing the benefits of these advances for people with disabilities.

Considering accessibility

Disability studies as a source of critical inquiry for the field of assistive technology BIBAFull-Text 3-10
  Jennifer Mankoff; Gillian R. Hayes; Devva Kasnitz
Disability studies and assistive technology are two related fields that have long shared common goals -- understanding the experience of disability and identifying and addressing relevant issues. Despite these common goals, there are some important differences in what professionals in these fields consider problems, perhaps related to the lack of connection between the fields. To help bridge this gap, we review some of the key literature in disability studies. We present case studies of two research projects in assistive technology and discuss how the field of disability studies influenced that work, led us to identify new or different problems relevant to the field of assistive technology, and helped us to think in new ways about the research process and its impact on the experiences of individuals who live with disability. We also discuss how the field of disability studies has influenced our teaching and highlight some of the key publications and publication venues from which our community may want to draw more deeply in the future.
A general education course on universal access, disability, technology and society BIBAFull-Text 11-18
  Sri H. Kurniawan; Sonia Arteaga; Roberto Manduchi
This paper reports on a General Education course called "Universal Access: Disability, Technology and Society" that enables students from all majors to learn more about disability and the issues that surround it, as well as how Assistive Technology facilitates effective participation of those with disabilities in society. Guest lectures, meant to give the students different perspectives on disability, are an integral part of the course. Guest lecturers include experts in disability studies, professionals working with people with disabilities, and persons with disabilities. To gain practical knowledge, the students carry out group projects or volunteering activities that involve people with disabilities. Since its introduction in 2006, the course has always filled to capacity. A survey of 75 students conducted in Winter 2010 revealed that students felt that their knowledge about universal access and disabilities had improved significantly, and that they had become aware of accessibility in everyday life.
Towards accessible touch interfaces BIBAFull-Text 19-26
  Tiago Guerreiro; Hugo Nicolau; Joaquim Jorge; Daniel Gonçalves
Touch screen mobile devices bear the promise of endless leisure, communication, and productivity opportunities for motor-impaired people. Indeed, users with residual capacities in their upper extremities could benefit immensely from a device that makes no demands regarding strength. However, the precision required to effectively select a target without physical cues creates problems for people with limited motor abilities. Our goal is to thoroughly study mobile touch screen interfaces, their characteristics and parameterizations, thus providing the tools for informed interface design for motor-impaired users. We present an evaluation performed with 15 tetraplegic people that allowed us to understand the factors limiting user performance across a comprehensive set of interaction techniques (Tapping, Crossing, Exiting and Directional Gesturing) and parameterizations (Position, Size and Direction). Our results show that for each technique, accuracy and precision vary across different areas of the screen and directions, in a way that is directly dependent on target size. Overall, Tapping was both the preferred technique and among the most effective. This shows that it is possible to design inclusive unified interfaces for motor-impaired and able-bodied users once the correct parameterization or adaptability is assured.

Evaluating accessibility

Towards a tool for keystroke level modeling of skilled screen reading BIBAFull-Text 27-34
  Shari Trewin; Bonnie E. John; John Richards; Cal Swart; Jonathan Brezin; Rachel Bellamy; John Thomas
Designers often have no access to individuals who use screen reading software, and may have little understanding of how their design choices impact these users. We explore here whether cognitive models of auditory interaction could provide insight into screen reader usability. By comparing human data with a tool-generated model of a practiced task performed using a screen reader, we identify several requirements for such models and tools. Most important is the need to represent parallel execution of hearing with thinking and acting. Rules for placement of cognitive operators that were developed for visual user interfaces may not be applicable in the auditory domain. Other mismatches between the data and the model were attributed to the extremely fast listening rate and differences between the typing patterns of screen reader usage and the model's assumptions. This work informs the development of more accurate models of auditory interaction. Tools incorporating such models could help designers create user interfaces that are well tuned for screen reader users, without the need for modeling expertise.
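To make the parallelism point concrete, here is a minimal keystroke-level-model sketch in Python (an illustration only, with hypothetical operator times; it is not the authors' tool): under a parallel model, the time for one listen/think/act step is the maximum of the listening time and the think-plus-act time, rather than their sum.

    LISTEN_WPM = 300  # fast synthesized-speech rate; an assumed value

    def listen_time(n_words, wpm=LISTEN_WPM):
        """Seconds needed to hear n_words at the given speech rate."""
        return 60.0 * n_words / wpm

    def step_time(n_words, think=1.2, act=0.28):
        """Compare serial vs. parallel composition of one listen/think/act step."""
        serial = listen_time(n_words) + think + act        # operators in sequence
        parallel = max(listen_time(n_words), think + act)  # hearing overlaps thinking/acting
        return serial, parallel

    for words in (5, 20):
        s, p = step_time(words)
        print(f"{words:2d} words: serial {s:.2f}s, parallel {p:.2f}s")

For short utterances the serial model overestimates the step substantially (2.48 s vs. 1.48 s for five words at this rate), which is the kind of mismatch the abstract describes.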
Accessibility by demonstration: enabling end users to guide developers to web accessibility solutions BIBAFull-Text 35-42
  Jeffrey P. Bigham; Jeremy T. Brudvik; Bernie Zhang
Most web developers have not been explicitly trained to create accessible web pages, and are unlikely to recognize subtle accessibility and usability concerns that disabled people face. Evaluating web pages with assistive technology can reveal problems, but this software takes time to install and its complexity can be overwhelming. To address these problems, we introduce a new approach to accessibility evaluation called Accessibility by Demonstration (ABD). ABD lets assistive technology users retroactively record accessibility problems at the time they experience them as human-readable macros, and easily send those recordings and the software necessary to replay them to others. This paper describes an implementation of ABD as an extension to the WebAnywhere screen reader, and presents an evaluation with 15 web developers not experienced with accessibility, showing that interacting with these recordings helped them understand and fix some subtle accessibility problems better than existing tools.
Testability and validity of WCAG 2.0: the expertise effect BIBAFull-Text 43-50
  Giorgio Brajnik; Yeliz Yesilada; Simon Harper
The Web Content Accessibility Guidelines 2.0 (WCAG 2.0) require that success criteria be tested by human inspection. Testability of a WCAG 2.0 criterion is achieved if 80% of knowledgeable inspectors agree on whether the criterion has been met. In this paper we investigate the very core of WCAG 2.0: its ability to determine web content accessibility conformance. We conducted an empirical study to ascertain the testability of WCAG 2.0 success criteria when experts and non-experts evaluated four relatively complex web pages, and to measure the differences between the two groups. Further, we discuss the validity of the evaluations generated by these inspectors and look at the differences in validity due to expertise.
   In summary, our study, comprising 22 experts and 27 non-experts, shows that approximately 50% of success criteria fail to meet the 80% agreement threshold; experts produced 20% false positives and missed 32% of the true problems. We also compared the performance of experts against that of non-experts and found that agreement for the non-experts dropped by 6%, false positives reached 42%, and false negatives 49%. This suggests that in many cases WCAG 2.0 conformance cannot be tested by human inspection to a level where at least 80% of knowledgeable human evaluators would agree on the conclusion. Why experts fail to meet the 80% threshold, and what can be done to help achieve this level, are the subjects of further investigation.
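As a concrete reading of the 80% testability threshold (our sketch, not the study's analysis code): a success criterion counts as testable when at least 80% of inspectors reach the same pass/fail verdict.

    from collections import Counter

    def meets_testability(verdicts, threshold=0.8):
        """verdicts: list of 'pass'/'fail' judgments from the inspectors."""
        most_common = Counter(verdicts).most_common(1)[0][1]
        return most_common / len(verdicts) >= threshold

    # Hypothetical panel of 22 experts, 14 of whom judge the criterion met:
    print(meets_testability(["pass"] * 14 + ["fail"] * 8))  # False: 14/22 is about 64%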

Accessibility research in the wild

Field evaluation of a collaborative memory aid for persons with amnesia and their family members BIBAFull-Text 51-58
  Mike Wu; Ronald M. Baecker; Brian Richards
The loss of memory can have a profound and disabling effect on individuals. People who acquire memory impairments are often unable to live independent lives because they cannot remember what they need to do. In many cases, they rely on family members who live with them to accomplish everyday activities, such as coordinating a doctor's appointment. To design technology for persons with amnesia and their families, we involved end users in the participatory design of a collaborative memory aid called Family-Link. We evaluated Family-Link by comparing it to a commercially available calendar application. We found that participants shared significantly more events when using Family-Link. Qualitative evidence also suggests that Family-Link increased participants' awareness of family members' schedules, enabled caregivers to track the person with amnesia, leading to a greater sense of security and reduced stress, and reduced the amount of caregiver coordination effort. The paper concludes with design implications.
In-situ study of blind individuals listening to audio-visual contents BIBAFull-Text 59-66
  Claude Chapdelaine
Videodescription (VD), or audio description, is added to the sound track of audio-visual content to make media such as film and television accessible to individuals with visual impairment. VD translates the relevant visual information into auditory information. In our previous user testing, we found that the need for VD could be quite different depending on the visual disabilities of the participants. In order to better identify those differences, we conducted a study with ten legally blind individuals (with and without residual vision) to observe the type, quantity and frequency of the information they needed. We learned that the degree of residual vision and the complexity of the content have a significant impact on the required level of VD. This suggests that a tool to render VD should offer a basic level of information, allow enough flexibility to provide more VD if needed, and answer on-the-fly requests for specific information. These specifications were implemented in an accessible video player.
Understanding the challenges and opportunities for richer descriptions of stereotypical behaviors of children with ASD: a concept exploration and validation BIBAFull-Text 67-74
  Fnu Nazneen; Fatima A. Boujarwah; Shone Sadler; Amha Mogus; Gregory D. Abowd; Rosa I. Arriaga
Individuals with Autism Spectrum Disorder (ASD) often engage in stereotypical behaviors. In some individuals these behaviors occur with very high frequency and can be disruptive and at times self-injurious. We propose a system that can tacitly collect contextual data related to the individual's physiological state and their external environment, and map it to occurrences of stereotypies. A user study was conducted with children with ASD, parents, and caregivers to explore and validate this concept. A prototype of the system, developed through participatory design, was used in the study as a probe to elicit the information needs of these stakeholders, and provide a better understanding of the nuances involved in supporting those needs. Here we present the findings of this study, and four design recommendations: promoting ecological integration, addressing privacy concerns, supporting inference, and enabling customization.

Non-visual access

Designing auditory cues to enhance spoken mathematics for visually impaired users BIBAFull-Text 75-82
  Emma Murphy; Enda Bates; Dónal Fitzpatrick
Visual mathematical notation provides a succinct and unambiguous description of the structure of mathematical formulae in a manner that is difficult to replicate through the linear channels of synthesized speech and Braille. It is proposed that the use of auditory cues can enhance accessibility to mathematical material and reduce common ambiguities encountered through spoken mathematics. However, the use of additional complex hierarchies of non-speech sounds to represent the structure and scope of equations may be cognitively demanding to process. This can detract from the users' understanding of the mathematical content. In this paper, a new system is presented, which uses a mixture of non-speech auditory cues, modified speech (spearcons) and binaural spatialization to disambiguate the structure of mathematical formulae. A design study, involving an online survey with 56 users, was undertaken to evaluate an existing set of auditory cues and to brainstorm alternative ideas and solutions from users before implementing modified designs and conducting a separate controlled evaluation. It is proposed that by involving a large number of users in the creative design process, intuitive auditory cues can be implemented with the potential to enhance spoken mathematics for visually impaired users.
Evaluating a tool for improving accessibility to charts and graphs BIBAFull-Text 83-90
  Leo Ferres; Gitte Lindgaard; Livia Sumegi
We discuss factors in the design and evaluation of natural language-driven assistive technologies that generate descriptions of, and allow interaction with, graphical representations of numerical data. In particular, we provide data in favor of screen-reading technologies as a usable, useful, and cost-effective means of interacting with graphs. The data also show that when evaluations of Assistive Technologies are carried out on populations other than the target communities, certain subtleties of navigation and interaction may be lost or distorted.
A tactile windowing system for blind users BIBAFull-Text 91-98
  Denise Prescher; Gerhard Weber; Martin Spindler
Today's window systems present information in a graphical and thereby spatial manner, making the text-only access of a standard Braille device insufficient for an equivalent exploration of the data by blind users. In this paper we present the planar Braille Window System (BWS), designed for a tactile display consisting of a pin-matrix of 120 columns and 60 rows. The system is composed of six separate regions, enabling the user to receive different types of information simultaneously. The content of the main region, containing Braille windows, can be shown in various manners (text- or graphics-based) through four different views. Interaction within the Braille Window System is implemented not only through keyboard shortcuts but also through multitouch gestures, so the user is able to interact directly on the touch-sensitive display. A study conducted with eight blind users confirmed the concept of Braille windows, regions and views. In particular, gestural input for exploring details of the content offers new possibilities for interacting within a GUI.

Sign language

Modeling and synthesizing spatially inflected verbs for American sign language animations BIBAFull-Text 99-106
  Matt Huenerfauth; Pengfei Lu
Animations of American Sign Language (ASL) have accessibility benefits for many signers with lower levels of written language literacy. This paper introduces a novel method for modeling and synthesizing ASL animations based on movement data collected from native signers. This technique allows for the synthesis of animations of signs (in particular, inflecting verbs, which are frequent in ASL) whose performance is affected by the arrangement of locations in 3D space that represent entities under discussion. Mathematical models of hand movement are trained on examples of signs produced by a human animator. Animations of ASL synthesized from the model were judged to be of similar quality to animations produced by a human animator, and these animations led to higher comprehension scores (than baseline approaches limited to selecting signs from a finite dictionary) in an evaluation study conducted with 18 native signers. This novel technique is applicable to ASL or other sign languages. It can significantly increase the repertoire of generation systems and can partially automate the work of humans using scripting systems.
An evaluation of video intelligibility for novice American sign language learners on a mobile device BIBAFull-Text 107-114
  Kimberly A. Weaver; Thad Starner; Harley Hamilton
Language immersion from birth is crucial to a child's language development. However, language immersion can be particularly challenging for hearing parents of deaf children to provide as they may have to overcome many difficulties while learning sign language. We intend to create a mobile device-based system to help hearing parents learn sign language. The first step is to understand what level of detail (i.e., resolution) is necessary for novice signers to learn from video of signs. In this paper we present the results of a study designed to evaluate the ability of novices learning sign language to ascertain the details of a particular sign based on video presented on a mobile device. Four conditions were presented. Three conditions involve manipulation of video resolution (low, medium, and high). The fourth condition employs insets showing the sign handshapes along with the high resolution video. Subjects were tested on their ability to emulate the given sign over 80 signs commonly used between parents and their young children. Although participants noticed a reduction in quality in the low resolution condition, there was no significant effect of condition on ability to generate the sign. Sign difficulty had a significant correlation with ability to correctly reproduce the sign. Although the inset handshape condition did not improve the participants' ability to emulate the signs correctly, participant feedback provided insight into situations where insets would be more useful, as well as further suggestions to improve video intelligibility. Participants were able to reproduce even the most complex signs tested with relatively high accuracy.
A web-based user survey for evaluating power saving strategies for deaf users of MobileASL BIBAFull-Text 115-122
  Jessica J. Tran; Tressa W. Johnson; Joy Kim; Rafael Rodriguez; Sheri Yin; Eve A. Riskin; Richard E. Ladner; Jacob O. Wobbrock
MobileASL is a video compression project for two-way, real-time video communication on cell phones, allowing Deaf people to communicate in the language most accessible to them, American Sign Language. Unfortunately, running MobileASL quickly depletes a full battery charge in a few hours. Previous work on MobileASL investigated a method called variable frame rate (VFR) to increase the battery duration. We expand on this previous work by creating two new power saving algorithms, variable spatial resolution (VSR), and the application of both VFR and VSR. These algorithms extend the battery life by altering the temporal and/or spatial resolutions of video transmitted on MobileASL. We found that implementing only VFR extended the battery life from 284 minutes to 307 minutes; implementing only VSR extended the battery life to 306 minutes, and implementing both VFR and VSR extended the battery life to 315 minutes. We evaluated all three algorithms by creating a linguistically accessible online survey to investigate Deaf people's perceptions of video quality when these algorithms were applied. In our survey results, we found that VFR produces perceived video choppiness and VSR produces perceived video blurriness; however, a surprising finding was that when both VFR and VSR are used together, they largely ameliorate the choppiness and blurriness perceived, i.e., they each improve the use of the other. This is a useful finding because using VFR and VSR together saves the most battery life.
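Using the abstract's own figures, the relative battery-life gains are easy to check (a quick calculation, not additional data from the paper):

    baseline = 284  # minutes on a full charge with no power saving
    for name, minutes in {"VFR": 307, "VSR": 306, "VFR+VSR": 315}.items():
        print(f"{name}: +{minutes - baseline} min ({minutes / baseline - 1:.1%})")
    # VFR: +23 min (8.1%), VSR: +22 min (7.7%), VFR+VSR: +31 min (10.9%)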

Accessible education

Multiple view perspectives: improving inclusiveness and video compression in mainstream classroom recordings BIBAFull-Text 123-130
  Raja S. Kushalnagar; Anna C. Cavender; Jehan-François Pâris
Multiple View Perspectives (MVP) enables deaf and hard of hearing students to view and record multiple video views of a classroom presentation using a stand-alone solution. We show that deaf and hard of hearing students prefer multiple, focused videos over a single, high-quality video and that a compacted layout of only the most important views is preferred. We also show that this approach empowers deaf and hard of hearing students by virtue of its low cost, flexibility, and ease of use in the classroom.
Note-taker 2.0: the next step toward enabling students who are legally blind to take notes in class BIBAFull-Text 131-138
  David S. Hayden; Liqing Zhou; Michael J. Astrauskas; John A. Black, Jr.
In-class note-taking is a vital learning activity in secondary and post-secondary classrooms. The process of note-taking helps students stay focused on the instruction, forces them to cognitively process what is being presented, and better retain what has been taught, even if they never refer to their notes after the class. However, note-taking is difficult for students who have low vision or are legally blind, for two reasons. First, they are less able to see what is being presented at the front of the room, and second, they must repeatedly switch between the far-sight task of viewing the front of the room and the near-sight task of taking notes. This paper describes ongoing research aimed at developing a portable assistive device (called the Note-Taker) that a student can take to class, to assist in the process of taking notes. It describes the principles that have guided the development of the proof-of-concept Note-Taker prototype and the Note-Taker 2.0 prototype. Initial testing of those prototypes has been encouraging, but some significant problems remain to be solved. Proposed solutions are currently being implemented, and appear to be effective. If ongoing usability testing confirms their effectiveness, they will be implemented on the planned Note-Taker 3.0 prototype.
Using accessible math textbooks with students who have learning disabilities BIBAFull-Text 139-146
  Preston Lewis; Steve Noble; Neil Soiffer
Math is a subject in which most students in K-12 participate every school day. This includes students with learning disabilities, as they are equally accountable for meeting general math curriculum requirements. Project SMART provided digital versions of math textbooks modified to include MathML for use by eighth grade students with various learning disabilities. A goal of Project SMART was to determine whether these accessible digital textbooks improved student test performance as compared to control groups using the same texts in print format with a traditional oral accommodation. The study also examined the extent to which using accessible math impacted student perceptions about math abilities. Students and most teachers found the accessible digital textbooks preferable to the print versions. This was generally reflected in higher test scores as well as consistently positive responses from qualitative measures obtained from ongoing student and teacher surveys.
Relating computer tasks to existing knowledge to improve accessibility for older adults BIBAFull-Text 147-154
  Nic Hollinworth; Faustina Hwang
Routine computer tasks are often difficult for older adult computer users to learn and remember. People tend to learn new tasks by relating new concepts to existing knowledge. However, even for 'basic' computer tasks there is little, if any, existing knowledge on which older adults can base their learning. This paper investigates a custom file management interface that was designed to aid discovery and learnability by providing interface objects that are familiar to the user. A study was conducted that examined the differences between older and younger computer users undertaking routine file management tasks with the standard Windows desktop as compared with the custom interface. Results showed that older adult computer users requested help more than ten times as often as younger users when using a standard Windows/mouse configuration, made more mistakes, and also required significantly more confirmations than younger users. The custom interface showed improvements over the standard Windows/mouse configuration, with fewer confirmations and less help being required. Hence, there is potential for an interface that closely mimics the real world to improve computer accessibility for older adults, aiding self-discovery and learnability.

Communication

Click on bake to get cookies: guiding word-finding with semantic associations BIBAFull-Text 155-162
  Sonya Nikolova; Marilyn Tremaine; Perry R. Cook
It is challenging to navigate a dictionary consisting of thousands of entries in order to select appropriate words for building communication. This is particularly true for people with lexical access disorders like those present in aphasia. We make vocabulary navigation and word-finding easier by building a vocabulary network where links between words reflect human judgments of semantic relatedness. We report the results from a user study with people with aphasia that evaluated how our system (called ViVA) performs compared to a widely used vocabulary access system in which words are organized hierarchically into common categories and subcategories. The results indicate that word retrieval is significantly better with ViVA, but finding the first word to start a communication is still problematic and requires further investigation.
Are synthesized video descriptions acceptable? BIBAFull-Text 163-170
  Masatomo Kobayashi; Trisha O'Connell; Bryan Gould; Hironobu Takagi; Chieko Asakawa
We conducted a series of experiments to assess the feasibility of synthesized narrations for describing online videos. To reduce cultural bias, we included blind and low-vision adult participants from Japan and the U.S. in the main study. Our research also includes a follow-up study, conducted in Japan, to assess the effectiveness of synthesized video descriptions in realistic situations. The results showed that synthesized video descriptions were generally accepted in both countries. We also found that appropriate technology support allowed a novice describer to make effective video descriptions. Based on these results, we discuss the implications for developing a technology platform for describing online videos.
Broadening accessibility through special interests: a new approach for software customization BIBAFull-Text 171-178
  Robert R. Morris; Connor R. Kirschbaum; Rosalind W. Picard
Individuals diagnosed with autism spectrum disorder (ASD) often fixate on narrow, restricted interests. These interests can be highly motivating, but they can also create attentional myopia, preventing individuals from pursuing a broad range of activities. Interestingly, researchers have found that preferred interests can be used to help individuals with ASD branch out and participate in educational, therapeutic, or social situations they might otherwise shun. When interventions are modified such that an individual's interest is properly represented, task adherence and performance can increase. While this strategy has seen success in the research literature, it is difficult to implement on a large scale and therefore has not been widely adopted. This paper describes a software approach designed to solve this problem. The approach facilitates customization, allowing users to easily embed images of almost any special interest into computer-based interventions. Specifically, we describe an algorithm that will: (1) retrieve any image from the Google image database; (2) strip it of its background; and (3) embed it seamlessly into Flash-based computer programs. To evaluate our algorithm, we employed it in a naturalistic setting with eleven individuals (nine diagnosed with ASD and two diagnosed with other developmental disorders). We also tested its ability to retrieve and process examples of preferred interests previously reported in the ASD literature. The results indicate that our method was an easy and efficient way for users to customize our software programs. While we believe this model is uniquely suited for individuals with ASD, we also foresee this approach being useful for anyone who might like a quick and simple way to personalize software programs.

Mobility

Vi-bowling: a tactile spatial exergame for individuals with visual impairments BIBAFull-Text 179-186
  Tony Morelli; John Foley; Eelke Folmer
Lack of sight forms a significant barrier to participation in physical activity. Consequently, individuals with visual impairments are at greater risk for developing serious health problems, such as obesity. Exergames are video games that provide physical exercise. For individuals with visual impairments, exergames have the potential to reduce health disparities as they may be safer to play and can be played without the help of others. This paper presents VI Bowling, a tactile/audio exergame that can be played using an inexpensive motion-sensing controller. VI Bowling explores tactile dowsing, a novel technique for performing spatial sensorimotor challenges, which can be used for motor learning. VI Bowling was evaluated with six blind adults. All players enjoyed VI Bowling and the challenge that tactile dowsing provided. Players could throw their ball with an average error of 9.76 degrees using tactile dowsing. Participants achieved an average active energy expenditure of 4.61 kJ/min while playing VI Bowling, which is comparable to walking.
Leveraging proprioception to make mobile phones more accessible to users with visual impairments BIBAFull-Text 187-194
  Frank Chun Yat Li; David Dearman; Khai N. Truong
Accessing the advanced functions of a mobile phone is not a trivial task for users with visual impairments. They rely on screen readers and voice commands to discover and execute functions. In mobile situations, however, screen readers are not ideal because users may depend on their hearing for safety, and voice commands are difficult for a system to recognize in noisy environments. In this paper, we extend Virtual Shelves -- an interaction technique that leverages proprioception to access application shortcuts -- for visually impaired users. We measured the directional accuracy of visually impaired participants and found that they were less accurate than people with vision. We then built a functional prototype that uses an accelerometer and a gyroscope to sense its position and orientation. Finally, we evaluated the interaction and prototype by allowing participants to customize the placement of seven shortcuts within 15 regions. Participants were able to access shortcuts in their personal layout with 88.3% accuracy in an average of 1.74 seconds.
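A minimal sketch of the orientation-to-shortcut mapping (our illustration; the 5x3 grid and angular ranges below are assumptions for the sake of the example, not the paper's actual layout): device yaw and pitch, sensed by the gyroscope and accelerometer, select one of 15 regions.

    def region_for_orientation(yaw_deg, pitch_deg,
                               yaw_range=(-60.0, 60.0), pitch_range=(0.0, 60.0),
                               cols=5, rows=3):
        """Map device yaw/pitch to one of cols*rows regions, or None if out of range."""
        (y0, y1), (p0, p1) = yaw_range, pitch_range
        if not (y0 <= yaw_deg <= y1 and p0 <= pitch_deg <= p1):
            return None
        col = min(int((yaw_deg - y0) / (y1 - y0) * cols), cols - 1)
        row = min(int((pitch_deg - p0) / (p1 - p0) * rows), rows - 1)
        return row * cols + col  # region index 0..14 for a 5x3 layout

    print(region_for_orientation(0.0, 30.0))  # 7: the central region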
Autonomous navigation through the city for the blind BIBAFull-Text 195-202
  Jaime Sánchez; Natalia de la Torre
Autonomous navigation in the city has become a necessity for people with visual disabilities, as they now enjoy a higher degree of social inclusion. As such, several technological solutions seek to assist with this autonomy. In this work, we present a study on the effect of an easy-to-access, audio-based GPS software program on navigation through open spaces, and in particular on the stimulation of orientation and mobility skills in blind people. Results show that the audio-based GPS software allowed blind users to reach various destinations without the need for prior information on the environment, favoring the navigation of blind people in unfamiliar contexts, stimulating the use of different orientation and mobility skills, and providing help to users who habitually navigate city spaces only in the company of other people.

Approaches to therapy

Introducing multimodal paper-digital interfaces for speech-language therapy BIBAFull-Text 203-210
  Anne Marie Piper; Nadir Weibel; James D. Hollan
After a stroke or brain injury, it may be more difficult to understand language and communicate with others. Speech-language therapy may help an individual regain language and cope with changes in their communication abilities. Our research examines the process of speech-language therapy with an emphasis on the practices of therapists working with adults with aphasia and apraxia of speech. This paper presents findings from field work undertaken to inform the design of a mixed paper-digital interface prototype using multimodal digital pens. We describe and analyze therapists' initial reactions to the system and present two case studies of use by older adults undergoing speech-language therapy. We discuss the utility of multimodal paper-digital interfaces to assist therapy and describe our vision of a system to help therapists independently create custom interactive paper materials for their clients.
A tool to promote prolonged engagement in art therapy: design and development from arts therapist requirements BIBAFull-Text 211-218
  Jesse Hoey; Krists Zutis; Valerie Leuty; Alex Mihailidis
This paper describes the development of a tool that assists arts therapists working with older adults with dementia. Participation in creative activities is becoming accepted as a method for improving quality of life, and the tool is designed to increase the capacity of creative arts therapists to engage cognitively impaired older adults in such activities. The tool is a creative arts touch-screen interface that presents a user with activities such as painting, drawing, or collage. It was developed with a user-centered design methodology in collaboration with a group of creative arts therapists. The tool is customizable by therapists, allowing them to design and build personalized therapeutic, goal-oriented creative activities for each client. In this paper, we evaluate the acceptability of the tool to arts therapists (our primary user group). We perform this evaluation qualitatively with a set of one-on-one interviews with arts therapists who work specifically with persons with dementia, and show how their responses support the idea of a customizable assistance tool. We also evaluate the tool in simulation, showing a number of examples and demonstrating its customizable components.
Stroke therapy through motion-based games: a case study BIBAFull-Text 219-226
  Gazihan Alankus; Rachel Proffitt; Caitlin Kelleher; Jack Engsberg
In the United States alone, more than five million people are living with long-term motor impairments caused by a stroke. Video game-based therapies show promise in helping people recover lost range of motion and motor control. While researchers have demonstrated the potential utility of game-based rehabilitation through controlled studies, relatively little work has explored longer-term home-based use of therapeutic games. We conducted a six-week home study with a 62-year-old woman who was seventeen years post-stroke. She played therapeutic games for approximately one hour a day, five days a week. Over the six weeks, she recovered significant motor abilities, which is unexpected given the time since her stroke. Through observations and interviews, we present lessons learned about the barriers and opportunities that arise from long-term home-based use of therapeutic games.

Posters and Demonstrations

A low-cost, variable-amplitude haptic distributed display for persons who are blind and visually impaired BIBAFull-Text 227-228
  Patrick C. Headley; Dianne T. V. Pawluk
Previously in our lab, we developed a low-cost mouse-like device that has high position accuracy, very good temporal and spatial collocation between kinesthetic and tactile information, a fairly large temporal bandwidth and a short time delay. However, it can still be limiting when generating texture-like patterns. We have therefore extended the function of the device to be able to vary the amplitude as well, while maintaining its low cost (under $500). Various virtual textures have been developed which can be used to create salient graphics that can be perceived through this device. Preliminary investigations suggest that having multiple amplitude levels increases the number of distinguishable textures as well as the amount of information that can be displayed.
A multimodal, computer-based drawing system for persons who are blind and visually impaired BIBAFull-Text 229-230
  Patrick C. Headley; Dianne T. V. Pawluk
Many individuals who are blind or visually impaired are interested in drawing tactile pictures, and some have already done so with static raised-line drawing kits. However, static raised-line drawing kits are problematic as they are non-erasable. Various computer systems, with or without accompanying device interfaces, have been designed to give these individuals the ability to both perceive and construct their own drawings on a computer, but each has various drawbacks. A focus group was conducted to gain input from blind and visually impaired computer users on a new design. Based on these results, a new multimodal device design concept has been generated that will improve upon previous solutions and give blind and visually impaired individuals low-cost graphic editing and viewing capabilities.
Adaptive mappings for mouse-replacement interfaces BIBAFull-Text 231-232
  John J. Magee; Samuel Epstein; Eric S. Missimer; Margrit Betke
Users of mouse-replacement interfaces may have difficulty conforming to the motion requirements of their interface system. We have observed users with severe motor disabilities who controlled the mouse pointer with a head tracking interface. Our analysis shows that some users may be able to move more easily in some directions than in others. We propose several mouse pointer mappings that adapt to the user's movement abilities. These mappings take into account the user's motions in two or three dimensions to move the mouse pointer in the intended direction.
Anti-blur feedback for visually impaired users of smartphone cameras BIBAFull-Text 233-234
  Pannag R. Sanketi; James M. Coughlan
A wide range of smartphone applications are emerging that employ image processing and computer vision algorithms to interpret the contents of images acquired by the phone's built-in camera, including applications that read product barcodes and recognize a variety of documents and other objects. However, almost all of these applications are designed for normally sighted users; a major barrier for visually impaired users (who might benefit greatly from such applications) is the difficulty of taking good-quality images. To overcome this barrier, this paper focuses on reducing the incidence of motion blur, caused by camera shake and other movements, which is a common cause of poor-quality, unusable images. We propose a simple technique for detecting camera shake, using the smartphone's built-in accelerometer (i.e. tilt sensor) to alert the user in real-time to any shake, providing feedback that enables him/her to hold the camera more steadily. A preliminary experiment with a blind iPhone user demonstrates the feasibility of the approach.
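The shake-detection idea can be sketched in a few lines of Python (an illustration under assumed units and thresholds, not the authors' implementation): flag shake when the recent variance of the accelerometer magnitude exceeds a hand-tuned threshold, and cue the user until it settles.

    from collections import deque
    from math import sqrt
    from statistics import pvariance

    class ShakeDetector:
        def __init__(self, window=20, threshold=0.05):  # threshold in g^2; an assumption
            self.samples = deque(maxlen=window)
            self.threshold = threshold

        def update(self, ax, ay, az):
            """Feed one accelerometer reading (in g); return True while shaking."""
            self.samples.append(sqrt(ax * ax + ay * ay + az * az))
            if len(self.samples) < self.samples.maxlen:
                return False  # not enough history yet
            return pvariance(self.samples) > self.threshold

In use, the app would call update() at the sensor rate and play an audio or vibration cue whenever it returns True.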
Assistive web browsing with touch interfaces BIBAFull-Text 235-236
  Faisal Ahmed; Muhammad Asiful Islam; Yevgen Borodin; I. V. Ramakrishnan
This demonstration proposes a touch-based directional navigation technique for touch interfaces (e.g., iPhone, MacBook) for people with visual disabilities, especially blind individuals. Such interfaces, coupled with TTS (text-to-speech) systems, open up intriguing possibilities for browsing and skimming web content with ease and speed. Apple's seminal VoiceOver system for iOS is an exemplar of bringing touch-based web navigation to blind people, but it has two major shortcomings: the "fat finger" and "finger fatigue" problems, which we address with two proposed approaches. A preliminary user evaluation of a system incorporating these ideas suggests that they can be effective in practice.
Audio and haptic based virtual environments for orientation and mobility in people who are blind BIBAFull-Text 237-238
  Jaime Sánchez; Angelo Tadres
This study presents the development of a videogame with audio and haptic interfaces that allow for the stimulation of orientation and mobility skills in people who are blind through the use of virtual environments. Our idea was to test the hypothesis that using audio and haptic interfaces together allows for the creation of a better mental representation of a virtual environment than using either kind of interface separately. We evaluated the usability of the videogame's icons, followed by a cognitive analysis of the skills acquired through its use. Preliminary usability results show that participants correctly describe the textures and shapes used in the software.
Automatic, intuitive zooming for people who are blind or visually impaired BIBAFull-Text 239-240
  Ravi Rastogi; Dianne T. V. Pawluk
In this paper we present a novel technique of automatic, "intuitive" zooming of graphical information for individuals who are blind or visually impaired. The idea is to automatically offer the user only zoom levels with significantly different content than the last, and which preserve the cognitive grouping of information (such as whole objects or whole object parts), thereby making "intuitive" sense. The algorithm uses wavelet analysis to localize the details of a graphic. It then uses methods based on the clustering of details to decide on the levels of zoom. An initial pilot study is presented in which three individuals who are visually impaired and four who are sighted used this zooming method. Results show that all participants liked intuitive zooming over areas of detail and being prevented from zooming when no details were present. Almost all participants required zooming to perform the identification task, which had an 86% correct rate.
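One way to realize the described analysis (a rough reconstruction assuming the PyWavelets library, not the authors' code) is to measure the detail energy each wavelet level contributes and offer a zoom stop only where that contribution is substantial:

    import numpy as np
    import pywt

    def candidate_zoom_levels(image, wavelet="haar", levels=5, min_share=0.1):
        """Return wavelet levels whose detail-energy share suggests a useful zoom stop."""
        coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=levels)
        # coeffs[0] is the coarse approximation; coeffs[1:] hold per-level detail bands.
        energies = [sum(float((band ** 2).sum()) for band in detail)
                    for detail in coeffs[1:]]
        total = sum(energies) or 1.0
        return [i + 1 for i, e in enumerate(energies) if e / total >= min_share]

Levels that add little detail energy are skipped, which is one plausible way to prevent zooming where no details are present.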
Blind guidance using mobile computer vision: a usability study BIBAFull-Text 241-242
  Roberto Manduchi; Sri Kurniawan; Homayoun Bagherinia
We present a study focusing on the usability of a wayfinding and localization system for persons with visual impairment. This system uses special color markers, placed at key locations in the environment, that can be detected by a regular camera phone. Three blind participants tested the system in various indoor locations and under different system settings. Quantitative performance results are reported.
Cerebral palsy and online social networks BIBAFull-Text 243-244
  Makayla Lewis
This study qualitatively explores the experiences and challenges of people with cerebral palsy using online social networks. Fourteen interviews were carried out with participants with different types of cerebral palsy. The study identified the reasons for use and non-use, and discovered key themes together with challenges that affected the participants' experiences. For example, abrupt and frequent changes to online social networks were reported to slow down or prevent use. In spite of this, the study recognized that the technology is a vital way for these people to communicate and will continue to play a crucial role in their lives.
Color-audio encoding interface for visual substitution: see color matlab-based demo BIBAFull-Text 245-246
  Juan Diego Gomez; Guido Bologna; Thierry Pun
Providing the blind with substitute visual perception is a relentless challenge confronting researchers from diverse areas. The See ColOr (Seeing Colors with an Orchestra) system translates aimed-at regions of a color scene into 2D-spatialized sound signals represented by musical instruments. Sounds are associated with colors through an efficient quantization of the regions' HSL (Hue, Saturation, Luminosity) representation. See ColOr can be used both for exploring static images and as a mobility aid. In this report, we introduce its general framework for color-audio encoding of images, which provides visually impaired individuals with a low-cost application for color perception substitution using the sense of hearing instead of sight. Experimental tests using a demo implementation have shown the feasibility and usefulness of this approach.
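The color-to-sound encoding can be pictured with a toy quantizer in Python (the instrument bins below are illustrative guesses, not See ColOr's actual mapping): hue selects an instrument, while the other HSL components drive pitch and loudness.

    import colorsys

    INSTRUMENTS = ["oboe", "viola", "pizzicato", "flute",
                   "trumpet", "piano", "saxophone"]  # hypothetical hue bins

    def sonify_pixel(r, g, b):
        """r, g, b in 0..255 -> (instrument, pitch 0..1, loudness 0..1)."""
        h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
        idx = min(int(h * len(INSTRUMENTS)), len(INSTRUMENTS) - 1)
        return INSTRUMENTS[idx], l, s  # luminosity -> pitch, saturation -> loudness

    print(sonify_pixel(200, 30, 30))  # a saturated red falls in the first hue bin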
Comparison of methods presenting contour information to individuals who are blind or visually impaired BIBAFull-Text 247-248
  Steve D'Souza; Dianne T. V. Pawluk
Graphs and charts are frequently used in school, work and everyday living. However, traditional techniques of providing this information to individuals who are visually impaired are cumbersome and slow. Refreshable tactile displays have been developed and used to display line graphs, bar graphs and pie charts. In this paper, we investigate the use of a haptic matrix-like display that can produce multiple amplitude levels and multiple frequencies for displaying contour plots. Experiments were performed to compare performance and usability for the two different methods. According to preliminary results, the use of amplitude variation provides the user with a more accurate awareness of contour level differences than frequency variation.
Customizable keyboard BIBAFull-Text 249-250
  Eric S. Missimer; Samuel Epstein; John J. Magee; Margrit Betke
Customizable Keyboard is an on-screen keyboard designed to be flexible and expandable. Instead of giving the user a fixed keyboard layout, Customizable Keyboard allows the user to create a layout that accommodates his or her needs. It also allows the user to select from a variety of ways to interact with the keyboard, including but not limited to using the mouse pointer to select keys and different types of scan-based systems. Customizable Keyboard provides more functionality than a typical on-screen keyboard, including the ability to control infrared devices such as TVs and to send Twitter® Tweets.
Designing effective sound-based aquarium exhibit interpretation for visitors with vision impairments BIBAFull-Text 251-252
  Carrie M. Bruce; Bruce N. Walker
Sound-based exhibit interpretation at aquariums has the potential to more effectively mediate visitor-exhibit interaction and support participation for visitors with vision impairments. However, existing interpretation strategies do not adequately convey dynamic animal information to visitors with vision impairments. In an effort to improve access, we are developing research-based guidelines for sound-based exhibit interpretation including audio tours, interpretive staff presentations, and a real-time information delivery system. This poster reports on proposed and completed user-centered design activities.
Detecting objects and obstacles for visually impaired individuals using visual saliency BIBAFull-Text 253-254
  Benoît Deville; Guido Bologna; Thierry Pun
In this demo, we present the detection module of the See ColOr (Seeing Colors with an Orchestra) mobility aid for visually impaired persons. This module points out areas that present either particular interest or potential threat. In order to detect objects and obstacles, we propose a bottom-up approach based on visual saliency: objects that would attract the visual attention of a non-disabled individual are pointed out by the system as areas of interest for the user. The device uses a stereoscopic camera, a laptop, and standard headphones. Given the type of scene and/or scenario, specific feature maps are computed in order to indicate areas of interest in real-time. This demonstration shows that the module indicates objects and obstacles as accurately as a system using all available feature maps.
Development of tactile map production device and tactile map with multilingual vocal guidance function BIBAFull-Text 255-256
  Kouki Doi; Wataru Toyoda; Hiroshi Fujimoto
A tactile map is an assistive tool that helps visually impaired persons obtain spatial information. As new barrier-free laws were recently enacted in Japan, many Braille signs and tactile guide maps have been installed in various facilities. Several production methods are available for tactile guide maps. Screen printing has become increasingly popular in Japan because it offers several advantages, including the fact that tactile elements can be printed together with visual characters. However, users have one major complaint: screen printing does not produce sufficiently perceptible lines and dots on tactile maps, so its performance needs improvement. In this study, we developed a new tactile map production device as an alternative to screen printing devices. A tactile map created by our method also includes a vocal guidance function based on users' needs. We found that the finished-print performance and convenience of the tactile maps improved considerably. This study will be useful in providing a new style for future tactile maps.
EPG: speech access to program guides for people with disabilities BIBAFull-Text 257-258
  Michael Johnston; Amanda J. Stent
Over the last 10 years, in-home entertainment options have expanded dramatically. However, interfaces to listing data are still very limited. For people with visual disabilities, or those with limited hand mobility, it can be difficult or impossible to use the "guide" provided by many cable and satellite television companies. In this demo, we present the assistive technology features of AT&T's Electronic Program Guide (EPG) prototype. These features include: speech input for listing search, speech commands for browsing search results, and text-to-speech output for browsing search results. In addition, EPG uses commodity hardware and software to reduce barriers to entry.
Evaluating text descriptions of mathematical graphs BIBAFull-Text 259-260
  Yarden Moskovitch; Bruce N. Walker
One approach to making graphs more accessible has been the incorporation of natural language descriptions of graphs into multimodal assistive technologies. MathTrax is software targeted at middle and high school students that employs a Math Description Engine (MDE) [1] to produce a textual description of graphs, as well as visual and auditory representations of the graphs. Our study compared descriptions generated by the MDE to those generated by teachers of high school math, in order to better understand how to optimize the structure and content of mathematical graph descriptions. Feedback from these experts is compiled into suggestions for description templates to improve graph descriptions, as well as design recommendations for future applications.
Gesture recognition for fingerspelling applications: an approach based on sign language cheremes BIBAFull-Text 261-262
  Renata C. B. Madeo; Sarajane M. Peres; Daniel B. Dias; Clodis Boscarioli
This paper presents an approach for carrying out gesture recognition for the Brazilian Sign Language Manual Alphabet. The gestural patterns are treated as a combination of three primitives, or cheremes: hand configuration, hand orientation and hand movement. The recognizer is built in a modular architecture composed of inductive reasoning modules, which use the Fuzzy Learning Vector Quantization artificial neural network, and rule-based modules. This architecture has been tested and results are presented here. Some strengths of this approach are: robustness of recognition, portability to similar contexts, extensibility of the dataset to be recognized, and reduction of the vocabulary recognition problem to the recognition of its primitives.
I like to log: a questionnaire study towards accessible lifelogging for older users BIBAFull-Text 263-264
  Niamh Caprani; Cathal Gurrin; Noel E. O'Connor
Lifelogging is the capture and storage of everyday experiences and the act of reviewing lifelog data can significantly support episodic memory, which is particularly vulnerable to the effects of ageing. To design an accessible lifelogging application for older users we firstly need to explore what lifelogging features the application should include. We carried out a questionnaire study to investigate what lifelogging items people from different age groups are currently collecting. Also of interest to us was what items the participants would like to collect and the differences in lifelogging choices between age groups. The results from this questionnaire will contribute to the design of a lifelogging application which focuses on older users' preferences, motivations and abilities.
Image categorization for improving accessibility to information graphics BIBAFull-Text 265-266
  Jinglun Gao; Rafael E. Carrillo; Kenneth E. Barner
Information graphics are an important form of visual information in digital media. This paper investigates the accessibility issues associated with information graphics for visually impaired people. The goal is to provide them with the comprehensive numerical information contained in figures. Towards this goal, we address the practical problems of automatic figure categorization, information extraction, and multi-modal presentation. In particular, the system identifies the image class using image processing and machine learning algorithms. With knowledge of the image class, domain-specific features and information are extracted, and different modalities of presentation are employed as needed. This paper first proposes the system framework and then focuses on the automated categorization algorithm.
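A generic sketch of the categorization stage (our illustration using scikit-learn; the paper's features and classifier may differ): compute simple edge-orientation features and train a classifier to label a figure as, say, a bar chart, line graph, or pie chart.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def edge_orientation_features(img):
        """img: 2-D grayscale array -> normalized 16-bin edge-orientation histogram."""
        gy, gx = np.gradient(img.astype(float))
        hist, _ = np.histogram(np.arctan2(gy, gx), bins=16, range=(-np.pi, np.pi))
        return hist / (hist.sum() or 1)

    def train_categorizer(images, labels):
        """images: list of grayscale arrays; labels: e.g. 'bar', 'line', 'pie'."""
        X = np.array([edge_orientation_features(im) for im in images])
        return make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)

Bar charts, line graphs, and pie charts have distinctive edge-orientation statistics (axis-aligned, sloped, and radial edges respectively), which is why even this crude feature can separate the classes to a first approximation.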
Interactive SIGHT demo: textual summaries of simple bar charts BIBAFull-Text 267-268
  Seniz Demir; David Oliver; Edward Schwartz; Stephanie Elzer; Sandra Carberry; Kathleen F. McCoy
Interactive SIGHT is a system that is intended to provide people with visual impairments access to the kind of information graphics found in popular media (i.e., electronic newspapers or magazines). The majority of such graphics are intended to convey a message; the graphic designer chose the graphic and its design in order to make a point. Interactive SIGHT, which is implemented as a browser extension that works on simple bar charts, provides a brief high-level summary of the graphic through natural language text that is conveyed to the user as speech. The user may request further information about the graphic through a follow-up question facility which allows many follow-up responses to be generated. The demo will illustrate the system's methodology on several bar charts that have been preloaded along with some accompanying text.
iWalk: a lightweight navigation system for low-vision users BIBAFull-Text 269-270
  Amanda J. Stent; Shiri Azenkot; Ben Stern
Smart phones typically support a range of GPS-enabled navigation services. However, most navigation services on smart phones are of limited use to people with visual disabilities. In this paper, we present iWalk, a speech-enabled local search and navigation prototype for people with low vision. iWalk runs on smart phones. It supports speech input, and provides real-time turn-by-turn walking directions in speech and text, using distances and time-to-turn information in addition to street names so that users are not forced to read street signs. In between turns, iWalk uses non-speech cues to indicate to the user that s/he is 'on track'.
JBrick: accessible Lego Mindstorm programming tool for users who are visually impaired BIBAFull-Text 271-272
  Stephanie Ludi; Mohammed Abadi; Yuji Fujiki; Priya Sankaran; Spencer Herzberg
Despite advances in assistive technology, relatively few visually impaired students participate in computer science courses. Significant factors in this underrepresentation include lack of pre-college preparation, access to resources, and the highly visual nature of computing. This poster describes the development of a prototype that provides an accessible programming environment for the Lego Mindstorms NXT. With the popularity of robotics in both pre-college and introductory programming classes, such an environment has the potential to better accommodate students who are visually impaired. JBrick's motivation and design will be presented.
Naming practice for people with aphasia in a mobile web application: early user experience BIBAFull-Text 273-274
  Khalyle Hagood; Terrance Moore; Tiffany Pierre; Paula Messamer; Gail Ramsberger; Clayton Lewis
Bangaten is a new version of Banga [2,3], a smart phone application that supports word finding practice, a form of therapy for people with aphasia. Early user experience shows that Bangaten offers useful cross-platform operation, on both Android and iPhone devices, including remote management of a client's device. Bangaten demonstrates the growing usefulness of emerging HTML5 technology for implementing assistive technology applications, while also illustrating some remaining limitations.
Performance-based functional assessment: integrating multiple perspectives BIBAFull-Text 275-276
  Kathleen J. Price; Andrew Sears
The lack of quantifiable, reliable and repeatable methods for assessing functional capabilities of users with physical limitations creates challenges for accessibility researchers and practitioners. Current practice includes descriptors such as medical diagnoses, third-party observations, and self-assessment to characterize physical capabilities of information technology users. These solutions are inadequate due to similarities in functional capabilities between diagnoses, differences in capabilities within a diagnosis, and the potential for bias when characterizing functional capabilities. The current research examines performance-based functional assessment as an alternative to existing assessment techniques. Initial study results based on a single focus model (task efficiency) were reported earlier [1, 2]. This paper builds on that work, highlighting the benefits of integrating multiple perspectives such that both efficiency and anomalies are considered. A decision tree was produced that combines results from several performance-based functional assessment models, providing improved predictive capabilities.
Reading difficulty in adults with intellectual disabilities: analysis with a hierarchical latent trait model BIBAFull-Text 277-278
  Martin Jansche; Lijun Feng; Matt Huenerfauth
In prior work, adults with intellectual disabilities answered comprehension questions after reading texts. We apply a latent trait model to this data to infer the intrinsic difficulty of texts for the participant group. We then analyze the correlation between grade levels predicted by an automatic readability assessment tool and the inferred text difficulty.
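   In its simplest (Rasch-type) form, a latent trait model of this kind gives the probability that participant $i$ answers an item on text $j$ correctly in terms of the participant's ability $\theta_i$ and the text's intrinsic difficulty $b_j$; the exact parameterization the authors used is not stated here, so the one-parameter form below is only a representative example:
      P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{e^{\theta_i - b_j}}{1 + e^{\theta_i - b_j}}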
Sasayaki: an augmented voice-based web browsing experience BIBAFull-Text 279-280
  Shaojian Zhu; Daisuke Sato; Hironobu Takagi; Chieko Asakawa
While the usability of voice-based Web navigation has been steadily improving, it is still not as easy for users with visual impairments as it is for sighted users. One reason is that sequential voice representation can only convey a limited amount of information at a time. Another challenge comes from the fact that current voice browsers omit various visual cues such as text styles and page structures, and lack meaningful feedback about the current focus. To address these issues, we created Sasayaki, an intelligent voice-based user agent that augments the primary voice output of a voice browser with a secondary voice that whispers contextually relevant information as appropriate or in response to user requests. A prototype has been implemented as a plug-in for a voice browser. The results from a pilot study show that our Sasayaki agent is able to improve users' information search task time and their overall confidence level. We believe that our intelligent voice-based agent has great potential to enrich the Web browsing experiences of users with visual impairments.
A customized mouse for people with physical disabilities BIBAFull-Text 281-282
  Minsun Jang; Jiho Choi; Seongil Lee
In a rapidly growing information-oriented society, people with disabilities face serious inconveniences in accessing products due to increasingly complicated, technology-oriented but poorly designed devices. To address these problems, we designed a customized computer input device (a mouse) for use by physically impaired people. Users performed better with the customized mouse than with a traditional computer mouse.
The augenda: structuring the lives of autistic teenagers BIBAFull-Text 283-284
  Alain P. C. I. Hong; Sjoerd van Heugten; Tom Kooken; Nikkie Vinke; Myrtille Vromans; Suleman Shahid
In this paper we present an application that assists autistic teenagers in organizing their lives and enhances the communication between autistic teenagers and their caregivers. The application was designed in a participatory manner, with autistic teenagers and caregivers participating in all phases of development. Early results show that the application has the potential to improve the lives of autistic teenagers, not only by bringing structure to their lives but also by improving the communication channel between teenagers and their caregivers.
The design and development of an interactive aural rehabilitation therapy program BIBAFull-Text 285-286
  Najwa Musfer Al-Ghamdi; Yousef Al-Ohali
In this paper, we describe our current work in developing a computer-based aural rehabilitation tool for profoundly deaf children who have recently received cochlear implants. The software, called Rannan, is an interactive program aimed at young Arabic-speaking children. Evaluations of Rannan involved comparing different input modalities and evaluating the effectiveness of sound discrimination activities. Findings show that touch-based interaction facilitated faster responses and improved accuracy over cursor-based input modalities. Moreover, usability evaluations suggest that Rannan can be an effective bridge between clinic-based therapy and home-based aural rehabilitation.
The evaluation of visually impaired people's ability of defining the object location on touch-screen BIBAFull-Text 287-288
  Keijiro Usui; Masamitsu Takano; Yusuke Fukushima; Ikuko Eguchi Yairi
Touch-screens are now used not only in home appliances but also in many kinds of machines in public facilities. However, most visually impaired people find touch-screens difficult to use. We propose presentation methods of sound information intended to enable visually impaired people to locate objects on a touch-screen, along with a Concentration-style game designed both to improve sound localization ability and to evaluate how visually impaired people operate a touch-screen. In this paper, we present the results of an experiment conducted with five visually impaired participants, evaluating their finger movements on the touch-screen panel while using the game and their ability to locate objects.
Toward tactile authentication for blind users BIBAFull-Text 289-290
  Ravi Kuber; Shiva Sharma
This paper describes the design of an accessible authentication mechanism. The Tactile Authentication System has been adapted to enable individuals who are blind to access electronic data using their sense of touch. To enter the system, users must identify a set of pre-selected pin-based icons from a wider range presented via a tactile mouse. As information is presented underneath the user's fingertips, 'tactile passwords' are shielded from observers, thereby enhancing security from third-party attacks. Results from a pilot study showed that five participants were able to authenticate entry to the non-visual interface over the course of a two-week period. However, findings revealed that the time needed to perform this process should be reduced to improve the quality of the user experience.
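   A minimal Python sketch of the pass-icon check implied above: the user must pick out their pre-selected icons from a larger challenge set. The icon names, set sizes, and unordered-match rule are illustrative assumptions, not details of the Tactile Authentication System itself.
      # Sketch: present pass icons mixed with decoys; authenticate only if
      # the user selects exactly the pre-selected set.
      import secrets

      _rand = secrets.SystemRandom()   # OS-backed randomness for challenges

      def present_challenge(pass_icons, decoy_icons, size=9):
          """Mix the user's pass icons with random decoys into one challenge."""
          decoys = _rand.sample(decoy_icons, size - len(pass_icons))
          challenge = list(pass_icons) + decoys
          _rand.shuffle(challenge)
          return challenge

      def authenticate(selected, pass_icons):
          """Succeed only if the selection matches the pass-icon set exactly."""
          return set(selected) == set(pass_icons)

      icons = present_challenge({"wave", "grid", "ring"},
                                ["dots", "bars", "cross", "star", "tri", "arc"])
      print(authenticate(["grid", "ring", "wave"], {"wave", "grid", "ring"}))  # True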
Usability and use of SLS: caption BIBAFull-Text 291-292
  Deborah I. Fels; Martin Gerdzhev; Janice Ho; Ellen Hibbard
SLS:Caption enables deaf and hearing users to add captions to video content (including sign language content). Users are able to enter and modify text as well as adjust its font, colour, location, and background opacity. An initial user study with hearing users showed that SLS:Caption was easy to learn and use. However, users seemed reluctant to produce captions for their own video material; this was likely due to the task complexity and time required to create captions, regardless of the usability of the captioning tool.
Using the phone with a single input signal only: evaluation of 3dScan's telephone module BIBAFull-Text 293-294
  Torsten Felzer; Stephan Rinderknecht; Tsvetoslava Vateva
3dScan is a scanning-based software system that allows its user to interact with the immediate environment by intentionally contracting a single muscle of choice as the input signal. The system is targeted at persons with severe physical impairments, with the objective of improving its users' quality of life by empowering them to independently perform certain activities of daily living (ADLs), e.g., opening the door by activating a suitable switch or controlling the TV remote. The required input signals can, for example, be produced by merely raising the eyebrow ('frowning') and thus demand only a minimum of physical effort. A recent design change in 3dScan's data acquisition component made it possible to add telephony functionality to the list of supported ADLs. The poster describes 3dScan's telephone module, which is now in a revised and mature state. The usability of the newest version of the module was tested in two evaluations, the second of which involved participants from the target population; the results are presented in the poster.
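   The single-signal interaction style described above can be illustrated with a minimal scanning loop in Python; the option list, dwell time, and mocked switch are assumptions for illustration, not details of 3dScan itself.
      # Sketch: single-switch scanning -- highlight options in turn and
      # select the current one when the lone input signal fires.
      import time

      def scan_and_select(options, switch_pressed, dwell=1.5):
          """Cycle through options until the single input signal fires."""
          while True:
              for option in options:
                  print("highlighting:", option)   # stand-in for audio/visual highlight
                  deadline = time.monotonic() + dwell
                  while time.monotonic() < deadline:
                      if switch_pressed():         # e.g., an EMG 'frown' event
                          return option
                      time.sleep(0.05)

      # Fake switch that fires after roughly three seconds of polling.
      state = {"polls": 0}
      def fake_switch():
          state["polls"] += 1
          return state["polls"] > 60

      print(scan_and_select(["dial", "hang up", "redial"], fake_switch))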
V-braille: haptic braille perception using a touch-screen vibration on mobile phones BIBAFull-Text 295-296
  Chandrika Jayant; Christine Acuario; William Johnson; Janet Hollier; Richard Ladner
V-Braille is a novel way to haptically represent Braille characters on a standard mobile phone using the touch-screen and vibration. V-Braille may be suitable for deaf-blind people who rely primarily on their tactile sense. A preliminary study with deaf-blind Braille users found that, with minimal training, V-Braille can be used to read individual characters and sentences.
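   A minimal Python sketch of the idea, assuming the screen is split into six regions mirroring a braille cell and that touching a region holding a raised dot triggers vibration; the region layout follows standard braille numbering, while the letter table and vibration call are illustrative stand-ins.
      # Sketch: map a touch point to one of six braille-cell regions and
      # vibrate when that region carries a dot of the current character.
      def braille_region(x, y, width, height):
          """Return the braille dot number (1-6) under a touch point."""
          col = 0 if x < width / 2 else 1     # left/right column of the cell
          row = min(int(3 * y / height), 2)   # top, middle, or bottom row
          return 1 + row + 3 * col            # dots 1-3 left, 4-6 right

      LETTER_DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}}   # standard patterns

      def on_touch(x, y, width, height, letter, vibrate):
          if braille_region(x, y, width, height) in LETTER_DOTS[letter]:
              vibrate()                        # stand-in for the phone's vibration API

      on_touch(30, 40, 320, 480, "b", lambda: print("bzzz"))   # dot 1 -> vibrates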
Vocsyl: visualizing syllable production for children with ASD and speech delays BIBAFull-Text 297-298
  Joshua Hailpern; Karrie Karahalios; Laura DeThorne; Jim Halle
Communication disorders occur across the lifespan and encompass a wide range of conditions that interfere with individuals' abilities to hear (e.g., hearing loss), speak (e.g., voice disorders; motor speech disorders), and/or use language (e.g., specific language impairment; aphasia) to meet their communication needs. Such disorders often compromise the social, recreational, emotional, educational, and vocational aspects of an individual's life. This research examines the development and implementation of new software that facilitates multi-syllabic speech production in children with autism and speech delays. The VocSyl software package utilizes a suite of audio visualizations that represent a range of audio features in abstract form. The goal of these visualizations is to provide children with language impairments a new, persistent modality in which to experience and practice speech-language skills.
Walking in another's shoes: aphasia emulation software BIBAFull-Text 299-300
  Joshua Hailpern; Marina Danilevsky; Karrie Karahalios
Living in a world that does not understand your impairment can be frustrating and daunting. Consider how individuals would feel if their family, friends, or doctors did not understand, or were not even empathetic to, the daily struggles brought on by an acquired language disorder such as aphasia. This work seeks to shed new light on aphasia by creating an instant message client that emulates the effects of aphasia. The goal of this new system is to raise awareness, teach, and increase empathy among caregivers, family members, and doctors/therapists who work with this population on a daily basis.
What can the 'ash cloud' tell us about older adults' technology adoption BIBAFull-Text 301-302
  Lorna Gibson; Paula Forbes; Vicki Hanson
Older adults are often encouraged to try new technology by a specific motivator or 'trigger'. Recently, a surprising 'trigger' emerged -- the 'ash cloud' that caused large-scale disruption to air travel across Europe earlier in 2010. Understanding why this unexpected event managed to motivate interest in technology where other efforts have failed may provide further insight for research looking at the digitally disinterested.

ACM student research competition

A system of clothes matching for visually impaired persons BIBAFull-Text 303-304
  Shuai Yuan
Choosing clothes is a challenging task for blind people. In this paper, we propose a proof-of-concept system that matches a pair of images of different clothes by both pattern and color. Our system consists of 1) a camera and a computer that perform color detection and pattern- and color-matching; 2) speech commands for flexible control and configuration; and 3) audio feedback that reports the matching results for both color and pattern. The system can deal with clothes of uniform color without any pattern, as well as clothes with multiple colors and complex patterns. Furthermore, our method is robust to variations in illumination, clothes rotation, and clothes wrinkles. The proposed method is evaluated on two challenging databases of clothes. Experimental results demonstrate its robustness and effectiveness for clothes matching.
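   The color-matching half of such a system can be sketched in a few lines of Python using histogram intersection; the histogram resolution and match threshold below are illustrative assumptions, not the author's method.
      # Sketch: compare normalized joint RGB histograms of two clothing
      # images; an intersection score of 1.0 means identical distributions.
      import numpy as np

      def color_histogram(image_rgb, bins=8):
          """Quantize each channel and build a normalized joint histogram."""
          pixels = image_rgb.reshape(-1, 3) // (256 // bins)
          hist = np.zeros((bins, bins, bins))
          for r, g, b in pixels:
              hist[r, g, b] += 1
          return hist / pixels.shape[0]

      def colors_match(img_a, img_b, threshold=0.6):
          score = np.minimum(color_histogram(img_a),
                             color_histogram(img_b)).sum()
          return score >= threshold, score

      rng = np.random.default_rng(1)
      shirt = rng.integers(0, 256, (32, 32, 3))
      print(colors_match(shirt, shirt))   # (True, 1.0): identical images match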
Accessible indoor navigation BIBAFull-Text 305-306
  Kyle Montague
This research focuses on designing an indoor navigation application for disabled users. Outdoor navigation systems make use of GPS satellites to locate users; this same technique, however, is not reliable enough for indoor way-finding. Indoor Positioning Systems (IPS) exist but rely on complex and expensive networks. Described here is a new approach to indoor navigation, reporting on research related to the interactions and user experiences involved in locating a user within a building. Interactions are customized to suit the needs of individual users when way-finding, helping to ensure that the tool is both usable by and accessible to users of varying abilities.
Audiowiz: nearly real-time audio transcriptions BIBAFull-Text 307-308
  Samuel White
Existing automated transcription solutions filter out environmental noises and focus only on transcribing the spoken word. This leaves deaf and hard-of-hearing users with no way of learning about events that provide no spoken information, such as the sounds produced by a faulty appliance or the barked alert of a dutiful guard dog. In this paper we present AudioWiz, a mobile application that provides highly detailed audio transcriptions of both the spoken word and accompanying environmental sounds. This approach is made possible by harnessing humans to provide audio transcriptions instead of more traditional automated means. Web workers are recruited automatically in nearly real-time, as dictated by demand.
Does a sonar system make a blind maze navigation computer game more "fun"? BIBAFull-Text 309-310
  Matt Wilkerson; Amanda Koenig; James Daniel
As part of the Blind Programming Project at Southern Illinois University Edwardsville, we are investigating ways to make programming more fun for school-aged blind children. We are beginning this search by creating and empirically analyzing a number of different auditory computer games. These games are being analyzed to see whether we can make them customizable (programmable) using blind programming environments, and to see what kinds of strategies work best when designing video games in general for blind users.
   In this work, we explored the creation of a zombie-killing maze navigation game for the blind. Specifically, we were curious whether a sonar-based system would be more fun for users trying to navigate the maze than a navigation system that literally told the user which way to travel. We hypothesized that the sonar system would be more fun, as it provided a playful challenge. Unlike most auditory games for the blind, which typically use screen readers, we made high-quality voice recordings of the entire user interface for our game. Overall, results did not show that the sonar-based game was more fun; however, our game was rated so highly by users that the navigation system itself appears less important than careful user interface design and innovative, fun auditory feedback.
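   The contrast between the two feedback styles can be illustrated with a minimal Python sketch of a sonar-style cue, where distance to the nearest wall maps onto ping rate and volume instead of spoken directions; all constants are assumptions, not the game's actual tuning.
      # Sketch: nearer walls produce faster, louder pings.
      def sonar_cue(distance, max_range=10.0):
          d = max(0.0, min(distance, max_range)) / max_range   # normalize to [0, 1]
          interval = 0.1 + 0.9 * d    # seconds between pings: 0.1 near, 1.0 far
          volume = 1.0 - 0.8 * d      # full volume near, quiet far
          return interval, volume

      for dist in (0.5, 5.0, 10.0):
          print(dist, sonar_cue(dist))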
FlashDOM: interacting with flash content from the document object model BIBAFull-Text 311-312
  Kyle I. Murray
Assistive technologies are often only adapted to emerging web technologies after they are common enough to generate demand for relevant assistive technology. This paper proposes a general approach to speed up and simplify the process of making assistive technologies compatible with innovation on the web by providing hooks to the browser's standard Document Object Model that expose the capabilities of a new technology.
   In this paper, we will illustrate the utility of this model by applying it to Adobe Flash content, which, despite years of attempts to produce more accessible content, remains woefully inaccessible.
   We show how this approach enables screen readers that would normally not have permission to read Flash content to access it, and how it can enable interactivity with Flash content in a modern browser.
Head-guided wheelchair control system BIBAFull-Text 313-314
  John B. Hinkel, III
Many individuals using a wheelchair do not have the use of their hands, thereby impeding their ability to control a motorized wheelchair. Available systems that accomplish wheelchair control utilize potentially inhibitive peripheral devices. The purpose of this project was to design a system that allows individuals to control a motorized wheelchair with simple head movements. This system consists of a self-designed program, headset, and a control module that interfaces with a motorized wheelchair joystick.
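   One plausible mapping from head pose to drive commands is sketched below in Python; the axes, dead zone, and gain are illustrative assumptions rather than the project's actual design.
      # Sketch: tilt forward/back controls speed, tilt left/right controls
      # turn rate; a dead zone ignores small unintended head motion.
      def head_to_drive(pitch_deg, roll_deg, dead_zone=8.0, gain=0.04):
          def shaped(angle):
              if abs(angle) < dead_zone:
                  return 0.0
              sign = 1.0 if angle > 0 else -1.0
              return max(-1.0, min(1.0, gain * (angle - sign * dead_zone)))
          return shaped(pitch_deg), shaped(roll_deg)   # (speed, turn) in [-1, 1]

      print(head_to_drive(20.0, -3.0))   # gentle forward motion, no turn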
Helping older adults locate 'lost' cursors using FieldMouse BIBAFull-Text 315-316
  Nic Hollinworth
This paper describes how a standard optical mouse was augmented with a touch sensor inside the body of the mouse. When the mouse is released and subsequently touched, it generates a Windows message which can then be used to execute an action. Three techniques were developed using the augmented mouse (nicknamed 'FieldMouse') to help older adult computer users find the mouse cursor when it has become 'lost': placing the cursor at the center of the screen; wiggling the cursor from side to side; and displaying a flashing red ring around the cursor. The techniques were compared in a pilot study to determine which was most effective for use in a later study on cursor finding. Following the pilot, centering the cursor was chosen for the later study, since it let participants find the cursor most quickly without needing to search the entire screen.
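   The chosen centering technique can be sketched in Python, assuming a Windows machine (the Win32 calls shown are standard user32 functions) and a hypothetical callback fired by the touch sensor's Windows message:
      # Sketch: move the pointer to the middle of the primary screen when
      # the augmented mouse reports that it has been touched.
      import ctypes

      user32 = ctypes.windll.user32   # Windows only

      def center_cursor():
          width = user32.GetSystemMetrics(0)    # SM_CXSCREEN
          height = user32.GetSystemMetrics(1)   # SM_CYSCREEN
          user32.SetCursorPos(width // 2, height // 2)

      def on_mouse_touched():          # hypothetical touch-sensor handler
          center_cursor()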
Improving public transit usability for blind and deaf-blind people by connecting a braille display to a smartphone BIBAFull-Text 317-318
  Shiri Azenkot; Emily Fortuna
We conducted interviews with blind and deaf-blind people to understand how they use the public transit system. In this paper, we discuss key challenges our participants faced and present a tool we developed to alleviate these challenges. We built this tool on MoBraille, a novel framework that enables a Braille display to benefit from many features of an Android phone without knowledge of proprietary, device-specific protocols. We conducted participatory design with a deaf-blind person and describe the lessons learned about designing interfaces for deaf-blind users.
Investigating meaning in uses of assistive devices: implications of social and professional contexts BIBAFull-Text 319-320
  Kristen Shinohara
People with disabilities use assistive devices both to bridge accessibility gaps in everyday tasks and to augment inaccessible technologies, such as desktop computers. This interview study investigates how people with disabilities are affected when using assistive devices in professional and social situations. Participants were asked about different contexts of use and how people around them reacted to their devices. Key findings were that individuals experienced issues of self-consciousness and empowerment when using assistive devices, and that specific aspects of assistive device design, such as size and perceived sleekness, contributed to these feelings.
Joystick text entry with word prediction for people with motor impairments BIBAFull-Text 321-322
  Young Chol Song
Joysticks are used by people with motor impairments as an assistive device that replaces the keyboard and mouse. Most existing joystick entry methods take the form of on-screen or selection keyboards that require multiple joystick movements to enter a single character, making text input slow. We reduce the number of required joystick movements by adding word completion and next-word prediction. Evaluations show that text entry with word prediction is 30% faster than entry on a regular selection keyboard and reduces the number of movements by 50%, even for first-time users with less than 15 minutes of practice.
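   The word-completion component can be sketched in a few lines of Python: given the typed prefix, offer the most frequent completions so fewer joystick movements are needed. The tiny lexicon and ranking rule are illustrative assumptions.
      # Sketch: frequency-ranked prefix completion over a small lexicon.
      LEXICON = {"the": 500, "they": 200, "there": 120, "help": 90, "hello": 80}

      def completions(prefix, k=3):
          """Return up to k lexicon words starting with prefix, most frequent first."""
          matches = [w for w in LEXICON if w.startswith(prefix)]
          return sorted(matches, key=lambda w: -LEXICON[w])[:k]

      print(completions("the"))   # ['the', 'they', 'there']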
LocalEyes: accessible GPS and points of interest BIBAFull-Text 323-324
  Jason Behmer; Stillman Knox
Most current GPS devices are inaccessible to blind and low-vision users, and the few that are accessible tend to be expensive and highly specialized [2]. LocalEyes uses less expensive, multi-purpose smart phone technology and freely available data sources to provide an accessible GPS application that increases a user's independence and ability to explore new places alone.
Text locating in scene images for reading and navigation aids for visually impaired persons BIBAFull-Text 325-326
  Chucai Yi
Many reading assistants and navigation systems have been designed specifically for people who are blind or visually impaired, but locating text in scene images with complex backgrounds has not yet been successfully addressed. In this paper, we propose a novel method to locate scene text by combining color uniformity and high edge density. We perform structural analysis of text strings, which contain several characters in alignment. First, we calculate the edge image and repaint the corresponding edge pixels in the original image using a non-dominant color. Second, color reduction is performed with color histogram and K-means algorithms to segment the repainted image into color layers. Third, we perform edge detection and label the boundaries of both text characters and unexpected noise in each color layer. Each centroid is assigned a degree, which is the number of overlaps at the same position among color layers. Fourth, text line fitting among centroids with high degree is performed to cascade the character boundaries that belong to the same text string. The detected text string is represented by a rectangular region covering all character boundaries in its text line. Experimental results demonstrate that our algorithm is able to locate text strings with arbitrary orientations, with performance comparable to state-of-the-art algorithms.
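   The first two stages of the pipeline (edge detection, then K-means color reduction into color layers) can be sketched in Python with OpenCV; parameter values are illustrative assumptions, and the centroid-degree and line-fitting stages are omitted.
      # Sketch: compute an edge map, then segment the image into k color
      # layers via K-means; character boundaries would next be labelled
      # per layer and grouped into text lines.
      import numpy as np
      import cv2

      def color_layers(image_bgr, k=4):
          grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(grey, 100, 200)
          pixels = image_bgr.reshape(-1, 3).astype(np.float32)
          criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
          _, labels, _ = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
          label_map = labels.reshape(image_bgr.shape[:2])
          return edges, [(label_map == i).astype(np.uint8) * 255 for i in range(k)]

      img = np.full((60, 160, 3), 255, np.uint8)          # white background
      cv2.putText(img, "EXIT", (10, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2)
      edges, layers = color_layers(img)
      print(len(layers), edges.shape)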
Utterance-based systems: organization and design of AAC interfaces BIBAFull-Text 327-328
  Timothy J. Walsh
Augmentative and Alternative Communication (AAC) interfaces present many unique design challenges due to the wide variation of physical ability among AAC users. Technological aids for AAC users aim to bridge the communication gap and increase both the rate and comprehensiveness of communication [1]. Here we focus on the design of an utterance-based system developed for literate, high-functioning adults interacting in public situations. This research involves the design and development of a coherent and intuitive AAC interface for an utterance-based system, built upon theoretical evidence and observation of commercial-grade AAC interface software.
ZigAlert: a Zigbee alert for toileting training children with developmental delay in a public school setting BIBAFull-Text 329-330
  Yi-Chien Chen
Zigbee is used as an assistive technology for children with developmental delays to help them achieve the goal of going to the toilet independently. In a special education setting, under the guidance of a teacher, we used a Zigbee sensor network to assist a 9-year-old child with toilet training.