
Fifteenth Annual ACM SIGACCESS Conference on Assistive Technologies

Fullname: Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility
Location: Bellevue, Washington
Dates: 2013-Oct-21 to 2013-Oct-23
Standard No: ISBN 978-1-4503-2405-2
  1. Papers
  2. Experience reports
  3. Posters and demos
  4. Student competition posters


1. A haptic ATM interface to assist visually impaired users
  Brendan Cassidy; Gilbert Cockton; Lynne Coventry
This paper outlines the design and evaluation of a haptic interface intended to convey directions to an ATM (Automated Teller Machine) user through neither audio nor visual channels. The haptic user interface is incorporated into the keypad of an ATM test apparatus. The system adopts the well-known 'clock face' metaphor and is designed to provide haptic prompts to the user in the form of directions to the currently active device, e.g. card reader or cash dispenser. Results of an evaluation of the device indicate that users with varying levels of visual impairment are able to appropriately detect, distinguish, and act on the prompts given to them by the haptic keypad. As well as reporting on how participants performed in the evaluation, we report the results of a semi-structured interview designed to find out how acceptable participants found the technology for use on a cash machine. As a further contribution, the paper also presents observations on how participants placed their hands on the haptic device and compares this with their performance.
2. A web-based intelligibility evaluation of sign language video transmitted at low frame rates and bitrates
  Jessica J. Tran; Rafael Rodriguez; Eve A. Riskin; Jacob O. Wobbrock
Mobile sign language video conversations can become unintelligible due to high video transmission rates causing network congestion and delayed video. In an effort to understand how much sign language video quality can be sacrificed, we evaluated the perceived lower limits of intelligible sign language video transmitted at four low frame rates (1, 5, 10, and 15 frames per second [fps]) and four low fixed bitrates (15, 30, 60, and 120 kilobits per second [kbps]). We discovered an "intelligibility ceiling effect," where increasing the frame rate above 10 fps decreased perceived intelligibility, and increasing the bitrate above 60 kbps produced diminishing returns. Additional findings suggest that relaxing the recommended international video transmission rate, 25 fps at 100 kbps or higher, would still provide intelligible content while considering network resources and bandwidth consumption. As part of this work, we developed the Human Signal Intelligibility Model, a new conceptual model useful for informing evaluations of video intelligibility.
3. An empirical study of issues and barriers to mainstream video game accessibility
  John R. Porter; Julie A. Kientz
A gap between the academic human-computer interaction community and the game development industry has led to games not being as thoroughly influenced by accessibility standards as most other facets of information and communication technology. As a result, individuals with disabilities are unable to fully, if at all, engage with many commercial games. This paper presents the findings of a pair of complementary empirical studies intended to understand the current state of game accessibility in a grounded, real-world context and identify issues and barriers. The first study involved an online survey of 55 gamers with disabilities to elicit information about their play habits, experiences, and accessibility issues. The second study consisted of a series of semi-structured interviews with individuals from the game industry to better understand the place of accessibility in their design and development processes. Through quantitative and qualitative thematic analysis, we derive high-level insights from the data, such as the prevalence of assistive technology incompatibility and the value of middleware for implementing accessibility standardization. Finally, we discuss specific implications and how these insights can be used to define future work which may help to narrow the gap.
4. AphasiaWeb: a social network for individuals with aphasia
  Hannah Miller; Heather Buhr; Chris Johnson; Jerry Hoepner
With the rise of social networks like Facebook and Twitter, it might seem that our opportunity to communicate with others is limited only by our access to smart phones and computers. However, most social networks are not designed with complete accessibility in mind. In particular, these networks' chronological organization of news items, abundant feature sets, and busy presentation can make these tools unusable to individuals with aphasia, an acquired language disorder that compromises an individual's ability to speak, write, and recognize language. This is unfortunate, as one of the primary means of managing aphasia is to keep individuals in community. To counter this, we have developed AphasiaWeb, a social network designed exclusively for keeping individuals with aphasia and their friends and families connected. In this paper we describe the social network and share findings from a two-month trial program conducted with a local aphasia support group.
5. Architecture of an automated therapy tool for childhood apraxia of speech
  Avinash Parnandi; Virendra Karappa; Youngpyo Son; Mostafa Shahin; Jacqueline McKechnie; Kirrie Ballard; Beena Ahmed; Ricardo Gutierrez-Osuna
We present a multi-tier system for the remote administration of speech therapy to children with apraxia of speech. The system uses a client-server architecture model and facilitates task-oriented remote therapeutic training in both in-home and clinical settings. Namely, the system allows a speech therapist to remotely assign speech production exercises to each child through a web interface, and the child to practice these exercises on a mobile device. The mobile app records the child's utterances and streams them to a back-end server for automated scoring by a speech-analysis engine. The therapist can then review the individual recordings and the automated scores through a web interface, provide feedback to the child, and adapt the training program as needed. We validated the system through a pilot study with children diagnosed with apraxia of speech, and their parents and speech therapists. Here we describe the overall client-server architecture, middleware tools used to build the system, the speech-analysis tools for automatic scoring of recorded utterances, and results from the pilot study. Our results support the feasibility of the system as a complement to traditional face-to-face therapy through the use of mobile tools and automated speech analysis algorithms.
6. Audio-visual speech understanding in simulated telephony applications by individuals with hearing loss
  Linda Kozma-Spytek; Paula Tucker; Christian Vogler
We present a study into the effects of the addition of a video channel, video frame rate, and audio-video synchrony, on the ability of people with hearing loss to understand spoken language during video telephone conversations. Analysis indicates that higher frame rates result in a significant improvement in speech understanding, even when audio and video are not perfectly synchronized. At lower frame rates, audio-video synchrony is critical: if the audio is perceived 100 ms ahead of video, understanding drops significantly; if on the other hand the audio is perceived 100 ms behind video, understanding does not degrade versus perfect audio-video synchrony. These findings are validated in extensive statistical analysis over two within-subjects experiments with 24 and 22 participants, respectively.
7. Bypassing lists: accelerating screen-reader fact-finding with guided tours
  Tao Yang; Prathik Gadde; Robert Morse; Davide Bolchini
Navigating back and forth from a list of links (index) to its target pages is common on the web, but tethers screen-reader users to unnecessary cognitive and mechanical steps. This problem worsens when indexes lack information scent: cues that enable users to select a link with confidence during fact-finding. This paper investigates how blind users who navigate the web with screen-readers can bypass a scentless index with guided tours: a much simpler browsing pattern that linearly concatenates items of a collection. In a controlled study (N=11) at the Indiana School for the Blind and Visually Impaired (ISBVI), guided tours lowered users' cognitive effort and significantly decreased time-on-task and number of pages visited when compared to an index with poor information scent. Our findings suggest that designers can supplement indexes with guided tours to benefit screen-reader users in a variety of web navigation contexts.
8. Communication and coordination for institutional dementia caregiving in China
  Claire Barco; Koji Yatani; Yuanye Ma; Joyojeet Pal
With a general trend worldwide towards greater life expectancies, interventions and tools that can help caregivers working in elder care are becoming increasingly important. In China, with a greater number and proportion of elders due to the long-term effects of the one-child policy, these interventions and tools are needed even more. Improved communication between care staff of an institutional home can reduce medical errors and improve coordination of care. At the same time, increased conversation with elders with cognitive impairments like dementia or Alzheimer's can help the elder to maintain their cognitive ability, and can reduce negative feelings like loneliness. Our qualitative study with eleven institutional caregivers in Beijing delved into the communication patterns that exist between caregivers and elders with dementia. We found that knowing more about each individual resident's disposition and personal history was helpful in maintaining quality care, that many care staff in China use placating talk as a means to calm or guide elders to a desired action, and that care staff found the topic of past careers or past 'glories' to be the most efficient in getting elders to chat. In addition, we also found that much of the information that is gleaned through working with an elder long-term is not recorded or shared in any official capacity with other care workers, an area where technology could be particularly helpful.
9. Comparing native signers' perception of American Sign Language animations and videos via eye tracking
  Hernisa Kacorri; Allen Harper; Matt Huenerfauth
Animations of American Sign Language (ASL) have accessibility benefits for signers with lower written-language literacy. Our lab has conducted prior evaluations of synthesized ASL animations: asking native signers to watch different versions of animations and answer comprehension and subjective questions about them. Seeking an alternative method of measuring users' reactions to animations, we are now investigating the use of eye tracking to understand how users perceive our stimuli. This study quantifies how the eye gaze of native signers varies when they view videos of a human ASL signer or synthesized animations of ASL (of different levels of quality). We found that, when viewing videos, signers spend more time looking at the face and less frequently move their gaze between the face and body of the signer. We also found correlations between these two eye-tracking metrics and participants' responses to subjective evaluations of animation quality. This paper provides methodological guidance for how to design user studies evaluating sign language animations that include eye tracking, and it suggests how certain eye-tracking metrics could be used as an alternative or complementary form of measurement in evaluation studies of sign language animation.
10. Do you see what I see?: designing a sensory substitution device to access non-verbal modes of communication
  M. Iftekhar Tanveer; A. S. M. Iftekhar Anam; Mohammed Yeasin; Majid Khan
The inability to access non-verbal cues is a setback for people who are blind or visually impaired. A visual-to-auditory Sensory Substitution Device (SSD) may help improve the quality of their lives by transforming visual cues into auditory cues. In this paper, we describe the design and development of a robust and real-time SSD called iFEPS -- improved Facial Expression Perception through Sound. The implementation of iFEPS evolved over time through a participatory design process. We conducted both subjective and objective experiments to quantify the usability of the system. Evaluation with 14 subjects (7 blind + 7 blind-folded) shows that the users were able to perceive the facial expressions most of the time. In addition, the overall subjective usability of the system scored 4.02 on a 5-point Likert scale.
11. Exploring the use of speech input by blind people on mobile devices
  Shiri Azenkot; Nicole B. Lee
Much recent work has explored the challenge of nonvisual text entry on mobile devices. While researchers have attempted to solve this problem with gestures, we explore a different modality: speech. We conducted a survey with 169 blind and sighted participants to investigate how often, what for, and why blind people used speech for input on their mobile devices. We found that blind people used speech more often and input longer messages than sighted people. We then conducted a study with 8 blind people to observe how they used speech input on an iPod compared with the on-screen keyboard with VoiceOver. We found that speech was nearly 5 times as fast as the keyboard. While participants were mostly satisfied with speech input, editing recognition errors was frustrating. Participants spent an average of 80.3% of their time editing. Finally, we propose challenges for future work, including more efficient eyes-free editing and better error detection methods for reviewing text.
12. Eyes-free yoga: an exergame using depth cameras for blind & low vision exercise
  Kyle Rector; Cynthia L. Bennett; Julie A. Kientz
People who are blind or low vision may have a harder time participating in exercise classes due to inaccessibility, travel difficulties, or lack of experience. Exergames can encourage exercise at home and help lower the barrier to trying new activities, but there are often accessibility issues since they rely on visual feedback to help align body positions. To address this, we developed Eyes-Free Yoga, an exergame using the Microsoft Kinect that acts as a yoga instructor, teaches six yoga poses, and has customized auditory-only feedback based on skeletal tracking. We ran a controlled study with 16 people who are blind or low vision to evaluate the feasibility and feedback of Eyes-Free Yoga. We found participants enjoyed the game, and the extra auditory feedback helped their understanding of each pose. The findings of this work have implications for improving auditory-only feedback and on the design of exergames using depth cameras.
13. Follow that sound: using sonification and corrective verbal feedback to teach touchscreen gestures
  Uran Oh; Shaun K. Kane; Leah Findlater
While sighted users may learn to perform touchscreen gestures through observation (e.g., of other users or video tutorials), such mechanisms are inaccessible for users with visual impairments. As a result, learning to perform gestures can be challenging. We propose and evaluate two techniques to teach touchscreen gestures to users with visual impairments: (1) corrective verbal feedback using text-to-speech and automatic analysis of the user's drawn gesture; (2) gesture sonification to generate sound based on finger touches, creating an audio representation of a gesture. To refine and evaluate the techniques, we conducted two controlled lab studies. The first study, with 12 sighted participants, compared parameters for sonifying gestures in an eyes-free scenario and identified pitch + stereo panning as the best combination. In the second study, 6 blind and low-vision participants completed gesture replication tasks with the two feedback techniques. Subjective data and preliminary performance findings indicate that the techniques offer complementary advantages.
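The abstract identifies pitch + stereo panning as the best-performing sonification combination. A minimal sketch of one possible mapping from a touch point to pitch and pan is shown below; the function name, frequency range, and pan law are illustrative assumptions, not the authors' implementation:

```python
import math

def sonify_touch(x, y, width, height, f_min=220.0, f_max=880.0):
    """Map a touchscreen point to (frequency_hz, left_gain, right_gain).

    Vertical position controls pitch (higher on screen -> higher pitch);
    horizontal position controls stereo panning via a constant-power law.
    All parameter names and ranges are illustrative assumptions.
    """
    # Normalize coordinates to [0, 1]; y grows downward on touchscreens.
    u = min(max(x / width, 0.0), 1.0)
    v = 1.0 - min(max(y / height, 0.0), 1.0)
    # Exponential pitch mapping sounds perceptually more uniform than linear.
    freq = f_min * (f_max / f_min) ** v
    # Constant-power pan: left/right gains trace a quarter circle.
    theta = u * math.pi / 2
    return freq, math.cos(theta), math.sin(theta)
```

Feeding the resulting frequency and gains to a tone generator as the finger moves would produce a continuous audio trace of the drawn gesture, which is the essence of the sonification technique described above.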
14. Good fonts for dyslexia
  Luz Rello; Ricardo Baeza-Yates
Around 10% of people have dyslexia, a neurological disability that impairs a person's ability to read and write. There is evidence that the presentation of the text has a significant effect on a text's accessibility for people with dyslexia. However, to the best of our knowledge, there are no experiments that objectively measure the impact of the font type on reading performance. In this paper, we present the first experiment that uses eye-tracking to measure the effect of font type on reading speed. Using a within-subject design, 48 subjects with dyslexia read 12 texts with 12 different fonts. Sans serif, monospaced, and roman font styles significantly improved the reading performance over serif, proportional, and italic fonts. On the basis of our results, we present a set of more accessible fonts for people with dyslexia.
15. Improved inference and autotyping in EEG-based BCI typing systems
  Andrew Fowler; Brian Roark; Umut Orhan; Deniz Erdogmus; Melanie Fried-Oken
The RSVP Keyboard™ is a brain-computer interface (BCI)-based typing system for people with severe physical disabilities, specifically those with locked-in syndrome (LIS). It uses signals from an electroencephalogram (EEG) combined with information from an n-gram language model to select letters to be typed. One characteristic of the system as currently configured is that it does not keep track of past EEG observations, i.e., observations of user intent made while the user was in a different part of a typed message. We present a principled approach for taking all past observations into account, and show that this method results in a 20% increase in simulated typing speed under a variety of conditions on realistic stimuli. We also show that this method allows for a principled and improved estimate of the probability of the backspace symbol, by which mis-typed symbols are corrected. Finally, we demonstrate the utility of automatically typing likely letters in certain contexts, a technique that achieves increased typing speed under our new method, though not under the baseline approach.
16. Improving public transit accessibility for blind riders by crowdsourcing bus stop landmark locations with Google street view
  Kotaro Hara; Shiri Azenkot; Megan Campbell; Cynthia L. Bennett; Vicki Le; Sean Pannella; Robert Moore; Kelly Minckler; Rochelle H. Ng; Jon E. Froehlich
Low-vision and blind bus riders often rely on known physical landmarks to help locate and verify bus stop locations (e.g., by searching for a shelter, bench, newspaper bin). However, there are currently few, if any, methods to determine this information a priori via computational tools or services. In this paper, we introduce and evaluate a new scalable method for collecting bus stop location and landmark descriptions by combining online crowdsourcing and Google Street View (GSV). We conduct and report on three studies in particular: (i) a formative interview study of 18 people with visual impairments to inform the design of our crowdsourcing tool; (ii) a comparative study examining differences between physical bus stop audit data and audits conducted virtually with GSV; and (iii) an online study of 153 crowd workers on Amazon Mechanical Turk to examine the feasibility of crowdsourcing bus stop audits using our custom tool with GSV. Our findings reemphasize the importance of landmarks in non-visual navigation, demonstrate that GSV is a viable bus stop audit dataset, and show that minimally trained crowd workers can find and identify bus stop landmarks with 82.5% accuracy across 150 bus stop locations (87.3% with simple quality control).
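The abstract reports that simple quality control raised crowd accuracy from 82.5% to 87.3%. One common quality-control scheme for redundant crowd labels is majority voting with an agreement threshold; the abstract does not specify the paper's exact method, so the sketch below is an illustrative stand-in:

```python
from collections import Counter

def aggregate_landmark_labels(worker_labels, min_agreement=0.5):
    """Majority-vote aggregation of redundant crowd labels for one bus stop.

    Returns the winning label, or None when inter-worker agreement falls
    below the threshold (flagging the stop for re-labeling). This is a
    generic quality-control rule, not the paper's specific scheme.
    """
    counts = Counter(worker_labels)
    label, votes = counts.most_common(1)[0]
    if votes / len(worker_labels) < min_agreement:
        return None
    return label
```

In a pipeline like the one described above, stops whose labels fail the agreement check could be reissued to additional workers rather than published to end users.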
17. IncluCity: using contextual cues to raise awareness on environmental accessibility
  Jorge Goncalves; Vassilis Kostakos; Simo Hosio; Evangelos Karapanos; Olga Lyra
Awareness campaigns that highlight the accessibility barriers affecting people with disabilities face an important challenge. They often describe the environmental features that pose accessibility barriers out of context, and as a result the public cannot relate to the problems at hand. In this paper we demonstrate that contextual cues can enhance people's perception and understanding of accessibility. We describe a two-week study where our participants submitted reports of inaccessible spots all over the city through a web application. Using a 2x2 factorial design we contrast the impact of two types of contextual cues, visual cues (i.e., displaying a picture of the inaccessible spot) and location cues (i.e., the ability to zoom in on the exact location). We measure participants' perceptions of accessibility and how they are challenged to consider their own limitations and barriers that may also affect themselves in certain circumstances. Our results suggest that visual cues led to a greater sense of urgency while also improving participants' attitude towards disability.
18. Answering visual questions with conversational crowd assistants
  Walter S. Lasecki; Phyo Thiha; Yu Zhong; Erin Brady; Jeffrey P. Bigham
Blind people face a range of accessibility challenges in their everyday lives, from reading the text on a package of food to traveling independently in a new place. Answering general questions about one's visual surroundings remains well beyond the capabilities of fully automated systems, but recent systems are showing the potential of engaging on-demand human workers (the crowd) to answer visual questions. The input to such systems has generally been a single image, which can limit the interaction with a worker to one question; or video streams where systems have paired the end user with a single worker, limiting the benefits of the crowd. In this paper, we introduce Chorus:View, a system that assists users over the course of longer interactions by engaging workers in a continuous conversation with the user about a video stream from the user's mobile device. We demonstrate the benefit of using multiple crowd workers instead of just one in terms of both latency and accuracy, then conduct a study with 10 blind users that shows Chorus:View answers common visual questions more quickly and accurately than existing approaches. We conclude with a discussion of users' feedback and potential future work on interactive crowd support of blind users.
19. Physical accessibility of touchscreen smartphones
  Shari Trewin; Cal Swart; Donna Pettick
This paper examines the use of touchscreen smartphones, focusing on physical access. Using interviews and observations, we found that participants with dexterity impairment considered a smartphone both useful and usable, but tablet devices offer several important advantages. Cost is a major barrier to adoption. We describe usability problems that are not addressed by existing accessibility options, and observe that the dexterity demands of important accessibility features made them unusable for many participants. Despite participants' enthusiasm for both smartphones and tablet devices, their potential is not yet fully realized for this population.
20. Real time object scanning using a mobile phone and cloud-based visual search engine
  Yu Zhong; Pierre J. Garrigues; Jeffrey P. Bigham
Computer vision and human-powered services can provide blind people access to visual information in the world around them, but their efficacy is dependent on high-quality photo inputs. Blind people often have difficulty capturing the information necessary for these applications to work because they cannot see what they are taking a picture of. In this paper, we present Scan Search, a mobile application that offers a new way for blind people to take high-quality photos to support recognition tasks. To support real-time scanning of objects, we developed a key frame extraction algorithm that automatically retrieves high-quality frames from the continuous camera video stream of a mobile phone. Those key frames are streamed to a cloud-based recognition engine that identifies the most significant object inside the picture. This way, blind users can scan for objects of interest and hear potential results in real time. We also present a study exploring the tradeoffs in how many photos are sent, and conduct a user study with 8 blind participants that compares Scan Search with a standard photo-snapping interface. Our results show that Scan Search allows users to capture objects of interest more efficiently and is preferred by users to the standard interface.
21. Safe walking technology for people with dementia: what do they want?
  Kristine Holbø; Silje Bøthun; Yngve Dahl
This paper presents an attempt to understand how safe walking technology can be designed to fit the needs of people with dementia. Taking inspiration from modern dementia care philosophy, and its emphasis on the individual with dementia, we have performed in-depth investigations of three persons' experiences of living with early-stage dementia. From interviews and co-design workshops with them and their family caregivers, we identified several factors that influence people with dementia's attitudes toward safe walking technology, and how they want the technology to assist them. Relevant factors include: the desire for control and self-management, the subjective experiences of symptoms, personal routines and skills, empathy for caregivers, and the local environment in which they live. Based on these findings, we argue there is a need to reconsider "surveillance" as a concept on which to base the design of safe walking technology. We also discuss implications for design ethics.
22. Touchplates: low-cost tactile overlays for visually impaired touch screen users
  Shaun K. Kane; Meredith Ringel Morris; Jacob O. Wobbrock
Adding tactile feedback to touch screens can improve their accessibility to blind users, but prior approaches to integrating tactile feedback with touch screens have either offered limited functionality or required extensive (and typically expensive) customization of the hardware. We introduce touchplates, carefully designed tactile guides that provide tactile feedback for touch screens in the form of physical guides that are overlaid on the screen and recognized by the underlying application. Unlike prior approaches to integrating tactile feedback with touch screens, touchplates are implemented with simple plastics and use standard touch screen software, making them versatile and inexpensive. Touchplates may be customized to suit individual users and applications, and may be produced on a laser cutter, 3D printer, or made by hand. We describe the design and implementation of touchplates, a "starter kit" of touchplates, and feedback from a formative evaluation with 9 people with visual impairments. Touchplates provide a low-cost, adaptable, and accessible method of adding tactile feedback to touch screen interfaces.
23. UbiBraille: designing and evaluating a vibrotactile Braille-reading device
  Hugo Nicolau; João Guerreiro; Tiago Guerreiro; Luís Carriço
Blind people typically resort to audio feedback to access information on electronic devices. However, this modality is not always an appropriate form of output. Novel approaches that allow for private and inconspicuous interaction are paramount. In this paper, we present a vibrotactile reading device that leverages the users' Braille knowledge to read textual information. UbiBraille consists of six vibrotactile actuators that are used to code a Braille cell and communicate single characters. The device is able to simultaneously actuate the users' index, middle, and ring fingers of both hands, providing fast and mnemonic output. We conducted two user studies on UbiBraille to assess both character and word reading performance. Character recognition rates ranged from 54% to 100% and were highly character- and user-dependent. Indeed, participants with greater expertise in Braille reading/writing were able to take advantage of this knowledge and achieve higher accuracy rates. Regarding word reading performance, we investigated four different vibrotactile timing conditions. Participants were able to read entire words and obtained recognition rates up to 93%, with the most proficient ones being able to achieve a rate of 1 character per second.
24. Uncovering information needs for independent spatial learning for users who are visually impaired
  Nikola Banovic; Rachel L. Franz; Khai N. Truong; Jennifer Mankoff; Anind K. Dey
Sighted individuals often develop significant knowledge about their environment through what they can visually observe. In contrast, individuals who are visually impaired mostly acquire such knowledge about their environment through information that is explicitly related to them. This paper examines the practices that visually impaired individuals use to learn about their environments and the associated challenges. In the first of our two studies, we uncover four types of information needed to master and navigate the environment. We detail how individuals' context impacts their ability to learn this information, and outline requirements for independent spatial learning. In a second study, we explore how individuals learn about places and activities in their environment. Our findings show that users not only learn information to satisfy their immediate needs, but also to enable future opportunities -- something existing technologies do not fully support. From these findings, we discuss future research and design opportunities to assist the visually impaired in independent spatial learning.
25. Visual complexity, player experience, performance and physical exertion in motion-based games for older adults
  Jan Smeddinck; Kathrin M. Gerling; Saranat Tiemkeo
Motion-based video games can have a variety of benefits for the players and are increasingly applied in physical therapy, rehabilitation and prevention for older adults. However, little is known about how this audience experiences playing such games, how the player experience affects the way older adults interact with motion-based games, and how this can relate to therapy goals. In our work, we decompose the player experience of older adults engaging with motion-based games, focusing on the effects of manipulations of the game representation through the visual channel (visual complexity), since it is the primary interaction modality of most games and since vision impairments are common amongst older adults. We examine the effects of different levels of visual complexity on player experience, performance, and exertion in a study with fifteen participants. Our results show that visual complexity affects the way games are perceived in two ways: First, while older adults do have preferences in terms of visual complexity of video games, notable effects were only measurable following drastic variations. Second, perceived exertion shifts depending on the degree of visual complexity. These findings can help inform the design of motion-based games for therapy and rehabilitation for older adults.
26. What health topics older adults want to track: a participatory design study
  Jennifer L. Davidson; Carlos Jensen
Older adults are increasingly savvy consumers of smartphone-based health solutions and information. These technologies may enable older adults to age in place more successfully. However, many app creators fail to do needs assessments of their end-users. To rectify this issue, we involved older adults (aged 65+) in the beginning stages of designing a mobile health and wellness application. We conducted a participatory design study, where 5 groups of older adults created 5 designs. Four groups identified at least 1 health metric not currently offered in either the iPhone app store or the Google Play store. At the end of the sessions we administered a questionnaire to determine what health topics participants would like to track via smartphone or tablet. The designs included 13 health topics that were not on the questionnaire. Seventeen of eighteen participants expressed interest in tracking health metrics using a smartphone/tablet despite having little experience with these devices. This shows that older adults have unique ideas that are not being considered by current technology designers. We conclude with recommendations for future development, and propose continuing to involve older adults in participatory design.
Wheelchair-based game design for older adults BIBAFull-Text 27
  Kathrin M. Gerling; Regan L. Mandryk; Michael R. Kalyn
Few leisure activities are accessible to institutionalized older adults using wheelchairs; in consequence, they experience lower levels of perceived health than able-bodied peers. Video games have been shown to be an engaging leisure activity for older adults. In our work, we address the design of wheelchair-accessible motion-based games. We present KINECTWheels, a toolkit designed to integrate wheelchair movements into motion-based games, and Cupcake Heaven, a wheelchair-based video game designed for older adults using wheelchairs. Results of two studies show that KINECTWheels can be applied to make motion-based games wheelchair-accessible, and that wheelchair-based games engage older adults. Through the application of the wheelchair as an enabling technology in play, our work has the potential of encouraging older adults to develop a positive relationship with their wheelchair.
"Pray before you step out": describing personal and situational blind navigation behaviors BIBAFull-Text 28
  Michele A. Williams; Amy Hurst; Shaun K. Kane
Personal navigation tools have greatly impacted the lives of people with vision impairments. As people with vision impairments often have different requirements for technology, it is important to understand users' ever-changing needs. We conducted a formative study exploring how people with vision impairments used technology to support navigation. Our findings from interviews with 30 adults with vision impairments included insights about experiences in Orientation & Mobility (O&M) training, everyday navigation challenges, helpful and unhelpful technologies, and the role of social interactions while navigating. We produced a set of categorical data that future technologists can use to identify user requirements and usage scenarios. These categories consist of Personality and Scenario attributes describing navigation behaviors of people with vision impairments. We demonstrate the usefulness of these attributes by introducing navigation-style personas backed by our data. This work demonstrates the complex choices individuals with vision impairments undergo when leaving their home, and the many factors that affect their navigation behavior.

Experience reports

How someone with a neuromuscular disease experiences operating a PC (and how to successfully counteract that) BIBAFull-Text 29
  Torsten Felzer; Stephan Rinderknecht
This paper describes the experiences of the first author, who was diagnosed with the neuromuscular disease Friedreich's Ataxia more than 25 years ago, with an innovative approach to human-computer interaction embodied in the software tool OnScreenDualScribe. Originally developed by (and for!) the first author, the tool replaces the standard input devices -- i.e., keyboard and mouse -- with a small numerical keypad, making optimal use of his abilities. The paper illustrates some of the difficulties the first author faces when operating a computer due to considerable motor problems, describes what he tried in the past, and explains why OnScreenDualScribe, which offers various assistive techniques -- including word prediction, an ambiguous keyboard, and stepwise pointing operations -- is indeed a viable alternative. The ultimate goal is to help not only one single person, but to make the system -- which does not accelerate entry very much, but clearly reduces the required effort -- available to anyone with similar conditions.
Mixed local and remote participation in teleconferences from a deaf and hard of hearing perspective BIBAFull-Text 30
  Christian Vogler; Paula Tucker; Norman Williams
In this experience report we describe the accessibility challenges that deaf and hard of hearing committee members faced while collaborating with a larger group of hearing committee members over a period of 2½ years. We explain what some recurring problems are, how audio-only conferences fall short even when relay services and interpreters are available, and how we devised a videoconferencing setup using FuzeMeeting to minimize the accessibility barriers. We also describe some best practices, as well as lessons learned, and pitfalls to avoid in deploying this type of setup.

Posters and demos

A collection of conversational AAC-like communications BIBAFull-Text 31
  Keith Vertanen
We contribute a public test set of everyday conversational communications. The communications were written in response to ten hypothetical situations given to workers on the crowdsourcing site Amazon Mechanical Turk. After quality control, our public dataset consists of 1,506 unique communications. These communications can be used to help design and evaluate text-based predictive communication aids. The collection also provides a common public test set for research into predictive conversational text entry.
A convenient heuristic model for understanding assistive technology adoption BIBAFull-Text 32
  Katherine Deibel
This short paper presents a generalized heuristic model for understanding various factors that influence the adoption and usage of an assistive technology.
A path-guided audio based indoor navigation system for persons with visual impairment BIBAFull-Text 33
  Dhruv Jain; Akhil Jain; Rohan Paul; Akhila Komarika; M. Balakrishnan
Independent path-based mobility in an unfamiliar indoor environment is a common problem faced by the visually impaired community. We present the design of an infrared-based active wayfinding system for visually impaired users. Our proposed system downloads the floor plan of the building, locates and tracks the user inside the building, finds the shortest path, and provides step-by-step directions to the destination using voice messages. The audio instructions include active guidance for impending turns along the path of travel, the distance of each section between turns, obstacle warnings, and position-correction messages when the user gets lost. Results from a needs-finding study with visually impaired individuals informed the design of the system. We then deployed the system in a building and field-tested it with users in a standardized before-and-after study. The results demonstrate that the system is usable and useful.
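The abstract does not specify the routing algorithm; a minimal sketch of the "shortest path plus step-by-step voice directions" idea, assuming a hypothetical floor plan modeled as a weighted graph of landmarks (node names and distances are invented for illustration), might look like:

```python
import heapq

def shortest_path(graph, start, goal):
    # Dijkstra's algorithm over a weighted adjacency dict.
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk predecessors back from the goal to recover the route.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Hypothetical floor plan: nodes are landmarks, weights are metres.
floor = {
    "entrance": {"lobby": 5},
    "lobby": {"entrance": 5, "elevator": 8, "cafeteria": 12},
    "elevator": {"lobby": 8, "room101": 6},
    "cafeteria": {"lobby": 12},
    "room101": {"elevator": 6},
}

route = shortest_path(floor, "entrance", "room101")
# Turn the route into simple spoken-style instructions for TTS output.
instructions = [
    f"Walk {floor[a][b]} metres from {a} to {b}." for a, b in zip(route, route[1:])
]
```

The deployed system would additionally need live position tracking (here, the infrared beacons) to trigger each instruction and the position-correction messages.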
A system helping blind people to get character information in their surrounding environment BIBAFull-Text 34
  Noboru Ohnishi; Tetsuya Matsumoto; Hiroaki Kudo; Yoshinori Takeuchi
We propose a system helping blind people to get character information in their surrounding environment, such as merchandise information (name, price, and best-before/use-by date) and restaurant menu (name and price). The system consists of a computer, a wireless camera/scanner and an earphone. It processes images captured/scanned by a user and extracts character regions in the image by using Support Vector Machine (SVM). Applying Optical Character Recognition (OCR) to the extracted regions, the system outputs the character information as synthesized speech.
Accessibility in smartphone applications: what do we learn from reviews? BIBAFull-Text 35
  Asm Iftekhar Anam; Mohammed Yeasin
We explored the efficacy of smartphone app reviews for understanding user experience reports that may facilitate ranking and provide insight into accessibility gaps. The main goal was to analyze the contents of the reviews to infer the presence and polarity of accessibility information. In particular, we focused on applications used by people who are blind or visually impaired. In this pilot study, the contents of 173 reviews from 25 applications were analyzed. The proposed system automatically detects accessibility information in the reviews and also determines its polarity. Such a system would be useful for ranking applications based on accessibility features and for improving users' interaction experiences.
An indoor human behavior gathering system toward future support for visually impaired people BIBAFull-Text 36
  Ryotaro Okada; Ikuko Yairi
Visually impaired people's lives are changing significantly in today's information society; some visually impaired people have even mastered the various functions of smart devices. However, the diversity of their lifestyles has not been thoroughly investigated, compared to that of sighted people. As a result, the lifestyles of visually impaired people tend to be misunderstood by others, and wrong stereotypes about them can spread in society. This situation makes it hard to identify genuinely useful living assistance for them. The biggest factor preventing investigation of the everyday lifestyles of visually impaired people is the privacy issue raised by video recording. The purpose of this study is to propose and evaluate a new sensing system that substitutes for video ethnography and raises the degree of abstraction so that it neither captures personally identifying information nor invades privacy.
An iOS reader for people with dyslexia BIBAFull-Text 37
  Luz Rello; Ricardo Baeza-Yates; Horacio Saggion; Clara Bayarri; Simone D. J. Barbosa
We present DysWebxia, an eBook reader for iOS that modifies both the form and the content of the text. The tool is specifically designed for people with dyslexia, based on previous research with this target group, and its settings can be customized to match the reader's preferences.
Auditory displays for accessible fantasy sports BIBAFull-Text 38
  Jared M. Batterman; Jonathan H. Schuett; Bruce N. Walker
In this paper we address the lack of accessibility in fantasy sports for visually impaired users and discuss the accessible fantasy sports system that we have designed using auditory displays. Fantasy sports are a fun and social activity requiring users to make decisions about their fantasy teams, which use real athletes' weekly performance to gain points and compete against other users' fantasy teams. Fantasy players manage their teams by making informed decisions using statistics about real sports data. These statistics are usually presented online in a spreadsheet layout; however, online fantasy sports sites are usually inaccessible to screen readers due to the use of Flash on most sites. Our current system, described in this paper, utilizes auditory display techniques such as auditory alerts, earcons, spearcons, general text-to-speech, and auditory graphs to present sports statistics to visually impaired fantasy users. The current version of our system was designed based on feedback from current fantasy sports users during a series of think-aloud walkthroughs.
Automatic image conversion to tactile graphic BIBAFull-Text 39
  Tyler J. Ferro; Dianne T. V. Pawluk
Currently, individuals who are blind or visually impaired have limited resources to allow them to interpret information contained in images. The aim of this project is to provide an accessible system to automatically generate tactile graphics for those who need to interpret information contained in visual images. The fundamental steps to accomplish this are to segment and simplify the image. The focus of this paper will be on several methods to segment an image.
Brain-training software for stroke survivors BIBAFull-Text 40
  Lourdes Morales Villaverde; Sean-Ryan Smith; Sri Kurniawan
Brain-training software and websites are becoming more prevalent because of their wide availability and the key idea that they enable people to independently improve their memory and problem-solving skills. Such systems offer a simple alternative to expensive specialized stroke rehabilitation software. Our research investigates the feasibility of using web-based brain-training software to help stroke survivors and, more generally, individuals with cognitive impairments. We observed and interviewed stroke survivors to better understand the technologies that they feel are helpful, and to examine the effectiveness and limitations of such technologies. We compiled an informal set of guidelines that such systems should follow in order to be effective and usable rehabilitation software for stroke survivors or individuals with cognitive impairments. To validate the guidelines and see if new ones emerged, we developed a low-fidelity prototype of web-based brain-training software and tested it with five participants to check its feasibility as a cognitive rehabilitation solution. The result was an improved set of guidelines for software that aims to improve the cognitive skills of stroke survivors and, more generally, individuals with cognitive impairments.
CamIO: a 3D computer vision system enabling audio/haptic interaction with physical objects by blind users BIBAFull-Text 41
  Huiying Shen; Owen Edwards; Joshua Miele; James M. Coughlan
CamIO (short for "Camera Input-Output") is a novel camera system designed to make physical objects (such as documents, maps, devices and 3D models) fully accessible to blind and visually impaired persons, by providing real-time audio feedback in response to the location on an object that the user is pointing to. The project will have wide ranging impact on access to graphics, tactile literacy, STEM education, independent travel and wayfinding, access to devices, and other applications to increase the independent functioning of blind, low vision and deaf-blind individuals. We describe our preliminary results with a prototype CamIO system consisting of the Microsoft Kinect camera connected to a laptop computer. An experiment with a blind user demonstrates the feasibility of the system, which issues Text-to-Speech (TTS) annotations whenever the user's fingers approach any pre-defined "hotspot" regions on the object.
Collaborative music application for visually impaired people with tangible objects on table BIBAFull-Text 42
  Shotaro Omori; Ikuko Eguchi Yairi
Collaborative work between visually impaired and sighted people on equal ground plays a significant role in visually impaired people's advancement in society. We developed a collaborative music-composition application to achieve this goal. The application has a beautiful tangible interface designed to attract the attention of both visually impaired and sighted people, and multiple functions that are likely to induce collaborative communication among users. We conducted an experiment with six visually impaired people and six sighted people. In the experiment, the visually impaired participants could lead the collaborative work without hesitation, even in front of sighted people whom they did not know very well. We then focused our attention on the moments in which the visually impaired participants were having fun, and discuss the sources of their excitement.
Comparison of reading accuracy between tactile pie charts and tactile band charts BIBAFull-Text 43
  Kosuke Araki; Tetsuya Watanabe
We compared the accuracy of reading tactile pie charts and tactile band charts with sighted and blind students as participants. Participants were presented with a set of tactile pie charts and band charts with varying division ratios and were asked to report the ratios. The number of errors and the reading times were measured as accuracy metrics. The results from sighted participants showed that error sizes in reading pie charts were, on the whole, smaller than those in reading band charts, and that there was little difference between the reading times for the two chart types. The results from a blind participant showed a similar trend, except for the reading times.
Computerized short-term memory treatment for older adults with aphasia: feedback from clinicians BIBAFull-Text 44
  Diana Molero Martin; Robert Laird; Faustina Hwang; Christos Salis
We report on a software prototype designed to deliver a novel short-term memory treatment for older adults with aphasia. We conducted a review of the prototype with 14 speech and language therapists, to elicit feedback on the potential usefulness of the ultimate application and, particularly, the prototype in terms of its main design features and use by older adults with aphasia. The clinicians' feedback highlights a number of design considerations relating to the usability, training methods, and appeal of treatment software, which can help engage patients more fully in computerized treatments and improve treatment outcomes.
Crowd caption correction (CCC) BIBAFull-Text 45
  Rebecca Perkins Harrington; Gregg C. Vanderheiden
Captions can be found in a variety of media, including television programs, movies, webinars, and telecollaboration meetings. Although very helpful, captions sometimes contain errors, such as misinterpretations of what was said, missing words, and misspellings of technical terms and proper names. Due to the labor-intensive nature of captioning, caption providers may not have the time or, in some cases, the background knowledge of the meeting content needed to correct errors in the captions. The Crowd Caption Correction (CCC) feature (and service) addresses this issue by allowing meeting participants or third-party individuals to make corrections to captions in real time during a meeting. The feature also uses the captions to create a transcript of all captions broadcast during the meeting, which users can save and reference both during the meeting and at a later date. The feature will be available as part of the Open Access Tool Tray System (OATTS) suite of open source widgets developed under the University of Wisconsin-Madison Trace Center Telecommunications RERC. The OATTS suite is designed to increase access to information during telecollaboration for individuals with a variety of disabilities.
Designing an accessible clothing tag system for people with vision impairments BIBAFull-Text 46
  Michele A. Williams; Kathryn Ringland; Amy Hurst
Many clothing characteristics (from garment color to care instructions) are inaccessible to people with vision impairments. To address this problem, clothing information is gathered from sighted companions, and later recalled using low-tech solutions such as adding safety pins to clothes. Unfortunately, these low-tech solutions require precise memory (such as recalling a pin's meaning) and provide limited information. Using an iterative design approach, we prototyped several alternative technology solutions and tested them with five people with vision impairments. We are working towards an interface that provides detailed information in a streamlined interaction, focusing our future efforts on a wearable RFID tagging solution.
Developing tactile icons to support mobile users with situationally-induced impairments and disabilities BIBAFull-Text 47
  Huimin Qian; Ravi Kuber; Andrew Sears
Although it is well known that interaction with a mobile device can be impacted when the environment is inhospitable or when the user is on the move, situationally-induced impairments and disabilities (SIIDs) are often overlooked in the mobile interface design process. In this paper, we describe one step toward supporting mobile users with SIIDs, through the design of tactile notifications. The tactile channel offers considerable promise to convey notifications to the user, freeing their visual and auditory channels for other tasks. A study was conducted to determine whether participants could develop tactile cues to convey the key characteristics of alerts to mobile users (e.g. urgency, relationship with the sender). The results highlight the benefits of tactile prototyping tools to encourage generation of design ideas, and the use of scenarios to situate these design ideas within the intended context of use.
Evaluation of touch screen vibration accessibility for blind users BIBAFull-Text 48
  Amine Awada; Youssef Bou Issa; Joe Tekli; Richard Chbeir
In this demo paper, we briefly present our experimental prototype, entitled EVIAC (EValuation of VIbration Accessibility), allowing visually impaired users to access simple contour-based images using vibrating touch screen technology. We provide an overview of the system's main functionalities and discuss some experimental results.
Extending access to personalized verbal feedback about robots for programming students with visual impairments BIBAFull-Text 49
  Sekou L. Remy
This work demonstrates improvements in a software tool that provides verbal feedback about executed robot code. Designed for programming students with visual impairments, the tool is now multi-lingual and no longer requires locally installed text-to-speech software. These developments use cloud and web standards to provide greater flexibility in generating personalized verbal feedback.
How accessible is the process of web interface design? BIBAFull-Text 50
  Kirk Norman; Yevgeniy Arber; Ravi Kuber
This paper describes a data gathering study, examining the experiences and day-to-day challenges faced by blind web interface developers when designing sites and online applications. Findings have revealed that considerable amounts of time and cognitive effort can be spent checking code in text editing software and examining the content presented via the web browser. Participants highlighted the burden experienced from committing large sections of code to memory, and the restrictions associated with assistive technologies when performing collaborative tasks with sighted developers and clients. Our future work aims to focus on the development of a multimodal web editing and browsing solution, designed to support both blind and sighted parties during the design process.
How power wheelchair users choose computing devices BIBAFull-Text 51
  Patrick Carrington; Amy Hurst; Shaun K. Kane
People with motor impairments experience a range of challenges when interacting with computers. While much prior research has explored the effects of motor impairments on accessing computer input devices, such as keyboards, mice, and touch screens, we know relatively little about how real-world use of a wheelchair affects why people in power wheelchairs choose specific computing devices, and how they switch between such devices. We interviewed 8 power wheelchair users about their use of computers and mobile devices. We found that participants often had difficulty switching between the various devices in their lives, and that technology use was especially challenging on the go. Our findings suggest numerous opportunities to make computing more wheelchair-friendly: consolidating devices, improving the reachability and portability of devices, and creating technology that is robust to the challenges of moving around in a wheelchair.
Initial results from a critical review of research on technology for older and disabled people BIBAFull-Text 52
  Bláithín Gallagher; Helen Petrie
In light of changing demographics and an ageing population, research on new technologies to support older people and people with disabilities in independent living is vital. This paper will present the results to date of a review, currently being undertaken, of recent research on new and emerging technologies designed for older people and people with disabilities. The review covers research published between 2005 and 2012 in a range of international peer-reviewed journals and conferences, in the areas of technology, human-computer interaction, disability and assistive devices. On the basis of this review of research, we are exploring what problems of older and disabled people are being addressed by researchers and developers; whether the research is motivated by user needs; the methodologies used and outcomes presented. First results will be presented in the poster.
Interviewing blind photographers: design insights for a smartphone application BIBAFull-Text 53
  Dustin Adams; Tory Gallagher; Alexander Ambard; Sri Kurniawan
Studies have shown that people with limited or no vision take, store, organize, and share photos, but little is known about how they do so. While the process of taking photos is somewhat understood, there has been little research on how exactly blind people store, organize, and share their photos without sighted help. We interviewed 11 people with limited to no vision who have taken digital photos, and analyzed their responses. We aim to use this information to motivate the features of a smartphone application that will assist people with limited vision not only in aiming the camera to capture a "good" photo, but also in locating and organizing their photos so that they can retrieve them at a later date or share them with others, online or offline.
Involving clinical staff in the design of a support tool to improve dental communication for patients with intellectual disabilities BIBAFull-Text 54
  Rachel Menzies; Daniel Herron; Lesley Scott; Ruth Freeman; Annalu Waller
Communication within clinical settings is crucial for successful clinical practice. However, this is challenging when the clinician interacts with patients with Intellectual Disabilities (ID) who may have communication difficulties or find it difficult to understand the treatment process. The "Stories at the Dentist" project aims to develop a support tool to improve clinical communication between clinicians and patients with ID. This paper outlines a design workshop undertaken as part of a user centered design process.
Real-time captioning by non-experts with legion scribe BIBAFull-Text 55
  Walter S. Lasecki; Christopher D. Miller; Raja Kushalnagar; Jeffrey P. Bigham
Real-time captioning provides people who are deaf or hard of hearing access to speech in settings such as classrooms and live events. The most reliable approach to providing these captions is to recruit an expert stenographer who is able to type at natural speaking rates, but stenographers charge more than $100 USD per hour and must be scheduled in advance. We introduce Legion Scribe (Scribe), a system that allows 3-5 ordinary people who can hear and type to jointly caption speech in real time. Each person is unable to type at natural speaking rates, and so is asked to type only part of what they hear. Scribe automatically stitches all of the partial captions together to form a complete caption stream. We have shown that the accuracy of Scribe captions approaches that of a professional stenographer, while its latency and cost are dramatically lower.
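As a toy illustration of the stitching idea, and not the actual Scribe algorithm (which uses a more sophisticated alignment of worker input), one can merge timestamped partial caption streams by sorting words by time and dropping adjacent duplicates; the worker data below is invented:

```python
def stitch_captions(partials):
    """Merge partial caption streams into one stream.

    Each partial is a list of (timestamp_seconds, word) pairs from one
    typist. This naive merge sorts all words by time and drops adjacent
    duplicate words captured by more than one typist.
    """
    merged = sorted(w for stream in partials for w in stream)
    out = []
    for _t, word in merged:
        if not out or out[-1] != word:
            out.append(word)
    return " ".join(out)

# Two hypothetical workers each caught only part of the sentence.
worker_a = [(0.1, "the"), (0.5, "quick"), (1.4, "fox")]
worker_b = [(0.6, "quick"), (0.9, "brown"), (1.5, "fox")]
caption = stitch_captions([worker_a, worker_b])
# caption == "the quick brown fox"
```

The real system must also handle typing latency, disagreeing workers, and repeated words in the source speech, which is why a proper multiple-sequence alignment is needed rather than simple deduplication.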
Manual evaluation of synthesised sign language avatars BIBAFull-Text 56
  Robert Smith; Brian Nolan
The evaluation discussed in this paper explores the role that underlying facial expressions might have regarding understandability in sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community's appetite for sign language avatars. The work presented explores the following hypothesis: Augmenting an existing avatar with various combinations of the 7 widely accepted universal emotions identified by Ekman [1] to achieve underlying facial expressions, will make that avatar more human-like and consequently improve usability and understandability for the ISL user. Using human evaluation methods [2] we compare an augmented set of avatar utterances against a baseline set, focusing on two key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation including our choice of ISL participants, interview environment and evaluation methodology.
Motion-games in brain injury rehabilitation: an in-situ multi-method study of inpatient care BIBAFull-Text 57
  Cynthia Putnam; Jinghui Cheng
In this project, we explored how commercial motion-based video games were used in a rehabilitation hospital with patients who have had a brain injury (BI). We interviewed therapists and observed game sessions. Major findings included: (a) the social aspects of gaming were highly valued; (b) therapists had varied physical, cognitive and social goals when using games; and (c) therapists made game decisions primarily based on familiarity versus choosing games that best match therapeutic goals and patient profiles. Our exploration exposed a need for decision tools to help therapists make evidence-based decisions about commercial games; i.e. to help them choose games that match session goals and patient profiles. We have expanded our research to include diary studies in order to gather data for 'seed cases' for decision tools that use case-based reasoning.
Multilingual website assessment for accessibility: a survey on current practices BIBAFull-Text 58
  Silvia Rodríguez Vázquez; Anton Bolfing
The degree of accessibility achieved in a monolingual website may vary throughout the localization process, when the site is made multilingual. This paper overviews the results of a survey exploring current practices for assessing multilingual websites for accessibility. Respondents (N=67) were web accessibility experts with at least two years of experience in the field. While our work does not return conclusive results, the findings suggest that multilingual website assessment practices, as they stand today, do not follow a standardized pattern, and that the time spent on textual and culture-related elements, which remain key information assets within a webpage, is considerably low. The study also sheds light on the need for localization-related knowledge and know-how to successfully achieve accessible websites where more than one language version is available.
Optimization of switch keyboards BIBAFull-Text 59
  Xiao (Cosmo) Zhang; Kan Fang; Gregory Francis
Patients with motor control difficulties often "type" on a computer using a switch keyboard to guide a scanning cursor to text elements. We show how to optimize some parts of the design of switch keyboards by casting the design problem as mixed integer programming. A new algorithm to find an optimized design solution is approximately 3600 times faster than a previous algorithm, which was also susceptible to finding a non-optimal solution. The optimization requires a model of the probability of an entry error, and we show how to build such a model from experimental data. Example optimized keyboards are demonstrated.
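The paper casts the full layout problem as a mixed integer program; a much simpler sketch of the underlying cost model, assuming a hypothetical four-key linear-scanning keyboard with invented character frequencies, shows why layout matters and how tiny instances can even be solved by exhaustive search:

```python
from itertools import permutations

def expected_scan_steps(layout, probs):
    # For a linear scanning cursor, selecting the item at index i takes
    # i + 1 scan steps; the expected cost weights this by usage probability.
    return sum(probs[ch] * (i + 1) for i, ch in enumerate(layout))

# Hypothetical usage frequencies for a four-key switch keyboard.
probs = {"e": 0.5, "t": 0.25, "a": 0.15, "q": 0.10}

# Brute force over all 4! layouts; real designs (row-column scanning,
# error probabilities, timing) need the integer-programming formulation.
best = min(permutations(probs), key=lambda lay: expected_scan_steps(lay, probs))
# best places the most frequent characters earliest in the scan order.
```

Expanding from this expected-steps model to include the empirically fitted entry-error probabilities described in the abstract is what turns the toy search into the optimization problem the authors solve.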
Page sample size in web accessibility testing: how many pages is enough? BIBAFull-Text 60
  Eric Velleman; Thea van der Geest
Various countries and organizations use different sampling approaches and sample sizes of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages are enough to test whether a website complies with standard accessibility guidelines. This poster reports the work in progress. Data collection has been completed, and we have begun the analysis to determine how many pages are enough for specified reliability levels.
PlatMult: a multisensory platform with web accessibility features for low vision users BIBAFull-Text 61
  Marcio Oyamada; Jorge Bidarra; Clodis Boscarioli
Screen magnifiers are the usual assistive technology for low vision users in computer interaction. However, even with a screen magnifier, low vision users face serious limitations in web access. To mitigate this problem, we propose a multisensory platform called PlatMult, composed of visual, auditory, and tactile feedback. PlatMult provides a screen magnifier integrated with a screen reader, and an adapted mouse with motor feedback. The feedback is fired in an integrated way: for instance, when the user moves the mouse over links and buttons in a webpage, PlatMult activates the screen reader and the mouse motor feedback. This paper describes the components developed to obtain the accessible events from the Firefox web browser.
PuffText: a voiceless and touchless text entry solution for mobile phones BIBAFull-Text 62
  Jackson Feijó Filho; Thiago Valle; Wilson Prata
This work proposes a low-cost, software-based, puff-controlled spin keyboard for mobile phones as an alternative interaction technology for people with motor disabilities. It explores processing the audio from the mobile phone's microphone to select characters from a spinning keyboard. A proof of concept is demonstrated through the implementation and evaluation of a mobile application prototype that enables users to perform text entry through "puffing" interaction.
SlideType: universal design text-entry interface for touchscreen devices BIBAFull-Text 63
  Xiao "Nikita" Xiong; Jon A. Sanford
In this work, we present SlideType, an intuitive text-entry system for touchscreen devices with touch and gestural input as well as visual and auditory output, designed to be usable by as many people as possible, including those with vision, dexterity, and cognition impairments. Nine participants were tested using SlideType to input a name such as "John Smith" without training. Overall, participants were able to complete the task of typing and editing. We report results from this preliminary usability study.
Social platform for sharing accessibility information among people with disabilities: evaluation of a field assessment BIBAFull-Text 64
  Takahiro Miura; Ken-ichiro Yabu; Masatsugu Sakajiri; Mari Ueda; Junya Suzuki; Atsushi Hiyama; Michitaka Hirose; Tohru Ifukube
Accessibility information can allow disabled people to identify suitable pathways to reach their destinations, but it is difficult to obtain new accessible pathway information rapidly because of limited local information disclosure. Thus, it is necessary to develop a comprehensive system that acquires barrier-free information from various sources and makes that information available in an intuitive form. In this study, we aimed to develop a social platform to obtain and present appropriate information depending on the user's situation, such as the user's disabilities and location, and to share the barrier-free information provided by other users.
Standardization of real-time text in instant messaging BIBAFull-Text 65
  Mark Rejhon; Christian Vogler; Norman Williams; Gunnar Hellström
We demonstrate new standardized ways of how real-time text can be seamlessly integrated into instant messaging environments. Real-time text is text transmitted instantly while it is being typed or created. The recipient can immediately read the sender's text as it is written, without waiting.
Supporting augmented and alternative communication using a low-cost gestural device BIBAFull-Text 66
  Matt Wheeler; Flynn Wolf; Ravi Kuber
In this paper, we describe an exploratory study to determine the feasibility of using a low-cost gestural headset to support communication. Findings show that tasks involving facial gestures, such as blinks and smiles, can be performed and detected by an Augmented and Alternative Communication (AAC) system within a shorter period of time than brow movements. As tasks increase in complexity, rates of accuracy and time taken remain relatively constant for blinking gestures, highlighting their potential in AAC interfaces. We aim to refine such a system to better address the needs of individuals with disabilities by limiting input errors from involuntary movements and examining ways to reduce interface navigation time. Insights gained from the study offer promise to interface designers seeking to widen access to their interfaces using gestural input.
Surveying the accessibility of touchscreen games for persons with motor impairments: a preliminary analysis BIBAFull-Text 67
  Yoojin Kim; Nita Sutreja; Jon Froehlich; Leah Findlater
Touchscreen devices have become one of the most pervasive video game platforms in the world and, in turn, an integral part of popular culture; however, little work exists on comprehensively examining their accessibility. In this poster paper, we present initial findings from a survey and qualitative analysis of popular iPad touchscreen games with a focus on exploring factors relevant to persons with motor impairments. This paper contributes a novel qualitative codebook with which to examine the accessibility of touchscreen games for users with motor impairments and the results from applying this codebook to 72 iPad games.
The feasibility of eyes-free touchscreen keyboard typing BIBAFull-Text 68
  Keith Vertanen; Haythem Memmi; Per Ola Kristensson
Typing on a touchscreen keyboard is very difficult without being able to see the keyboard. We propose a new approach in which users imagine a Qwerty keyboard somewhere on the device and tap out an entire sentence without any visual reference to the keyboard and without intermediate feedback about the letters or words typed. To demonstrate the feasibility of our approach, we developed an algorithm that decodes blind touchscreen typing with a character error rate of 18.5%. Our decoder currently uses three components: a model of the keyboard topology and tap variability, a point transformation algorithm, and a long-span statistical language model. Our initial results demonstrate that our proposed method provides fast entry rates and promising error rates. On one-third of the sentences, novices' highly noisy input was successfully decoded with no errors.
The global public inclusive infrastructure (GPII) BIBAFull-Text 69
  Gregg C. Vanderheiden; Jutta Treviranus; Amrish Chourasia
The incidence of disabilities is increasing as our population ages, and access to ICT is becoming mandatory for meaningful participation, independence, and self-sustenance. However, not only are we nowhere near providing access to everyone who needs it, but we are actually losing ground, owing to technical proliferation across platforms, increasing product churn (which breaks existing solutions), decreasing social resources, and an inability to effectively serve the tails of these populations because of the higher cost of doing so. This poster describes the Cloud4all and Prosperity4All projects and progress in building the Global Public Inclusive Infrastructure (GPII), an infrastructure based on cloud, web, and platform technologies that can increase dissemination and international localization while lowering the cost to develop, deploy, market, and support a broad range of access solutions.
The today and tomorrow of Braille learning BIBAFull-Text 70
  J. Guerreiro; D. Gonçalves; D. Marques; T. Guerreiro; H. Nicolau; K. Montague
Despite the overwhelming emergence of accessible digital technologies, Braille still plays a role in providing blind people with access to content. Nevertheless, many fail to see the benefits of nurturing Braille, particularly given the time and effort required to achieve proficiency. Our research focuses on maximizing access and motivation to learn and use Braille. We present initial insights from five interviews with blind people, comprising Braille instructors and students, in which we characterize the learning process and usage of Braille. Based on our findings, we have identified a set of opportunities around Braille education. Moreover, we devised scenarios and built hardware and software solutions to motivate discovery and retention of Braille literacy.
Toward accessible technology for music composers and producers with motor disabilities BIBAFull-Text 71
  Adam J. Sporka; Ben L. Carson; Paul Nauert; Sri H. Kurniawan
In an initial user study, three motor-impaired musicians -- a composer with a degenerative motor neuron disease, a guitarist who suffered a stroke, and a first-year college student with impaired finger movement -- identified prospective areas of research in assistive technology. Participants in the study made use of a range of technologies to adapt conventional software to their needs, and identified practical limitations and challenges in those adaptations, including suggestions for novel and intuitive interfaces, optimized control-surface layouts, and repurposing opportunities in text-input techniques.
Towards the development of haptic-based interface for teaching visually impaired Arabic handwriting BIBAFull-Text 72
  Abeer S. Bayousuf; Hend S. Al-Khalifa; AbdulMalik S. Al-Salman
This paper introduces an initial haptic-based system for teaching handwriting of Arabic letters to students with visual impairments. The proposed system provides full and partial guidance through haptic playback. In addition, the system automatically evaluates student progress.
Uncovering the role of expectations on perceived web accessibility BIBAFull-Text 73
  Amaia Aizpurua; Myriam Arrue; Markel Vigo
Compliance with accessibility standards does not guarantee a satisfying user experience on the Web. Both unmet content and functionality expectations have been identified as central factors in the lack of coverage shown by guidelines. We expand on this by examining the role played by subjective dimensions, particularly expectations, in users' perception of web accessibility. We conducted a study with 11 blind users to explore how these expectations shape the perception of web accessibility. Our preliminary findings corroborate that expectations can affect the perception of web accessibility. Additionally, we find that expectations on the Web are built upon previous experiences and prejudices. What is more, we reveal that these expectations are shaped not only by previous Web usage but also by real-life experiences. Our outcomes suggest that user expectations should be considered in user tests.
VBGhost: a braille-based educational smartphone game for children BIBAFull-Text 74
  Lauren R. Milne; Cynthia L. Bennett; Richard E. Ladner
We present VBGhost: an accessible, educational smartphone game for people who are blind or have low vision. It is based on the word game Ghost, in which players take turns adding letters to a word fragment while attempting not to complete a word. VBGhost uses audio and haptic feedback to reinforce Braille concepts. Players enter letters in the game by using Braille dot patterns on a touchscreen interface, raising or lowering dots to create Braille characters using taps and audio feedback from the phone. When a "raised" dot is touched on the screen, the phone vibrates. In VBGhost, a player can play either against the computer or against another person. We demonstrate the potential for the development of fun, accessible, and educational games.
VisualComm: a tool to support communication between deaf and hearing persons with the Kinect BIBAFull-Text 75
  Xiujuan Chai; Guang Li; Xilin Chen; Ming Zhou; Guobin Wu; Hanjing Li
As the deaf community grows rapidly, communication with hearing people is becoming a serious social problem. Furthermore, investigation indicates that the deaf community tends to be self-enclosed and reluctant to exchange ideas with the hearing. To address this challenge, we developed VisualComm, a tool to support communication between deaf and hearing persons using sign language recognition technology with the Kinect. The main contribution of the system is a holistic solution for two-way communication between deaf and hearing people, offering a seamless experience tailored for this particular activity. Currently, we have implemented basic communication based on 370 daily Chinese words for signers.
Web accessibility for older adults: effects of line spacing and text justification on reading web pages BIBAFull-Text 76
  Helen Petrie; Sorachai Kamollimsakul; Christopher Power
Numerous guidelines for making websites more accessible for older users have been proposed, but few provide evidence from such users for their recommendations. This study investigated the effects of line spacing and text justification on younger (24-31 years) and older (65-78 years) adults' performance and preferences in web reading tasks. Three levels of line spacing (single, 1.5, and double) and two types of text justification (left only and left-right) were studied. Neither variable had a significant effect on performance measures, although both younger and older adults preferred 1.5 or double spacing over single spacing. There were no significant differences in preferences for left versus left-right justification. These results suggest that, contrary to common recommendations, 1.5 or double spacing should be recommended for all users, not only older users, and that no recommendation is needed on text justification.
What did you say?: visually impaired students using bonephones in math class BIBAFull-Text 77
  Yee Chieh (Denise) Chew; Bruce N. Walker
Bone-conduction headphones were deployed along with audio splitters for use with an auditory graphing software program in a classroom for the visually impaired. In this paper, we give an overview of the impact of introducing this technology into the classroom. We discuss our observations of bonephone and audio splitter usage, and present data gathered from focus group discussions with the students and teacher concerning the introduction and reception of this technology. A majority of students, as well as the teacher, preferred bone-conduction headphones over air-conduction headphones. Further, providing audio splitters changed how quickly the teacher could assess problems a student was having with computer-based lessons, and the frequency with which students were paired together to work on a problem.

Student competition posters

Adaptive click-and-cross: an interaction technique for users with impaired dexterity BIBAFull-Text 78
  Louis Li
Computer users with impaired dexterity face difficulties with traditional pointing methods, particularly on small, densely packed user interfaces. Past research on software-based solutions can usually be categorized as one of two approaches: modifying the user interface to fit the users' needs, or modifying the user's interaction with the cursor. Each approach, however, has limitations. Modifying the user interface increases the navigation cost of some items by displacing them to other screens, while enhanced area cursors, a pointing technique for small, densely packed targets, require users to perform multiple steps to acquire a target. This study aims to minimize the costs of these two approaches through a new interaction technique, Adaptive Click-and-Cross. The technique was found to lower error rates relative to traditional pointing (8.5% vs. 16.0%) with slightly faster acquisition times compared to two other techniques for modifying the user interface or cursor.
Blind guidance system using situation information and activity-based instruction BIBAFull-Text 79
  Eunjeong Ko
This study presents a situation-based wayfinding system for the visually impaired. The goal of our system is to guide visually impaired people to and from their destinations of choice. The proposed system was implemented on an iPhone 4, which has an embedded camera and inertial sensors. To assess its effectiveness, the system was tested with four participants, and the results confirmed its feasibility as a wayfinding aid for the visually impaired.