
Sixteenth International ACM SIGACCESS Conference on Computers and Accessibility

Fullname: Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility
Editors: Sri Kurniawan; John Richards
Location: Rochester, New York
Dates: 2014-Oct-20 to 2014-Oct-22
Publisher: ACM
Standard No: ISBN: 978-1-4503-2720-6; ACM DL: Table of Contents; hcibib: ASSETS14
Papers: 94
Pages: 362
Links: Conference Website
  1. Keynote address
  2. Independence
  3. Practices and tools
  4. Information access
  5. Objects and interfaces
  6. Interaction
  7. Applications and games
  8. Communication
  9. Mobility
  10. Poster abstracts
  11. Demonstration abstracts
  12. Student research competition abstracts
  13. Text entry challenge abstracts

Keynote address

Computing for humans BIBAFull-Text 1
  Vicki L. Hanson
In his editorial on Computing for Humans, Vardi [1] discusses computing as being an "instrument of the human mind," having the primary goal to enhance what we as humans are able to do. Nowhere is such a computing goal more evident than in the field of accessibility, where we seek to create devices and software to serve people with extreme needs. In creating novel accessibility tools, research has advanced the state of the art in many areas, from the design of environmental spaces, to physical interfaces, to software aspects of computing.
   There are several cross-cutting issues that accessibility research can address. For example, there are issues of language and communication. Communication is fundamental to being human. People who have hearing loss, aphasia, cerebral palsy, autism, or dyslexia are among those who experience communication difficulties. How can we drive computing forward to provide solutions for these communication problems? Mobility and independence are other important issues. People with visual impairments, cognitive disability, or physical impairment often face difficulties in independent navigation. How can technology help?
   The needs of users can and should inform the agenda for new research in areas such as augmented memory, physical interactions, and human communication.

Independence

The blind driver challenge: steering using haptic cues BIBAFull-Text 3-10
  Burkay Sucu; Eelke Folmer
Loss of vision significantly impairs mobility, with blind individuals often relying on sighted individuals or public transportation to get around. Self-driving vehicles could significantly improve the mobility of blind people, but current legislation often requires a legal driver to be present in the vehicle who can take over in case of a malfunction. To enable blind people to eventually use a self-driving car independently, we present a steering interface that allows for steering a vehicle using haptic cues. User studies with six blind and sighted subjects identify what accuracy is required, and what accuracy is achievable, when using our interface to steer a vehicle along a track in a simulator. We investigate whether driving experience affects haptic steering performance and perform a qualitative study into the usability of our haptic steering interface.
Where's my bus stop?: supporting independence of blind transit riders with StopInfo BIBAFull-Text 11-18
  Megan Campbell; Cynthia Bennett; Caitlin Bonnar; Alan Borning
Locating bus stops, particularly in unfamiliar areas, can present challenges to people who are blind or have low vision. At the same time, new information technology such as smartphones and mobile devices has enabled them to undertake a much greater range of activities with increased independence. We focus on the intersection of these issues. We developed and deployed StopInfo, a system for public transit riders that provides very detailed information about bus stops with the goal of helping riders find and verify bus stop locations. We augmented internal information from a major transit agency in the Seattle area with information entered by the community, primarily as they waited at these stops. Additionally, we conducted a five-week field study with six blind and low vision participants to gauge usage patterns and determine values related to independent travel. We found that StopInfo was received positively and is generally usable. Furthermore, the system supports tenets of independence; participants took public transit trips that they might not have attempted otherwise. Lastly, an audit of bus stops in three Seattle neighborhoods found that information from both the transit agency and the community was accurate.
Headlock: a wearable navigation aid that helps blind cane users traverse large open spaces BIBAFull-Text 19-26
  Alexander Fiannaca; Ilias Apostolopoulous; Eelke Folmer
Traversing large open spaces is a challenging task for blind cane users, as such spaces are often devoid of tactile features that can be followed. Consequently, in such spaces cane users may veer from their intended paths. Wearable devices have great potential for assistive applications for users who are blind, as they typically feature a camera and support hands-free and eyes-free interaction. We present HEADLOCK, a navigation aid for an optical head-mounted display that helps blind users traverse large open spaces by letting them lock onto a salient landmark across the space, such as a door, and then providing audio feedback to guide them toward the landmark. A user study with 8 blind users evaluated the usability and effectiveness of two types of audio feedback (sonification and text-to-speech) for guiding a user across an open space to a doorway. Qualitative results are reported, which may inform the design of assistive wearable technology for users who are blind.
Technology to reduce social isolation and loneliness BIBAFull-Text 27-34
  Ron Baecker; Kate Sellen; Sarah Crosskey; Veronique Boscart; Barbara Barbosa Neves
Large numbers of individuals, many of them senior citizens, live in social isolation. This typically leads to loneliness, depression, and vulnerability, and subsequently to other negative health consequences. We report on research focused on understanding the communication needs of people in environments associated with social isolation and loneliness, and how technology facilitates social connection. Our work consists of successive iterations of field studies and technology prototype design, deployment, and analysis. Particular attention is paid to seniors in retirement communities and in long-term care settings (nursing homes). We present design implications for technology to enable seniors' social connections, the "InTouch" prototype that satisfies most of the implications, and a report on one older adult's experience of InTouch.

Practices and tools

A large user pool for accessibility research with representative users BIBAFull-Text 35-42
  Marianne Dee; Vicki L. Hanson
A critical element of accessibility research is the exploration and evaluation of ideas with representative users. However, it is often difficult to recruit these users, particularly in a timely manner. In this paper we report on the establishment of a large user pool created to facilitate accessibility research through recruiting sizeable numbers of older adults potentially interested in taking part in research studies about technology. Lessons learned from creating and maintaining this pool of individuals are reported.
Verification of daily activities of older adults: a simple, non-intrusive, low-cost approach BIBAFull-Text 43-50
  Loïc Caroux; Charles Consel; Lucile Dupuy; Hélène Sauzéon
This paper presents an approach to verifying the activities of daily living of older adults at their home. We verify activities, instead of inferring them, because our monitoring approach is driven by routines, initially sketched by users in their environment. Monitoring is supported by a lightweight sensor infrastructure, comprising non-intrusive, low-cost, wireless devices. Verification is performed by applying a simple formula to sensor log data, for each activity of interest. The result value determines whether an activity has been performed.
   We conducted an experimental study to validate our approach. To do so, four participants were monitored for five days in their homes, which were equipped with sensors. When applied to the log data, our formulas were able to automatically verify that a list of activities had been performed. Using Signal Detection Theory, they produced the same interpretations as a third party manually analyzing the log data.
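The verification formula itself is not reproduced in the abstract. As a rough illustration of the idea (checking a user-sketched routine against sensor logs rather than inferring activities), here is a minimal Python sketch under an assumed log format and a coverage-style score; all sensor names and thresholds are illustrative:

```python
from datetime import datetime

def verify_activity(log, expected_sensors, window_start, window_end, threshold=0.8):
    """Return True if enough of the expected sensors fired inside the window.

    log: list of (timestamp, sensor_id) events from the home's sensors.
    expected_sensors: sensors the user's sketched routine says should fire.
    """
    fired = {sensor for ts, sensor in log if window_start <= ts <= window_end}
    score = len(fired & expected_sensors) / len(expected_sensors)
    return score >= threshold

# Hypothetical morning log and a user-sketched "breakfast" routine.
log = [
    (datetime(2014, 10, 20, 7, 2), "kettle_power"),
    (datetime(2014, 10, 20, 7, 5), "cupboard_contact"),
    (datetime(2014, 10, 20, 7, 9), "kitchen_motion"),
]
breakfast = {"kettle_power", "cupboard_contact", "kitchen_motion"}
print(verify_activity(log, breakfast,
                      datetime(2014, 10, 20, 6, 30),
                      datetime(2014, 10, 20, 9, 0)))  # True
```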
How companies engage customers around accessibility on social media BIBAFull-Text 51-58
  Erin Brady; Jeffrey P. Bigham
Social media offers a targeted way for mainstream technology companies to communicate with people with disabilities about the accessibility problems that they face. While companies have started to engage with users on social media about accessibility, they differ greatly in terms of their approach and how well they support the ways in which their users want to engage. In this paper, we describe current use patterns of six corporate accessibility teams and their users on Twitter, and present an analysis of these interactions. We find that while many users want to interact directly with companies about accessibility, companies prefer to redirect them to other channels and instead use Twitter for broadcast messages promoting their accessibility work. Our analysis demonstrates that users want to use social media to become part of the process of improving the accessibility of mainstream technology, and suggests that the extent to which a company can leverage this input depends greatly on how it chooses to present itself and interact on social media.
Buildings and users with visual impairment: uncovering factors for accessibility using BIT-Kit BIBAFull-Text 59-66
  Lesley J. McIntyre; Vicki L. Hanson
In this paper, we report on the experiences of visually impaired users in navigating buildings. We focus on an investigation of the way-finding experiences of 10 participants with varying levels of visual ability as they undertook a way-finding task in an unfamiliar public building. Through applying the BIT-Kit framework in this preliminary user study, we were able to uncover 54 enabling and disabling interactions within the case study building. While this building adhered to building legislation, our findings identified a number of accessibility problems, including issues associated with using doors, hazards caused by building finishes, and difficulty in knowing what to do in the case of an emergency evacuation. This user study has demonstrated a disparity between design guidance and the accessibility needs of building users. It has uncovered evidence to enable architects to begin to design for the real needs of users who have a range of visual impairments. Furthermore, it has instigated discussion of how BIT-Kit's evidence could be incorporated into digital modelling tools currently used in architectural practice.

Information access

From screen reading to aural glancing: towards instant access to key page sections BIBAFull-Text 67-74
  Prathik Gadde; Davide Bolchini
Whereas glancing at a web page is crucial for navigation, screen readers force users to listen to content serially. This hampers efficient browsing of complex pages and maintains an accessibility divide between sighted and screen-reader users. To address this problem, we adopt a three-pronged strategy: (1) in a user study, we identified key page-level navigation problems that screen-reader users face while browsing a complex site; (2) through a crowd-sourcing system, we prioritized the most relevant sections of different page types necessary to support basic tasks; (3) we introduced DASX, a navigation approach that augments the ability of screen-reader users to "aurally glance" at a complex page by accessing at any time the most relevant page sections. In a preliminary evaluation, DASX markedly reduced the gap in page navigation efficiency between screen-reader and sighted users. Our contribution provides the groundwork for rethinking access strategies that strongly tie aural navigation to users' tasks.
Tactile graphics with a voice: using QR codes to access text in tactile graphics BIBAFull-Text 75-82
  Catherine M. Baker; Lauren R. Milne; Jeffrey Scofield; Cynthia L. Bennett; Richard E. Ladner
Textbook figures are often converted into a tactile format for access by blind students. These figures are not truly accessible unless the text within the figures is also made accessible. A common solution for accessing text in a tactile image is to use embossed Braille. We have developed an alternative to Braille that uses QR codes for students who want tactile graphics, but prefer the text in figures to be spoken rather than rendered in Braille. Tactile Graphics with a Voice (TGV) makes text within tactile graphics accessible via a talking QR code reader app on a smartphone. To evaluate TGV, we performed a longitudinal study in which ten blind and low vision participants were asked to complete tasks using three alternative picture-taking guidance techniques: 1) no guidance, 2) verbal guidance, and 3) finger-pointing guidance. Our results show that TGV is an effective way to access text in tactile graphics, especially for those blind users who are not fluent in Braille. In addition, guidance preferences varied, with each of the guidance techniques being preferred by at least one participant.
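The abstract does not detail the app's internals, but the core loop of a talking QR-code reader can be sketched in a few lines. A minimal Python sketch assuming the pyzbar (QR decoding) and pyttsx3 (offline text-to-speech) libraries; the input filename is hypothetical:

```python
from PIL import Image
import pyttsx3                    # offline text-to-speech engine
from pyzbar.pyzbar import decode  # QR/barcode decoder

def speak_qr_text(image_path):
    """Decode any QR codes in the photo and read their embedded text aloud."""
    codes = decode(Image.open(image_path))
    if not codes:
        print("No QR code found; guide the user to reframe the photo.")
        return
    engine = pyttsx3.init()
    for code in codes:
        engine.say(code.data.decode("utf-8"))
    engine.runAndWait()

speak_qr_text("tactile_figure_photo.jpg")  # hypothetical camera capture
```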
Evaluating the accessibility of line graphs through textual summaries for visually impaired users BIBAFull-Text 83-90
  Priscilla Moraes; Gabriel Sina; Kathleen McCoy; Sandra Carberry
This paper presents the methodology for generating textual summaries of line graphs in the SIGHT (Summarizing Information GrapHics Textually) system and the evaluation of line graph summaries produced by SIGHT. The system is designed to deliver the high-level knowledge conveyed by informational graphics present in online popular media articles (newspapers and magazines) to individuals who do not have visual access to the image. It works by producing and delivering a concise summary of the graph's content, including the most important visual features present in the graphic. The system is briefly described; the evaluation compares the utility of the generated textual summaries to visually viewing the graphic in order to answer important questions about the line graph.
Including blind people in computing through access to graphs BIBAFull-Text 91-98
  Suzanne P. Balik; Sean P. Mealin; Matthias F. Stallmann; Robert D. Rodman; Michelle L. Glatz; Veronica J. Sigler
Our goal in creating the Graph SKetching tool, GSK, was to provide blind screen reader users with a means to create and access graphs as node-link diagrams and share them with sighted people in real-time. Through this effort, we hoped to better include blind people in computing and other STEM disciplines in which graphs are important. GSK proved very effective for one blind computer science student in courses that involved graphs and graph structures such as automata, decision trees, and resource-allocation diagrams. In order to determine how well GSK works for other blind people, we carried out a user study with ten blind participants. We report on the results of the user study, which demonstrates the efficacy of GSK for the examination, navigation, and creation of graphs by blind users. Based on the study results, we improved the efficiency of GSK for blind users. We plan more enhancements to help meet the need for accessible graph tools as articulated by the blind community.

Objects and interfaces

Usability issues with 3D user interfaces for adolescents with high functioning autism BIBAFull-Text 99-106
  Chao Mei; Lee Mason; John Quarles
Most literature on the usability of 3D user interfaces (3DUI) in ASD therapy consists of a series of case studies based on games for rehabilitation. These games have been largely successful. However, it is difficult to generalize the results of these specific case studies. The usability of atomic 3DUI interactions (e.g., rotation, translation) with respect to adolescents with ASD has not yet been evaluated. Adolescents with ASD often have enhanced spatial cognitive abilities and less efficient hand-eye coordination. Our main research question is "Do adolescents with ASD perform 3DUI tasks differently than typically developing adolescents and if so, why?" To address this question, we present a matched-pair user study including adolescents without ASD (i.e., as controls) paired with adolescents who had a high ASD severity score, but were still considered high functioning. Our results give insight into the usability of 3DUI for adolescents with ASD and provide generalizable guidelines for future 3DUI applications for children with autism.
ABC and 3D: opportunities and obstacles to 3D printing in special education environments BIBAFull-Text 107-114
  Erin Buehler; Shaun K. Kane; Amy Hurst
Consumer-grade digital fabrication such as 3D printing is on the rise, and we believe it can be leveraged to great benefit in the arena of special education. Although 3D printing is beginning to infiltrate mainstream education, little to no research has explored 3D printing in the context of students with special support needs. We present a formative study exploring the use of 3D printing at three locations serving populations with varying abilities, including individuals with cognitive, motor, and visual impairments. We found that 3D design and printing performs three functions in special education: developing 3D design and printing skills encourages STEM engagement; 3D printing can support the creation of educational aids for providing accessible curriculum content; and 3D printing can be used to create custom adaptive devices. In addition to revealing opportunities for students, faculty, and caregivers in their efforts to integrate 3D printing in special education settings, our investigation also revealed several concerns and challenges. We present our investigation at three diverse sites as a case study of 3D printing in the realm of special education, discuss obstacles to efficient 3D printing in this context, and offer suggestions for designers and technologists.

Interaction

Design of and subjective response to on-body input for people with visual impairments BIBAFull-Text 115-122
  Uran Oh; Leah Findlater
For users with visual impairments, who do not necessarily need the visual display of a mobile device, non-visual on-body interaction (e.g., Imaginary Interfaces) could provide accessible input in a mobile context. Such interaction provides the potential advantages of an always-available input surface, and increased tactile and proprioceptive feedback compared to a smooth touchscreen. To investigate preferences for and design of accessible on-body interaction, we conducted a study with 12 visually impaired participants. Participants evaluated five locations for on-body input and compared on-phone to on-hand interaction with one versus two hands. Our findings show that the least preferred areas were the face/neck and the forearm, while locations on the hands were considered to be more discreet and natural. The findings also suggest that participants may prioritize social acceptability over ease of use and physical comfort when assessing the feasibility of input at different locations of the body. Finally, tradeoffs were seen in preferences for touchscreen versus on-body input, with on-body input considered useful for contexts where one hand is busy (e.g., holding a cane or dog leash). We provide implications for the design of accessible on-body input.
Motor-impaired touchscreen interactions in the wild BIBAFull-Text 123-130
  Kyle Montague; Hugo Nicolau; Vicki L. Hanson
Touchscreens are pervasive in mainstream technologies; they offer novel user interfaces and exciting gestural interactions. However, to interpret and distinguish between the vast range of gestural inputs, the devices require users to consistently perform interactions in line with the predefined location, movement, and timing parameters of the gesture recognizers. For people with variable motor abilities, particularly hand tremors, performing these input gestures can be extremely challenging and can limit the possible interactions the user can make with the device. In this paper, we examine the touchscreen performance and interaction behaviors of motor-impaired users on mobile devices. The primary goal of this work is to measure and understand the variance of touchscreen interaction performance by people with motor impairments. We conducted a four-week in-the-wild user study with nine participants using a mobile touchscreen device. A Sudoku stimulus application measured their interaction performance abilities during this time. Our results show that not only does interaction performance vary significantly between users, but also that an individual's interaction abilities are significantly different between device sessions. Finally, we propose and evaluate the effect of novel tap gesture recognizers that accommodate individual variances in touchscreen interactions.
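The paper's recognizers are not specified in the abstract. As one illustration of accommodating individual variance, a tap recognizer can widen its movement and duration limits to fit statistics of the user's own touches; a minimal sketch, with all thresholds and data illustrative:

```python
import statistics

class PersonalizedTapRecognizer:
    """Classify a touch as a tap using per-user thresholds.

    A fixed recognizer might require movement < 10 px and duration < 300 ms;
    here the limits adapt to mean + 2 SD of the user's observed touches.
    """
    def __init__(self, history):  # history: [(movement_px, duration_ms), ...]
        moves = [m for m, d in history]
        durs = [d for m, d in history]
        self.max_move = statistics.mean(moves) + 2 * statistics.stdev(moves)
        self.max_dur = statistics.mean(durs) + 2 * statistics.stdev(durs)

    def is_tap(self, movement_px, duration_ms):
        return movement_px <= self.max_move and duration_ms <= self.max_dur

# Touches logged for a user with hand tremor.
history = [(14, 420), (22, 510), (9, 380), (18, 460), (25, 540)]
recognizer = PersonalizedTapRecognizer(history)
print(recognizer.is_tap(20, 500))  # True here; a fixed recognizer would reject
```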
Designing a text entry multimodal keypad for blind users of touchscreen mobile phones BIBAFull-Text 131-136
  Maria Claudia Buzzi; Marina Buzzi; Barbara Leporini; Amaury Trujillo
In this report, we share our experience and observations on the challenges blind people face with text entry on touch-based mobile phones, particularly from the perspective of one of the authors, who is blind. To better understand these issues we developed and tested Multimodal Text Input Touchscreen Keypad (MTITK), an audio-tactile text entry prototype based on multitap, which relies on a telephone keypad layout organized into five key groups with distinct audio-tactile feedback. Users explore the screen to identify the current selected key, tap to enter text, and gesture to edit it, while receiving the corresponding voice, audio, and tactile feedback; no additional equipment is necessary in our software-only approach. We implemented a prototype on Android and tested its usability with visually impaired participants; they welcomed its multimodality and the familiar layout, but also expressed the need to increase vibration pattern differentiation and refine the character selection mechanism.

Applications and games

BraillePlay: educational smartphone games for blind children BIBAFull-Text 137-144
  Lauren R. Milne; Cynthia L. Bennett; Richard E. Ladner; Shiri Azenkot
There are many educational smartphone games for children, but few are accessible to blind children. We present BraillePlay, a suite of accessible games for smartphones that teach Braille character encodings to promote Braille literacy. The BraillePlay games are based on VBraille, a method for displaying Braille characters on a smartphone. BraillePlay includes four games of varying levels of difficulty: VBReader and VBWriter simulate Braille flashcards, and VBHangman and VBGhost incorporate Braille character identification and recall into word games. We evaluated BraillePlay with a longitudinal study in the wild with eight blind children. Through logged usage data and extensive interviews, we found that all but one participant were able to play the games independently and found them enjoyable. We also found evidence that some children learned Braille concepts. We distill implications for the design of games for blind children and discuss lessons learned.
Tablet-based activity schedule for children with autism in mainstream environment BIBAFull-Text 145-152
  Charles Fage; Léonard Pommereau; Charles Consel; Émilie Balland; Hélène Sauzéon
Including children with Autism Spectrum Disorders (ASD) in mainstream environments creates a need for new interventions whose efficacy must be assessed in situ.
   This paper presents a tablet-based application for activity schedules that was designed following a participatory design approach involving mainstream teachers, special-education teachers, and school aides. The application addresses two domains of activities: classroom routines and verbal communications.
   We assessed the efficacy of our application in a study involving 10 children with ASD in mainstream inclusion (5 children equipped with the application and 5 not equipped). We show that (1) use of the application is rapidly self-initiated (within two months for almost all participants) and that (2) the tablet-supported routines are executed differently over time depending on the activity domain. Importantly, compared to the control children, the equipped children performed more classroom and communication routines correctly after three months of intervention.
A computer-based method to improve the spelling of children with dyslexia BIBAFull-Text 153-160
  Luz Rello; Clara Bayarri; Yolanda Otal; Martin Pielot
In this paper we present a method that aims to improve the spelling of children with dyslexia through playful and targeted exercises. In contrast to previous approaches, our method does not use correct words or positive examples to follow, but presents the child with a misspelled word as an exercise to solve. We created these training exercises on the basis of linguistic knowledge extracted from the errors found in texts written by children with dyslexia. To test the effectiveness of this method in Spanish, we integrated the exercises in a game for iPad, DysEggxia (Piruletras in Spanish), and carried out a within-subject experiment. Over eight weeks, 48 children played either DysEggxia or Word Search, another word game. We conducted tests and questionnaires at the beginning of the study, after four weeks when the games were switched, and at the end of the study. The children who played DysEggxia for four weeks in a row made significantly fewer writing errors in the tests than after playing Word Search for the same time. This provides evidence that error-based exercises presented on a tablet help children with dyslexia improve their spelling skills.
Design and evaluation of a networked game to support social connection of youth with cerebral palsy BIBAFull-Text 161-168
  Hamilton A. Hernandez; Mallory Ketcheson; Adrian Schneider; Zi Ye; Darcy Fehlings; Lauren Switzer; Virginia Wright; Shelly K. Bursick; Chad Richards; T. C. Nicholas Graham
Youth with cerebral palsy (CP) can experience social isolation, in part due to mobility limitations associated with CP. We show that networked video games can provide a venue for social interaction from the home. We address the question of how to design networked games that enhance social play among people with motor disabilities. We present Liberi, a networked game custom-designed for youth with CP. Liberi is designed to allow frictionless group formation, to balance for differences in player abilities, and to support a variety of play styles. A ten-week home-based study with ten participants showed the game to be effective in fostering social interaction among youth with CP.

Communication

Text-to-speeches: evaluating the perception of concurrent speech by blind people BIBAFull-Text 169-176
  João Guerreiro; Daniel Gonçalves
Over the years, screen readers have been an essential tool for assisting blind users in accessing digital information. Yet, their sequential nature undermines blind people's ability to efficiently find relevant information, despite the browsing strategies they have developed. We propose taking advantage of the Cocktail Party Effect, which states that people are able to focus on a single speech source among several conversations, but still identify relevant content in the background. In contrast to a single sequential speech channel, we hypothesize that blind people can leverage concurrent speech channels to quickly get the gist of digital information. In this paper, we present an experiment with 23 participants, which aims to understand blind people's ability to search for relevant content when listening to two, three, or four concurrent speech channels. Our results suggest that it is easy to identify the relevant source with two and three concurrent talkers. Moreover, both two and three sources may be used to understand the relevant source's content, depending on the task's intelligibility demands and user characteristics.
Analyzing the intelligibility of real-time mobile sign language video transmitted below recommended standards BIBAFull-Text 177-184
  Jessica J. Tran; Ben Flowers; Eve A. Riskin; Richard E. Ladner; Jacob O. Wobbrock
Mobile sign language video communication has the potential to be more accessible and affordable if the current recommended video transmission standard of 25 frames per second at 100 kilobits per second (kbps), as prescribed in the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Q.26/16, were relaxed. To investigate sign language video intelligibility at lower settings, we conducted a laboratory study where fluent ASL signers in pairs held real-time free-form conversations over an experimental smartphone app transmitting real-time video at 5 fps/25 kbps, 10 fps/50 kbps, 15 fps/75 kbps, and 30 fps/150 kbps, settings well below the ITU-T standard that save both bandwidth and battery life. The aim of the laboratory study was to investigate how fluent ASL signers adapt to the lower video transmission rates, and to identify a lower threshold at which intelligible real-time conversations could be held. We gathered both subjective and objective measures from participants and calculated battery life drain. As expected, reducing the frame rate/bit rate monotonically extended battery life. We discovered that all participants were successful in holding intelligible conversations across all frame rates/bit rates. Participants did perceive the lower quality of video transmitted at 5 fps/25 kbps and felt that they were signing more slowly to compensate; however, participants' rate of fingerspelling did not actually decrease. These and other findings support our recommendation that intelligible mobile sign language conversations can occur at frame rates as low as 10 fps/50 kbps while optimizing resource consumption, video intelligibility, and user preferences.
Enhancing caption accessibility through simultaneous multimodal information: visual-tactile captions BIBAFull-Text 185-192
  Raja S. Kushalnagar; Gary W. Behm; Joseph S. Stanislow; Vasu Gupta
Captions (subtitles) for television and movies have greatly enhanced accessibility for Deaf and hard of hearing (DHH) consumers who do not understand the audio, but can otherwise follow by reading the captions. However, these captions fail to fully convey auditory information, due to simultaneous delivery of aural and visual content, and lack of standardization in representing non-speech information.
   Viewers cannot simultaneously watch the movie scenes and read the visual captions; instead they have to switch between the two and inevitably lose information and context in watching the movies. In contrast, hearing viewers can simultaneously listen to the audio and watch the scenes.
   Most auditory non-speech information (NSI) is not easily represented by words, e.g., the description of a ring tone or the sound of something falling. We enhance captions with tactile and visual-tactile feedback. For the former, we transform auditory NSI into its equivalent tactile representation and convey it simultaneously with the captions. For the latter, we also visually identify the location of the NSI. This approach can benefit DHH viewers by conveying more aural content to the viewer's visual and tactile senses than visual-only captions. We conducted a study comparing DHH viewer responses to video with visual captions, tactile captions, and visual-tactile captions. The viewers significantly benefited from visual-tactile and tactile captions.
Age, technology usage, and cognitive characteristics in relation to perceived disorientation and reported website ease of use BIBAFull-Text 193-200
  Michael Crabb; Vicki L. Hanson
Comparative studies including older and younger adults are becoming more common in HCI, generally used to compare how these two age groups approach a task. However, it is unclear whether user "age" is the underlying factor that differentiates these two groups. To address this problem, we examine the relationship between users' age, previous technology experience, and cognitive characteristics. Measures of perceived disorientation and reported ease of use are used to understand the links between these user characteristics and their effect on browsing experience. This is achieved through a lab-based information retrieval task, in which participants visited a selection of websites to find answers to a series of questions and then self-reported their feelings of perceived disorientation and website ease of use through a Likert-scored questionnaire.
   The presented research found that age accounts for as little as 1% of user browsing experience when performing information retrieval tasks. Further, it showed that cognitive ability and previous technology experience significantly affected perceived disorientation in these searches. These results argue for the inclusion of metrics regarding cognitive ability and previous technology experience when analyzing user satisfaction and performance in Internet-based studies.

Mobility

The gest-rest: a pressure-sensitive chairable input pad for power wheelchair armrests BIBAFull-Text 201-208
  Patrick Carrington; Amy Hurst; Shaun K. Kane
Interacting with touch screen-based computing devices can be difficult for individuals with mobility impairments that affect their hands, arms, neck, or head. These problems may be especially difficult for power wheelchair users, as the frame of their wheelchair may obstruct their range of motion and reduce their ability to reach objects in the environment. The concept of chairable input devices refers to input devices that are designed to fit with the form of an individual's wheelchair, much like wearable technology fits with an individual's clothing. In this paper, we introduce a new chairable input device, the Gest-Rest, which provides a pressure-sensitive input surface that fits over a standard power wheelchair armrest. The Gest-Rest enables users to perform traditional touch screen gestures, such as press and flick, as well as pressure-based gestures such as squeezing and punching. The Gest-Rest enables multiple inputs, unlike most switches, and does not substantially change the shape of the wheelchair armrest. We present a formative evaluation in which nine wheelchair users and three clinicians tested multiple gestures using the Gest-Rest prototype, and provided recommendations for integrating the Gest-Rest with computing applications. Our study showed that our motor-impaired participants were each able to perform multiple gestures using the prototype, but had some difficulty with the pre-set sensitivity settings, and would thus benefit from a more robust gesture recognizer.
Accessibility in context: understanding the truly mobile experience of smartphone users with motor impairments BIBAFull-Text 209-216
  Maia Naftali; Leah Findlater
Lab-based studies on touchscreen use by people with motor impairments have identified both positive and negative impacts on accessibility. Little work, however, has moved beyond the lab to investigate the truly mobile experiences of users with motor impairments. We conducted two studies to investigate how smartphones are being used on a daily basis, what activities they enable, and what contextual challenges users are encountering. The first study was a small online survey with 16 respondents. The second study was much more in depth, including an initial interview, two weeks of diary entries, and a 3-hour contextual session that included neighborhood activities. Four expert smartphone users participated in the second study and we used a case study approach for analysis. Our findings highlight the ways in which smartphones are enabling everyday activities for people with motor impairments, particularly in overcoming physical accessibility challenges in the real world and supporting writing and reading. We also identified important situational impairments, such as the inability to retrieve the phone while in transit, and confirmed many lab-based findings in the real-world setting. We present design implications and directions for future work.
"just let the cane hit it": how the blind and sighted see navigation differently BIBAFull-Text 217-224
  Michele A. Williams; Caroline Galbraith; Shaun K. Kane; Amy Hurst
Sighted people often have the best of intentions when they want to help a blind person navigate, but their good will is often coupled with a lack of knowledge and understanding about how a person navigates without vision. As a result, what sighted people think is the right feedback is too often the wrong feedback to give to a person with a visual impairment. Understanding how to provide feedback to blind navigators is crucial to the design of assistive technologies for navigation. In our research investigating the design of a personal pedestrian navigation device, we observed firsthand the ways that sighted people seemingly misunderstand how many blind people navigate when using a white cane mobility aid. Throughout our qualitative end-user studies, which included focus groups and observations (including couple-based observations with a close companion), we gathered data that explicitly shows how greatly the language and understanding of sighted and blind pedestrians differ, and how dangerous it can be when people interfere in the wrong way. From our findings, we discuss why it is difficult for a blind person to navigate like a sighted person, so that designers are aware of the difficulties and design with this understanding in mind rather than simply from their own point of view. We also want to encourage advocacy and empathy in the sighted community toward this activity of walking around independently.
Path-guided indoor navigation for the visually impaired using minimal building retrofitting BIBAFull-Text 225-232
  Dhruv Jain
One of the common problems faced by visually impaired people is independent path-based mobility in an unfamiliar indoor environment. Existing systems either do not provide active guidance or are bulky and expensive, and hence not socially apt. In this paper, we present the design of an omnipresent, cellphone-based active indoor wayfinding system for the visually impaired. Our system provides step-by-step directions to the destination from any location in the building using minimal additional infrastructure. The carefully calibrated audio and vibration instructions and the small wearable device help the user navigate efficiently and unobtrusively. Results from a formative study with five visually impaired individuals informed the design of the system. We then deployed the system in a building and field-tested it with ten visually impaired users. The comparison of the quantitative and qualitative results demonstrated that the system is useful and usable, but can still be improved.

Poster abstracts

An accelerated scanning communication system with adaptive automatic error correction mechanism BIBAFull-Text 233-234
  Hiroki Mori
This paper describes a novel automatic error correction method for scanning communication whose mechanism is basically analogous to that of continuous speech recognition. It has two core components: one is the switch timing model, and the other is the statistical language model. By employing these models, the proposed system can estimate the most probable sequence of input syllables for a given sequence of switch timings, taking user characteristics into account. Thirteen subjects without disabilities and one subject with ALS participated in a text input experiment using the proposed scanning communication system. For the subject with ALS, the system improved the character correct rate from 77.7% to 97.7%, allowing dramatically faster input.
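In speech-recognition terms, the decoder searches for the syllable sequence that maximizes the product of a switch-timing likelihood and a language-model prior. A toy greedy decoder in Python under assumed models (the paper's timing and language models are richer; all probabilities below are made up):

```python
import math

def timing_likelihood(observed_ms, intended_ms, sigma=120.0):
    """Toy switch-timing model: Gaussian likelihood that a press at
    observed_ms targeted the syllable highlighted at intended_ms."""
    return math.exp(-((observed_ms - intended_ms) ** 2) / (2 * sigma ** 2))

# Toy bigram language model over syllables.
BIGRAM = {("<s>", "ko"): 0.4, ("<s>", "go"): 0.1,
          ("ko", "re"): 0.5, ("go", "re"): 0.2}

def decode(presses, candidates):
    """Greedy decode: for each press, pick the (syllable, highlight time)
    candidate maximizing timing likelihood x bigram prior."""
    prev, out = "<s>", []
    for observed_ms, options in zip(presses, candidates):
        best = max(options,
                   key=lambda o: timing_likelihood(observed_ms, o[1])
                                 * BIGRAM.get((prev, o[0]), 1e-6))
        out.append(best[0])
        prev = best[0]
    return out

# Each press could plausibly target either of two syllables highlighted
# near that moment; the language model breaks the timing ambiguity.
presses = [980, 2120]
candidates = [[("ko", 1000), ("go", 900)], [("re", 2100), ("ru", 2300)]]
print(decode(presses, candidates))  # ['ko', 're']
```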
Accessible mHealth for patients with dexterity impairments BIBAFull-Text 235-236
  Daihua X. Yu; Bambang Parmanto; Brad E. Dicianno; Valerie J. Watzlaf; Katherine D. Seelman
A novel mobile health (mHealth) system to support self-care and adherence to self-care regimens for patients with chronic disease, called iMHere (interactive mobile health and rehabilitation), has been developed at the University of Pittsburgh. However, the existing design of the iMHere system was initially not suitable for users with a myriad of dexterity impairments. The goal of this study was to redesign the iMHere system so that it is usable and accessible for this particular group of users. We approached accessibility through two essential components of user interface design and development: physical presentation and navigation. Six participants with dexterity impairments were included in this study. User satisfaction significantly improved when using the revised apps (P<0.01). The importance of personalization for accessibility and the identified strategies for approaching personalization in the mHealth system are discussed.
An accessible robotics programming environment for visually impaired users BIBAFull-Text 237-238
  Stephanie Ludi; Lindsey Ellis; Scott Jordan
Despite advances in assistive technology, challenges remain in pre-college computer science outreach and university programs for visually impaired students. The use of robotics has been popular in pre-college classrooms and outreach programs, including those that serve underrepresented groups. This paper describes the specific accessibility features implemented in software that provides an accessible Lego Mindstorms NXT programming environment for teenage students who are visually impaired. JBrick is designed to support students with diverse visual acuity who use a range of assistive technology. Field tests over several days showed that JBrick has the potential to accommodate students who are visually impaired as they work together to program Lego Mindstorms NXT robots.
Assessing technology use in aphasia BIBAFull-Text 239-240
  Abi Roper; Jane Marshall; Stephanie M. Wilson
We report on a novel and accessible questionnaire designed to examine levels of technology use in adults with severe aphasia and to assess the impact of a co-designed, computer-delivered gesture therapy (GeST) on participants' wider technology use. The questionnaire is currently being used in a group study of 30 participants with severe aphasia. Early outcomes indicate that people with severe aphasia are able to use the questionnaire effectively to report levels of technology use. Data from 11 participants suggest low levels of use for many items of everyday technology prior to therapy. Ongoing work will further examine the effect of GeST therapy on individuals' reported technology use and examine correlations between questionnaire outcomes and three other factors: performance on measures of cognition, the amount of GeST use, and the diversity of GeST use.
Automatically identifying trouble-indicating speech behaviors in alzheimer's disease BIBAFull-Text 241-242
  Frank Rudzicz; Leila Chan Currie; Andrew Danks; Tejas Mehta; Shunan Zhao
Alzheimer's disease (AD) deteriorates executive, linguistic, and functional capacity and is rapidly becoming more prevalent. In particular, AD leads to an inability to follow simple dialogues. In this paper, we annotate trouble-indicating behaviors (TIBs) in two databases of dyad conversations involving individuals with AD. We then extract lexical/syntactic and acoustic features from all utterances, identify those that are most indicative of TIB (including speech rate and utterance likelihoods under a standard language model), and classify utterances as having TIB or not with up to 79.5% accuracy. This will allow us to build automated dialogue systems and assessment tools that are sensitive to confusion in people with AD.
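The abstract names two feature types (e.g., speech rate, language-model likelihood) feeding a binary classifier. A minimal sketch of that pipeline with scikit-learn; the features, data, and labels below are toy stand-ins for the annotated corpora:

```python
from sklearn.linear_model import LogisticRegression

# Toy per-utterance features: [speech rate (syllables/s), LM log-likelihood].
X = [[3.8, -42.0], [1.2, -95.0], [4.1, -38.5],
     [0.9, -110.0], [3.5, -50.0], [1.5, -88.0]]
y = [0, 1, 0, 1, 0, 1]  # 1 = utterance annotated as trouble-indicating

clf = LogisticRegression().fit(X, y)
# Slow, low-likelihood speech is flagged as a trouble-indicating behavior.
print(clf.predict([[1.1, -100.0]]))  # [1]
```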
A braille writing training device with voice feedback BIBAFull-Text 243-244
  Fumihito Aizawa; Tetsuya Watanabe
A device incorporating both voice feedback and tactile sensation has been developed to help blind children practice the use of a Braille stylus. A microcomputer and a voice-synthesis LSI chip are used to give voice feedback, and a switch under each hole representing a Braille dot is used to give tactile feedback when the stylus is inserted in a hole. The user presses a switch to hear the voice feedback. A one-cell prototype was used to gather feedback from potential users. Four improvements were identified from this feedback and applied to a six-cell version of the device. Testing of the six-cell device by six school children revealed that, although the device received positive scores for four aspects, further improvements are necessary.
Color-via-pattern: distinguishing colors of confusion without affecting perceived brightness BIBAFull-Text 245-246
  Matthew Herbst; Bo Brinkman
In this poster we describe pilot work on a new visualization technique, Color-via-Pattern (CvP), to help individuals with color deficiency distinguish between colors of confusion, as well as correctly determine relative perceived brightness among all colors. Existing assistive technologies tend to distort the hue and brightness of colors. CvP is designed to address this flaw while being just as effective. Human subject testing was performed to evaluate our approach.
Designing exergames combining the use of fine and gross motor exercises to support self-care activities BIBAFull-Text 247-248
  Karina Caro; Ana I. Martínez-García; Mónica Tentori; Iván Zavala-Ibarra
Motor coordination problems are common in different developmental disorders, including autism and dyspraxia. Gross and fine motor coordination skills are critical to the appropriate motor coordination development that is relevant to supporting individuals' independence. Exergames are a good tool to help children practice motor skills, as they find them engaging. In this work, we present how FroggyBobby, an exergame designed for practicing gross motor coordination skills, can be extended to combine gross and fine motor exercises to support children with motor problems in practicing self-care activities that require motor coordination.
Development of accessible toolset to enhance social interaction opportunities for people with cerebral palsy in India BIBAFull-Text 249-250
  Manjira Sinha; Tirthankar Dasgupta; Anupam Basu
In this paper we present a toolset that allows people with severe spastic cerebral palsy (CP) and highly restricted motor movement skills to access popular social networking and communication media such as Facebook and e-mail. To understand the requirements of the intended users, we performed a number of surveys that served as the basis of our system design. The developed tools use a special access-switch-based scanning technique for easy navigation in different applications. We have evaluated the toolset with six target users. The preliminary results demonstrate a positive response.
Development of tactile graph generation software using the R statistics software environment BIBAFull-Text 251-252
  Kosuke Araki; Tetsuya Watanabe; Kazunori Minatani
We have worked on the development of software that uses the R statistics software environment to automatically generate tactile graphs -- i.e., graphs that can be read by blind people using their sense of touch. We released this software as a Web application to make it available to anyone, from anywhere. This Web application can automatically generate images for tactile graphs from numerical data in a CSV file. This report discusses the Web application's functions and operating procedures.
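The released tool builds on R's graphics; as a language-neutral illustration of the same pipeline (numeric CSV in, high-contrast line image out for embossing), here is a minimal matplotlib sketch, assuming a hypothetical data.csv with x and y columns and using illustrative stroke widths:

```python
import csv
import matplotlib.pyplot as plt

# Read (x, y) pairs from a CSV file with a header row.
with open("data.csv") as f:
    rows = list(csv.DictReader(f))
x = [float(r["x"]) for r in rows]
y = [float(r["y"]) for r in rows]

# Thick black strokes and oversized ticks emboss more reliably than defaults.
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x, y, linewidth=6, color="black")
ax.tick_params(width=3, length=12, labelsize=20)
for spine in ax.spines.values():
    spine.set_linewidth(3)
fig.savefig("tactile_graph.png", dpi=300)  # sent to an embosser or swell paper
```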
Digital tools for physically impaired visual artists BIBAFull-Text 253-254
  Chris Creed; Russell Beale; Paula Dower
We present work-in-progress that is exploring the potential for visual artists with physical impairments to use new non-intrusive mid-air gesturing sensors to enhance and extend their practice. We highlight the key results from an initial informal user evaluation with two disabled and two non-disabled visual artists examining use of the Leap Motion sensor as an artistic tool. Future work will explore how related technologies can be better utilized to support disabled artists in their practice.
Enhancing multi-touch table accessibility for wheelchair users BIBAFull-Text 255-256
  Chris Creed; Russell Beale
Wheelchair users can find accessing digital content on large multi-touch tables particularly difficult and frustrating due to their limited reach. We present work in progress that is exploring the potential of enhancing touch table accessibility through the use of mid-air gesturing technology. An overview of an experimental prototype is provided along with the key findings from an evaluation conducted with fifteen wheelchair users at a public library and heritage centre.
Evaluating the accessibility of crowdsourcing tasks on Amazon's mechanical turk BIBAFull-Text 257-258
  Rocío Calvo; Shaun K. Kane; Amy Hurst
Crowd work web sites such as Amazon Mechanical Turk enable individuals to work from home, which may be useful for people with disabilities. However, the web sites for finding and performing crowd work tasks must be accessible if people with disabilities are to use them. We performed a heuristic analysis of one crowd work site, Amazon's Mechanical Turk, using the Web Content Accessibility Guidelines 2.0. This paper presents the accessibility problems identified in our analysis and offers suggestions for making crowd work platforms more accessible.
Gestures with speech for hand-impaired persons BIBAFull-Text 259-260
  Darren Guinness; G. Michael Poor; Alvin Jude
Mid-air hand-gestural interaction generally causes fatigue, due to implementations that require the user to hold their arm out during the interaction. Recent research has introduced an approach that reduces fatigue in gestural interaction by allowing users to rest their elbow on a surface and calibrate their interaction space from this rested position [1]. Additionally, this approach reduced stress on the hand and wrist compared to the mouse by shifting much of the load to the forearm and shoulder muscles. In this paper we evaluated gesture-and-speech multimodal interaction as a form of assistive interaction for those with hand impairments. Two participants with hand impairments were recruited to perform the evaluation. We collected qualitative and quantitative data, which showed promising results for using this method for assistive interaction.
Implementation and evaluation of animation controls sufficient for conveying ASL facial expressions BIBAFull-Text 261-262
  Hernisa Kacorri; Matt Huenerfauth
Technology to automatically synthesize linguistically accurate and natural-looking animations of American Sign Language (ASL) from an easy-to-update script would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf. We are investigating the synthesis of ASL facial expressions, which are grammatically required and essential to the meaning of sentences. To support this research, we have enhanced a virtual human character with face controls following the MPEG-4 Facial Animation Parameter standard. In a user study, we determined that these controls were sufficient for conveying understandable animations of facial expressions.
Increasing the bandwidth of crowdsourced visual question answering to better support blind users BIBAFull-Text 263-264
  Walter S. Lasecki; Yu Zhong; Jeffrey P. Bigham
Many of the visual questions that blind people ask cannot be easily answered with a single image or a short response, especially when questions are of an exploratory nature, e.g., what is in this area, or what tools are available on this workbench? We introduce RegionSpeak to allow blind users to capture large areas of visual information, identify all of the objects within them, and explore their spatial layout with fewer interactions. RegionSpeak helps blind users capture all of the relevant visual information using an interface designed to support stitching multiple images together. We use a parallel crowdsourcing workflow that asks workers to define and describe regions of interest, allowing even complex images to be described quickly. The regions and descriptions are displayed on an auditory touchscreen interface, allowing users to learn what is in a scene and how it is laid out.
Moving from entrenched structure to a universal design BIBAFull-Text 265-266
  Leyla Zhuhadar; Bryan Carson; Jerry Daday
Section 508 of the Rehabilitation Act [1] requires electronic and information technology communications to be made available in an accessible format, by alternative means, perceptible by people with disabilities. These provisions apply to all entities, including colleges and universities, that receive Federal money (29 U.S.C. § 794). The Accessible Educational STEM Videos Project aims to transform learning and teaching for students with disabilities by integrating synchronized captioned educational videos into undergraduate and graduate STEM disciplines. This Universal Video Captioning (UVC) platform will serve as a repository for uploading videos and scripts. The proposed infrastructure is a web-based platform that uses the latest WebDAV technology (Web-based Distributed Authoring and Versioning) to identify resources, users, contents, etc. It consists of three layers: i) an administrative management system; ii) a faculty/staff user interface; and iii) a transcriber user interface.
"OK glass?": a preliminary exploration of Google Glass BIBAFull-Text 267-268
  Meethu Malu; Leah Findlater
Head-mounted displays such as Google Glass offer potential advantages for persons with motor impairments (MI). For example, they are always available and offer relatively hands-free interaction compared to a mobile phone. Despite this potential, there is little prior work examining the accessibility of such devices. In this poster paper, we perform a preliminary assessment of the accessibility of Google Glass for users with MI and the potential impacts of a head-mounted interactive computer. Our findings show that, while the touchpad is particularly difficult to use (impossible for three participants), advantages over a phone include that Glass is relatively hands-free, does not require looking down at the display, and cannot be easily dropped.
Older adults interaction with broadcast debates BIBAFull-Text 269-270
  Rolando Medellin-Gasque; Chris Reed; Vicki Hanson
The constant emergence and change of technologies in the form of digital products and services can cause certain groups of the population to feel excluded. Older adults represent one such group. Our research combines computational models of argument and human-centric computing to change the way in which older adults interact with broadcast debates. We present a preliminary user study in which older adults interact with a debate, and propose an application that uses speech recognition to classify spoken utterances and relate them to segmented debates. Moreover, we discuss preliminary results from pilot experiments in which older adults interacted with the application.
Online learning system for teaching basic skills to people with developmental disabilities BIBAFull-Text 271-272
  Luke Buschmann; Lourdes Morales; Sri Kurniawan
We report on an online learning system for adults with developmental disabilities (DD), developed in collaboration with Imagine!, a Colorado-based organization that provides support services to people with developmental and cognitive disabilities. Our HTML5 online application includes lessons to teach adults with DD of all ages about numbers, letters, and currency. We implemented the application on an iPad to take advantage of the simplicity of touch-based interactions. Our preliminary user evaluation suggests that the system is well received by its intended users and, unlike competitor systems that teach basic skills, is not considered childish or boring.
Pilot evaluation of a path-guided indoor navigation system for visually impaired in a public museum BIBAFull-Text 273-274
  Dhruv Jain
One of the common problems faced by visually impaired people is independent path-based mobility in an unfamiliar indoor environment. Existing systems either do not provide active guidance or are bulky and expensive, and hence not socially apt. Consequently, no system has found wide-scale deployment in a public place. Our system is an omnipresent, cellphone-based indoor wayfinding system for the visually impaired. It provides step-by-step directions to the destination from any location in the building using minimal additional infrastructure. The carefully calibrated audio and vibration instructions and the small wearable device help the user navigate efficiently and unobtrusively. In this paper, we present the results from pilot testing of the system with one visually impaired user in a national science museum.
Tactile aids for visually impaired graphical design education BIBAFull-Text 275-276
  Samantha McDonald; Joshua Dutterer; Ali Abdolrahmani; Shaun K. Kane; Amy Hurst
In this demonstration, we describe our exploration in making graphic design theory accessible to a visually impaired student with the use of rapid prototyping tools. We created over 10 novel aids with a laser cutter and 3D printer to demonstrate tangible examples of color theory, typefaces, web page layouts, and web design. These tactile aids were inexpensive and fabricated in a relatively short amount of time, suggesting the feasibility of our approach. The participant's feedback indicated an increased understanding of the class material and confirmed the potential of tactile aids and rapid prototyping in an educational environment.
Tongue-able interfaces: evaluating techniques for a camera based tongue gesture input system BIBAFull-Text 277-278
  Shuo Niu; Li Liu; D. Scott McCrickard
Tongue-computer interaction techniques create a new pathway between the human and the computer, with particular utility for people with upper limb impairment. This study investigated the usability problems of a camera-based tongue-computer interface as reflected in user behavior and participants' feedback, specifically exploring referential techniques that make users aware of their tongue position and help them adjust their gestures. Pros and cons of the referential strategies are discussed to inform future assistive tongue-computer interface design.
Tracking mental engagement: a tool for young people with ADD and ADHD BIBAFull-Text 279-280
  Robert Beaton; Ryan Merkel; Jayanth Prathipati; Andrew Weckstein; Scott McCrickard
This paper describes a reflective mobile application to help young people with ADD and ADHD better understand their engagement levels during their daily tasks. The mobile application, paired with an electroencephalographic (EEG) device, collects data about user-specific task engagement and pairs it with geographic and temporal data to provide insights into the degree to which a user is engaged in tasks based on time and location. The paper describes two mobile prototypes developed to investigate the problem space of contextual visualizations of engagement information, and it presents future directions for development and study.
Using computer vision to access appliance displays BIBAFull-Text 281-282
  Giovanni Fusco; Ender Tekin; Richard E. Ladner; James M. Coughlan
People who are blind or visually impaired face difficulties accessing a growing array of everyday appliances needed to perform a variety of daily activities, because these appliances are equipped with electronic displays. To address this problem, we are developing a "Display Reader" smartphone app, which uses computer vision to help a user acquire a usable image of a display. The current prototype analyzes video from the smartphone's camera, providing real-time feedback to guide the user until a satisfactory image is acquired, based on automatic estimates of image blur and glare. Formative studies were conducted with several blind and visually impaired participants, whose feedback is guiding the development of the user interface. The prototype software has been released as a Free and Open Source Software (FOSS) project.
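   The abstract does not specify how blur and glare are estimated; a common proxy for the former is the variance of the Laplacian, and for the latter the fraction of saturated pixels. The sketch below, with assumed thresholds, shows how such estimates could drive the real-time guidance the abstract describes.

```python
# Illustrative image-quality check (not Display Reader's actual
# estimators): Laplacian variance as a sharpness proxy and the
# saturated-pixel fraction as a glare proxy.
import cv2
import numpy as np

def frame_feedback(frame_bgr, blur_thresh=100.0, glare_thresh=0.02):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance -> blurry
    glare = float(np.mean(gray >= 250))                # fraction of blown-out pixels
    if sharpness < blur_thresh:
        return "hold steady"     # too blurry to read the display
    if glare > glare_thresh:
        return "tilt the phone"  # strong reflection on the display
    return "ok"                  # satisfactory image acquired
```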
Visually impaired orientation techniques in unfamiliar indoor environments: a user study BIBAFull-Text 283-284
  Abdulrhman A. Alkhanifer; Stephanie Ludi
Individuals who are visually impaired encounter a number of challenges when attempting to determine their own position, or the position of others relative to them, within an unfamiliar indoor environment. To design an orientation assistive technology, it is crucial to understand the factors that reduce the user's sense of orientation. In this work, we discuss the disorientation factors that emerged from three user studies conducted to formulate the basic requirements for an orientation assistive technology for visually impaired individuals in unfamiliar indoor areas. Using feedback elicited from one survey and two interview studies, we shed light on the factors, such as noise and traffic levels, that reduce the user's sense of orientation in unfamiliar buildings.
Web browsing interface for people with severe speech and motor impairment in India BIBAFull-Text 285-286
  Tirthankar Dasgupta; Manjira Sinha; Anupam Basu
We present the design and development of a web browser that allows easy access to information on the World Wide Web for people with cerebral palsy in India. Our focus user group comprises people with a severe form of spastic cerebral palsy and highly restricted motor movement skills. Throughout the development process we interacted with the target users to understand their requirements and to get design advice. The browser is augmented with an intelligent auto-scanning mechanism through which the web contents and browser GUI controls can be accessed with less time and effort. We have field-tested the browser with the target users, and preliminary evaluation results suggest that the proposed browser is quite effective in terms of task execution time, cognitive effort, and overall usability.

Demonstration abstracts

AVD-LV: an accessible player for captioned STEM videos BIBAFull-Text 287-288
  Raja S. Kushalnagar; John J. Rivera; Warrance Yu; Daniel S. Steed
The Americans with Disabilities Act requires online lecture creators to caption the videos for deaf and hard of hearing students, or for deaf and low vision (DLV) students who request these accommodations. While current captioned lecture video interfaces are usually accessible to deaf students, it is more challenging to provide full accessibility to DLV viewers who have restricted vision, as they cannot see both the lecture and captions simultaneously. We present an enhanced interface for YouTube lectures (Accessible View Device interface for Low Vision) that provides more accessibility for DLV viewers. This interface provides the ability to pause either the video or the captions with a single key-press, so that the viewer can follow simultaneous audio and video information. This interface is available to anyone and can be used with any captioned lecture on YouTube.
Building keyboard accessible drag and drop BIBAFull-Text 289-290
  Rucha Somani; Jiahang Xin; Bijay Bhaskar Deo; Yun Huang
Drag and Drop (DnD) web design is widely used by e-learning systems. However, it can take a lot of effort for web developers who have limited knowledge of web accessibility to build complex keyboard-accessible DnD components. In this demo, we present our conceptual design of keyboard-accessible DnD and explain how web developers can leverage the design to implement their own pages. We further discuss how to extend this design to enable different DnD scenarios.
Coming to grips: 3D printing for accessibility BIBAFull-Text 291-292
  Erin Buehler; Amy Hurst; Megan Hofmann
In this demonstration, we discuss a case study involving a student with limited hand motor ability and the process of exploring consumer grade, Do-It-Yourself (DIY) technology in order to create a viable assistive solution. This paper extends our previous research into DIY tools in special education settings [1] and presents the development of a unique tool, GripFab, for creating 3D-printed custom handgrips. We offer a description of the design process for a handgrip, explain the motivation behind the creation of GripFab, and explain current and planned features of this tool.
Evaluation of non-visual panning operations using touch-screen devices BIBAFull-Text 293-294
  HariPrasath Palani; Nicholas A. Giudice
This paper summarizes the implementation, evaluation, and usability of non-visual panning operations for accessing graphics rendered on touch-screen devices. Four novel non-visual panning techniques were implemented and experimentally evaluated on our prototype, called a Vibro-Audio Interface (VAI), which provides completely non-visual access to graphical information using vibration, audio, and kinesthetic cues on a commercial touch-screen device. This demonstration will provide an overview of our system's functionality and will discuss the necessity of developing non-visual panning operations that enable visually impaired people to access large-format graphics (such as maps and floor plans).
Expression: a google glass based assistive solution for social signal processing BIBAFull-Text 295-296
  ASM Iftekhar Anam; Shahinur Alam; Mohammed Yeasin
Assistive technology solutions can help protect people with disabilities from social isolation and depression, facilitate social interaction, and enhance quality of life. Limited access to non-verbal cues hinders the social interaction of people who are blind or visually impaired. This paper presents 'Expression', an integrated assistive solution using Google Glass. The key function of the system is to enable the user to perceive social signals during a natural face-to-face conversation. Subjective evaluation of Expression was performed using a five-point Likert scale, and it was rated excellent (4.38 out of 5).
A google glass app to help the blind in small talk BIBAFull-Text 297-298
  Mohammad Iftekhar Tanveer; Mohammed Ehsan Hoque
In this paper we present a wearable prototype that can automatically recognize social cues, such as the number of people present and their age and gender distributions, given an image. We customize the prototype in the context of helping people with visual impairments better navigate social scenarios. Running an experiment to validate this technology in social scenarios remains part of our future work.
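   The recognition pipeline is not detailed in the abstract; as a minimal stand-in for one of the cues mentioned, the sketch below counts faces with an off-the-shelf OpenCV cascade (age and gender estimation would require additional models).

```python
# Counting people in view with a stock Haar cascade; a stand-in for the
# prototype's unspecified detector.
import cv2

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_people(image_path):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)
```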
An immersive physical therapy game for stroke survivors BIBAFull-Text 299-300
  Conor Kaminer; Kevin LeBras; Jordan McCall; Tan Phan; Paul Naud; Mircea Teodorescu; Sri Kurniawan
We report a system for gamifying physical therapy for stroke survivors that provides a fully immersive 3D environment. The system consists of a Kinect, an Oculus Rift immersive goggle, and a pair of haptic gloves. The Kinect provides bodily gesture input for the game and records data for assessing therapy progress with respect to body posture. The haptic gloves operate as a controller for the player's in-game hands and record data on therapy progress with respect to finger flexibility.
Immersive simulation of visual impairments using a wearable see-through display BIBAFull-Text 301-302
  Halim Cagri Ates; Alexander Fiannaca; Eelke Folmer
Simulation of a visual impairment may lead to a better understanding of how individuals with visual impairments perceive the world around them, and could be a useful design tool for interface designers seeking to identify accessibility barriers. Current simulation tools, however, suffer from a number of limitations pertaining to cost, accuracy, and immersion. We present a simulation tool (SimViz) that mounts a wide-angle camera on a head-mounted display to create a see-through stereoscopic display that simulates various types and levels of visual impairment. SimViz enables quick accessibility inspections during iterative software development.
Legion scribe: real-time captioning by non-experts BIBAFull-Text 303-304
  Walter S. Lasecki; Raja Kushalnagar; Jeffrey P. Bigham
The promise of affordable, automatic approaches to real-time captioning imagines a future in which deaf and hard of hearing (DHH) users have immediate access to the speech around them by simply picking up their phone or other mobile device. While the challenge of processing highly variable natural language has prevented automated approaches from completing this task reliably enough for use in settings such as classrooms or workplaces [4], recent work on crowd-powered approaches has allowed groups of non-expert captionists to provide a similarly flexible source of captions for DHH users. This is in contrast to current human-powered approaches, which use highly trained professional captionists who can type up to 250 words per minute (WPM) but can also cost over $100/hr. In this paper, we describe a real-time demo of Legion:Scribe (or just "Scribe"), a crowd-powered captioning system that allows untrained participants and volunteers to provide reliable captions with less than 5 seconds of latency by computationally merging their input into a single collective answer that is more accurate and more complete than any one worker could have generated alone.
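   Scribe's actual merging algorithm is based on aligning the workers' partial inputs; as a much-simplified illustration of the idea of combining overlapping streams, the sketch below takes a majority vote over the words typed within each short time window.

```python
# Toy merge of several partial caption streams: bucket words by time and
# keep the most common word per bucket. Scribe's real alignment-based
# merging is considerably more sophisticated.
from collections import Counter

def merge_captions(worker_streams, window=1.0):
    """worker_streams: one [(timestamp_seconds, word), ...] list per worker."""
    buckets = {}
    for stream in worker_streams:
        for t, word in stream:
            buckets.setdefault(int(t // window), []).append(word.lower())
    merged = [Counter(words).most_common(1)[0][0]
              for _, words in sorted(buckets.items())]
    return " ".join(merged)
```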
littleBits go LARGE: making electronics more accessible to people with learning disabilities BIBAFull-Text 305-306
  Nicholas D. Hollinworth; Faustina Hwang; Kate Allen; Gosia Kwiatkowska; Andy Minnion
The "littleBits go LARGE" project extends littleBits electronic modules, an existing product that is aimed at simplifying electronics for a wide range of audiences. In this project we augment the littleBits modules to make them more accessible to people with learning disabilities. We will demonstrate how we have made the modules easier to handle and manipulate physically, and how we are augmenting the design of the modules to make their functions more obvious and understandable.
M.I.D.A.S. touch: magnetic interactive device for alternative sight through touch BIBAFull-Text 307-308
  Taylan K. Sen; Morgan W. Sinko; Alex T. Wilson; Mohammed E. Hoque
This paper describes the development of a working prototype of a novel free-space haptic human-computer interface called MIDAS Touch. MIDAS Touch works by applying physical forces to a user's finger through the production of a dynamic magnetic field. The magnetic field strength is adjusted in real time based on the movement of the user's finger. Hand and finger motion in the real world is mapped to movement of a virtual finger in a virtual world using a Leap Motion 3D tracking sensor. As the virtual finger collides with objects in the virtual world, the magnetic field strength is varied. In this demo, we present MIDAS Touch coupled to a standard PC as a drawing viewer and drawing application that helps individuals with visual impairment feel what they or others have drawn.
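   As a sketch of the feedback loop described above, the function below maps the virtual finger's penetration into a virtual surface to a drive level for the electromagnet; the linear stiffness model and duty-cycle output are assumptions, and the tracking and hardware calls are omitted.

```python
# Map penetration depth below a virtual surface to electromagnet drive
# strength (assumed linear "stiffness" model; units are illustrative).
def magnet_strength(finger_z_mm, surface_z_mm, stiffness=0.05, max_duty=1.0):
    depth = surface_z_mm - finger_z_mm   # positive once the finger is "inside"
    if depth <= 0:
        return 0.0                       # no contact, no force
    return min(max_duty, stiffness * depth)
```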
Motion history to improve communication and switch access for people with severe and multiple disabilities BIBAFull-Text 309-310
  Guang Yang; Mamoru Iwabuchi; Rumi Hirabayashi; Kenryu Nakamura; Kimihiko Taniguchi; Syoudai Sano; Takamitsu Aoki
In this study, a computer-vision-based technique called Motion History, which visualizes the history of a user's movement, was applied to support communication and switch access for people with severe and multiple disabilities. Seven non-speaking children with severe physical and intellectual disabilities participated in the study, and Motion History successfully helped to investigate their voluntary movement and cognition. Based on feedback from the study, a new system was developed that uses the built-in camera of a tablet PC to observe Motion History, making the system easier and more portable to use. One feature of the system converts recognized body movement into a switch control, with a good switch fitting automatically established based on the motion history.
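   A motion-history image can be maintained with nothing more than frame differencing and decay, as in the sketch below (OpenCV's motion-templates module offers a ready-made equivalent; the thresholds here are assumptions).

```python
# Hand-rolled motion-history update: each pixel stores how recently it
# moved, fading toward zero over time.
import numpy as np

def update_motion_history(history, prev_gray, cur_gray, thresh=30, decay=8):
    moved = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16)) > thresh
    faded = np.maximum(history.astype(np.int16) - decay, 0)  # fade old motion
    faded[moved] = 255                                       # stamp fresh motion
    return faded.astype(np.uint8)
```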
Move&flick: design and evaluation of a single-finger and eyes-free kana-character entry method on touch screens BIBAFull-Text 311-312
  Ryosuke Aoki; Ryo Hashimoto; Akihiro Miyata; Shunichi Seko; Masahiro Watanabe; Masayuki Ihara
We introduce a Japanese Kana-character entry method on touch screens for visually impaired people. Our proposal, "Move&Flick," allows the user to move a single finger in any of eight directions, twice, without lifting the finger from the screen; lifting the finger confirms the character. The method uses radial areas with dead zones to detect each of the eight movement directions. An experiment shows that the dead zones and voice feedback let the user acquire the proper finger directions in an easy learning process. The method also has an algorithm to correctly detect the change point from the first movement direction to the second. We evaluated Move&Flick in an experiment with visually impaired subjects and confirmed that it works well.
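   The radial areas with dead zones amount to classifying a movement vector into one of eight sectors and rejecting angles that fall between them. A sketch, with assumed sector and dead-zone widths:

```python
# Classify a finger movement into one of eight directions, returning None
# inside the dead zones between sectors (widths are assumptions).
import math

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def classify_direction(dx, dy, sector_width_deg=35.0):
    angle = math.degrees(math.atan2(-dy, dx)) % 360  # screen y grows downward
    for i, label in enumerate(DIRECTIONS):
        center = i * 45.0
        diff = abs((angle - center + 180) % 360 - 180)  # wrap-safe distance
        if diff <= sector_width_deg / 2:
            return label
    return None  # dead zone: ask the user to keep moving
```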
New tools for automating tactile geographic map translation BIBAFull-Text 313-314
  Nizar Bouhlel; Anis Rojbi
We present a new software program that converts a geographic map given in a formatted image file to a tactile form suitable for blind students. The software is designed to semi-automate the translation from visual maps to tactile versions, and to help tactile graphics specialists to be faster and more efficient in producing the tactile geographic map. The developed tools cover a wide variety of image processing techniques, optical character recognition and Braille translation.
OS-level surface haptics for touch-screen accessibility BIBAFull-Text 315-316
  Suhong Jin; Joe Mullenbach; Craig Shultz; J. Edward Colgate; Anne Marie Piper
The TPad Tablet combines an Android tablet with a variable-friction haptic touch screen and offers many novel interaction possibilities. For example, unique textures may be associated with different user interface elements, such as text boxes and buttons. This paper presents an Android AccessibilityService that was created to give operating-system-wide (OS-wide) access to haptic effects. Prior to this work, the haptic feedback of the TPad could be controlled only from within specific applications. With the new implementation, all applications and primary user interfaces (e.g., the home screen) have access to the TPad. Rather than focus on specific elements or applications, we seek to provide a high-fidelity haptic experience that elevates the TPad's accessibility to the standard of TalkBack and VoiceOver, Android's and Apple's accessibility programs, respectively. The code for the application is available on our website.
Real-time caption challenge: C-print BIBAFull-Text 317-318
  Michael S. Stinson; Pamela Francis; Lisa B. Elliot; Donna Easton
This poster/demonstration session showcases C-Print, a typing-based transcription system. This form of real-time captioning will be provided for approximately one half day during the ASSETS 2014 Conference and will be part of a real-time caption challenge. The C-Print system requires a trained transcriptionist who uses computerized abbreviations and condensing strategies to produce the text display of spoken information. This spoken information appears as text on a computer or mobile device for viewing by the consumer approximately two seconds later.
SpeechOmeter: heads-up monitoring to improve speech clarity BIBAFull-Text 319-320
  Mansoor Pervaiz; Rupal Patel
Individuals with neuromotor speech disorders due to conditions such as multiple sclerosis, Parkinson's disease, and cerebral palsy have soft and slurred speech. These individuals receive speech training to increase vocal loudness and to speak slowly and clearly. Although successful in clinical settings, generalizing these techniques to daily conversation requires technological innovation. To address this issue we designed SpeechOmeter, a Google Glass application that provides unobtrusive real-time visual feedback on vocal loudness relative to the ambient noise level. The system also provides clinicians with treatment adherence and performance statistics in order to further personalize speech training regimes. In a longitudinal usability study, 12 individuals with MS increased vocal loudness when provided with feedback. A live demonstration of SpeechOmeter will enable attendees to experience the system.
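   The feedback rule implied by the abstract, comparing speech level against ambient noise, can be sketched as follows (the 10 dB margin is an assumption, not the system's calibrated value):

```python
# Compare the RMS level of the current speech frame with an ambient-noise
# estimate and decide whether to prompt the speaker.
import numpy as np

def loudness_feedback(speech_frame, ambient_rms, margin_db=10.0):
    rms = float(np.sqrt(np.mean(np.square(speech_frame.astype(np.float64)))))
    gap_db = 20.0 * np.log10(max(rms, 1e-9) / max(ambient_rms, 1e-9))
    return "ok" if gap_db >= margin_db else "speak up"
```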
Tactile graphics with a voice demonstration BIBAFull-Text 321-322
  Catherine M. Baker; Lauren R. Milne; Jeffrey Scofield; Cynthia L. Bennett; Richard E. Ladner
Textbook images are converted into tactile graphics to make them accessible to blind and low vision students. The text labels on these graphics are an important part of the image and must be made accessible as well. The graphics usually have the labels embossed in Braille. However, some blind and low vision students cannot read Braille and need to access the labels in a different manner. We present Tactile Graphics with a Voice, a system that encodes the labels in QR codes, which can be read aloud using TGV, the application we developed. TGV provides feedback to support the user in scanning a QR code and allows the user to select which QR code to scan when multiple codes are close together.
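   When several QR codes are in view, one simple selection rule, shown below with the pyzbar decoder, is to speak the code nearest the center of the camera frame; TGV's actual selection interface is richer than this sketch.

```python
# Decode all visible QR codes and return the payload of the one closest
# to the frame center (image may be a PIL image or a numpy array).
from pyzbar.pyzbar import decode

def nearest_label(image, frame_w, frame_h):
    best, best_d2 = None, float("inf")
    for code in decode(image):
        r = code.rect  # left, top, width, height of the detected code
        cx, cy = r.left + r.width / 2, r.top + r.height / 2
        d2 = (cx - frame_w / 2) ** 2 + (cy - frame_h / 2) ** 2
        if d2 < best_d2:
            best, best_d2 = code.data.decode("utf-8"), d2
    return best  # label text to speak, or None if nothing was decoded
```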
Headlock: a wearable navigation aid that helps blind cane users traverse large open spaces BIBAFull-Text 323-324
  Alexander Fiannaca; Ilias Apostolopoulos; Eelke Folmer
Traversing large open spaces is a challenging task for blind cane users, as such spaces are often devoid of tactile features that can be followed. Consequently, in such spaces cane users may veer from their intended paths. Wearable devices have great potential for assistive applications for users who are blind, as they typically feature a camera and support hands-free and eyes-free interaction. We present HEADLOCK, a navigation aid for an optical head-mounted display that helps blind users traverse large open spaces by letting them lock onto a salient landmark across the space, such as a door, and then providing audio feedback to guide them towards the landmark. HEADLOCK consists of interface modes for discovering landmarks, guiding a user towards a landmark, and recovering from an error state if a landmark is lost. HEADLOCK is designed with two forms of audio feedback: sonification and text-to-speech.
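   The abstract only summarizes the sonification; one plausible rule, sketched below, pans a guidance tone toward the landmark and raises its pitch as the user's heading error shrinks. Both mappings are assumptions for illustration.

```python
# Map the signed bearing error to a stereo pan and a tone pitch
# (assumed mappings, not HEADLOCK's published design).
def guidance_tone(bearing_error_deg):
    """bearing_error_deg: heading-to-landmark angle; positive = landmark right."""
    pan = max(-1.0, min(1.0, bearing_error_deg / 90.0))      # -1 left .. +1 right
    alignment = 1.0 - min(abs(bearing_error_deg), 90.0) / 90.0
    pitch_hz = 440.0 + 440.0 * alignment                     # up to 880 Hz on target
    return pan, pitch_hz
```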

Student research competition abstracts

Accessible web chat interface BIBAFull-Text 325-326
  Valentyn Melnyk
Screen readers generally do not recognize widgets that dynamically appear on the screen; as a result, blind users cannot benefit from the convenience of using them. This work describes the development of accessible web chats, a pervasive "go-to" tool in web applications for real-time communication. The results of a user study with 18 blind screen-reader users are presented to demonstrate the utility of the accessible web chat interface.
Capti-Speak: a speech-enabled accessible web interface BIBAFull-Text 327-328
  Vikas Ashok
People with severe vision impairments generally interact with web pages via screen readers that provide keyboard shortcuts for navigating through the content. However, this traditional press-and-listen mode of interaction has several drawbacks, notably: time wasted listening to irrelevant content, extensive use of the keyboard to navigate the content, and the need to remember numerous keyboard shortcuts and browsing strategies. Augmenting traditional screen reading with a speech interface has the potential to alleviate many of the above limitations. This work describes Capti-Speak, an accessible web-browsing interface that supports both speech commands and standard screen-reader shortcuts. A user study with a dozen blind participants showed that Capti-Speak was significantly more usable and efficient compared to conventional screen readers, especially for ad-hoc browsing, searching, and navigating to the content of interest.
Does it look beautiful?: communicating aesthetic information about artwork to the visually impaired BIBAFull-Text 329-330
  Caroline Marie Galbraith
"A picture says a thousand words" is a fitting phrase to express the difficulty in communicating visual information to a sighted individual. But what happens when the person the art work is being described to doesn't have any vision? What features of the artwork enhance their comprehension of the aesthetics, and what details are excluded from the description? This paper will explore the ways in which artwork is described to visually impaired individuals, and the details the visually impaired express interest in knowing about the artwork. Specifically, we focus on the value of drawing from shared experiences and prompts. These findings are based on observational data that was gathered from a study involving a visually impaired participant and a sighted companion exploring a gallery of artwork together and conversing about the artwork.
Improving programming interfaces for people with limited mobility using voice recognition BIBAFull-Text 331-332
  Xiomara Figueroa Fontánez; Patricia Ordóñez
Programming is an arduous task for individuals with motor impairments who rely on assistive tools to interact with their digital environment. Providing a bimodal Integrated Development Environment (IDE) is key to tackling a program's complex syntax and to improving the programming interface. This project is an effort to facilitate interaction between programmers with motor impairments in their hands and IDEs through the integration of modified versions of open source assistive technology software. We are building the prototype for a specific user, a computer scientist with spinal muscular atrophy (SMA) who can no longer physically attend classes and can only type with one finger. The user is a crucial part of this project, providing invaluable input into the design of the interface.
Introducing web accessibility to localization students: implications for a universal web BIBAFull-Text 333-334
  Silvia Rodríguez Vázquez
The importance of web accessibility has spread throughout neighboring technical disciplines, leading to new forms of collaboration between that area of study and related fields such as internationalization and web localization. Recent investigations have shown that web accessibility experts support the involvement of localization professionals in the achievement of a more accessible web for all, especially in the case of the multilingual web. However, most training institutions do not yet teach the basic technical competences on the matter. Within this research framework, over the last two years, a series of seminars on web accessibility has been taught to both undergraduate and graduate translation students at two European universities. The relevance of acquiring web accessibility knowledge and know-how was generally welcomed by all participants, who showed a high level of interest and motivation. Data gathered to date have helped to develop a better-informed theoretical framework for the participation of localizers in the web development cycle and their contribution to a universal web.
Strategies: an inclusive authentication framework BIBAFull-Text 335-336
  Natã M. Barbosa
This paper briefly describes a proposed interaction workflow that is being developed as part of a research effort towards better solutions for accessible authentication, strongly guided by contextual inquiry and evidence-based guidelines. The approach described herein is being developed and tested as the foundation for the tests and findings of the research, and it will evolve as the research progresses towards a scalable, deployable, secure, privacy-preserving web authentication platform that is usable by everyone.
VisAural: a wearable sound-localisation device for people with impaired hearing BIBAFull-Text 337-338
  Benjamin M. Gorman
Although our senses of hearing, smell, and vision allow us to perceive things at a distance, the detection of many day-to-day events relies exclusively on hearing: for example, finding a ringing phone lost in a sofa, hearing a child cry in another room, or using a car alarm to locate a vehicle in a car park. Individuals with total or partial hearing loss have difficulty detecting the audible signals in these situations. We have developed VisAural, a system that converts audible signals into visual cues. Using an array of head-mounted microphones, VisAural detects the direction of a sound and places LEDs at the periphery of the user's visual field to guide them to its source. We tested VisAural with nine people with hearing impairments and found that this approach holds great promise, but needs to be made more responsive before it can be truly helpful.
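   VisAural's localization method is not detailed here; the simplest estimate consistent with a head-mounted microphone array, sketched below, is to light the LED associated with the loudest microphone in each short frame.

```python
# Pick the direction of the microphone with the highest short-term energy.
import numpy as np

def loudest_direction(mic_frames, mic_angles_deg):
    """mic_frames: one 1-D sample array per microphone, same frame instant."""
    energies = [float(np.mean(np.square(f.astype(np.float64)))) for f in mic_frames]
    return mic_angles_deg[int(np.argmax(energies))]  # angle whose LED to light
```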
Web accessibility evaluation with the crowd: using glance to rapidly code user testing video BIBAFull-Text 339-340
  Mitchell Gordon
Evaluating the results of user accessibility testing on the web can take a significant amount of time, training, and effort. Some of this work can be offloaded to others through coding video data from user tests to systematically extract meaning from subtle human actions and emotions. However, traditional video coding methods can take a considerable amount of time. We have created Glance, a tool that uses the crowd to allow researchers to rapidly query, sample, and analyze large video datasets for behavioral events that are hard to detect automatically. In this abstract, we discuss how Glance can be used to quickly code video of users with special needs interacting with a website by coding for whether or not websites conform with accessibility guidelines, in order to evaluate how accessible a website is and where potential problems lie.

Text entry challenge abstracts

AccessBraille: tablet-based braille entry BIBAFull-Text 341-342
  Stephanie Ludi; Michael Timbrook; Piper Chester
This paper outlines the development of AccessBraille, an iOS framework designed to provide a Braille keyboard to an iOS application. A proof-of-concept app developed with this framework is presented as an example of how the framework can be utilized, demonstrating its use across multiple contexts where Braille entry is used. The AccessBraille keyboard framework provides a natural way for blind users to enter US Type 1 or Type 2 Braille text into an app. The keyboard allows users to customize finger placement for comfort and hand size.
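   At the core of any six-dot Braille keyboard is a mapping from the set of simultaneously touched dot positions to a character, as in the sketch below (only the first few letters are shown; calibration and touch handling are omitted).

```python
# Map a chord of Braille dot positions (1-6) to a character.
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    # ... remaining cells of the Braille alphabet
}

def chord_to_char(touched_dots):
    return BRAILLE_TO_CHAR.get(frozenset(touched_dots), "?")
```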
DigitCHAT: enabling AAC input at conversational speed BIBAFull-Text 343-344
  Karl Wiegand; Rupal Patel
Augmentative and alternative communication (AAC) systems are used by many different types of people. While almost all AAC users have speech impairments that preclude verbal communication, they may also have varying levels of vision or motor impairment, perhaps due to age or the particular nature of their disorder. Speed, expressiveness, and ease of communication are key factors in choosing an appropriate system; however, there are social considerations that are often overlooked. AAC systems are increasingly used on mobile devices with smaller screens, in part because ambulatory AAC users may feel uncomfortable carrying around large or unusual machines. DigitCHAT is a prototype AAC system designed for fast and expressive communication by literate AAC users with minimal upper limb motor impairments. DigitCHAT's interface was designed to be used discreetly on a mobile phone and supports continuous motion input using a small set of visually separated buttons.
Explorations on breathing based text input for mobile devices BIBAFull-Text 345-346
  Jackson F. Filho; Thiago Valle; Wilson Prata
This work reports progress on breathing-based text input software for mobile devices as an alternative interaction technology for people with motor disabilities. It explores processing the audio from a mobile phone's microphone to select characters from a dynamically generated keyboard. A proof of concept is demonstrated through the implementation and evaluation of a mobile application prototype that enables users to perform text entry through "puffing" interaction.
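   The abstract does not specify the puff detector; a minimal version, sketched below, thresholds the short-term microphone energy and enforces a refractory period so one puff is not counted twice.

```python
# Detect puffs as RMS threshold crossings with a cooldown between events
# (threshold and cooldown are assumed values).
import numpy as np

def detect_puffs(frames, rms_thresh=0.2, refractory_frames=5):
    puffs, cooldown = [], 0
    for i, frame in enumerate(frames):
        rms = float(np.sqrt(np.mean(np.square(frame))))
        if cooldown > 0:
            cooldown -= 1
        elif rms > rms_thresh:
            puffs.append(i)              # frame index where a puff starts
            cooldown = refractory_frames
    return puffs
```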
KeyGlasses: semi-transparent keys on soft keyboard BIBAFull-Text 347-349
  Mathieu Raynal
This paper presents the KeyGlass system: a text entry system that dynamically adds candidate characters based on the previously entered ones. The system is optimized by a prediction algorithm based on a lexicographic tree and bigrams.
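   The prediction step can be illustrated as follows: given the characters typed so far, rank the possible next characters by the frequency of the dictionary words they lead to. For brevity this sketch scans a frequency dictionary directly rather than walking an actual trie, and it ignores the bigram component.

```python
# Rank candidate next characters by the corpus frequency of the words
# that remain consistent with the typed prefix.
from collections import Counter

def next_char_candidates(lexicon, prefix, k=4):
    """lexicon: dict mapping word -> frequency."""
    counts = Counter()
    for word, freq in lexicon.items():
        if word.startswith(prefix) and len(word) > len(prefix):
            counts[word[len(prefix)]] += freq
    return [ch for ch, _ in counts.most_common(k)]

# next_char_candidates({"the": 90, "this": 40, "trip": 5}, "th") -> ["e", "i"]
```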
Phoneme-based predictive text entry interface BIBAFull-Text 351-352
  Ha Trinh; Annalu Waller; Keith Vertanen; Per Ola Kristensson; Vicki L. Hanson
Phoneme-based text entry provides an alternative typing method for nonspeaking individuals who often experience difficulties in orthographic spelling. In this paper, we investigate the application of rate enhancement strategies to improve the user performance of phoneme-based text entry systems. We have developed a phoneme-based predictive typing system, which employs statistical language modeling techniques to dynamically reduce the phoneme search space and offer accurate word predictions. Results of a case study with a nonspeaking participant demonstrated that our rate enhancement strategies led to improved text entry speed and reduced error rates.
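   The word-prediction step can be illustrated by filtering a pronunciation lexicon on the phoneme prefix entered so far and ranking the survivors by frequency; the toy lexicon below is an assumption, and the real system uses statistical language models instead.

```python
# Predict words from a partial phoneme sequence using a toy lexicon of
# (word, phoneme sequence, frequency) entries.
LEXICON = [
    ("cat",  ["k", "ae", "t"], 120),
    ("cap",  ["k", "ae", "p"], 45),
    ("keep", ["k", "iy", "p"], 80),
]

def predict_words(phonemes_so_far, k=3):
    matches = [(w, f) for w, ph, f in LEXICON
               if ph[:len(phonemes_so_far)] == phonemes_so_far]
    return [w for w, _ in sorted(matches, key=lambda x: -x[1])[:k]]

# predict_words(["k", "ae"]) -> ["cat", "cap"]
```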
Speech dasher: a demonstration of text input using speech and approximate pointing BIBAFull-Text 353-354
  Keith Vertanen; David J. C. MacKay
Speech Dasher is a novel text entry interface in which users first speak their desired text and then use the zooming interface Dasher to confirm and correct the recognition result. After several hours of practice, users wrote using Speech Dasher at 40 (corrected) words per minute. They did this using only speech and the direction of their gaze (obtained via an eye tracker). Despite an initial recognition word error rate of 22%, users corrected virtually all recognition errors.
Text entry by raising the eyebrow with HaMCoS BIBAFull-Text 355-356
  Torsten Felzer; Stephan Rinderknecht
This demo showcases not one but four different text entry methods for persons with very severe physical disabilities. All four rely on the same kind of input signal: tiny contractions of the brow muscle, captured using electromechanical coupling. The input device involves a piezoelectric element that is attached to the user's forehead and kept in place with an elastic sports headband. The switch-like signals are used in combination with the HaMCoS system, which fully emulates a two-button mouse, and various auxiliary applications to enter text. This paper describes the input hardware in more detail and presents the text entry ideas shown in the demo.
Text entry using a compact keypad with OSDS BIBAFull-Text 357-358
  Torsten Felzer; Stephan Rinderknecht
This demo is about OSDS, a powerful tool that allows computer users who cannot use a standard keyboard to replace it with a compact number-pad-like device, the so-called DualPad. The idea is to offer efficient text entry despite the smaller number of keys. The tool implements two input methods: one involving the selection of the row and column of a character in a two-dimensional virtual keyboard, and the other using an ambiguous keyboard with dictionary-based disambiguation. A special focus is on its applicability to "real-world" use.
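   The second input method, dictionary-based disambiguation on an ambiguous keyboard, works like a classic T9 keypad: each key stands for several letters, and the dictionary resolves which word was meant. The key groups and tiny dictionary below are illustrative assumptions, not the DualPad's actual layout.

```python
# T9-style disambiguation: return the dictionary words whose letters
# match an ambiguous key-press sequence.
GROUPS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
KEY_OF = {ch: digit for digit, letters in GROUPS.items() for ch in letters}

def disambiguate(key_sequence, dictionary):
    return [w for w in dictionary
            if "".join(KEY_OF[c] for c in w) == key_sequence]

# disambiguate("4663", ["good", "home", "gone", "hood"]) matches all four,
# so the interface would offer them as ordered candidates.
```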
Text entry using single-channel analog puff input BIBAFull-Text 359-360
  Adam J. Sporka
The purpose of this prototype is to demonstrate the use of puff input for text entry. The entry method is based on hierarchical, scanning-like selection of characters located in a static table organized according to letter frequency. The cursor is moved in a way similar to operating a claw crane machine with two buttons: to move the cursor to the target position, the user produces two puffs, the first selecting the column and the second selecting the row. With brief training, the method achieves an entry rate of 5 WPM.
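   The claw-crane selection reduces to timing two puffs against periodic sweeps, first across the columns and then down the rows of the chosen column, as in the sketch below (the table layout and sweep periods are assumptions).

```python
# Two-puff character selection: the first puff stops the column sweep,
# the second stops the row sweep (frequency-ordered table is assumed).
TABLE = [
    list("etaoi"),
    list("nshrd"),
    list("lcumw"),
    list("fgypb"),
]

def select_character(t_col_puff, t_row_puff, col_period=0.6, row_period=0.6):
    """Puff times are seconds from the start of each sweep."""
    col = int(t_col_puff / col_period) % len(TABLE[0])
    row = int(t_row_puff / row_period) % len(TABLE)
    return TABLE[row][col]
```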
Text entry via discrete and analog myoelectric signals BIBAFull-Text 361-362
  Adam J. Sporka; Antonín Pošusta; Ondřej Poláček; Tomáš Flek; Jakub Otáhal
The purpose of this prototype is to demonstrate the feasibility of text entry via detection of surface myoelectric signals on the user's body. Our system is capable of detecting the movement of two fingers and the entire palm. The detector produces discrete signals as well as continuous quantifications of the exerted force. Two mappings of the signals onto text entry were implemented, Scanning LetterWise and 2-FOCL, each in two modes, discrete and continuous.