
Seventeenth International ACM SIGACCESS Conference on Computers and Accessibility

Fullname: Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility
Editors: Yeliz Yesilada; Jeffrey P. Bigham
Location: Lisbon, Portugal
Dates: 2015-Oct-26 to 2015-Oct-28
Publisher: ACM
Standard No: ISBN: 978-1-4503-3400-6; ACM DL: Table of Contents; hcibib: ASSETS15
Papers: 106
Pages: 448
Links: Conference Website
  1. Keynote Address
  2. Speech In and Out
  3. Reading and Language
  4. Perspectives on Accessibility Research
  5. Non-Visual Access to Graphics
  6. Sign Language and the Third Dimension
  7. Accessibility and Work
  8. Exercise and Physical Activity
  9. Making Speech Accessible and Usable
  10. Non-Visual Access
  11. Text Input
  12. Cognitive Disabilities
  13. Poster Session 1
  14. Poster Session 2
  15. Demo Session

Keynote Address

Enabling the Future: Crowdsourced 3D-printed Prosthetics as a Model for Open Source Assistive Technology Innovation and Mutual Aid BIBAFull-Text 1
  Jon Schull
e-NABLE -- an online community that designs, customizes, fabricates, and disseminates 3D-printed prosthetic hands and arms, for free -- extends the methods and philosophies of Open Source software to hardware, assistive technology development, and human resources. I'll tell the story of this remarkable community and discuss emerging trends and opportunities for academic research and trans-academic collaboration.

Speech In and Out

Faster Text-to-Speeches: Enhancing Blind People's Information Scanning with Faster Concurrent Speech BIBAFull-Text 3-11
  João Guerreiro; Daniel Gonçalves
Blind people rely mostly on the auditory feedback of screen readers to consume digital information, yet how fast information can be processed remains a major problem. The use of faster speech rates is one of the main techniques to speed up the consumption of digital information. Moreover, recent experiments have suggested the use of concurrent speech as a valid alternative when scanning for relevant information. In this paper, we present an experiment with 30 visually impaired participants, where we compare the use of faster speech rates against the use of concurrent speech. Moreover, we combine these two approaches by gradually increasing the speech rate with one, two and three voices. Results show that concurrent voices with speech rates slightly faster than the default rate enable significantly faster scanning for relevant content while maintaining comprehension. In contrast, to keep up with concurrent speech timings, One-Voice requires larger speech rate increments, which cause a considerable loss in performance. Overall, results suggest that the best compromise between efficiency and the ability to understand each sentence is the use of Two-Voices with a rate of 1.75*default-rate (approximately 278 WPM).
Towards Large Scale Evaluation of Novel Sonification Techniques for Non Visual Shape Exploration BIBAFull-Text 13-21
  Andrea Gerino; Lorenzo Picinali; Cristian Bernareggi; Nicolò Alabastro; Sergio Mascetti
There are several situations in which a person with visual impairment or blindness needs to extract information from an image. Examples include everyday activities, like reading a map, as well as educational activities, like exercises to develop visuospatial skills. In this contribution we propose a set of 6 sonification techniques to recognize simple shapes on touchscreen devices. The effectiveness of these sonification techniques is evaluated through Invisible Puzzle, a mobile application that makes it possible to conduct non-supervised evaluation sessions. Invisible Puzzle adopts a gamification approach and is a preliminary step in the development of a complete game that will make it possible to conduct a large-scale evaluation with hundreds or thousands of blind users. With Invisible Puzzle we conducted 131 tests with sighted subjects and 18 tests with subjects with blindness. All subjects involved in the process successfully completed the evaluation session, with high engagement, showing the effectiveness of the evaluation procedure. Results give interesting insights into the differences among the sonification techniques and, most importantly, show that, after a short training, subjects are able to identify many different shapes.
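   The abstract above does not specify the six sonification mappings; as an illustrative aside, the sketch below shows one generic point-based mapping under assumed parameters: a touch position is turned into a short tone whose pitch rises when the finger is inside the target shape. The circle test, frequencies, and sweep are placeholders, not the paper's techniques.

    # Minimal sonification sketch (not from the paper); requires numpy.
    import math
    import numpy as np

    SAMPLE_RATE = 44100

    def inside_circle(x, y, cx=0.5, cy=0.5, r=0.3):
        """Hypothetical shape test: is the normalized touch point inside the circle?"""
        return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

    def tone_for_touch(x, y, duration=0.1):
        """One buffer of audio samples for a touch at normalized (x, y)."""
        freq = 880.0 if inside_circle(x, y) else 220.0   # higher pitch on the shape
        t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
        return 0.5 * np.sin(2 * math.pi * freq * t)

    # Example: samples for a finger sweep across the screen from left to right.
    sweep = np.concatenate([tone_for_touch(x, 0.5) for x in np.linspace(0, 1, 20)])

   Feeding such buffers to any audio output API (or writing them to a WAV file) would let a user hear whether their finger is on or off the shape while exploring the touchscreen.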
Getting Smartphones to Talkback: Understanding the Smartphone Adoption Process of Blind Users BIBAFull-Text 23-32
  André Rodrigues; Kyle Montague; Hugo Nicolau; Tiago Guerreiro
The advent of system-wide accessibility services on mainstream touch-based smartphones has been a major point of inclusion for blind and visually impaired people. Ever since, researchers have aimed to improve the accessibility of specific tasks, such as text-entry and gestural interaction. However, little work has aimed to understand and improve the overall accessibility of these devices in real-world settings. In this paper, we present an eight-week-long study with five novice blind participants where we seek to understand major concerns, expectations, challenges, barriers, and experiences with smartphones. The study included pre-adoption and weekly interviews, weekly controlled task assessments, and in-the-wild system-wide usage. Our results show that mastering these devices is an arduous and long task, confirming the users' initial concerns. We report on accessibility barriers experienced throughout the study, which would not be encountered in task-based laboratory settings. Finally, we discuss how smartphones are being integrated into everyday activities and highlight the need for better adoption support tools.
A System for Controlling Assisted Living Environments Using Mobile Devices BIBAFull-Text 33-38
  Paulo A. Condado; Fernando G. Lobo
We present EasyHouse, a system developed to allow people with disabilities and their family members to control their home environment (e.g., turn on/off a light, turn on/off the TV) using a smartphone. The user interface was designed to be adjustable to the needs of each user.
   The development of EasyHouse followed an iterative user-centered design approach. A paper-based low-fidelity prototype was built based on user requirements and from the first author's own experience as a researcher with cerebral palsy. After several design and evaluation iterations, a functional prototype has been evaluated in a real context with real users. Preliminary results indicate that the proposed system is suitable for both people with and without motor disabilities, and provides an adequate method for controlling home appliances when there is a family member with motor disabilities.

Reading and Language

A Spellchecker for Dyslexia BIBAFull-Text 39-47
  Luz Rello; Miguel Ballesteros; Jeffrey P. Bigham
Poor spelling is a challenge faced by people with dyslexia throughout their lives. Spellcheckers are therefore a crucial tool for people with dyslexia, but current spellcheckers do not detect real-word errors, a common type of error made by people with dyslexia. Real-word errors are spelling mistakes that result in an unintended but real word, for instance, form instead of from. Nearly 20% of the errors that people with dyslexia make are real-word errors. In this paper, we introduce a system called Real Check that uses a probabilistic language model, a statistical dependency parser and Google n-grams to detect real-word errors. We evaluated Real Check on text written by people with dyslexia, and showed that it detects more of these errors than widely used spellcheckers. In an experiment with 34 people (17 with dyslexia), people with dyslexia corrected sentences more accurately and in less time with Real Check.
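   As an illustrative aside (not the Real Check implementation), the sketch below shows the core n-gram intuition behind real-word error detection: a written word is flagged when a confusable alternative fits the surrounding context much better. The confusion set, bigram counts, and the 10x threshold are toy placeholders, not Google n-gram data or the paper's model.

    # Toy real-word error detector based on bigram context scores.
    CONFUSION_SETS = {"form": {"from"}, "from": {"form"}}
    BIGRAM_COUNTS = {("received", "from"): 900, ("received", "form"): 12,
                     ("a", "form"): 700, ("a", "from"): 3}

    def bigram_score(prev_word, word):
        return BIGRAM_COUNTS.get((prev_word, word), 1)   # crude smoothing fallback

    def flag_real_word_errors(words, ratio=10.0):
        """Yield (index, suggestion) where an alternative beats the written word."""
        for i in range(1, len(words)):
            written = words[i]
            for alt in CONFUSION_SETS.get(written, ()):
                if bigram_score(words[i - 1], alt) > ratio * bigram_score(words[i - 1], written):
                    yield i, alt

    print(list(flag_real_word_errors("the letter i received form you".split())))
    # -> [(4, 'from')]

   Real Check additionally uses a statistical dependency parser and a probabilistic language model, which this toy bigram scorer does not attempt to reproduce.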
Accessible Texts for Autism: An Eye-Tracking Study BIBAFull-Text 49-57
  Victoria Yaneva; Irina Temnikova; Ruslan Mitkov
People with Autism Spectrum Disorder (ASD) are known to experience difficulties in reading comprehension, as well as to have unusual attention patterns, which makes the development of user-centred tools for this population a challenging task. This paper presents the first study to use eye-tracking technology with ASD participants in order to evaluate text documents. Its aim is two-fold. First, it evaluates the use of images in texts and provides evidence of a significant difference in the attention patterns of participants with and without autism. Sets of two types of images, photographs and symbols, are compared to establish which ones are more useful to include in simple documents. Second, the study evaluates human-produced easy-read documents, as a gold standard for accessible documents, on 20 adults with autism. The results provide an understanding of the perceived level of difficulty of easy-read documents according to this population, as well as the preferences of autistic individuals in text presentation. The results are synthesized as a set of guidelines for creating accessible text for autism.

Perspectives on Accessibility Research

Usage of Subjective Scales in Accessibility Research BIBAFull-Text 59-67
  Shari Trewin; Diogo Marques; Tiago Guerreiro
Accessibility research studies often gather subjective responses to technology using Likert-type items, where participants respond to a prompt statement by selecting a position on a labeled response scale. We analyzed recent ASSETS papers, and found that participants in non-anonymous accessibility research studies gave more positive average ratings than those in typical usability studies, especially when responding to questions about a proposed innovation. We further explored potential positive response bias in an experimental study of two telephone information systems, one more usable than the other. We found that participants with visual impairment were less sensitive to usability problems than participants in a typical student sample, and that their subjective ratings didn't correlate as strongly with objective measures of performance. A deeper understanding of the mechanism behind this effect would help researchers to design better accessibility studies, and to interpret subjective ratings with more accuracy.
User Participation When Users have Mental and Cognitive Disabilities BIBAFull-Text 69-76
  Stefan Johansson; Jan Gulliksen; Ann Lantz
Persons with cognitive or mental disabilities have difficulties participating in, or are excluded from, IT development and assessment exercises due to problems finding good ways to efficiently collaborate on equal terms. In this paper we describe how we worked closely with persons who have mental and cognitive disabilities in order to test and develop methods for participation in assessments and in processes for developing Information and Communication Technology (ICT) products and services. More than 100 persons with mental and cognitive disabilities participated in the study (people with diagnoses such as depression, anxiety disorder, bipolarity, and schizophrenia). To explore the conditions for a more equal and fair participation we developed and elaborated a set of methods, tools and approaches. By combining scientific research methods with well-established methods for empowerment and participation we have developed methods that are cost-effective and that can easily be incorporated in existing processes. We believe that our approach has taken steps toward enabling persons with mental and cognitive disabilities to take part wherever user participation is needed, not only to avoid discrimination or exclusion but also to improve the overall quality of the end result. The results clearly show that it is possible to include persons with mental and cognitive disabilities. A mixed-method, mixed-tool approach can increase the possibility for participation. The results also show that the quality of the analysis phase increases if the collaborative approach is extended to also embrace the data analysis phase.
A Unifying Notification System To Scale Up Assistive Services BIBAFull-Text 77-87
  Charles Consel; Lucile Dupuy; Hélène Sauzéon
Aging creates needs for assistive technology to support all activities of daily living (meal preparation, dressing, social participation, stove monitoring, etc.). These needs are mostly addressed by a silo-based approach that requires a new assistive service (e.g., a reminder system, a pill prompter) to be acquired for every activity to be supported. In practice, these services manifest their silo-based nature in their user interactions, and more specifically, in the heterogeneity of their notification system. This heterogeneity incurs a cognitive cost that prevents scaling up assistive services and compromises adoption by older adults. This paper presents an approach to scaling up the combination of technology-based, assistive services by proposing a unifying notification system. To do so, (1) we propose a decomposition of assistive services to expose their needs in notification; (2) we introduce a notification framework, allowing heterogeneous assistive services to homogeneously notify users; (3) we present how this notification framework is carried out in practice for an assisted living platform.
   We successfully applied our approach to a range of existing and new assistive services. We used our notification framework to implement an assistive platform that combines a variety of assistive services. This platform has been deployed and used 24/7 at the home of 15 older adults for up to 6 months. This study provides empirical evidence of the effectiveness and learnability of the notification system of our platform, irrespective of the cognitive and sensory resources of the user. Additional results show that our assisted living platform achieved high user acceptance and satisfaction.
Disability and Technology: A Critical Realist Perspective BIBAFull-Text 89-96
  Christopher Frauenberger
Assistive technology (AT) as a field explores the design, use and evaluation of computing technology that aims to benefit people with disabilities. The majority of the work consequently takes the functional needs of people with disabilities as its starting point and matches those with technological opportunity spaces. With this paper, we argue that the underlying philosophical position implied in this approach can be seen as reductionist, as the disabled experience is arguably richer and often more complex than can be projected from people's functional limitations. Thinkers and activists in Disability Studies have conceptualised disability in various ways and, more recently, critical realism was proposed as a philosophical position through which the many different facets of the disabled experience could be incorporated. In this paper, we explore the possibility of using a critical realist perspective to guide designers in developing technology for people with disabilities and thereby aim to contribute to the philosophical underpinnings of AT. After a brief review of historical conceptualisations of disability, we introduce the critical realist argument and discuss its appeal for understanding disability and the possible roles technology can have in this context. Subsequently, we aim to translate this philosophical and moral debate into a research agenda for AT and exemplify how it can be operationalised by presenting the OutsideTheBox project as a case study.

Non-Visual Access to Graphics

The Tactile Graphics Helper: Providing Audio Clarification for Tactile Graphics Using Machine Vision BIBAFull-Text 97-106
  Giovanni Fusco; Valerie S. Morash
Tactile graphics use raised lines, textures, and elevations to provide individuals with visual impairments access to graphical materials through touch. Tactile graphics are particularly important for students in science, technology, engineering, and mathematics (STEM) fields, where educational content is often conveyed using diagrams and charts. However, providing a student who has a visual impairment with a tactile graphic does not automatically provide the student access to the graphic's educational content. Instead, the student may struggle to decipher subtle differences between textures or line styles, and must deal with cramped and confusing placement of lines and braille. These format-related issues prevent students with visual impairments from accessing educational content in graphics independently, because they necessitate that the students ask for sighted clarification. We propose a machine-vision based "tactile graphics helper" (TGH), which tracks a student's fingers as he/she explores a tactile graphic, and allows the student to gain clarifying audio information about the tactile graphic without sighted assistance. Using an embedded mixed-methods case study with three STEM university students with visual impairments, we confirmed that format-related issues prevent these students from accessing some graphical content independently, and established that TGH provides a promising approach for overcoming tactile-graphic format issues.
ChartMaster: A Tool for Interacting with Stock Market Charts using a Screen Reader BIBAFull-Text 107-116
  Hong Zou; Jutta Treviranus
Stock market charts guide investors in making financial decisions. Online stock market charts are largely interactive, driven by real-time financial data. However, these are not easily accessible via a screen reader. To enable screen reader users to query and effectively use interactive online stock market charts, we are developing a tool called ChartMaster. In this paper, we describe an early study conducted with sixteen visually impaired persons, most of whom were financial novices, for co-designing the interaction interface for ChartMaster. An inclusive design exercise was undertaken to discover alternative interfaces using non-visual modalities to interact with stock market charts. A user-centered process of co-design using HCI methods was carried out to iteratively evaluate and refine three input solutions: audio input, text input and dropdown menu. While the users ultimately declared the dropdown menu to be the most useful of the three solutions, they wanted all possible options to choose from based on task contexts and personal preferences. User feedback confirmed that a one-size-fits-all design is not ideal for accommodating diverse user needs within the widest possible range of contexts. It was also found that the ChartMaster tool with dropdown menu interface holds potential educational value for financial novices.
Collaborative Creation of Digital Tactile Graphics BIBAFull-Text 117-126
  Jens Bornschein; Denise Prescher; Gerhard Weber
The Tangram Workstation is a collaborative system for creating tactile graphics. A transcriber composing a tactile graphic from a visual source is supported by a non-visual reviewer working on a two-dimensional tactile pin-matrix device, on which the reviewer can observe and adapt the work of the sighted team member. We present the results of an evaluation with eight teams, each consisting of a transcriber and a blind reviewer. Overall, the quality of tactile graphics can be improved by a collaborative approach. In most cases blind users recommended changes to tactile graphics even when they had been prepared by professional sighted editors. The study also showed that the blind reviewer is able to do simple editing tasks independently with our workstation.
Transcribing Across the Senses: Community Efforts to Create 3D Printable Accessible Tactile Pictures for Young Children with Visual Impairments BIBAFull-Text 127-137
  Abigale Stangl; Chia-Lo Hsu; Tom Yeh
The design of 3D printable accessible tactile pictures (3DP-ATPs) for young children with visual impairments has the potential to greatly increase the supply of tactile materials that can be used to support emergent literacy skill development. Many caregivers and stakeholders invested in supporting young children with visual impairments have shown interest in using 3D printing to make accessible tactile materials. Unfortunately, the task of designing and producing 3DP-ATPs is far more complex than simply learning to use personal fabrication tools. This paper presents formative research conducted to investigate how six caregiver stakeholder groups, with diverse skillsets and domain interests, attempt to create purposeful 3DP-ATPs with amateur-focused 3D modeling programs. We expose the experiences of these stakeholder groups as they attempt to design 3DP-ATPs for the first time. We discuss how the participant groups practically and conceptually approach the task and focus their design work. Each group demonstrated different combinations of skillsets. In turn, we identify the common activities required of the design task as well as how different participants are well suited and motivated to perform those activities. This study suggests that the emerging community of amateur 3DP-ATP designers may benefit from an online creativity support tool to help offset the challenges of designing purposeful 3DP-ATPs that meet the emergent literacy needs of individual children with visual impairments.

Sign Language and the Third Dimension

Comparing Methods of Displaying Language Feedback for Student Videos of American Sign Language BIBAFull-Text 139-146
  Matt Huenerfauth; Elaine Gale; Brian Penly; Mackenzie Willard; Dhananjai Hariharan
Deaf children benefit from early exposure to language, and higher levels of written language literacy have been measured in deaf adults who were raised in homes using American Sign Language (ASL). Prior work has established that new parents of deaf children benefit from technologies to support learning ASL. As part of a project to design a tool to automatically analyze a video of a student's signing and provide immediate feedback about fluent and non-fluent aspects of their movements, we conducted a study to compare multiple methods of conveying feedback to ASL students, using videos of their signing. Through a Wizard-of-Oz study, we compared three types of feedback with regard to users' subjective judgments of system quality and the degree to which students' signing improved (as judged by an ASL instructor who analyzed recordings of students' signing before and after they viewed each type of feedback). We found that displaying videos to students of their signing, augmented with feedback messages about their errors or correct ASL usage, yielded higher subjective scores and greater signing improvement. Students gave higher subjective scores to a version in which pop-up messages appeared overlaid on the student's video to indicate errors or correct ASL usage.
Demographic and Experiential Factors Influencing Acceptance of Sign Language Animation by Deaf Users BIBAFull-Text 147-154
  Hernisa Kacorri; Matt Huenerfauth; Sarah Ebling; Kasmira Patel; Mackenzie Willard
Technology to automatically synthesize linguistically accurate and natural-looking animations of American Sign Language (ASL) from an easy-to-update script would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf. Researchers evaluate their sign language animation systems by collecting subjective judgments and comprehension-question responses from deaf participants. Through a survey (N=62) and multiple regression analysis, we identified relationships between (a) demographic and technology experience/attitude characteristics of participants and (b) the subjective and objective scores collected from them during the evaluation of sign language animation systems. This finding suggests that it would be important for researchers to collect and report these characteristics of their participants in publications about their studies, but there is currently no consensus in the field. We present a set of questions in ASL and English that can be used by researchers to measure these participant characteristics; reporting such data would enable researchers to better interpret and compare results from studies with different participant pools.
How 3D Virtual Humans Built by Adolescents with ASD Affect Their 3D Interactions BIBAFull-Text 155-162
  Chao Mei; Lee Mason; John Quarles
Training games have many potential benefits for autism spectrum disorder (ASD) intervention, such as increasing motivation and improving the ability to perform daily living activities, due to their ability to simulate real-world scenarios. A more motivating game may stimulate users to play the game more, and it may also result in users performing better in the game. Incorporating users' interests into the game could be a good way to build a motivating game, especially for users with ASD. We propose a Customizable Virtual Human (CVH), which enables users with ASD to easily customize a virtual human and then interact with the CVH in a 3D interaction task. Previous work has shown that users with ASD may have less efficient hand-eye coordination in performing 3D interaction tasks than users without ASD. We developed a hand-eye coordination training game -- Imagination Soccer -- and present a user study on adolescents with high-functioning ASD to investigate the effects of CVHs. We compare the differences in participants' 3D interaction performance, game performance and user experience (i.e., presence, involvement, and flow) under CVH and Non-customizable Virtual Human (with randomly generated appearances) conditions. As expected, the results indicated that for users with ASD, CVHs could effectively motivate them to play the game more, and offer a better user experience. Surprisingly, results also showed that the CVHs improved performance in the hand-eye coordination task: users had a higher success rate and blocked more soccer balls with the CVH than with a non-customizable virtual human.

Accessibility and Work

The Invisible Work of Accessibility: How Blind Employees Manage Accessibility in Mixed-Ability Workplaces BIBAFull-Text 163-171
  Stacy M. Branham; Shaun K. Kane
Over the past century, people who are blind and their allies have developed successful public policies and technologies in support of creating more accessible workplaces. However, simply creating accessible technologies does not guarantee that these will be available or adopted. Because much work occurs within shared workspaces, decisions about assistive technology use may be mediated by social interactions with, and expectations of, sighted coworkers. We present findings from a qualitative field study of five workplaces from the perspective of blind employees. Although all participants were effective employees, they expressed that working in a predominantly sighted office environment produces impediments to a blind person's independence and to their integration as an equal coworker. We describe strategies employed by our participants to create and maintain an accessible workplace and present suggestions for future technology that better supports blind workers as equal peers in the workplace.
Understanding the Challenges Faced by Neurodiverse Software Engineering Employees: Towards a More Inclusive and Productive Technical Workforce BIBAFull-Text 173-184
  Meredith Ringel Morris; Andrew Begel; Ben Wiedermann
Technology workers are often stereotyped as being socially awkward or having difficulty communicating, often with humorous intent; however, for many technology workers with atypical cognitive profiles, such issues are no laughing matter. In this paper, we explore the hidden lives of neurodiverse technology workers, e.g., those with autism spectrum disorder (ASD), attention deficit hyperactivity disorder (ADHD), and/or other learning disabilities, such as dyslexia. We present findings from interviews with 10 neurodiverse technology workers, identifying the challenges that impede these employees from fully realizing their potential in the workplace. Based on the interview findings, we developed a survey that was taken by 846 engineers at a large software company. In this paper, we reflect on the differences between the neurotypical (N = 781) and neurodiverse (N = 59) respondents. Technology companies struggle to attract, develop, and retain talented software developers; our findings offer insight into how employers can better support the needs of this important worker constituency.
Using In-Situ Projection to Support Cognitively Impaired Workers at the Workplace BIBAFull-Text 185-192
  Markus Funk; Sven Mayer; Albrecht Schmidt
Today's working society tries to integrate more and more impaired workers into everyday working processes. One major scenario for integrating impaired workers is the assembly of products. However, the tasks that are typically assigned to cognitively impaired workers are easy tasks that consist of only a small number of assembly steps. For tasks with a higher number of steps, cognitively impaired workers need instructions to help them with assembly. Although supervisors provide general support and assist new workers while learning new assembly steps, sheltered work organizations often provide additional printed pictorial instructions that actively guide the workers. To further improve continuous instructions, we built a system that uses in-situ projection and a depth camera to provide context-sensitive instructions. To explore the effects of in-situ instructions, we compared them to state-of-the-art pictorial instructions in a user study with 15 cognitively impaired workers at a sheltered work organization. The results show that using in-situ instructions, cognitively impaired workers can assemble more complex products up to 3 times faster and with up to 50% fewer errors. Further, the workers liked the in-situ instructions provided by our assistive system and would use them for everyday assembly.

Exercise and Physical Activity

"But, I don't take steps": Examining the Inaccessibility of Fitness Trackers for Wheelchair Athletes BIBAFull-Text 193-201
  Patrick Carrington; Kevin Chang; Helena Mentis; Amy Hurst
Wearable fitness devices have demonstrated the capacity to improve overall physical activity, which can lead to physical and mental health improvements as well as quality of life gains. Although wheelchair athletes who participate in adaptive sports are interested in using wearable fitness trackers to capture their activity, we have observed low adoption of wearable fitness trackers among wheelchair athletes. We interviewed five wheelchair athletes and three physical and occupational therapists to explore fitness activities, experience with wearable technology, and potential uses for wearable fitness devices. None of the wheelchair athletes we interviewed had previously used any wearable fitness devices; however, four out of five were interested in tracking their physical activity. We present five thematic areas helpful for thinking about wearable computing systems and the accessibility challenges that arise from incorrect assumptions about the athletic community. We highlight opportunities for improving the impact and accessibility of fitness tracking technologies for wheelchair athletes. These opportunities include improving the analysis of data from existing sensors, instrumenting the custom equipment used by adaptive sport athletes, and revising the language used in the presentation of fitness data to create a more inclusive community of users.
Exploring the Opportunities and Challenges with Exercise Technologies for People who are Blind or Low-Vision BIBAFull-Text 203-214
  Kyle Rector; Lauren Milne; Richard E. Ladner; Batya Friedman; Julie A. Kientz
People who are blind or low-vision may have a harder time participating in exercise due to inaccessibility or lack of experience. We employed Value Sensitive Design (VSD) to explore the potential of technology to enhance exercise for people who are blind or low-vision. We conducted 20 semi-structured interviews about exercise and technology with 10 people who are blind or low-vision and 10 people who facilitate fitness for people who are blind or low-vision. We also conducted a survey with 76 people to learn about outsider perceptions of hypothetical exercise with people who are blind or low-vision. Based on our interviews and survey, we found opportunities for technology development in four areas: 1) mainstream exercise classes, 2) exercise with sighted guides, 3) rigorous outdoors activity, and 4) navigation of exercise spaces. Design considerations should include when and how to deliver auditory or haptic information based on exercise and context, and whether it is acceptable to develop less mainstream technologies if they enhance mainstream exercise. The findings of this work seek to inform the design of accessible exercise technologies.

Making Speech Accessible and Usable

Online News Videos: The UX of Subtitle Position BIBAFull-Text 215-222
  Michael Crabb; Rhianne Jones; Mike Armstrong; Chris J. Hughes
Millions of people rely on subtitles when watching video content. The current change in media viewing behaviour involving computers has resulted in a large proportion of people turning to online sources as opposed to regular television for news information. This work analyses the user experience of viewing subtitled news videos presented as part of a web page. A lab-based user experiment was carried out with frequent subtitle users, focusing on determining whether changes in video dimension and subtitle location could affect the user experience attached to viewing subtitled content. A significant improvement in user experience was seen when changing the subtitle location from the standard position of within a video at the bottom to below the video clip. Additionally, participants responded positively when given the ability to change the position of subtitles in real time, allowing for a more personalised viewing experience. This recommendation for an alternative subtitle positioning that can be controlled by the user is unlike current subtitling practice. It provides evidence that further user-based research examining subtitle usage outside of the traditional television interface is required.
Tracked Speech-To-Text Display: Enhancing Accessibility and Readability of Real-Time Speech-To-Text BIBAFull-Text 223-230
  Raja S. Kushalnagar; Gary W. Behm; Aaron W. Kelstone; Shareef Ali
Deaf and Hard of Hearing (DHH) students are under-served and under-represented in education in part because they miss spoken classroom information, even with aural-to-visual accommodations, such as a real-time speech to text Display (SD). Most SD systems utilize a trained typist to transcribe the speech into text (speech-to-text) onto a display. Still, these students encounter significant but subtle barriers in following speech-to-text displays, especially when detailed visuals are used or when the speaker is fast or uses uncommon words. Hearing students can simultaneously watch the visuals and listen to the spoken explanation, while DHH students constantly look away from the SD to search and observe details in the classroom visuals. As a result, they spend less time watching the visuals and gain less information than their hearing peers. They can also fall behind in reading the speech-text.
   We discuss the implementation and evaluation of a real-time Tracked Speech-To-Text Display (TSD) that addresses these subtle barriers presented by SD systems. The TSD system minimizes the student's viewing distance between the Speech-to-Text Display and the speaker. This is done by tracking the presenter and displaying the speech-text at a fixed distance above the presenter. Our evaluation showed that students significantly preferred TSD over SD and reported that it was easier to follow the lecture. They liked being able to see both the teacher and speech-to-text, and being able to set the number of displayed lines.
Evaluating Alternatives for Better Deaf Accessibility to Selected Web-Based Multimedia BIBAFull-Text 231-238
  Brent N. Shiver; Rosalee J. Wolfe
The proliferation of video and audio media on the Internet has created a distinct disadvantage for deaf Internet users. Despite technological and legislative milestones in recent decades in making television and movies more accessible, there has been less progress with online access. A major obstacle to providing captions for Internet media is the high cost of captioning and transcribing services. This paper reports on two studies that focused on multimedia accessibility for Internet users who were born deaf or became deaf at an early age. An initial study attempted to identify priorities for deaf accessibility improvement. A total of 20 deaf and hard-of-hearing participants were interviewed via videophone about their Internet usage and the issues that were the most frustrating. The most common theme was concern over a lack of accessibility for online news. In the second study, a total of 95 deaf and hard-of-hearing participants evaluated different caption styles, some of which were generated through automatic speech recognition.
   Results from the second study confirm that captioning online videos makes the Internet more accessible to deaf users, even when the captions are automatically generated. However, color-coded captions used to highlight confidence levels were found to be neither beneficial nor detrimental; yet when asked directly about the benefit of color-coding, participants strongly favored the concept.

Non-Visual Access

ForeSee: A Customizable Head-Mounted Vision Enhancement System for People with Low Vision BIBAFull-Text 239-249
  Yuhang Zhao; Sarit Szpiro; Shiri Azenkot
Most low vision people have functional vision and would likely prefer to use their vision to access information. Recently, there have been advances in head-mounted displays, cameras, and image processing technology that create opportunities to improve the visual experience for low vision people. In this paper, we present ForeSee, a head-mounted vision enhancement system with five enhancement methods: Magnification, Contrast Enhancement, Edge Enhancement, Black/White Reversal, and Text Extraction; in two display modes: Full and Window. ForeSee enables users to customize their visual experience by selecting, adjusting, and combining different enhancement methods and display modes in real time. We evaluated ForeSee by conducting a study with 19 low vision participants who performed near- and far-distance viewing tasks. We found that participants had different preferences for enhancement methods and display modes when performing different tasks. The Magnification Enhancement Method and the Window Display Mode were popular choices, but most participants felt that combining several methods produced the best results. The ability to customize the system was key to enabling people with a variety of different vision abilities to improve their visual experience.
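   As an illustrative aside (not the ForeSee code), the sketch below shows how a few of the named enhancement modes could be approximated on a single camera frame with the Pillow imaging library; the file name, zoom factor, and contrast factor are placeholders.

    # Approximations of Magnification, Contrast Enhancement, Edge Enhancement,
    # and Black/White Reversal on one frame, using Pillow.
    from PIL import Image, ImageEnhance, ImageFilter, ImageOps

    def enhance(frame, method="contrast", zoom=2.0):
        if method == "magnification":
            w, h = frame.size
            return frame.resize((int(w * zoom), int(h * zoom)), Image.LANCZOS)
        if method == "contrast":
            return ImageEnhance.Contrast(frame).enhance(2.0)   # boost contrast
        if method == "edges":
            return frame.filter(ImageFilter.FIND_EDGES)        # edge enhancement
        if method == "reversal":
            return ImageOps.invert(frame.convert("RGB"))       # black/white reversal
        return frame

    frame = Image.open("frame.png")   # placeholder input frame
    enhance(frame, "reversal").show()

   A head-mounted system such as the one described would apply enhancements like these to a live camera stream and let the user switch and combine them in real time.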
Zebra Crossing Spotter: Automatic Population of Spatial Databases for Increased Safety of Blind Travelers BIBAFull-Text 251-258
  Dragan Ahmetovic; Roberto Manduchi; James M. Coughlan; Sergio Mascetti
In this paper we propose a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Knowing the location of crosswalks is critical for a blind person planning a trip that includes street crossing. By augmenting existing spatial databases (such as Google Maps or OpenStreetMap) with this information, a blind traveler may make more informed routing decisions, resulting in greater safety during independent travel. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm could also be complemented by a final crowdsourcing validation stage for increased accuracy.
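   As an illustrative aside (not the authors' pipeline), the sketch below shows only the cascaded structure the abstract describes; detect_in_satellite_tile and validate_in_street_view are hypothetical stand-ins for the actual vision components, passed in as parameters.

    # Cascade: a cheap satellite-image stage proposes candidates, a Street View
    # stage confirms them.
    def find_crosswalks(satellite_tiles, street_view_lookup,
                        detect_in_satellite_tile, validate_in_street_view):
        confirmed = []
        for tile in satellite_tiles:
            for candidate in detect_in_satellite_tile(tile):            # fast, coarse
                panorama = street_view_lookup(candidate["lat"], candidate["lon"])
                if panorama is not None and validate_in_street_view(panorama, candidate):
                    confirmed.append(candidate)                          # slower, precise
        return confirmed

   Confirmed candidates could then be written back into a spatial database such as OpenStreetMap, with an optional crowdsourcing pass as the final validation stage.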
Social Media Platforms for Low-Income Blind People in India BIBAFull-Text 259-272
  Aditya Vashistha; Edward Cutrell; Nicola Dell; Richard Anderson
We present the first analysis of the use and non-use of social media platforms by low-income blind users in rural and peri-urban India. Using a mixed-methods approach of semi-structured interviews and observations, we examine the benefits received by low-income blind people from Facebook, Twitter and WhatsApp and investigate constraints that impede their social media participation. We also present a detailed analysis of how low-income blind people used a voice-based social media platform deployed in India that received significant traction from low-income people in rural and peri-urban areas. In eleven weeks of deployment, fifty-three blind participants in our sample collectively placed 4784 voice calls, contributed 1312 voice messages, cast 33,909 votes and listened to the messages 46,090 times. Using a mixed-methods analysis of call logs, qualitative interviews, and phone surveys, we evaluate the strengths and weaknesses of the platform and the benefits it offered to low-income blind people.

Text Input

Typing Performance of Blind Users: An Analysis of Touch Behaviors, Learning Effect, and In-Situ Usage BIBAFull-Text 273-280
  Hugo Nicolau; Kyle Montague; Tiago Guerreiro; André Rodrigues; Vicki L. Hanson
Non-visual text-entry for people with visual impairments has focused mostly on the comparison of input techniques reporting on performance measures, such as accuracy and speed. While researchers have been able to establish that non-visual input is slow and error prone, there is little understanding of how to improve it. To develop a richer characterization of typing performance, we conducted a longitudinal study with five novice blind users. For eight weeks, we collected in-situ usage data and conducted weekly laboratory assessment sessions. This paper presents a thorough analysis of typing performance that goes beyond traditional aggregated measures of text-entry and reports on character-level errors and touch measures. Our findings show that users improve over time, even though it is at a slow rate (0.3 WPM per week). Substitutions are the most common type of error and have a significant impact on entry rates. In addition to text input data, we analyzed touch behaviors, looking at touch contact points, exploration movements, and lift positions. We provide insights on why and how performance improvements and errors occur. Finally, we derive some implications that should inform the design of future virtual keyboards for non-visual input.

Cognitive Disabilities

Inclusion and Education: 3D Printing for Integrated Classrooms BIBAFull-Text 281-290
  Erin Buehler; William Easley; Samantha McDonald; Niara Comrie; Amy Hurst
Over 60% of adults with intellectual disabilities (ID) in the U.S. are unemployed; this is more than twice the unemployment rate of the general population [19]. Of the adults with ID who are employed, only half receive competitive wages alongside co-workers without disabilities. While the enactment of IDEA [20] has helped to promote access to education for people with ID and other disabilities, there are still obstacles to employment. Misconceptions about ability and lack of opportunities to learn and practice employability skills contribute to this problem.
   Our research explores employability and integration through the lens of 3D printing, an innovative technology touted as a means to self-employment. We successfully taught young adults with intellectual disabilities many technical skills required for 3D printing through an integrated, post-secondary course on 3D printing for entrepreneurship. In this paper we report on our methods for designing this course and discuss the benefits, challenges, and strategies for teaching 3D printing to an integrated cohort of students. We offer recommendations for educators and describe technology obstacles unique to this user demographic, and the impact of integrated, postsecondary courses on employment outcomes for students with ID.
Towards Efficacy-Centered Game Design Patterns For Brain Injury Rehabilitation: A Data-Driven Approach BIBAFull-Text 291-299
  Jinghui Cheng; Cynthia Putnam; Doris C. Rusch
Games are often used in brain injury (BI) therapy sessions to help motivate patients to engage in rehabilitation activities. In this paper, we explore game design patterns as a mechanism to help game designers understand needs in BI therapy. Design patterns, originating from the work of Christopher Alexander, aim to provide a common language to support the creative work of designers by documenting solutions that have successfully addressed recurring design problems. Through analyzing data we gathered on the use of commercial games in BI therapy, we generated a list of 14 'efficacy-centered game design patterns' that focus on game design considerations when addressing therapeutic goals in BI rehabilitation. We argue that our patterns can serve as a common language to support the design of BI rehabilitation games; additionally, our data-driven approach sets up a paradigm for generating game design patterns in related areas.

Poster Session 1

A Pilot Study about the Smartwatch as Assistive Device for Deaf People BIBAFull-Text 301-302
  Matthias Mielke; Rainer Brück
In recent years the smartphone has become an important tool for deaf and hard of hearing people. It is no wonder that many different smartphone-based assistive tools have been introduced recently, among them tools for environmental sound awareness. Even though smartphones seem to be a good way to implement such tools, with the smartwatch a new class of mobile computing devices has become available. In this paper, results from interviews with six deaf people about the use of a smartwatch as an environmental sound alert are presented. The interviews showed that a smartwatch-based environmental sound alert is promising, as the participants were highly interested in using such a device.
Using OnScreenDualScribe to Support Text Entry and Targeting among Individuals with Physical Disabilities BIBAFull-Text 303-304
  Sidas Saulynas; Lula Albar; Ravi Kuber; Torsten Felzer
This paper describes a study examining the usability of OnScreenDualScribe (OSDS), a tool to support individuals with physical disabilities with text entry and cursor movement. A portable numeric keypad is used to interact with OSDS; it can either be held by the user or affixed to a surface for interaction. A study to determine the feasibility of the system was conducted with three individuals with physical disabilities. While the time taken to complete a task was higher compared with participants' existing methods of computer-based input, findings also indicate that the system offers potential for tasks involving a combination of text entry and cursor movement (e.g., completing online forms). Furthermore, as the keypad is smaller than a traditional keyboard, participants suggested that it offered potential to reduce effort spent in the fatiguing process of traversal.
Developing SAGAT Probes to Evaluate Blind Individuals' Situation Awareness when Traveling Indoor Environments BIBAFull-Text 305-306
  Abdulrhman A. Alkhanifer; Stephanie Ludi
Assessing a user's situation awareness can provide a great way to learn about the user's mental model when performing related tasks. Situation Awareness Global Assessment Technique (SAGAT) is one widely accepted measure to objectively capture a user's SA. In this work, we present our developed SAGAT probes to assess the situation awareness of blind individuals during indoor travel tasks. This work is part of our ongoing work to develop an objective method to facilitate blind travelers' situation awareness when traveling unfamiliar indoor environments. We present examples of our probes that were derived from our previously developed situation awareness requirements. Also, we briefly illustrate pilot study results with blindfolded participants.
Dytective: Toward a Game to Detect Dyslexia BIBAFull-Text 307-308
  Luz Rello; Abdullah Ali; Jeffrey P. Bigham
Detecting dyslexia is crucial so that people who have dyslexia can receive training to avoid associated high rates of academic failure. In this paper we present Dytective, a game designed to detect dyslexia. The results of a within-subjects experiment with 40 children (20 with dyslexia) show significant differences between groups who played Dytective. These differences suggest that Dytective could be used to help identify those likely to have dyslexia.
Design and Evaluation of a Simplified Online Banking Interface for People with Cognitive Disabilities BIBAFull-Text 309-310
  Mario Erazo; Gottfried Zimmermann
This paper is intended to inform readers about the development and evaluation of a simplified online banking interface for people with cognitive disabilities; this includes a description of the development process as well as the evaluation of the interface with users. Moreover, thoughts and experiences that went into the development will be discussed.
GraCALC: An Accessible Graphing Calculator BIBAFull-Text 311-312
  Cagatay Goncu; Kim Marriott
We present a new approach for providing mathematical and statistical graphics to people who are blind or have severe vision impairment, and describe the implementation and initial evaluation of an accessible graphing calculator, GraCALC. GraCALC automatically creates a graphic from a specification of the function or from tabular data using a web-based service. The graphic is then presented on a touch device, and a mixture of speech, non-speech audio and optional tactile feedback is provided to allow users to explore the screen. An overview containing a sonification of the graphic and a description similar to that created by a human transcriber is also automatically generated, so as to help the user in their initial navigation.
ARCoach 2.0: Optimizing a Vocational Prompting System Based on Augmented Reality for People with Cognitive Impairments BIBAFull-Text 313-314
  Yao-Jen Chang; Ya-Shu Kang; Yao-Sheng Chang; Hung-Huan Liu
This study improved the ARCoach system, which provides vocational training for individuals with cognitive impairments using an Augmented Reality (AR) based task-prompting technology. Using cyclic codes to enhance AR tag recognition rates, the proposed system not only provided picture cues but also identified incorrect task steps on the fly and helped make corrections. Experimental data showed that the participant considerably increased the target response, thus improving vocational job skills during the intervention phases. Beyond assistive technology, our approach can be applied to other AR applications where tag recognition rates need to be improved.
BendableSound: a Fabric-based Interactive Surface to Promote Free Play in Children with Autism BIBAFull-Text 315-316
  Deysi Helen Ortega; Franceli Linney Cibrian; Mónica Tentori
Children with autism find free play difficult. Free play is important for children with autism to help them develop social, communication, and expression skills. Interactive surfaces (IS) offer a casual, natural, collaborative and engaging experience adequate to promote free play for children with autism. In this poster, we present the design and development of BendableSound, a fabric-based IS that allows children to play music by tapping and touching digital elements appearing on top of the fabric. To design BendableSound, we followed a user-centered design process involving interviews, observations, and design sessions with caregivers. We close by discussing directions for future work.
Designing Kinect2Scratch Games to Help Therapists Train Young Adults with Cerebral Palsy in Special Education School Settings BIBAFull-Text 317-318
  Yao-Jen Chang; Ya-Shu Kang; Yao-Sheng Chang; Hung-Huan Liu; Cheng-Chieh Wang; Chia Chun Kao
Children with cerebral palsy (CP) may need to undergo long-term physical rehabilitation to enhance neural development. Currently, commercial rehabilitation products cannot be customized easily and are expensive; consequently, public special-education schools generally cannot afford to purchase these products. This study employs Microsoft Kinect technology and image recognition technology to create a rehabilitation system that may be applicable to people with CP. To motivate people with CP to engage in exercise training, we gamified the movement training of self-feeding and self-dressing for young adults with CP in special-education school settings. By leveraging the Scratch language and the Kinect2Scratch tool, the physical therapists (PTs) who will use the system may be able to do the customization without technical support. Preliminary results of 5 healthy college students testing the exercise games are presented. The precision and reliability of the Kinect2Scratch games are 100% and 97.5%, respectively.
Model-Based Automated Accessibility Testing BIBFull-Text 319-320
  Giorgio Brajnik; Chiara Pighin; Sara Fabbro
Usability for Accessibility: A Consolidation of Requirements for Mobile Applications BIBAFull-Text 321-322
  Clauirton A. Siebra; Tatiana B. Gouveia; Anderson Filho; Walter Correia; Marcelo Penha; Marcelo Anjos; Fabiana Florentin; Fabio Q. B. Silva; Andre L. M. Santos
Unlike the Web Accessibility Guidelines, which are already consolidated as a reference, the initiatives to develop guidelines for accessible mobile applications are recent, and several approaches present only suggestions rather than a concrete list of functional requirements. This work analyzed 247 scientific articles to identify requirements that are being considered for different types of impairments. The collected information was consolidated and classified according to groups of impairments. As a result, this paper presents the main points of a checklist proposal for functional requirements that should be considered by mobile applications to ensure accessibility with usability.
An Investigation into Appropriation of Portable Smart Devices by Users with Aphasia BIBAFull-Text 323-324
  Gennaro Imperatore; Mark D. Dunlop
As part of ongoing research we analysed the user experience of a group of people with aphasia by applying the Technology Appropriation Model. Appropriation is defined as the way in which users adapt the functionality of technology to suit their needs, often in ways the designers would not have predicted. Currently over 250,000 people in the UK and 1,000,000 people in the US have aphasia. We discovered that appropriation analysis can be a useful tool for requirements analysis of software, especially in cases where the user has trouble communicating about abstract or imagined scenarios, as is the case for many people with aphasia. We also discovered that appropriation often stems from the user not knowing the full capabilities of the device or what applications are already available.
Japanese Sentence Handwriting Learning System for Special Needs Education BIBAFull-Text 325-326
  Iwao Kobayashi; Nami Nemoto; Kiwamu Sato; Kaito Watanabe; Hiroshi Nunokawa; Naohiro Ogasawara
We examined a learning system to study handwriting of Japanese sentences in special needs education. A prototype was constructed with a tablet PC. Evaluation of the system was conducted by expert reviewers and by a student who has handwriting difficulties.
DroneNavigator: Using Drones for Navigating Visually Impaired Persons BIBAFull-Text 327-328
  Mauro Avila; Markus Funk; Niels Henze
Even after decades of research about navigation support for visually impaired people, moving independently still remains a major challenge. Previous work in HCI explored a large number of navigation aids, including auditory and tactile guidance systems. In this poster we propose a novel approach to guide visually impaired people. We use small lightweight drones that can be perceived through the distinct sound and the airflow they naturally produce. We describe the interaction concept we envision, first insights from proof-of-concept tests with a visually impaired participant, and provide an overview of potential application scenarios.
Calendars for Individuals with Cognitive Disabilities: A Comparison of Table View and List View BIBAFull-Text 329-330
  Frode Eika Sandnes; Maud Veronica Lundh
Calendars can be important memory aids for individuals with cognitive disabilities. This study compared the effect of the popular two-dimensional table calendar view and the simpler list view. A controlled experiment was conducted involving 10 individuals with cognitive disabilities and 10 controls. The results show that the list view gave significantly fewer errors and shorter searching times, while editing took longer with the list view.
Mobile Phone Access to a Sign Language Dictionary BIBAFull-Text 331-332
  Michael D. Jones; Harley Hamilton; James Petmecky
We have built a functional prototype of a mobile phone app that allows children who are deaf to look up American Sign Language (ASL) definitions of printed English words using the camera on the mobile phone. In the United States, 90% of children who are deaf are born to parents who are not deaf and who do not know sign language [3]. In many cases, this means that the child will not be exposed to fluent sign language in the home and this can delay the child's acquisition of both their first signed language and a secondary written language [1]. Another consequence is that outside of school the child may not have easy access to people or services that can translate written English words into ASL signs. We have developed a prototype phone app that allows children who are deaf and their parents to look up ASL definitions of English words in printed books. The user aims the phone camera at the printed text, takes a picture and then clicks on a word to access the ASL definition. Our next steps are to explore the idea with children who are deaf and their parents, develop design guidelines for sign language dictionary apps, build the app using those guidelines and then to test the app with children who are deaf and their hearing parents.
Teaching Accessibility, Learning Empathy BIBAFull-Text 333-334
  Cynthia Putnam; Maria Dahman; Emma Rose; Jinghui Cheng; Glenn Bradford
As information and communication technologies (ICTs) become more diffuse, the diversity of users that designers need to consider is growing; this includes people with disabilities and aging populations. As a result, computing education must provide students the means and inspiration to learn about inclusive design. This poster presents top-level findings from 18 interviews with professors from some of the top universities in the US. Our analysis yielded four categories of findings: (1) important student learning outcomes (the most common was for students to embrace diversity); (2) exercises and teaching materials (almost all focused on inclusion of people with disabilities in discovery and evaluation of ICTs); (3) frustrations and challenges (largely focused on how to engage students in accessibility topics); and (4) the importance of instructor initiative to include the topic of accessibility in their teaching. The unifying theme was the high importance of cultivating empathy with end users.
Road Sensing: Personal Sensing and Machine Learning for Development of Large Scale Accessibility Map BIBAFull-Text 335-336
  Yusuke Iwasawa; Koya Nagamine; Yutaka Matsuo; Ikuko Eguchi Yairi
This paper proposes a methodology for developing a large-scale accessibility map through personal sensing with smartphones and machine learning. The strength of the proposed method is its low-cost data collection, which is key to overcoming the current stagnation of accessibility maps, which so far cover only limited areas. We developed and evaluated a prototype system that estimates the type of ground surface by applying supervised learning techniques to activity-sensing data of wheelchair users recorded by a three-axis accelerometer, focusing on knowledge extraction and visualization. In an evaluation using data from nine wheelchair users and a Support Vector Machine classifier, three ground-surface types -- curb, tactile indicator, and slope -- were detected with f-scores (and accuracies) of 0.63 (0.92), 0.65 (0.85), and 0.54 (0.97), respectively.
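The pipeline sketched below is a minimal, hypothetical illustration of the approach described above: windowed three-axis accelerometer data is reduced to simple statistical features and classified with a scikit-learn Support Vector Machine. The window length, feature set, and synthetic data are assumptions for illustration only and do not reproduce the authors' setup.

  # Illustrative sketch: surface-type classification from wheelchair
  # accelerometer data. Window size and features are assumed, not the
  # authors' actual configuration.
  import numpy as np
  from sklearn.svm import SVC
  from sklearn.model_selection import cross_val_score

  def window_features(accel, window=128):
      """Split an (N, 3) accelerometer stream into fixed-length windows and
      compute the mean and standard deviation per axis (6 features/window)."""
      feats = []
      for start in range(0, len(accel) - window + 1, window):
          w = accel[start:start + window]
          feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
      return np.array(feats)

  # Synthetic stand-ins for labelled recordings of three surface types.
  rng = np.random.default_rng(0)
  streams = {
      "curb": rng.normal(0.0, 1.5, (1280, 3)),
      "tactile_indicator": rng.normal(0.0, 0.8, (1280, 3)),
      "slope": rng.normal(0.3, 0.4, (1280, 3)),
  }

  X, y = [], []
  for label, accel in streams.items():
      f = window_features(accel)
      X.append(f)
      y += [label] * len(f)
  X = np.vstack(X)

  clf = SVC(kernel="rbf", C=1.0, gamma="scale")
  print(cross_val_score(clf, X, y, cv=5, scoring="f1_macro"))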
Adaptive Assistance to Support and Promote Performance-Impaired People in Manual Assembly Processes BIBFull-Text 337-338
  Manuel Kölz; Darrell Jordon; Peter Kurtz; Thomas Hörz
Issues influencing the Uptake of Smartphone Reminder apps for People with Acquired Brain Injury BIBAFull-Text 339-340
  Matthew Jamieson; Marilyn McGee-Lennon; Breda Cullen; Stephen Brewster; Jonathan Evans
Smartphone reminder applications (apps) have the potential to help people with memory impairment after acquired brain injury (ABI) to perform everyday tasks. The issues affecting the uptake of reminder apps for this group are still poorly understood. To address this, three focus groups were held with people with memory impairments after ABI and ABI caregivers (N=12). These involved a discussion about perceptions of, and attitudes towards, reminder apps, combined with usability reflections during a user-centred design session (Keep Lose Change) after a walkthrough of an existing reminder app -- Google Calendar. Framework analysis revealed six key themes that impact the uptake of reminder apps: Perceived Need, Social Acceptability, Experience/Expectation, Desired Content and Functions, Cognitive Accessibility and Sensory/Motor Accessibility. Analysis of these themes revealed issues that should be considered by designers and researchers when developing and testing reminding software for people with memory impairment following ABI.
"Read That Article": Exploring Synergies between Gaze and Speech Interaction BIBAFull-Text 341-342
  Diogo Vieira; João Dinis Freitas; Cengiz Acartürk; António Teixeira; Luís Sousa; Samuel Silva; Sara Candeias; Miguel Sales Dias
Gaze information has the potential to benefit Human-Computer Interaction (HCI) tasks, particularly when combined with speech. Gaze can improve our understanding of the user intention, as a secondary input modality, or it can be used as the main input modality by users with some level of permanent or temporary impairments. In this paper we describe a multimodal HCI system prototype which supports speech, gaze and the combination of both. The system has been developed for Active Assisted Living scenarios.
Exploring the Use of Massive Open Online Courses for Teaching Students with Intellectual Disability BIBAFull-Text 343-344
  Rodrigo Laiola Guimarães; Andrea Britto Mattos
In this paper, we report on a qualitative study that investigates the impact of using a popular Massive Open Online Course (MOOC) to complement the vocational training of students with intellectual disability (ID). We have been investigating this problem for several months in partnership with a Brazilian NGO (Non-Governmental Organization) for people with ID. Our methodology integrates different aspects of human-computer interaction (i.e., requirement gathering sessions and observation of real subjects). Potential users were involved since the beginning of this research, starting with focus groups and interviews with experts, followed by the observation of a traditional vocational training session, and then the assessment of a popular MOOC in the classroom. In this paper, we discuss the process and present our preliminary results, providing some indications on how MOOCs could better support instructors and students with ID.
VITHEA-Kids: a Platform for Improving Language Skills of Children with Autism Spectrum Disorder BIBAFull-Text 345-346
  Vânia Mendonça; Luísa Coheur; Alberto Sardinha
In this work, we present a platform designed for children with Autism Spectrum Disorder to develop language and generalization skills, in response to the lack of applications tailored to the unique abilities, symptoms, and challenges of children with autism. The platform allows caregivers to build customized multiple-choice exercises while taking into account the specific needs and characteristics of each child. We also propose a module for the automatic generation of exercises, aiming to ease the task of exercise creation for caregivers.
The Development of a Framework for Understanding the UX of Subtitles BIBAFull-Text 347-348
  Michael Crabb; Rhianne Jones; Mike Armstrong
Approximately 10% of the television audience use subtitles (captioning) to support their viewing experience. Subtitles enable these viewers to participate in an experience that is often taken for granted by the general audience. However, when reviewing the subtitle literature, it is uncommon to find work that examines the user experience of subtitle users. This paper presents work on the development of a framework for analysing the user experience of watching subtitled content. The framework is introduced, its usage discussed, and the overall framework is then reflected upon.
An Empirical Study for Examining the Performance of Visually Impaired People in Recognizing Shapes through a Vibro-tactile Feedback BIBAFull-Text 349-350
  Waseem Safi; Fabrice Maurel; Jean-Marc Routoure; Pierre Beust; Gaël Dias
In this paper, we present the results of an empirical study examining the performance of blind individuals in recognizing shapes through vibro-tactile feedback. The suggested vibro-tactile system maps different shades of grey to patterns of low-frequency tactile vibrations. Performance data are reported, including the number of errors and a qualitative assessment of how well participants understood the displayed shapes.
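As a rough, hedged illustration of the kind of mapping the abstract describes, the sketch below converts the grey level under a touch point into low-frequency vibration parameters; the frequency range and scaling are assumptions, not the parameters used in the study.

  def grey_to_vibration(grey, f_min=10.0, f_max=40.0):
      """Map an 8-bit grey level (0 = black, 255 = white) to an assumed
      vibration frequency (Hz) and amplitude in [0, 1]; darker pixels
      vibrate more strongly in this illustrative mapping."""
      level = max(0, min(255, grey)) / 255.0
      frequency = f_min + (1.0 - level) * (f_max - f_min)
      amplitude = 1.0 - level
      return frequency, amplitude

  for grey in (0, 128, 255):
      print(grey, grey_to_vibration(grey))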
Breaking Barriers with Assistive Macros BIBAFull-Text 351-352
  André Rodrigues
People with disabilities rely on assistive technology (AT) software to interact with their mobile devices. The overall functionality of AT depends on a set of requirements that are not always fulfilled by application developers, often resulting in cumbersome and slow interactions, or even rendering content inaccessible. To address these issues we present Assistive Macros, an accessibility service developed to enable users to perform a sequence of commands with a single selection. Macros can be created manually by the user or automatically by detecting repeated sequences of interactions. In this paper, we report a preliminary case study of our prototype involving a motor-impaired user and his caregivers.
Making Connections: Modular 3D Printing for Designing Assistive Attachments to Prosthetic Devices BIBAFull-Text 353-354
  Megan Kelly Hofmann
In this abstract, we present a modular design methodology for prototyping and 3D printing affordable, highly customized assistive technology. The methodology creates 3D-printed attachments for prosthetic limbs that perform a diverse range of tasks. We demonstrate the methodology with two case studies in which two participants with upper limb amputations helped design devices to play the cello and to use a hand-cycle.
Sensory Substitution Training for Users Who Are Blind with Dynamic Stimuli, Games and Virtual Environments BIBAFull-Text 355-356
  Shachar Maidenbaum
Sensory Substitution Devices (SSDs) offer people who are blind access to visual information via other senses. One of the main bottlenecks to widespread adoption of sensory substitution is the difficulty of learning to use these devices -- both mastering the device and learning to properly interpret visual information. We have recently upgraded the training offered in our lab to congenitally blind EyeMusic users from a static training paradigm to an interactive dynamic one in an attempt to address both challenges mentioned above. This offered us a unique opportunity to explore the effect of this change on both the users and their sighted personal instructors. We explored the users' ability to play simple interactive games and learn visual principles, and the feelings and opinions of both users and instructors during the shift. We found that all users were able to successfully complete these tasks using visual principles such as depth-size, reported a high level of enjoyment and satisfaction, viewed these sessions as more effective, and highlighted a greater sense of independence and control. The instructors were enthusiastic as well; they mirrored the users' answers and especially highlighted the flexibility advantage.
SlidePacer: Improving Class Pace for Deaf and Hard-of-Hearing Students BIBAFull-Text 357-358
  Alessandra Brandao
Following multimedia lectures in mainstream university classrooms is challenging for deaf and hard-of-hearing (DHH) students, even when they are provided accommodations that best address their individual needs. Due to multiple visual sources of information (teacher, slides, interpreter), these students struggle to divide their attention among several simultaneous sources of input, which may result in their missing important parts of the lecture content; as a result, DHH students' access to information can be limited in comparison to that of their hearing peers. This paper introduces SlidePacer, a tool aimed at improving coordination between the instructor's speech, the sign language interpretation of the lecture and slide transitions. The goal of SlidePacer is to prevent DHH students' loss of information by promoting an adequate lecture pace, which can contribute to their learning and academic achievements.
Sequential Gestural Passcodes on Google Glass BIBAFull-Text 359-360
  Abdullah Ali
This paper details an authentication mechanism aimed at helping individuals with visual impairments access online accounts on desktop computers, and at decreasing the security threat posed by observers, or shoulder surfers, by providing an alternative to entering typical alphanumeric passwords on keyboards. The mechanism uses a head-mounted wearable device, Google Glass, because it provides feedback directly to the wearer, obscuring it from observers, and because its gesture pad, which accepts input from the user, is easy to locate and interact with. The mechanism is being prepared for a testing phase with individuals from the targeted demographic.
Talkabel: A Labeling Method for 3D Printed Models BIBAFull-Text 361-362
  Lei Shi
Three-dimensional printed models are important learning tools for blind people. Unlike tactile graphics, there is no standard accessible way to label these models. We present Talkabel, a labeling method that enables blind people to access information in a model by simply tapping tactile markers that can be added almost anywhere on the model. We use a mobile phone that is propped on an attached scaffold to detect and classify the user's taps and then speak the appropriate label. We evaluated the feasibility of Talkabel with four 3D printed models, and our results showed that Talkabel could be implemented quickly and maintained a high accuracy with different models. We conclude with future work.
Haptic Gloves Prototype for Audio-Tactile Web Browsing BIBAFull-Text 363-364
  Andrii Soviak
Blind people rely on screen readers to interact with the Web. Since screen readers narrate digital content serially, blind users can only form a one-dimensional mental model of a web page and, hence, cannot enjoy the benefits inherently offered by its 2-D layout; e.g., understanding the spatial relations between objects in a webpage, or their locations on the screen, helps in navigating webpages. Haptic interfaces could provide blind people with a tactile "feel" for the 2-D layout and help them navigate web pages more efficiently. Haptic displays, capable of high-resolution tactile feedback, could render any webpage in a tactile form, enabling blind people to exploit the aforementioned spatial relations and focus screen reading on specific parts of the webpage. In this paper, I report on preliminary work toward the development of FeelX -- a haptic gloves system that will enable tactile web browsing. FeelX will be used alongside regular screen readers and will provide blind screen-reader users with the ability to explore web pages by touch and audio.
Exploring New Paradigms for Accessible 3D Printed Graphs BIBAFull-Text 365-366
  Michele Hu
We explore new paradigms of representing bar graphs with 3D printing technology. Our goal is to make graphs more accessible to blind people. We co-designed with one blind participant and two teachers for the visually impaired (TVI) to refine our paradigms and created prototypes of bar graphs of varying levels of complexity with each paradigm. The paradigms we developed include: embossed graphs that are similar to traditional tactile graphics, freestanding graphs with guidelines connecting bars to the y-axis, and freestanding graphs with tick marks on each bar. We developed prototypes of a bar graph displaying one dataset, and a bar graph displaying two datasets.
The Effects of Automatic Speech Recognition Quality on Human Transcription Latency BIBAFull-Text 367-368
  Yashesh Gaur
Converting speech to text quickly is fundamental to making aural content accessible to people who are deaf or hard of hearing. Despite the high cost, this is done by human captionists, as automatic speech recognition (ASR) does not give satisfactory performance in real-world settings. Offering ASR output to captionists as a starting point seems easier and more economical, yet the effectiveness of this approach clearly depends on the quality of the ASR, because fixing inaccurate ASR output may take longer than producing the transcription without ASR support. In this paper, we empirically study how the time required by captionists to produce transcriptions from partially correct ASR output varies with the accuracy of that output. Our studies with 160 participants recruited on Amazon's Mechanical Turk indicate that starting with the ASR output is worse unless it is sufficiently accurate (Word Error Rate (WER) under 30%).
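Word Error Rate, the accuracy measure used in the study, is a standard word-level edit distance; the sketch below is a generic implementation for reference, not the authors' code.

  def word_error_rate(reference: str, hypothesis: str) -> float:
      """WER = (substitutions + insertions + deletions) / reference length,
      computed with a dynamic-programming edit distance over words."""
      ref, hyp = reference.split(), hypothesis.split()
      # d[i][j]: edit distance between the first i reference words
      # and the first j hypothesis words.
      d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
      for i in range(len(ref) + 1):
          d[i][0] = i
      for j in range(len(hyp) + 1):
          d[0][j] = j
      for i in range(1, len(ref) + 1):
          for j in range(1, len(hyp) + 1):
              cost = 0 if ref[i - 1] == hyp[j - 1] else 1
              d[i][j] = min(d[i - 1][j] + 1,         # deletion
                            d[i][j - 1] + 1,         # insertion
                            d[i - 1][j - 1] + cost)  # substitution / match
      return d[len(ref)][len(hyp)] / max(len(ref), 1)

  # One substitution and one deletion over five reference words -> WER = 0.4
  print(word_error_rate("making aural content accessible quickly",
                        "making oral content accessible"))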

Poster Session 2

Web Search Credibility Assessment for Individuals who are Blind BIBAFull-Text 369-370
  Ali Abdolrahmani; Ravi Kuber; William Easley
While screen reading technologies offer considerable promise to individuals who are blind by providing an accessible overview of web-based content, difficulties can be faced in determining the credibility of sites and their respective contents. This can impact the user's behavior, particularly if sensitive information needs to be entered (e.g. into a web-based form). In this paper, we describe an exploratory study examining the criteria that blind screen-reader users employ to assess credibility. More specifically, we have focused on the common task of web searching and exploring search results. Findings from the study suggest that mismatches between the titles of search results and their respective snippets, along with the richness and accessibility of the content once a search result is selected, inform users' judgments of whether sites are credible.
A Voting System for AAC Symbol Acceptance BIBAFull-Text 371-372
  E. A. Draffan; Mike Wald; Nawar Halabi; Amatullah Kadous; Amal Idris; Nadine Zeinoun; David Banes; Dana Lawand
This paper illustrates how an innovative voting system has been developed to allow AAC users, their therapists, carers and families to show their degree of acceptance of newly developed symbols and their referents. The approach, which follows a participatory model of research, is implemented via an online symbol management system using a set of criteria that provide instant feedback to the developers and the project team. Scores and comments regarding the symbols are collated and, where a majority vote has occurred, symbols are added to the Arabic Symbol Dictionary with lexical entries in both Arabic and English.
A Direct TeX-to-Braille Transcribing Method BIBAFull-Text 373-374
  Andreas Papasalouros; Antonis Tsolomitis
The TeX/LaTeX typesetting system is the most widespread system for creating documents in mathematics and science. However, no reliable tool exists to this day for automatically transcribing documents from these formats into Braille. Thus, blind students who study related fields do not have access to the bulk of studying materials available in LaTeX format. We developed a tool, named latex2nemeth, for transcribing LaTeX documents directly into Nemeth Braille, thus facilitating access to scientific materials for students with blindness.
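To give a feel for what a direct transcription pass involves, the sketch below tokenizes a tiny fragment of LaTeX math and maps tokens through a symbol table. The Braille cells in the table are placeholders chosen for illustration; they are not the actual Nemeth code assignments produced by latex2nemeth.

  import re

  # Placeholder symbol table: these Braille cells are illustrative stand-ins,
  # NOT the real Nemeth assignments used by latex2nemeth.
  SYMBOLS = {
      "\\frac": "⠹",   # hypothetical "fraction start"
      "}{":     "⠌",   # hypothetical "fraction bar"
      "+":      "⠬",
      "=":      "⠨⠅",
      "x":      "⠭",
      "1":      "⠂",
      "2":      "⠆",
  }

  def transcribe(latex: str) -> str:
      """Greedy left-to-right match of known tokens against the symbol
      table; unknown characters (remaining braces, spaces) are skipped."""
      tokens = sorted(SYMBOLS, key=len, reverse=True)
      pattern = "|".join(re.escape(t) for t in tokens)
      return "".join(SYMBOLS[m.group(0)] for m in re.finditer(pattern, latex))

  print(transcribe(r"\frac{x+1}{2} = x"))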
Spoken Language, Conversational Assistive Systems for People with Cognitive Impairments?: Yes, If BIBAFull-Text 375-376
  Ramin Yaghoubzadeh; Stefan Kopp
We analyzed autonomous conversational spoken interaction as a modality for assistive systems in initial groups of older adults and people with cognitive impairments; we had previously explored this in a WOz setup. Solving a simple task in the domain of week planning, subjects readily interacted with the system. Performance was generally good, but dependent on successful adherence to a specific terse conversational style. Enforcing these patterns in a socially acceptable way during conversation is the next goal for the system.
Correction of the Fundamental Voice Frequency Using Tactile and Visual Feedback BIBAFull-Text 377-378
  Rodrigo Leone Alves; Carlos M. G. S. Lima; Raimundo C. S. Freire
People who are deaf or hard of hearing require correction of the fundamental frequency of their voice, which is usually achieved with the help of a phonoaudiologist (speech therapist). The therapist's workload can, however, be greatly reduced by appropriate devices that help the patient train voice positioning. This paper reports the development of such a system, based on FPGA technology, that provides tactile and visual feedback. Unlike the feedback from the therapist, which is usually relative, the feedback provided by a device can be given in absolute terms. This feature can improve the learning process, which nevertheless still requires the initial training provided by the therapist.
TinyBlackBox: Supporting Mobile In-The-Wild Studies BIBAFull-Text 379-380
  Kyle Montague; André Rodrigues; Hugo Nicolau; Tiago Guerreiro
Most work investigating mobile HCI is carried out within controlled laboratory settings; these spaces are not representative of the real-world environments in which the technology will predominantly be used. As a result, such studies can produce a skewed or inaccurate understanding of interaction behaviors and users' abilities. While mobile in-the-wild studies provide more realistic representations of technology usage, there are additional challenges to conducting data collection outside the lab. In this paper we discuss these challenges and present TinyBlackBox, a standalone data collection framework to support mobile in-the-wild studies with today's smartphone and tablet devices.
APLUS: A 3D Corpus of French Sign Language BIBAFull-Text 381-382
  Annelies Braffort; Mohamed Benchiheub; Bastien Berret
In this paper, we present APlus, a corpus of French Sign Language that includes motion capture, video and eye-tracker data. We first explain the motivation for creating this resource; we then describe the preparation work carried out to design the best setup given our constraints and aims, and finally the current content of the corpus at this stage of the project.
Supporting Everyday Activities for Persons with Visual Impairments Through Computer Vision-Augmented Touch BIBAFull-Text 383-384
  Leah Findlater; Lee Stearns; Ruofei Du; Uran Oh; David Ross; Rama Chellappa; Jon Froehlich
The HandSight project investigates how wearable micro-cameras can be used to augment a blind or visually impaired user's sense of touch with computer vision. Our goal is to support an array of activities of daily living by sensing and feeding back non-tactile information (e.g., color, printed text, patterns) about an object as it is touched. In this poster paper, we provide an overview of the project, our current proof-of-concept prototype, and a summary of findings from finger-based text reading studies. As this is an early-stage project, we also enumerate current open questions.
A haptic device to support the use of a glucose monitor by individuals who are blind or visually impaired BIBAFull-Text 385-386
  Matthew Standard; Jacob Park; Dianne T. V. Pawluk
We present a simple and affordable device developed in order to address difficulties that diabetics who are blind or visually impaired encounter when attempting to independently operate a glucometer.
Exploring Interface Design for Independent Navigation by People with Visual Impairments BIBAFull-Text 387-388
  Erin L. Brady; Daisuke Sato; Chengxiong Ruan; Hironobu Takagi; Chieko Asakawa
Most user studies of navigation applications for people with visual impairments have been limited by existing localization technologies, and appropriate instruction types and information needs have been determined through interviews. Using Wizard-of-Oz navigation interfaces, we explored how people with visual impairments respond to different instruction intervals, precision, output modalities, and landmark use during in situ navigation tasks. We present the results of an experimental study with nine people with visual impairments, and provide direction and open questions for future work on adaptive navigation interfaces.
MoviTouch: Mobile Movement Capability Configurations BIBAFull-Text 389-390
  Jan David Smeddinck; Jorge Hey; Nina Runge; Marc Herrlich; Christine Jacobsen; Jan Wolters; Rainer Malaka
Strong adaptability is a major requirement and challenge in the physiotherapeutic use of motion-based games for health. For adaptation tool development, tablets are a promising platform due to their similarity in affordance to traditional clipboards. In a comparative study, we examined three different input modalities on the tablet that allow for configuring joint angles: direct touch, classic interface components (e.g. buttons and sliders), and a combination of both. While direct touch emerged as the least preferable modality, the results highlight the combination of direct touch and classic interface components as the most accessible modality for configuring joint angle ranges. Furthermore, the importance of configuring joint angles along three distinct axes emerged, as did the interesting use case of configuration tools as communication support.
Feel the Web: Towards the Design of Haptic Screen Interfaces for Accessible Web Browsing BIBAFull-Text 391-392
  Andrii Soviak; Vikas Ashok; Yevgen Borodin; Yury Puzis; I. V. Ramakrishnan
Web browsing with screen readers is tedious and frustrating, largely due to the inability of blind screen-reader users to get spatial information about the structure of web pages and utilize it for effective navigation. Haptic interfaces have the potential to provide blind users with a tactile 'feel' for the 2-D layout of web pages and help them focus screen reading on specific parts of the webpage. In this preliminary work, we explore the utility of a simple haptic web-browsing interface -- tactile overlays, and report on a preliminary user study with 10 blind participants who performed various web-browsing tasks with and without these overlays. We also analyzed the user-interaction behavior and explored the appropriate design choices and their tradeoffs in the space of haptic-interface design for accessible web browsing.
Multimodal Application for the Perception of Spaces (MAPS) BIBAFull-Text 393-394
  Richard Adams; Dianne Pawluk; Margaret Fields; Ryan Clingman
MAPS (Multimodal Application for the Perception of Spaces) is a tablet App, with associated hardware, for providing on-the-go access to audio-tactile maps of unfamiliar indoor venues to individuals who are blind or visually impaired. Performance of the system was assessed using a survey knowledge task in which participants were exposed to three different cue combinations: audio only, audio with haptic cues provided by the tablet's built-in vibrational motor (built-in tactile), and audio with haptic feedback provided by special vibrating rings worn on two fingers (stereo-tactile). Of the three conditions, the combination of audio and built-in tactile feedback resulted in superior user performance in judging the relative direction to points of interest. Results indicate that the audio-tactile display improves survey knowledge both when used for a priori (pre-visit) map learning and on-the-go (within the environment) to provide just-in-time information.
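Judging the relative direction to a point of interest, the task used in the assessment above, reduces to comparing the user's heading with the bearing to the target; the sketch below is a generic formulation under assumed map coordinates, not the MAPS implementation.

  import math

  def relative_bearing(user_xy, heading_deg, poi_xy):
      """Angle (degrees, in -180..180) from the user's current heading to a
      point of interest: 0 means straight ahead, negative means to the left."""
      dx = poi_xy[0] - user_xy[0]
      dy = poi_xy[1] - user_xy[1]
      bearing = math.degrees(math.atan2(dx, dy))  # 0 deg = +y ("up" on the map)
      return (bearing - heading_deg + 180.0) % 360.0 - 180.0

  # A point of interest to the north-east of a user facing east (90 deg)
  # lies 45 degrees to their left.
  print(relative_bearing((0, 0), 90.0, (10, 10)))  # -45.0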
Opportunity: Inclusive Design for Interactive Simulations BIBAFull-Text 395-396
  Emily B. Moore; Clayton Lewis
Interactive simulations are becoming increasingly important in education, but these tools are not accessible to many learners. Here, we share some goals to guide the development of accessible simulations, and summarize early findings in development efforts. We also highlight challenges and opportunities where the unique expertise of the ASSETS community is needed. We hope to interest readers in participating in the growing community designing, developing, and researching interactive simulations for learning.
Development of BCI Based Softwares to Assist People with Mobility Limitations in the School Inclusion Process BIBAFull-Text 397-398
  Regina de Oliveira Heidrich; Marsal A. Branco; João Batista Mossmann; Anderson Rodrigo Schuh; Emely Jensen
Inclusive education has been on the agenda at all levels and modalities of the education system, raising the question of how schools will address diversity and teach students who have disabilities. According to the 2010 Census, people with physical disabilities account for almost 7% of the population in Brazil. This project aims to design Digital Learning Constructs (DLCs) as an inclusive education model implemented through a Brain-Computer Interface (BCI), and thus to assist people with mobility limitations in the process of school inclusion. A DLC is any entity or device devised or built in a multidisciplinary way in the form of an educational game, helping players to build or redesign their knowledge. It has a dual nature, as both a learning object and a game, working procedurally with the contributions of each area without any subordination between them. Via the BCI, the user will not use a mouse but specific low-cost hardware that commands the computer through brain waves. Future studies with users will be conducted through a qualitative, quantitative and ergonomic design approach. The study subjects are people from the municipal education network with motor and communication impairments. The importance of studies linked to the autonomy of these subjects stems from the fact that many do not have freedom of movement because of motor impairments. The aim of this project is to open paths for people with motor impairments and degenerative diseases so that they can use the computer in the classroom context.
The Impact of Stuttering: How Can a Mobile App Help? BIBAFull-Text 399-400
  Iva Demarin; Ljubica Leko; Maja Škrobo; Helena Germano; Patrícia Macedo; Rui Neves Madeira
The mobile application BroiStu was developed out of the need for better insight into the impact of stuttering on the everyday lives of people who stutter. This paper presents a study of this application, verifying which aspects of the impact of stuttering reported in the scientific literature are covered by the application and comparing it with a similar application. Furthermore, it summarises a user study conducted with a first panel of experts. New findings and the results obtained are discussed.
Exploring the Potential of Wearables to Support Employment for People with Mild Cognitive Impairment BIBAFull-Text 401-402
  Victor Dibia; Shari Trewin; Maryam Ashoori; Thomas Erickson
We interviewed adults with mild cognitive impairment working in a sheltered workshop and three support professionals about competitive employment in the community, and what concerns or barriers they may have. While individuals were most concerned about managing their health, professionals also described personal safety and productivity challenges. Families' worries about the safety of their family members were reported to be a barrier to competitive employment. Using a smart watch as a technology probe, we identified three potential applications of wearable technology in tackling these challenges: health support, family support, and productivity support.
Predictive Link Following for Accessible Web Browsing BIBAFull-Text 403-404
  Jiri Roznovjak; John J. Magee
Users of mouse-replacement interfaces can have difficulty navigating web pages. Small links may be difficult to click, and cluttered pages can result in following an incorrect link. We propose a predictive link-following approach for web browsing that intercepts mouse click signals and instead follows links based on analysis of the user's mouse movement behavior. Our system can be integrated into web page scripting to provide accessibility without needing to install separate software. In a preliminary experiment, users were able to click particularly small targets more accurately with our system compared to traditional mouse clicks.
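One simple way to realise such prediction, sketched below under assumed page coordinates, is to score candidate links by how well the recent cursor trajectory points toward them and to follow the best-aligned nearby target when a click arrives. This is an illustrative heuristic, not the authors' actual model.

  import math

  def predict_target(trajectory, links):
      """Score each link centre by the cosine between the cursor's latest
      movement direction and the direction from the cursor to the link,
      discounted by distance; return the id of the best-scoring link.

      trajectory: list of (x, y) cursor samples, oldest first
      links: dict mapping link id -> (x, y) centre of the link
      """
      (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
      dx, dy = x1 - x0, y1 - y0
      speed = math.hypot(dx, dy) or 1.0

      best_link, best_score = None, -math.inf
      for link_id, (lx, ly) in links.items():
          vx, vy = lx - x1, ly - y1
          dist = math.hypot(vx, vy) or 1.0
          alignment = (dx * vx + dy * vy) / (speed * dist)  # cosine similarity
          score = alignment / (1.0 + dist / 100.0)          # favour nearby links
          if score > best_score:
              best_link, best_score = link_id, score
      return best_link

  links = {"home": (400, 60), "contact": (420, 65), "about": (50, 600)}
  print(predict_target([(300, 100), (340, 90), (380, 72)], links))  # "home"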
Appliance Displays: Accessibility Challenges and Proposed Solutions BIBAFull-Text 405-406
  Giovanni Fusco; Ender Tekin; Nicholas A. Giudice; James M. Coughlan
People who are blind or visually impaired face difficulties using a growing array of everyday appliances because they are equipped with inaccessible electronic displays. We report developments on our 'Display Reader' smartphone app, which uses computer vision to help a user acquire a usable image of a display and have the contents read aloud, to address this problem. Drawing on feedback from past and new studies with visually impaired volunteer participants, as well as from blind accessibility experts, we have improved and simplified our user interface and have also added the ability to read seven-segment digit displays. Our system works fully automatically and in real time, and we compare it with general-purpose assistive apps such as Be My Eyes, which recruit remote sighted assistants (RSAs) to answer questions about video captured by the user. Our discussions and preliminary experiment highlight the advantages and disadvantages of fully automatic approaches compared with RSAs, and suggest possible hybrid approaches to investigate in the future.
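The seven-segment reading capability mentioned above ultimately comes down to mapping detected segment states to digits; the sketch below shows only that final decoding step (the computer-vision stage that decides which segments are lit is assumed), using the conventional a-g segment labelling.

  # Standard seven-segment layout:
  #     _a_
  #   f|   |b
  #     -g-
  #   e|   |c
  #     _d_
  DIGITS = {
      frozenset("abcdef"):  "0",
      frozenset("bc"):      "1",
      frozenset("abdeg"):   "2",
      frozenset("abcdg"):   "3",
      frozenset("bcfg"):    "4",
      frozenset("acdfg"):   "5",
      frozenset("acdefg"):  "6",
      frozenset("abc"):     "7",
      frozenset("abcdefg"): "8",
      frozenset("abcdfg"):  "9",
  }

  def decode(segments_on):
      """Map a set of lit segments, e.g. {'a', 'b', 'c'}, to a digit ('?' if unknown)."""
      return DIGITS.get(frozenset(segments_on), "?")

  print(decode({"b", "c"}))                  # -> 1
  print(decode({"a", "b", "d", "e", "g"}))   # -> 2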
People with Parkinson's Disease Using Computers BIBAFull-Text 407-408
  Maria Hartikainen; Saila Ovaska
Parkinson's disease is a progressive neurological disease that affects the ability to use computer input devices. We present findings from a small-scale qualitative study on how Finnish participants with Parkinson's disease experience their everyday computer use, and describe the solutions they have adopted to cope with their difficulties.
Mobile Integrated Assistance to Empower People Coping with Parkinson's Disease BIBAFull-Text 409-410
  Carla Pereira; Patrícia Macedo; Rui Neves Madeira
In this paper, we present the design of a mobile application that provides integrated assistance to the triad of people with Parkinson's disease, their carers and healthcare professionals. Starting from an in-depth study of this triad's needs, the mobile application was designed not only to support communication among the triad's users, but also to help them find relevant knowledge for their clinical issues, to allow supervision of patients' daily routines, and to recommend daily exercises. The application aims to facilitate access to knowledge and provide professional support for both people with Parkinson's disease and their carers at home, improving remote healthcare assistance.
SECPT Framework: A Travelling Experience Based Framework for Visually Impaired People BIBAFull-Text 411-412
  Jing Guan; Clifford Sze-Tsan Choy
People with or without a visual impairment have the same right to participate fully in the community and enjoy life. However, much existing research on assistive products and technology for visually impaired people is piecemeal, because there has been no holistic and systematic research focus on visually impaired people's travelling experience. We propose the SECPT Framework, which is based on visually impaired people's travelling experience and can serve as a starting point in the design process, enabling designers to quickly understand the target group and determine their research area.
Depicting the Keypad Mouse Interface: Exploring Pointing Behaviors of Three Input Devices BIBAFull-Text 413-414
  J. Eduardo Pérez; Xabier Valencia; Myriam Arrue; Julio Abascal
Using a standard keyboard for pointing and clicking seems to be an affordable option for people who have difficulty handling a standard mouse due to a lack of dexterity or a temporary disability. The keypad mouse uses discrete input based on keystrokes to perform cursor movements. This distinctive feature, along with others related to cursor motion behavior, makes the keypad mouse a unique pointing interface compared with the alternatives. We carried out a study with expert users of different pointing devices. The results revealed significant differences in performance among participants in the three groups. The direction of aimed cursor movements proved to be a crucial factor to take into account for keyboard-only users. The findings presented in this paper have broad application to designing effective predictive models, as well as to proposing design guidelines, novel interaction techniques and adaptation techniques for this group of users.
A Model-Based Tool to develop an Accessible Media Player BIBAFull-Text 415-416
  María González-García; Lourdes Moreno; Paloma Martínez
Despite the abundance of media content, there are users who cannot access it. Accessibility standards indicate that both the media content and the user agent that provides access to it must be accessible to everyone, with or without a disability. To address this issue, this work presents a Model-Based tool that supports the development process of an accessible media player according to accessibility standards. Requirements and adaptation rules regarding the accessibility of a media player and video content are included in the tool design following a methodological approach. The authoring tool is a graphical editor aimed at professionals who do not necessarily have a high level of expertise in accessibility.
Video Analysis for Includification Requirements BIBAFull-Text 417-418
  Alexander Hofmann; Helmut Hlavacs
In extensive trials we asked participants with disabilities to play a set of computer games, demanding a variety of cognitive and physical skills. In particular we presented a Tetris variant we developed ourselves, which follows various includification guidelines. Participants were recorded on video in order to identify strengths and weaknesses in coping with the presented game challenges. In the videos we recorded the game screen, a frontal video of the participants showing emotional responses, and eye gaze. Post processing tasks are used to analyze and retrieve data from the videos. This helps to build up a video database for later analysis and answer questions raised after the recordings were taken.

Demo Session

Accessible Mobile Biometrics for Elderly BIBAFull-Text 419-420
  Ramon Blanco-Gonzalo; Raul Sanchez-Reillo; Loïc Martinez-Normand; Belen Fernandez-Saavedra; Judith Liu-Jimenez
Biometric recognition provides a means to guarantee security in a wide range of services. Unfortunately, its usability is limited and it is not convenient for people with accessibility issues. To prevent rejection of the technology, such systems should be improved by using a user-centric design approach and by applying current accessibility standards. This work describes the design, development and evaluation of a mobile app for making payments at Points of Sale (PoS) using biometric recognition. The system has been tested by elderly persons. Our findings reveal accessibility issues and the most critical requirements to be covered. This is one of the first studies based on the European standard EN 301 549.
Can We Make Dynamic, Accessible and Fun One-Switch Video Games? BIBAFull-Text 421-422
  Sebastián Aced López; Fulvio Corno; Luigi De Russis
This paper presents two one-switch games designed for children with severe motor disabilities, based on the GNomon framework. These mini games demonstrate that it is possible to make dynamic video games with time-dependent game mechanics and flexible layout configurations while being accessible and playable with a single switch. The games were designed in close collaboration with a team of speech therapists, physiotherapists, and psychologists from one of the Local Health Agencies in Turin, Italy. Moreover, the games have been already evaluated with a group of children with different motor impairments through a series of trials with encouraging results.
ChatWoz: Chatting through a Wizard of Oz BIBAFull-Text 423-424
  Pedro Fialho; Luísa Coheur
Several cases of autistic children successfully interacting with virtual assistants such as Siri or Cortana have recently been reported. In this demo we describe ChatWoz, an application that can be used as a Wizard of Oz to collect real data for dialogue systems, but also to allow children to interact with their caregivers through it, as it is based on a virtual agent. ChatWoz is composed of an interface controlled by the caregiver, which establishes what the agent will utter in a synthesised voice. Several elements of the interface can be controlled, such as the agent's facial emotions. In this paper we focus on the child-caregiver interaction scenario and detail the features implemented to support it.
Eyes-free Exploration of Shapes with Invisible Puzzle BIBAFull-Text 425-426
  Andrea Gerino; Lorenzo Picinali; Cristian Bernareggi; Sergio Mascetti
Recent contributions proposed sonification techniques to allow people with visual impairment or blindness to extract information from images on touchscreen devices. In this contribution we introduce Invisible Puzzle Game, an application that is aimed at performing an instrumented remote evaluation of these sonification techniques. The aim is to reach a wide audience of both sighted and visually impaired users and to engage them, thanks to game elements, in playing over time, so that it is possible to evaluate how the performance of each sonification technique is affected by practice.
TactileMaps.net: A Web Interface for Generating Customized 3D-Printable Tactile Maps BIBAFull-Text 427-428
  Brandon T. Taylor; Anind K. Dey; Dan P. Siewiorek; Asim Smailagic
Tactile maps are useful, but not commonly available, tools for providing visually impaired individuals with knowledge about their environment. We have developed a web tool to allow visually impaired users to specify locations and customize 3D map models for production with 3D printers. Our tool uses available online map data and encodes features such as roads and waterways into 3D-printable tactile features. We present a preliminary overview of our web interface and tactile map-generating software with a focus on the design choices that were informed by our pilot studies. We will also discuss additional findings from our focus group interviews and future plans for this work.
ARMStrokes: A Mobile App for Everyday Stroke Rehabilitation BIBAFull-Text 429-430
  Jin Guo; Ted Smith; David Messing; Ziying Tang; Sonia Lawson; Jinjuan Heidi Feng
In this paper, we present a novel smartphone-based rehabilitation approach called ARMStrokes that provides real-time support for stroke survivors to complete rehabilitation exercises for upper extremity recovery. ARMStrokes allows stroke survivors to exercise through interactive games anytime and anywhere and receive instant feedback about the quality of their performance. Stroke survivors can also communicate with their therapists or physicians through the supporting web-based platform. Focus groups involving stroke survivors, caregivers, and therapists have been conducted to evaluate the system and the feedback is highly positive.
READ: A (Research) Platform for Evaluating Non-visual Access Methods to Digital Documents BIBAFull-Text 431-432
  Laurent Sorin; Julie Lemarié; Mustapha Mojahid
READ (which stands for Documents Architecture REstitution in French) is a software platform that provides augmented access to the content of tagged documents. It was originally developed for a project aimed at evaluating informational and functional equivalents to the visual formatting of text, and was very recently tested in a user study (results not yet published). The design flexibility of this platform allows easy implementation of input and output modalities, as well as navigation functions. We wish to publish this application for noncommercial reuse in the short term; the main goal is to allow researchers to easily evaluate non-visual access methods to documents, since current assistive technologies are known to raise multiple issues. A live demonstration of READ will enable attendees to experience the implemented output modalities and navigation functions, as well as the platform's configuration and extension potential.
Using Dynamic Audio Feedback to Support Peripersonal Reaching in Visually Impaired People BIBAFull-Text 433-434
  Graham Wilson; Stephen A. Brewster
Blind children engage with their immediate environment much less than sighted children, particularly through self-initiated movement or exploration. Research has suggested that providing dynamic feedback about the environment and the child's actions within it may help to encourage reaching activity and support spatial cognitive learning. This paper presents an initial study suggesting that the accuracy of peripersonal reaching can be improved by dynamic sound, emitted both by the objects to reach for and by the reaching hand itself (via a worn speaker), that changes based on the proximity of the hand to the object. The demonstration will let attendees try the interaction and feedback designs.
An Enhanced Electrolarynx with Automatic Fundamental Frequency Control based on Statistical Prediction BIBAFull-Text 435-436
  Kou Tanaka; Tomoki Toda; Graham Neubig; Sakriani Sakti; Satoshi Nakamura
An electrolarynx is a type of speaking aid device which is able to mechanically generate excitation sounds to help laryngectomees produce electrolaryngeal (EL) speech. Although EL speech is quite intelligible, its naturalness suffers from monotonous fundamental frequency patterns of the mechanical excitation sounds. To make it possible to generate more natural excitation sounds, we have proposed a method to automatically control the fundamental frequency of the sounds generated by the electrolarynx based on a statistical prediction model, which predicts the fundamental frequency patterns from the produced EL speech in real-time. In this paper, we develop a prototype system by implementing the proposed control method in an actual, physical electrolarynx and evaluate its performance.
Nonvisual Access to an Interactive Electric Field Simulation: Work in Progress BIBAFull-Text 437-438
  Clayton Lewis; Derek Riemer
We describe progress in creating an inclusive version of an interactive simulation that presents the dynamic electric field produced by a moving charge, using an audio presentation. The results illustrate opportunities and challenges in making this simulation usable for blind users.
Balance Assessment in Fall-Prevention Exergames BIBAFull-Text 439-440
  Carlos Miguel Dias de Brito; João Tiago Pinheiro Neto Jacob; Rui Pedro Silva Nóbrega; António Manuel Nogueira Santos
To assess the potential of fall-prevention-oriented exergames, two digital games were developed taking advantage of the Wii Balance Board (WBB). The objective was to evaluate the exergames' potential to address elderly adults' declining motivation to exercise regularly, despite the fall-prevention benefits of doing so. The system uses the WBB to keep track of the player's center of pressure and computes balance-assessment measures derived from it, eventually providing a means to monitor patients. The presented demo will feature the two exergames.
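The centre-of-pressure measure mentioned above is typically derived from the board's four corner load sensors; the sketch below shows one common formulation, with the sensor layout and nominal board dimensions taken as assumptions rather than the authors' exact implementation.

  import math

  def center_of_pressure(tl, tr, bl, br, width=433.0, depth=228.0):
      """Centre of pressure (mm, relative to the board centre) from the four
      corner load values (top-left, top-right, bottom-left, bottom-right),
      using an assumed sensor layout and nominal board dimensions."""
      total = tl + tr + bl + br
      if total == 0:
          return (0.0, 0.0)
      cop_x = (width / 2.0) * ((tr + br) - (tl + bl)) / total
      cop_y = (depth / 2.0) * ((tl + tr) - (bl + br)) / total
      return (cop_x, cop_y)

  def sway_path_length(cop_samples):
      """Total distance travelled by the centre of pressure over a trial,
      a common balance-assessment measure."""
      return sum(math.dist(p, q) for p, q in zip(cop_samples, cop_samples[1:]))

  readings = [(10, 12, 11, 9), (9, 13, 10, 10), (8, 14, 9, 11)]
  cop = [center_of_pressure(*r) for r in readings]
  print(cop, sway_path_length(cop))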
ASL CLeaR: STEM Education Tools for Deaf Students BIBAFull-Text 441-442
  Jeanne Reis; Erin T. Solovey; Jon Henner; Kathleen Johnson; Robert Hoffmeister
In this paper, we introduce the American Sign Language STEM Concept Learning Resource (ASL CLeaR), an educational application demo. The ASL CLeaR addresses a need for quality ASL STEM resources by featuring expertly presented STEM content in ASL, and employing an ASL-based search function and a visuocentric search interface. This paper discusses the main objectives of the ASL CLeaR, describes the components of the application, and suggests future work that could lead to improved educational outcomes for deaf and hard of hearing students in STEM topics.
EnTable: Rewriting Web Data Sets as Accessible Tables BIBAFull-Text 443-444
  Steven Gardiner; Anthony Tomasic; John Zimmerman
Today, many data-driven web pages present information in a way that is difficult for blind and low-vision users to navigate and understand. EnTable addresses this challenge by re-writing confusing and complicated template-based data sets as accessible tables. EnTable allows blind and low-vision users to submit requests for pages they wish to access. The system then employs sighted informants to mark up the desired page with semantic information, allowing the page to be re-written using straightforward table tags. Screen-reader users who browse the web with the EnTable browser extension can report data sets that are confusing, and use data sets re-written in response to their own requests or the requests of other users.
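The rewriting idea can be illustrated with a small, hypothetical sketch: repeated template items are collected into records and emitted as a plain HTML table with header cells, which screen readers can navigate row by row and column by column. The field names and input structure here are assumptions for illustration, not EnTable's actual markup vocabulary.

  from html import escape

  def rewrite_as_table(records, columns):
      """Render a list of dicts as an HTML table with proper header cells."""
      rows = ["<table>",
              "  <tr>" + "".join(f"<th>{escape(c)}</th>" for c in columns) + "</tr>"]
      for record in records:
          cells = "".join(f"<td>{escape(str(record.get(c, '')))}</td>" for c in columns)
          rows.append("  <tr>" + cells + "</tr>")
      rows.append("</table>")
      return "\n".join(rows)

  # Hypothetical records, as they might look once a sighted informant has
  # indicated which template fields mean what.
  flights = [
      {"airline": "TAP", "departs": "09:15", "price": "120 EUR"},
      {"airline": "Lufthansa", "departs": "11:40", "price": "150 EUR"},
  ]
  print(rewrite_as_table(flights, ["airline", "departs", "price"]))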
A Game to Target the Spelling of German Children with Dyslexia BIBAFull-Text 445-446
  Maria Rauschenberger; Silke Fuechsel; Luz Rello; Clara Bayarri; Azuki Gòrriz
Playing error-based exercises presented in a computer game was found to significantly improve the spelling skills of children with dyslexia in Spanish. Since there are no similar error-based exercises for German, we adapted the method to German and created 2,500 new word exercises. Since dyslexia manifestations are language dependent, replicating the method required (i) collecting new texts written by German children with dyslexia; (ii) annotating and linguistically analysing the errors; and (iii) creating the exercises and integrating them into the tool.
The Implementation of a Vocabulary and Grammar for an Open-Source Speech-Recognition Programming Platform BIBAFull-Text 447-448
  Jean K. Rodriguez-Cartagena; Andrea C. Claudio-Palacios; Natalia Pacheco-Tallaj; Valerie Santiago González; Patricia Ordonez-Franco
Speech recognition software provides people with limited hand mobility or visual impairments the opportunity to work with computers through an alternative approach. However, when it comes to programming, voice recognition systems leave much to be desired. To tackle this problem, we have developed a generic vocabulary and grammar to help a user program by voice in any C-based language. To validate this vocabulary and grammar, receive feedback from the programming community, and determine future features for enhancement, we administered a survey distributed among faculty members and advanced computer science and engineering students. Our objective is to create an assistive technology tool that serves the community of people with limited hand mobility or visual impairments and that is accepted and embraced as the spoken programming language and model of choice.