
Thirteenth International ACM SIGACCESS Conference on Computers and Accessibility

Fullname: The 13th International ACM SIGACCESS Conference on Computers and Accessibility
Editors: Kathleen F. McCoy; Yeliz Yesilada
Location: Dundee, Scotland, United Kingdom
Dates: 2011-Oct-24 to 2011-Oct-26
Publisher: ACM
Standard No: ISBN 1-60558-881-4, 978-1-60558-881-0; ACM DL: Table of Contents; hcibib: ASSETS11
Papers: 85
Pages: 334
Links: Conference Home Page
Summary: Welcome to Dundee, Scotland -- The City of Discoveries -- for ASSETS 2011, the Thirteenth International ACM SIGACCESS Conference on Computers and Accessibility. We come to the city that is said to be built on Jute, Jam, and Journalism to try to make some discoveries of our own. We are fortunate to be close to the University of Dundee, whose School of Computing has such a rich history of accessibility work. It is an ideal location for the international ASSETS community to come together to collaborate and share innovative research on the design and use of both mainstream and specialized assistive technologies.
    We are delighted to welcome Professor Alan Dix, who specializes in Human Computer Interaction at Lancaster University, as our keynote speaker for 2011. He will discuss the changing face of our world brought about by the data-centric web and what it means to interactions and accessibility.
    The technical program of 27 podium presentations and 45 posters & demonstrations has been selected through peer review by a distinguished international program committee. This committee had the very difficult job of assembling a conference program from a diverse set of very high-quality submissions. We received submissions from more than 20 different countries. The podium presentations were selected from 90 full-length submissions (a 30% acceptance rate), and have been organized into 9 themes including design issues for assistive technologies, comprehension studies, interfaces for mobile & ubiquitous systems, and web accessibility. The accepted papers address a variety of assistive technology users including older adults, people who use sign language, and people with visual, intellectual, mobility, and severe speech impairments. The program committee was also involved in the poster and demonstration program, which was chaired by Leo Ferres. These 45 presentations were selected from 79 submissions (a 57% acceptance rate). The posters and demonstrations provide an opportunity to showcase late-breaking results as well as work in progress and practical implementations.
    Posters & demonstrations and selected ACM Student Research Competition entries, chaired by Krzysztof Gajos, are represented by abstracts in these proceedings and in two poster sessions during the conference. The winners of the ACM Student Research Competition (sponsored by Microsoft) will go on to compete in the ACM-wide grand finals, where ASSETS entrants have established a strong track record, including last year's third-place winner in the undergraduate category.
    The poster sessions will also showcase participants in the Doctoral Consortium. This one-day workshop preceding the main conference was chaired by Professors Clayton Lewis and Faustina Hwang, and generously sponsored by the U.S. National Science Foundation. It brought together 11 emerging researchers working on accessibility to discuss their ideas with a panel of established experts. A special edition of the SIGACCESS newsletter will feature extended abstracts from these doctoral students.
    We are pleased that this year's conference also hosted the 2nd International Workshop on Sign Language Translation and Avatar Technology (SLTAT), which featured a variety of presentations focused on symbolic translation of sign language, animation of sign language using avatars, and usability evaluation of practical translation and animation systems.
    We come to the City of Discoveries for an exciting and diverse program centered on accessibility for all. We thank the many members of the research community who have contributed to this conference, and hope that it will result in some discoveries of our own, leading to new perspectives in assistive technologies and making a positive difference in people's lives.
  1. Keynote address
  2. Assistive technology design paradigms
  3. Navigation and wayfinding
  4. Understanding users
  5. User-centric design
  6. Sign language comprehension
  7. Multimedia and TV
  8. Web accessibility
  9. Mobile and ubiquitous UI
  10. Supporting visual interaction
  11. Posters and demonstrations
  12. Student research competition

Keynote address

Living in a world of data, pp. 1-2
  Alan John Dix
The web is an integral part of our daily lives, and has had profound impacts on us all, not least both positive and negative impacts on accessibility, inclusivity, and social justice. However, the web is constantly changing. Web 2.0 has brought the web into the heart of social life, and has had a mixed impact on accessibility. More recently, the rise of API access to web services and of various forms of open, linked, or semantic data is giving the web a more data- and content-centric face. As with all technology, this new data web poses fresh challenges and offers new opportunities.

Assistive technology design paradigms

The design of human-powered access technology, pp. 3-10
  Jeffrey P. Bigham; Richard E. Ladner; Yevgen Borodin
People with disabilities have always overcome accessibility problems by enlisting people in their community to help. The Internet has broadened the available community and made it easier to get on-demand assistance remotely. In particular, the past few years have seen the development of technology in both research and industry that uses human power to overcome technical problems too difficult to solve automatically. In this paper, we frame recent developments in human computation in the historical context of accessibility, and outline a framework for discussing new advances in human-powered access technology. Specifically, we present a set of 13 design principles for human-powered access technology motivated both by historical context and current technological developments. We then demonstrate the utility of these principles by using them to compare several existing human-powered access technologies. The power of identifying the 13 principles is that they will inspire new ways of thinking about human-powered access technologies.
Empowering individuals with do-it-yourself assistive technology, pp. 11-18
  Amy Hurst; Jasmine Tobias
Assistive Technologies empower individuals to accomplish tasks they might not be able to do otherwise. Unfortunately, a large percentage of Assistive Technology devices that are purchased (35% or more) end up unused or abandoned [7,10], leaving many people with Assistive Technology that is inappropriate for their needs. Low acceptance rates of Assistive Technology occur for many reasons, but common factors include 1) lack of consideration of user opinion in selection, 2) ease of obtaining devices, 3) poor device performance, and 4) changes in user needs and priorities [7]. We are working to help more people gain access to the Assistive Technology they need by empowering non-engineers to "Do-It-Yourself" (DIY) and create, modify, or build their own solutions. This paper illustrates that it is possible to custom-build Assistive Technology, and argues that empowering users to make their own Assistive Technology can improve the adoption process (and subsequently adoption rates). We discuss DIY experiences and impressions from individuals who have either built Assistive Technology before or rely on it. We found that increased control over design elements, passion, and cost motivated individuals to make their own Assistive Technology instead of buying it. We discuss how a new generation of rapid prototyping tools and online communities can empower more individuals. We synthesize our findings into design recommendations to help promote future DIY-AT success.
Towards a framework to situate assistive technology design in the context of culture, pp. 19-26
  Fatima A. Boujarwah; Nazneen; Hwajung Hong; Gregory D. Abowd; Rosa I. Arriaga
We present the findings from a cross-cultural study of the expectations and perceptions of individuals with autism and other intellectual disabilities (AOID) in Kuwait, Pakistan, South Korea, and the United States. Our findings exposed cultural nuances that have implications for the design of assistive technologies. We develop a framework based on three themes: 1) lifestyle, 2) socio-technical infrastructure, and 3) monetary and informational resources, within which the cultural implications and opportunities for assistive technology were explored. The three key contributions of this work are: 1) the development of a framework that outlines how culture impacts perceptions and expectations of individuals with social and intellectual disabilities; 2) a mapping of how this framework leads to implications and opportunities for assistive technology design; and 3) the presentation of concrete examples of how these implications impact the design of three emerging assistive technologies.

Navigation and wayfinding

Supporting spatial awareness and independent wayfinding for pedestrians with visual impairments, pp. 27-34
  Rayoung Yang; Sangmi Park; Sonali R. Mishra; Zhenan Hong; Clint Newsom; Hyeon Joo; Erik Hofer; Mark W. Newman
Much of the information designed to help people navigate the built environment is conveyed through visual channels, which means it is not accessible to people with visual impairments. Due to this limitation, travelers with visual impairments often have difficulty navigating and discovering locations in unfamiliar environments, which reduces their sense of independence with respect to traveling by foot. In this paper, we examine how mobile location-based computing systems can be used to increase the feeling of independence in travelers with visual impairments. A set of formative interviews with people with visual impairments showed that increasing one's general spatial awareness is the key to greater independence. This insight guided the design of Talking Points 3 (TP3), a mobile location-aware system for people with visual impairments that seeks to increase the legibility of the environment for its users in order to facilitate navigating to desired locations, exploration, serendipitous discovery, and improvisation. We conducted studies with eight legally blind participants in three campus buildings in order to explore how and to what extent TP3 helps promote spatial awareness for its users. The results shed light on how TP3 helped users find destinations in unfamiliar environments, but also allowed them to discover new points of interest, improvise solutions to problems encountered, develop personalized strategies for navigating, and, in general, enjoy a greater sense of independence.
Situation-based indoor wayfinding system for the visually impaired, pp. 35-42
  Eunjeong Ko; Jin Sun Ju; Eun Yi Kim
This paper presents an indoor wayfinding system to help visually impaired people find their way to a given destination in an unfamiliar environment. The main novelty is the use of the user's situation as the basis both for designing color codes that convey environmental information and for developing the wayfinding system that detects and recognizes those codes. People require different information depending on their situation, so situation-based color codes are designed, including location-specific codes and guide codes. These color codes are affixed at certain locations to provide information to visually impaired users, and their location and meaning are then recognized by the proposed wayfinding system. The system works in three steps: it first recognizes the current situation using a vocabulary tree built on the shape properties of images taken in various situations; it then detects and recognizes the codes relevant to the current situation, based on color and edge information; finally, it conveys the environmental information and the user's path through an auditory interface. To assess the validity of the proposed system, we conducted field tests with four visually impaired participants; the results showed that they could find the optimal path in real time with an accuracy of 95%.
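   As an illustrative aside, the code-detection step might look like the following sketch; this is not the authors' implementation, and the HSV bounds and minimum area are invented values:
     import cv2
     import numpy as np

     # Detect a saturated red marker in a BGR frame (OpenCV 4 API).
     def detect_red_code(frame_bgr, min_area=500):
         hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
         # Red wraps around the hue axis, so combine two hue ranges.
         m1 = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
         m2 = cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
         mask = cv2.bitwise_or(m1, m2)
         contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
         return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]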
Navigation and obstacle avoidance help (NOAH) for older adults with cognitive impairment: a pilot study, pp. 43-50
  Pooja Viswanathan; James J. Little; Alan K. Mackworth; Alex Mihailidis
Many older adults with cognitive impairment are excluded from powered wheelchair use because of safety concerns. This leads to reduced mobility, and in turn, higher dependence on caregivers. In this paper, we describe an intelligent wheelchair that uses computer vision and machine learning methods to provide adaptive navigation assistance to users with cognitive impairment. We demonstrate the performance of the system in a user study with the target population. We show that the collision avoidance module of the system successfully decreases the number of collisions for all participants. We also show that the wayfinding module assists users with memory and vision impairments. We share feedback from the users on various aspects of the intelligent wheelchair system. In addition, we provide our own observations and insights on the target population and their use of intelligent wheelchairs. Finally, we suggest directions for future work.

Understanding users

Understanding the computer skills of adult expert users with Down syndrome: an exploratory study, pp. 51-58
  Jonathan Lazar; Libby Kumin; Jinjuan Heidi Feng
Recent survey research suggests that individuals with Down syndrome use computers for a variety of educational, communication, and entertainment activities. However, there has been no analysis of the actual computer knowledge and skills of employment-aged computer users with Down syndrome. We conducted an ethnographic observation aimed at examining the workplace-related computer skills of expert users with Down syndrome. The results show that expert users with Down syndrome have the ability to use computers for basic workplace tasks such as word processing, data entry, and communication.
The vlogging phenomena: a deaf perspective, pp. 59-66
  Ellen S. Hibbard; Deb I. Fels
Highly textual websites present barriers to Deaf people who primarily use American Sign Language for communication. Deaf people have been posting ASL content in the form of vlogs to YouTube and to specialized websites such as Deafvideo.TV. This paper presents some of the first insights into the use of vlogging technology and techniques in the Deaf community. The findings suggest that there are differences between vlogs on YouTube and on Deafvideo.TV that stem from the differences between mainstream and specialized sites. Vlogging technology seems to encourage signing styles that are not found, or are used differently, in face-to-face communication. Examples include vloggers altering their signing space to convey different meanings on screen.
Leveraging large data sets for user requirements analysis, pp. 67-74
  Maria K. Wolters; Vicki L. Hanson; Johanna D. Moore
In this paper, we show how a large demographic data set that includes only high-level information about health and disability can be used to specify user requirements for people with specific needs and impairments. As a case study, we consider adapting spoken dialogue systems (SDS) to the needs of older adults. Such interfaces are becoming increasingly prevalent in telecare and home care, where they will often be used by older adults.
   As our data set, we chose the English Longitudinal Survey of Ageing (ELSA), a large representative survey of the health, wellbeing, and socioeconomic status of English older adults. In an inclusion audit, we show that one in four older people surveyed by ELSA might benefit from SDS due to problems with dexterity, mobility, vision, or literacy. Next, we examine the technology that is available to our target users (technology audit) and estimate factors that might prevent older people from using SDS (exclusion audit). We conclude that while SDS are ideal for solutions delivered over near-ubiquitous landlines, they need to be accessible to people with mild to moderate hearing problems, and thus multimodal solutions should be based on the television, a technology even more widespread than landlines.

User-centric design

Humsher: a predictive keyboard operated by humming, pp. 75-82
  Ondrej Polacek; Zdenek Mikovec; Adam J. Sporka; Pavel Slavik
This paper presents Humsher -- a novel text entry method operated by non-verbal vocal input, specifically the sound of humming. The method utilizes an adaptive language model for text prediction. Four different user interfaces are presented and compared. Three of them use a dynamic layout in which n-grams of characters are presented for the user to choose from according to their probability in the given context. The fourth interface uses a static layout, in which the characters are displayed alphabetically and a modified binary search algorithm is used to select a character efficiently. All interfaces were compared and evaluated in a user study involving 17 able-bodied subjects. Case studies with four disabled people were also performed in order to validate the potential of the method for motor-impaired users. The average speed of the fastest interface was 14 characters per minute, while the fastest user reached 30 characters per minute. Disabled participants were able to type at 14 -- 22 characters per minute after seven sessions.
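   The static-layout interface invites a small illustration. The sketch below is one plausible reading, not the authors' algorithm: characters stay in alphabetical order, but each split point is chosen so the two halves carry roughly equal probability mass from the language model, letting likely characters be reached in fewer hums (all names are invented):
     def split_point(chars, prob):
         """Index that best balances probability mass between the two halves."""
         total = sum(prob[c] for c in chars)
         running, best_i, best_diff = 0.0, 1, float("inf")
         for i in range(1, len(chars)):
             running += prob[chars[i - 1]]
             diff = abs(2 * running - total)
             if diff < best_diff:
                 best_diff, best_i = diff, i
         return best_i

     def select(chars, prob, choose_left):
         """choose_left(left, right) stands in for the user's hum (True = left half)."""
         while len(chars) > 1:
             i = split_point(chars, prob)
             chars = chars[:i] if choose_left(chars[:i], chars[i:]) else chars[i:]
         return chars[0]

     # e.g., with uniform probabilities, always humming "left" reaches 'a':
     print(select(list("abcde"), {c: 0.2 for c in "abcde"}, lambda l, r: True))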
ACES: aphasia emulation, realism, and the Turing test, pp. 83-90
  Joshua Hailpern; Marina Danilevsky; Karrie Karahalios
To an outsider it may appear as though an individual with aphasia has poor cognitive function. However, the problem resides in the individual's receptive and expressive language, not in their ability to think. This misperception, paired with a lack of empathy, can have a direct impact on quality of life and medical care. Hailpern's 2011 paper on ACES demonstrated a novel system that enabled users (e.g., caregivers, therapists, family) to experience first-hand the communication-distorting effects of aphasia. While that paper illustrated the impact of ACES on empathy, it did not validate the underlying distortion emulation. This paper provides a validation of ACES' distortions through a Turing test experiment with participants from the Speech and Hearing Science community. It illustrates that text samples generated with ACES distortions are generally not distinguishable from text samples originating from individuals with aphasia. The paper also explores ACES distortions through a "How human is it?" test, in which participants explicitly rate how human-like or computer-like distortions appear to be.
We need to communicate!: helping hearing parents of deaf children learn American sign language, pp. 91-98
  Kimberly A. Weaver; Thad Starner
Language immersion from birth is crucial to a child's language development. However, language immersion can be particularly challenging for hearing parents of deaf children to provide, as they may have to overcome many difficulties while learning American Sign Language (ASL). We are in the process of creating a mobile application to help hearing parents learn ASL. To this end, we have interviewed members of our target population to gain an understanding of their motivations and needs when learning sign language. We found that the most common motivation for parents learning ASL is better communication with their children. Parents are most interested in acquiring more fluent sign language skills through learning to read stories to their children.

Sign language comprehension

Evaluating importance of facial expression in American sign language and pidgin signed English animations, pp. 99-106
  Matt Huenerfauth; Pengfei Lu; Andrew Rosenberg
Animations of American Sign Language (ASL) and Pidgin Signed English (PSE) have accessibility benefits for many signers with lower levels of written language literacy. In prior experimental studies we conducted evaluating animations of ASL, native signers gave informal feedback in which they critiqued the insufficient and inaccurate facial expressions of the virtual human character. While face movements are important for conveying grammatical and prosodic information in human ASL signing, no empirical evaluation of their impact on the understandability and perceived quality of ASL animations had previously been conducted. To quantify the suggestions of deaf participants in our prior studies, we experimentally evaluated ASL and PSE animations with and without various types of facial expressions, and we found that their inclusion does lead to measurable benefits for the understandability and perceived quality of the animations. This finding provides motivation for our future work on facial expressions in ASL and PSE animations, and it lays a novel methodological groundwork for evaluating the quality of facial expressions for conveying prosodic or grammatical information.
Assessing the deaf user perspective on sign language avatars, pp. 107-114
  Michael Kipp; Quan Nguyen; Alexis Heloir; Silke Matthes
Signing avatars have the potential to become a useful and even cost-effective method to make written content more accessible for Deaf people. However, avatar research is characterized by the fact that most researchers are not members of the Deaf community, and that Deaf people as potential users have little or no knowledge about avatars. Therefore, we suggest two well-known methods, focus groups and online studies, as a two-way information exchange between research and the Deaf community. Our aim was to assess signing avatar acceptability, shortcomings of current avatars and potential use cases. We conducted two focus group interviews (N=8) and, to quantify important issues, created an accessible online user study (N=317). This paper deals with both the methodology used and the elicited opinions and criticism. While we found a positive baseline response to the idea of signing avatars, we also show that there is a statistically significant increase in positive opinion caused by participating in the studies. We argue that inclusion of Deaf people on many levels will foster acceptance as well as provide important feedback regarding key aspects of avatar technology that need to be improved.
Evaluating quality and comprehension of real-time sign language video on mobile phones, pp. 115-122
  Jessica J. Tran; Joy Kim; Jaehong Chon; Eve A. Riskin; Richard E. Ladner; Jacob O. Wobbrock
Video and image quality are often objectively measured using peak signal-to-noise ratio (PSNR), but for sign language video, human comprehension is most important. Yet the relationship of human comprehension to PSNR has not been studied. In this survey, we determine how well PSNR matches human comprehension of sign language video. We use very low bitrates (10-60 kbps) and two low spatial resolutions (192×144 and 320×240 pixels) which may be typical of video transmission on mobile phones using 3G networks. In a national online video-based user survey of 103 respondents, we found that respondents preferred the 320×240 spatial resolution transmitted at 20 kbps and higher; this does not match what PSNR results would predict. However, when comparing perceived ease/difficulty of comprehension, we found that responses did correlate well with measured PSNR. This suggests that PSNR may not be suitable for representing subjective video quality, but can be reliable as a measure for comprehensibility of American Sign Language (ASL) video. These findings are applied to our experimental mobile phone application, MobileASL, which enables real-time sign language communication for Deaf users at low bandwidths over the U.S. 3G cellular network.
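   For background, PSNR is a standard quantity computed from the mean squared error between reference and degraded frames; for 8-bit video the peak value is 255:
     import numpy as np

     def psnr(reference, degraded, peak=255.0):
         """Peak signal-to-noise ratio in dB between two same-shape frames."""
         mse = np.mean((np.asarray(reference, dtype=np.float64) -
                        np.asarray(degraded, dtype=np.float64)) ** 2)
         return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)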

Multimedia and TV

Annotation-based video enrichment for blind people: a pilot study on the use of earcons and speech synthesis, pp. 123-130
  Benoît Encelle; Magali Ollagnier-Beldame; Stéphanie Pouchot; Yannick Prié
Our approach to address the question of online video accessibility for people with sensory disabilities is based on video annotations that are rendered as video enrichments during the playing of the video. We present an exploratory work that focuses on video accessibility for blind people with audio enrichments composed of speech synthesis and earcons (i.e. nonverbal audio messages). Our main results are that earcons can be used together with speech synthesis to enhance understanding of videos; that earcons should be accompanied with explanations; and that a potential side effect of earcons is related to video rhythm perception.
Developing accessible TV applications, pp. 131-138
  José Coelho; Carlos Duarte; Pradipta Biswas; Patrick Langdon
The development of TV applications today excludes users with certain impairments from interacting with, and accessing the same content as, other users. Developers, in turn, have little interest in building new or different versions of applications to target different user characteristics. In this paper we describe a novel adaptive accessibility approach to developing accessible TV applications that does not require much additional effort from developers. Integrating multimodal interaction, adaptation techniques, and the use of simulators in the design process, we show how to adapt user interfaces to the individual needs and limitations of elderly users. For this, we rely on identifying the most relevant impairment configurations among users in practical user trials, and we relate these configurations to user-specific characteristics. We provide guidelines for more accessible and user-centered TV application development.
Accessibility of 3D game environments for people with aphasia: an exploratory study, pp. 139-146
  Julia Galliers; Stephanie Wilson; Sam Muscroft; Jane Marshall; Abi Roper; Naomi Cocks; Tim Pring
People with aphasia experience difficulties with all aspects of language and this can mean that their access to technology is substantially reduced. We report a study undertaken to investigate the issues that confront people with aphasia when interacting with technology, specifically 3D game environments. Five people with aphasia were observed and interviewed in twelve workshop sessions. We report the key themes that emerged from the study, such as the importance of direct mappings between users' interactions and actions in a virtual environment. The results of the study provide some insight into the challenges, but also the opportunities, these mainstream technologies offer to people with aphasia. We discuss how these technologies could be more supportive and inclusive for people with language and communication difficulties.

Web accessibility

The interplay between web aesthetics and accessibility, pp. 147-154
  Grace Mbipom; Simon Harper
Visual aesthetics enhances user experience in the context of the World Wide Web (Web). Accordingly, many studies report positive relationships between Web aesthetics and facets of user experience like usability and credibility, but does this hold for accessibility as well? This paper describes an empirical investigation toward this end. The aesthetic judgements of 30 sighted Web users were elicited to understand what types of Web design come across as visually pleasing. Participants judged 50 homepages based on Lavie and Tractinsky's classical and expressive Web aesthetics framework. A cross-section of the homepages was then manually audited for accessibility compliance by 11 Web accessibility experts, who used a heuristic evaluation technique known as the Barrier Walkthrough (BW) method to check for accessibility barriers that could affect people with visual impairments. Web pages judged as visually clean on the classical dimension showed significant correlations with accessibility, suggesting that visual cleanness may be a suitable proxy measure for accessibility as far as people with visual impairments are concerned. Expressive designs and other aesthetic dimensions showed no such correlation, however, demonstrating that an expressive or aesthetically pleasing Web design is not in itself a barrier to accessibility.
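   As an illustration of this kind of analysis (not the authors' code, and the numbers are invented), a rank correlation between per-page "clean" ratings and audited barrier counts could be computed as:
     from scipy.stats import spearmanr

     # Hypothetical data: mean "clean" rating and audited barrier count per page.
     clean_ratings = [4.2, 3.1, 4.8, 2.5, 3.9, 2.9]
     barrier_counts = [3, 9, 1, 12, 4, 10]
     rho, p = spearmanr(clean_ratings, barrier_counts)
     print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # negative rho: cleaner pages, fewer barriers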
How voice augmentation supports elderly web users, pp. 155-162
  Daisuke Sato; Masatomo Kobayashi; Hironobu Takagi; Chieko Asakawa; Jiro Tanaka
Online Web applications have become widespread and have made our daily life more convenient. However, older adults often find such applications inaccessible because of age-related changes to their physical and cognitive abilities. Two of the reasons that older adults may shy away from the Web are fears of the unknown and of the consequences of incorrect actions. We are extending a voice-based augmentation technique originally developed for blind users. We want to reduce the cognitive load on older adults by providing contextual support. An experiment was conducted to evaluate how voice augmentation can support elderly users in using Web applications. Ten older adults participated in our study and their subjective evaluations showed how the system gave them confidence in completing Web forms. We believe that voice augmentation may help address the users' concerns arising from their low confidence levels.
Monitoring accessibility: large-scale evaluations at a geo-political level, pp. 163-170
  Silvia Mirri; Ludovico Antonio Muratori; Paola Salomoni
Once we assume that Web accessibility is a right, we implicitly assert the need to govern it. Beyond any regulation, institutions must equip themselves with suitable tools to monitor and support accessibility across typically large-scale collections of content and resources. The economic impact and effectiveness of these tools clearly affect the level of accessibility achieved. In this paper, we propose an application to effectively monitor Web accessibility from a geo-political point of view, by relating resources to the specific (categories of) institutions in charge of them and to the geographical areas they serve. Snapshots from such a macro-level, spatial and geo-political analysis can be used to focus investments and skills where they are actually needed.

Mobile and ubiquitous UI

A mobile phone based personal narrative system, pp. 171-178
  Rolf Black; Annalu Waller; Nava Tintarev; Ehud Reiter; Joseph Reddington
Currently available commercial Augmentative and Alternative Communication (AAC) technology makes little use of computing power to improve the access to words and phrases for personal narrative, an essential part of social interaction. In this paper, we describe the development and evaluation of a mobile phone application to enable data collection for a personal narrative system for children with severe speech and physical impairments (SSPI). Based on user feedback from the previous project "How was School today?" we developed a modular system where school staff can use a mobile phone to track interaction with people and objects and user location at school. The phone also allows taking digital photographs and recording voice message sets by both school staff and parents/carers at home. These sets can be played back by the child for immediate narrative sharing similar to established AAC device interaction using sequential voice recorders. The mobile phone sends all the gathered data to a remote server. The data can then be used for automatic narrative generation on the child's PC based communication aid. Early results from the ongoing evaluation of the application in a special school with two participants and school staff show that staff were able to track interactions, record voice messages and take photographs. Location tracking was less successful, but was supplemented by timetable information. The participating children were able to play back voice messages and show photographs on the mobile phone for interactive narrative sharing using both direct and switch activated playback options.
Blind people and mobile touch-based text-entry: acknowledging the need for different flavors, pp. 179-186
  João Oliveira; Tiago Guerreiro; Hugo Nicolau; Joaquim Jorge; Daniel Gonçalves
The emergence of touch-based mobile devices brought fresh and exciting possibilities, but at the cost of a considerable number of new challenges. These challenges are particularly apparent for blind people, as touch-based devices lack tactile cues and are extremely visually demanding. Existing solutions resort to assistive screen-reading software to compensate for the lack of sight, yet not all of the information reaches the blind user. Good spatial ability is still required to form a notion of the device and its interface, along with the need to memorize the positions of buttons on screen. These abilities, like other individual attributes such as age, age of blindness onset, or tactile sensibility, are often overlooked, and blind people are presented with the same methods regardless of their capabilities and needs. Herein, we present a study with 13 blind people consisting of a touch-screen text-entry task with four different methods. Results show that different capability levels have a significant impact on performance, and that this impact is related to the demands of the different methods. These variations confirm the need to account for individual characteristics and to leave space for difference, toward inclusive design.
Automatically generating tailored accessible user interfaces for ubiquitous services, pp. 187-194
  Julio Abascal; Amaia Aizpurua; Idoia Cearreta; Borja Gamecho; Nestor Garay-Vitoria; Raúl Miñón
Ambient Assisted Living environments provide support to people with disabilities and elderly people, usually at home. This concept can be extended to public spaces, where ubiquitous accessible services allow people with disabilities to access intelligent machines such as information kiosks. One of the key issues in achieving full accessibility is the instantaneous generation of an adapted accessible interface suited to the specific user that requests the service. In this paper we present the method used by the EGOKI interface generator to select the most suitable interaction resources and modalities for each user in the automatic creation of the interface. The validation of the interfaces generated for four different types of users is presented and discussed.
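   The selection step can be pictured with a toy sketch; this is not EGOKI's actual logic, and the profile keys and rules are invented for illustration:
     def select_modalities(profile):
         """Choose output modalities from a user-capability profile (hypothetical keys)."""
         modalities = []
         if not profile.get("blind", False):
             modalities.append("visual")
         if not profile.get("deaf", False):
             modalities.append("speech")
         if profile.get("blind", False) and profile.get("deaf", False):
             modalities.append("tactile")
         return modalities

     print(select_modalities({"blind": True}))  # ['speech']: audio UI for a blind, hearing user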

Supporting visual interaction

Improving calibration time and accuracy for situation-specific models of color differentiation, pp. 195-202
  David R. Flatla; Carl Gutwin
Color vision deficiencies (CVDs) cause problems in situations where people need to differentiate the colors used in digital displays. Recoloring tools exist to reduce the problem, but these tools need a model of the user's color-differentiation ability in order to work. Situation-specific models are a recent approach that accounts for all of the factors affecting a person's CVD (including genetic, acquired, and environmental causes) by using calibration data to form the model. This approach works well, but requires repeated calibration -- and the best available calibration procedure takes more than 30 minutes. To address this limitation, we have developed a new situation-specific model of human color differentiation (called ICD-2) that needs far fewer calibration trials. The new model uses a color space that better matches human color vision compared to the RGB space of the old model, and can therefore extract more meaning from each calibration test. In an empirical comparison, we found that ICD-2 is 24 times faster than the old approach, and had small but significant gains in accuracy. The efficiency of ICD-2 makes it feasible for situation-specific models of individual color differentiation to be used in the real world.
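   The abstract does not name the color space ICD-2 adopts, so as background only, the sketch below contrasts raw RGB distance with distance in CIELAB, a standard perceptually motivated space:
     def srgb_to_lab(rgb):
         """Convert an (r, g, b) tuple in 0-255 to CIELAB (D65 white point)."""
         def lin(c):
             c /= 255.0
             return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
         r, g, b = (lin(c) for c in rgb)
         x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047
         y = 0.2126 * r + 0.7152 * g + 0.0722 * b
         z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883
         f = lambda t: t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
         fx, fy, fz = f(x), f(y), f(z)
         return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

     def delta_e(c1, c2):
         """Euclidean distance in CIELAB, a perceptual analogue of RGB distance."""
         return sum((p - q) ** 2 for p, q in zip(srgb_to_lab(c1), srgb_to_lab(c2))) ** 0.5

     print(delta_e((255, 0, 0), (0, 255, 0)))  # red vs. green: a large perceptual gap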
Supporting blind photography, pp. 203-210
  Chandrika Jayant; Hanjie Ji; Samuel White; Jeffrey P. Bigham
Blind people want to take photographs for the same reasons as others -- to record important events, to share experiences, and as an outlet for artistic expression. Furthermore, both automatic computer vision technology and human-powered services can be used to give blind people feedback on their environment, but to work their best these systems need high-quality photos as input. In this paper, we present the results of a large survey that shows how blind people are currently using cameras. Next, we introduce EasySnap, an application that provides audio feedback to help blind people take pictures of objects and people and show that blind photographers take better photographs with this feedback. We then discuss how we iterated on the portrait functionality to create a new application called PortraitFramer designed specifically for this function. Finally, we present the results of an in-depth study with 15 blind and low-vision participants, showing that they could pick up how to successfully use the application very quickly.
On the intelligibility of fast synthesized speech for individuals with early-onset blindness, pp. 211-218
  Amanda Stent; Ann Syrdal; Taniya Mishra
People with visual disabilities increasingly use text-to-speech synthesis as a primary output modality for interaction with computers. Surprisingly, there have been no systematic comparisons of the performance of different text-to-speech systems for this user population. In this paper we report the results of a pilot experiment on the intelligibility of fast synthesized speech for individuals with early-onset blindness. Using an open-response recall task, we collected data on four synthesis systems representing two major approaches to text-to-speech synthesis: formant-based synthesis and concatenative unit selection synthesis. We found a significant effect of speaking rate on intelligibility of synthesized speech, and a trend towards significance for synthesizer type. In post-hoc analyses, we found that participant-related factors, including age and familiarity with a synthesizer and voice, also affect intelligibility of fast synthesized speech.

Posters and demonstrations

A straight-talking case study, pp. 219-220
  Annalu Waller; Suzanne Prior; Kathleen Cummins
The Straight-Talking User Group within Dundee University's School of Computing aims to create a place where adults with complex disabilities can meet to explore technology and work with researchers to develop better technology. A pilot project has shown the potential of this type of centre in terms of increasing the self-esteem and motivation of participants and raising expectations of what people are able to achieve.
A tactile-thermal display for haptic exploration of virtual paintings, pp. 221-222
  Victoria E. Hribar; Dianne T. V. Pawluk
To enable individuals who are blind and visually impaired to participate fully in the world around them, it is important to make all environments accessible to them. This includes art museums, which provide opportunities for cultural education and personal interest/enjoyment. Our interest focuses on the portrayal of paintings through refreshable haptic displays from their digital representations. As a complement to representing the structural elements (i.e., objects and shapes) in a painting, we believe it is also important to provide a personal experience of the style and expressiveness of the artist. This paper proposes a haptic display and display methods to do so. The haptic display consists of: (1) a pin-matrix display to the fingers that relays tactile texture information about brushstrokes, (2) a thermal display onto which the warm-cold spectrum of colors is mapped, and (3) sensing of the location within the painting, used to change the tactile and thermal feedback to create contrasts within a painting.
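   One way to picture the color-to-temperature mapping (purely illustrative; the paper's actual mapping and temperature range are not given in this abstract):
     import colorsys

     def color_to_temperature(r, g, b, cool_c=22.0, warm_c=38.0):
         """Crudely map a 0-255 RGB color to a display temperature in Celsius."""
         h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
         warmth = 1.0 - 2.0 * min(h, 1.0 - h)   # red -> 1.0, cyan -> 0.0
         warmth *= s                            # desaturated colors read as neutral
         return cool_c + warmth * (warm_c - cool_c)

     print(color_to_temperature(255, 40, 0))   # a red: near the warm end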
Access lecture: a mobile application providing visual access to classroom material, pp. 223-224
  Stephanie Ludi; Alex Canter; Lindsey Ellis; Abhishek Shrestha
Following along with course lecture material is a critical challenge for low vision students. Access Lecture is a mobile, touch-screen application that will aid low vision students in viewing class notes in real-time. This paper presents the system overview, features, and initial feedback on the system. Current status and next steps are also presented.
An integrated system for blind day-to-day life autonomy, pp. 225-226
  Hugo Fernandes; José Faria; Hugo Paredes; João Barroso
The autonomy of blind people in their daily life depends on their knowledge of the surrounding world, and they are aided by keen senses and assistive devices that help them deduce their surroundings. Existing solutions require users to carry a wide range of devices and, for the most part, include no mechanisms to ensure user autonomy in the event of system failure. This paper presents the nav4b system, which combines guidance and navigation with object recognition, extending traditional aids (white cane and smartphone). A working prototype was installed on the UTAD campus to perform experiments with blind users.
Audio haptic videogaming for navigation skills in learners who are blind, pp. 227-228
  Jaime Sánchez; Matías Espinoza
The purpose of this study was to determine whether the use of an audio- and haptic-based videogame has an impact on the development of Orientation and Mobility (O&M) skills in school-age blind learners. The videogame Audio Haptic Maze (AHM) was designed and developed, and its usability and cognitive impact were evaluated to determine its effect on the development of O&M skills. The results show that the interfaces used in the videogame are usable and appropriately designed, and that the haptic interface is as effective as the audio interface for O&M purposes.
Automatic sign categorization using visual data, pp. 229-230
  Marek Hrúz
This paper presents a method for visual tracking in recordings of isolated signs, and the use of the tracked features for automatic sign categorization. The tracking method is based on skin-color segmentation and is suited to recordings for a sign language dictionary. The result of the tracking is the location and outer contour of the head and both hands. These features are used to sort the signs into several categories: movement of the hands, contact of body parts, symmetry of trajectory, and location of the sign.
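   As a sketch of how a tracked feature might feed categorization (not the author's code; names and the metric are invented), a trajectory-symmetry score from per-frame hand centroids:
     import numpy as np

     def symmetry_score(left_xy, right_xy, body_axis_x):
         """Mean distance between the left hand's path and the right hand's path
         mirrored across the signer's vertical midline (0 = perfectly symmetric)."""
         left = np.asarray(left_xy, dtype=float)
         mirrored = np.asarray(right_xy, dtype=float).copy()
         mirrored[:, 0] = 2.0 * body_axis_x - mirrored[:, 0]
         return float(np.mean(np.linalg.norm(left - mirrored, axis=1)))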
Click control: improving mouse interaction for people with motor impairments, pp. 231-232
  Christopher Kwan; Isaac Paquette; John J. Magee; Paul Y. Lee; Margrit Betke
Camera-based mouse-replacement systems allow people with motor impairments who are unable to use their hands to control the mouse pointer with head movements. To address the difficulties of accidental clicking and of usably simulating a real computer mouse, we developed Click Control, a tool that augments the functionality of these systems. When a user attempts to click, Click Control displays a form that allows him or her to cancel the click if it was accidental, or to send different types of clicks with an easy-to-use gesture interface. Initial studies of a prototype with users with motor impairments showed that Click Control improved their mouse control experience.
Design of a bilateral vibrotactile feedback system for lateralization, pp. 233-234
  Bernd Tessendorf; Daniel Roggen; Michael Spuhler; Thomas Stiefmeier; Gerhard Tröster; Tobias Grämer; Manuela Feilner; Peter Derleth
We present a bilateral vibrotactile feedback system for accurate lateralization of target angles over the complete 360-degree range. We envision integrating this system into context-aware hearing instruments (HIs) or cochlear implants (CIs) to support users who experience lateralization difficulties. As a foundation for this, it is vital to investigate which kinds of feedback and vibration patterns best support lateralization. Our system enables us to evaluate and compare different encoding schemes with respect to resolution, reaction time, intuitiveness, and user dependency. The system provides bilateral vibrotactile feedback, reflecting integration into HIs or CIs worn at both ears, and implements two approaches: Quantized Absolute Heading (QAH) and Continuous Guidance Feedback (CGF). We give a detailed description of our hardware, which was designed to be applicable to generic vibrotactile feedback applications as well.
Displaying braille and graphics on a mouse-like tactile display, pp. 235-236
  Patrick C. Headley; Victoria E. Hribar; Dianne T. V. Pawluk
When presenting graphics on small, movable tactile displays, such as those that resemble computer mice, word labels can be as important as the diagram itself. In addition, the ability to present Braille on these displays offers an alternative means of accessing full pages of Braille with a cost-effective system. In this work, we consider the inherent difficulties of presenting Braille on these displays and propose algorithms to circumvent these problems. Lastly, we present preliminary results from individuals who are visually impaired that suggest the promise of this approach.
Do multi-touch screens help visually impaired people to recognize graphics?, pp. 237-238
  Ikuko Eguchi Yairi; Kumi Naoe; Yusuke Iwasawa; Yusuke Fukushima
This paper introduces our research project on developing a novel graphic representation method using touch and sound, as a universally designed touch-screen interface that lets visually impaired people understand graphical information. Our previous work on single-touch screens revealed problems with the low recognition rate of curves. To address this, an improved graphical representation interface for multi-touch screens was implemented and evaluated.
E-drawings as an evaluation method with deaf children, pp. 239-240
  Ornella Mich
This paper describes a pilot test on the use of a drawing software program as an evaluation method for experiments with deaf children. As deaf children are visual learners, evaluation methods based on drawings seem to be a good alternative to traditional ones. We tested the effectiveness of such a method with a group of deaf children, all raised orally and apparently without any knowledge of sign language, and a few hearing children, aged eight to fourteen. As a testbed, we evaluated the readability of a set of stories that are part of a literacy software tool for deaf children. All participants were relaxed and collaborative during the test. The results are promising.
Enhancing social connections through automatically-generated online social network messages, pp. 241-242
  John J. Magee; Christopher Kwan; Margrit Betke; Fletcher Hietpas
Social isolation and loneliness are important challenges faced by people with certain physical disabilities. Technical and complexity barriers may prevent some people from participating in online social networks that might otherwise address some of these challenges. We propose to generate social network messages automatically from within assistive technology and augmentative and alternative communication software. These messages help users post some of their daily activities with the software to online social networks. Based on our initial user studies, the inclusion of social networking connections may help improve engagement and interaction between users with disabilities and their friends, families, and caregivers, and this increased interest can lead to a desire to use the assistive technology more fully.
Evaluating information support system for visually impaired people with mobile touch screens and vibration, pp. 243-244
  Takato Noguchi; Yusuke Fukushima; Ikuko Eguchi Yairi
Interviews conducted over six months with a blind touch-panel user revealed many problems with touch panels' audio assistance. The participant strongly wanted improvements to the low accuracy and slow typing of the software keyboard, and to the poor support for understanding the shapes of pictures. She was also interested in recognizing 3D shapes. We therefore developed a fingertip tactile feedback system that indicates the "f" and "j" keys for touch-typing, improving the speed of software keyboard typing and simplifying the understanding of 3D shapes. In this paper, we introduce the system and evaluate the recognition of 3D shapes by three participants using it. The results showed that the proposed system succeeded in enabling the visually impaired participants to recognize the 3D shapes.
Exploring iconographic interface in emergency for deaf, pp. 245-246
  Tânia Pereira; Benjamim Fonseca; Hugo Paredes; Miriam Cabo
In this demo, we present an application for mobile phones that allows communication between deaf people and emergency medical services through an iconographic touch interface. This application can be useful especially for deaf people, but also for people without disabilities who face sudden situations in which speech is hard to articulate.
Future technology oriented scenarios on e-accessibility, pp. 247-248
  Christos Kouroupetroglou; Adamantios Koumpis; Dimitris Papageorgiou
This paper presents a set of future scenarios as part of our study, which explores and analyzes the relationships between the emerging ICT landscape in the European societal and economic context and the development and provision of e-Accessibility, within a ten-year perspective. Part of our study is the development and validation of various scenarios regarding the impact of new technologies on accessibility. This paper presents some draft scenarios produced by combining technologies identified by experts as crucial for the future of e-Accessibility.
Guidelines for an accessible web automation interface, pp. 249-250
  Yury Puzis; Yevgen Borodin; Faisal Ahmed; Valentine Melnyk; I. V. Ramakrishnan
In recent years, the Web has become an ever more sophisticated and irreplaceable tool in our daily lives. While the visual Web has been advancing at a rapid pace, assistive technology has not been able to keep up, increasingly putting visually impaired users at a disadvantage. Web automation has the potential to bridge the accessibility divide between the ways blind and sighted people access the Web; specifically, it can enable blind people to quickly accomplish web browsing tasks that were previously slow, hard, or even impossible to complete. In this paper, we propose guidelines for the design of intuitive and accessible web automation that has the potential to increase the accessibility and usability of web pages, reduce interaction time, and improve the user's browsing experience. Our findings and a preliminary user study demonstrate the feasibility of, and emphasize the pressing need for, truly accessible web automation technologies.
Helping children with cognitive disabilities through serious games: project CLES, pp. 251-252
  Aarij Mahmood Hussaan; Karim Sehaba; Alain Mille
Our work addresses the development of a Serious Game for the diagnosis and learning of people with cognitive disabilities. Many studies have shown that young people, especially children, are attracted to computer games. Often, they play these games with great interest and attention; thus, the idea of using serious games to provide education is attractive to most of them. This work is situated in the context of Project CLES. This project, in collaboration with several research laboratories, aims at developing an Adaptive Serious Game to address a variety of cognitive handicaps. In this context, this article presents a system that generates learning scenarios taking into account the user's profile and learning objectives. The user's profile is used to represent the cognitive abilities and the domain competences of the user. The system also records the user's activities during his/her interaction with the Serious Game and represents them as interaction traces. These traces are used as knowledge sources in the generation of learning scenarios.
Improving accessibility for deaf people: an editor for computer assisted translation through virtual avatars, pp. 253-254
  Davide Barberis; Nicola Garazzino; Paolo Prinetto; Gabriele Tiotto
This paper presents the ATLAS Editor for Assisted Translation (ALEAT), a novel tool for the Computer Assisted Translation (CAT) of Italian written language into Italian Sign Language (LIS) for Deaf people. The tool is a web application developed within the ATLAS project, which targets automatic translation from Italian written language to Italian Sign Language in the weather forecast domain. ALEAT takes as input a text written according to Italian grammar, performs the automatic translation of the sentence, and presents the result to the user by visualizing it through a virtual character. Since the automatic translation is error-prone, ALEAT allows the user to correct it. The translation is stored in a database using a novel formalism, the ATLAS Written Extended LIS (AEWLIS), which allows the translation to be played through the ATLAS visualization module and loaded back into ALEAT for later modification and improvement.
Improving deaf accessibility in remote usability testing, pp. 255-256
  Jerry Schnepp; Brent Shiver
For studies involving Deaf participants in the United States, remote usability testing has several potential advantages over face-to-face testing, including convenience, lower cost, and the ability to recruit participants from diverse geographic regions. However, current technologies force Deaf participants to use English instead of their preferred language, American Sign Language (ASL). A new remote testing technology allows researchers to conduct studies exclusively in ASL at a lower cost than face-to-face testing. The technology's design facilitates open-ended questions and is reconfigurable for use in a variety of studies. Results from usability tests of the tool are encouraging, and a full-scale study is underway to compare this approach to face-to-face testing.
In-vehicle assistive technology (IVAT) for drivers who have survived a traumatic brain injury, pp. 257-258
  Julia DeBlasio Olsheski; Bruce N. Walker; Jeff McCloud
IVAT (in-vehicle assistive technology) is an in-dash interface born of a collaborative effort between the Shepherd Center assistive technology team, the Georgia Tech Sonification Laboratory, and Centrafuse. The aim of this technology is to increase driver safety by taking individual cognitive abilities and limitations into account. While the potential applications of IVAT are widespread, the initial population of interest for the current research is survivors of traumatic brain injury (TBI). TBI can cause a variety of impairments that limit driving ability. IVAT aims to enable individuals to overcome these limitations and regain some independence by driving after injury.
Increased accessibility to nonverbal communication through facial and expression recognition technologies for blind/visually impaired subjects, pp. 259-260
  Douglas Astler; Harrison Chau; Kailin Hsu; Alvin Hua; Andrew Kannan; Lydia Lei; Melissa Nathanson; Esmaeel Paryavi; Michelle Rosen; Hayato Unno; Carol Wang; Khadija Zaidi; Xuemin Zhang; Cha-Min Tang
Conversation between two individuals requires verbal dialogue; however, the majority of human communication consists of non-verbal cues such as gestures and facial expressions. Blind individuals are thus hindered in their interaction capabilities. To address this, we are building a computer vision system with facial recognition and expression-recognition algorithms to relay nonverbal messages to a blind user. The device will communicate the identities and facial expressions of communication partners in real time. To ensure that this device will be useful to the blind community, we conducted surveys and interviews, and we are working with subjects to test prototypes of the device. This paper describes the algorithms and design concepts incorporated in the device, and provides a commentary on early survey and interview results. A corresponding poster with demonstration stills is exhibited at this conference.
MICOO (multimodal interactive cubes for object orientation): a tangible user interface for the blind and visually impaired BIBAFull-Text 261-262
  Muhanad S. Manshad; Enrico Pontelli; Shakir J. Manshad
This paper presents the development of Multimodal Interactive Cubes for Object Orientation (MICOO) manipulatives. This system provides a multimodal tangible user interface (TUI), enabling people with visual impairments to create, modify and naturally interact with diagrams and graphs on a multitouch surface. The system supports a novel notion of active orientation and proximity tracking of manipulatives against diagram and graph components. If the orientation of a MICOO matches a component, the user is allowed to modify that component by moving the MICOO. Conversely, if a MICOO does not match a component's orientation or is far from it, audio feedback is activated to help the user reach that component. This lessens the need for manual intervention, enables independent discovery on the part of the user, and offers dynamic behavior, in which the representation interacts with and provides feedback to the user. The platform has been developed and is undergoing formal evaluation (e.g., browsing, modifying and constructing graphs on a Cartesian plot and diagrams).
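To make the orientation-and-proximity logic concrete, here is a minimal Python sketch; it is an editorial illustration under assumed tolerance and distance thresholds, not the authors' implementation:

    import math

    ORIENTATION_TOLERANCE_DEG = 15   # assumed tolerance for an orientation "match"
    PROXIMITY_THRESHOLD = 0.10       # assumed distance (in surface units) for "near"

    def angle_difference(a, b):
        """Smallest absolute difference between two angles, in degrees."""
        return abs((a - b + 180) % 360 - 180)

    def micoo_feedback(cube_angle, cube_pos, comp_angle, comp_pos):
        """Decide how the surface reacts to a MICOO cube and a graph component."""
        if angle_difference(cube_angle, comp_angle) <= ORIENTATION_TOLERANCE_DEG:
            return "modify"            # cube controls the component it matches
        if math.dist(cube_pos, comp_pos) > PROXIMITY_THRESHOLD:
            return "audio-guidance"    # play sound guiding the hand closer
        return "orientation-hint"      # close but misaligned: hint to rotate

    # Example: a cube 10 degrees off from a nearby component still "matches".
    print(micoo_feedback(100, (0.50, 0.50), 90, (0.52, 0.50)))  # -> modify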
Mobile web on the desktop: simpler web browsing BIBAFull-Text 263-264
  Jeffery Hoehl; Clayton Lewis
This paper explores the potential benefits of using mobile webpages to present simpler web content to people with cognitive disabilities. An empirical analysis revealed that the majority of popular mobile sites are smaller than their desktop equivalents with an average of half the viewable content, making them a viable method for simplifying web presentation.
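A minimal Python sketch of the kind of measurement behind such an analysis, comparing the visible text of a desktop page and its mobile equivalent; the URLs and the text-extraction heuristic are illustrative assumptions, not the paper's method:

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class TextExtractor(HTMLParser):
        """Counts characters of text outside script/style tags."""
        def __init__(self):
            super().__init__()
            self.skip = False
            self.chars = 0
        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self.skip = True
        def handle_endtag(self, tag):
            if tag in ("script", "style"):
                self.skip = False
        def handle_data(self, data):
            if not self.skip:
                self.chars += len(data.strip())

    def visible_text_chars(url):
        parser = TextExtractor()
        parser.feed(urlopen(url).read().decode("utf-8", errors="ignore"))
        return parser.chars

    desktop = visible_text_chars("http://www.example.com/")  # hypothetical URL
    mobile = visible_text_chars("http://m.example.com/")     # hypothetical URL
    print("mobile/desktop content ratio: %.2f" % (mobile / desktop))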
Multi-modal dialogue system with sign language capabilities BIBAFull-Text 265-266
  Marek Hrúz; Pavel Campr; Zdenek Krnoul; Milos Zelezný; Oya Aran; Pinar Santemiz
This paper presents the design of a multimodal sign-language-enabled dialogue system. Its functionality was tested on a prototype of an information kiosk for deaf people providing information about train connections. We use automatic computer-vision-based sign language recognition, automatic speech recognition and a touchscreen as input modalities. The outputs are shown on a screen displaying a 3D signing avatar and on a touchscreen displaying a graphical user interface. The information kiosk can be used by both hearing and deaf users in several languages. We focus on describing the sign language input and output modalities.
Multi-view platform: an accessible live classroom viewing approach for low vision students BIBAFull-Text 267-268
  Raja S. Kushalnagar; Stephanie A. Ludi; Poorna Kushalnagar
We present a multiple-view platform for low vision students that utilizes students' personal smartphone cameras and tablets in the classroom. Low vision or deaf students can independently use the platform to obtain flexible, magnified views of lecture visuals, such as the presentation slides or whiteboard, on their personal screens. This platform also enables cooperation among sighted and hearing classmates to provide better views for everyone, including themselves.
Note-taker 3.0: an assistive technology enabling students who are legally blind to take notes in class BIBAFull-Text 269-270
  David S. Hayden; Michael Astrauskas; Qian Yan; Liqing Zhou; John A. Black, Jr.
While the Americans with Disabilities Act mandates that universities provide visually disabled students with human note-takers, studies have shown that it is vital that students take their own notes during classroom lectures. Because students are cognitively engaged while taking notes, their retention is better, even if they never review their notes after class. However, students with visual disabilities are at a disadvantage compared to their sighted peers when taking notes -- especially in fast-paced class presentations. They find it more difficult to rapidly switch back and forth between viewing the front of the room and viewing the notes they are taking. Currently available assistive technologies do not adequately address this need to rapidly switch back and forth. This paper presents the results of a 3-year study aimed at developing an assistive technology that allows students with visual disabilities to take handwritten and/or typed notes in the classroom, without relying on any classroom infrastructure or any special accommodations by the lecturer or the institution.
Participatory design process for an in-vehicle affect detection and regulation system for various drivers BIBAFull-Text 271-272
  Myounghoon Jeon; Jason Roberts; Parameshwaran Raman; Jung-Bin Yim; Bruce N. Walker
Considerable research has shown that diverse affective (emotional) states influence cognitive processes and performance. Detecting a driver's affective states and regulating them may help increase driving performance and safety. Some populations are more vulnerable to issues involving driving, affect, and affect regulation (e.g., novice drivers, young drivers, older drivers, and drivers with traumatic brain injury (TBI)). This paper describes initial findings from multiple participatory design processes, including interviews with 21 young drivers and focus groups with a TBI driver and two driving rehabilitation specialists. Each user group presents distinct issues and needs; therefore, differentiated approaches are needed to design an in-vehicle assistive technology system for a specific target user group.
Peer interviews: an adapted methodology for contextual understanding in user-centred design BIBAFull-Text 273-274
  Rachel Menzies; Annalu Waller; Helen Pain
In User-Centred Design (UCD), the needs and preferences of the end user are given primary consideration. In some cases, current methodologies such as interviewing may be difficult to conduct, for example when working with children, particularly those with Autism Spectrum Disorders (ASD). This paper outlines an approach to understanding the end users, context and subject matter through the use of peer interviewing. This is proposed as a viable adaptation of User-Centred methodologies for the inclusion of children and those with ASD.
Reading in multimodal environments: assessing legibility and accessibility of typography for television BIBAFull-Text 275-276
  Penelope Allen; Judith Garman; Ian Calvert; Jane Murison
Television viewing is accompanied by ever more complex supporting content: as interactive TV becomes more functional, it also becomes more multimodal. Television typography is no longer limited to teletext, subtitles and captions. It provides navigation, tickers, tabulated results and info-graphics, and is embedded in videos and games. At the same time, screen resolution is improving and the size of household screens is increasing. Currently, little is known about the user needs associated with these advances. Using a customisation prototype, this research explores television typography through the preferences of a variety of users: people with cognitive and sensory access needs, older users and users with no stated access needs. The results of the first study showed that participants preferred a larger font size than the current television standard. This preference was particularly prevalent when text was presented alongside other content elements demanding attentional focus. The font Helvetica Neue was particularly favoured by participants with access needs. The second study further explores font size for differing interactive environments. We seek to develop the prototype to test and improve the accessibility of interactive TV services.
Self-selection of accessibility options BIBAFull-Text 277-278
  Nithin Santhanam; Shari Trewin; Cal Swart; Padmanabhan Santhanam
This study focuses on the use of web accessibility software by people with cerebral palsy performing three typical user tasks. We evaluate the customization options in the IBM accessibilityWorks add-on to the Mozilla Firefox browser, as used by ten users. While specific features provide significant benefit, we find that users tend to pick unnecessary options, resulting in a potentially negative user experience.
Sensing human movement of mobility and visually impaired people BIBAFull-Text 279-280
  Yusuke Fukushima; Hiromasa Uematsu; Ryotarou Mitsuhashi; Hidetaka Suzuki; Ikuko Eguchi Yairi
This paper studies the movement of both mobility-impaired and visually impaired people using mobile sensing devices, as a first step toward creating an accessible information base. Nine mobility-impaired persons took part in a wheelchair-driving experiment, and the visualized sensing results, mapped onto Google Maps, were compared with their subjective impressions. One blind person also took part in a walking experiment with a walking assistant. The sensing results show that a single accelerometer was sufficient to detect walking, descending and waiting behaviors.
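A minimal Python sketch of how such behaviors might be separated using windowed statistics from a single accelerometer axis; the thresholds are assumptions for illustration, not values from the paper:

    import statistics

    def classify_window(samples):
        """samples: vertical-axis acceleration readings (g) over one window."""
        variance = statistics.pvariance(samples)
        mean = statistics.fmean(samples)
        if variance < 0.05:        # little motion energy
            return "waiting"
        if mean < -0.3:            # sustained downward component
            return "descending"
        return "walking"           # periodic, higher-energy motion

    print(classify_window([0.01, -0.02, 0.02, 0.00]))  # -> waiting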
Smartphone haptic feedback for nonvisual wayfinding BIBAFull-Text 281-282
  Shiri Azenkot; Richard E. Ladner; Jacob O. Wobbrock
We explore using vibration on a smartphone to provide turn-by-turn walking instructions to people with visual impairments. We present two novel feedback methods called Wand and ScreenEdge and compare them to a third method called Pattern. We built a prototype and conducted a user study where 8 participants walked along a pre-programmed route using the 3 vibration feedback methods and no audio output. Participants interpreted the feedback with an average error rate of just 4 percent. Most preferred the Pattern method, where patterns of vibrations indicate different directions, or the ScreenEdge method, where areas of the screen correspond to directions and touching them may induce vibration.
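As an illustration of the Pattern idea, a minimal Python sketch mapping walking instructions to on/off vibration patterns (durations in ms); the specific patterns are assumptions, not those used in the study:

    PATTERNS = {
        "straight":   [200],               # single short pulse
        "turn-left":  [100, 100, 100],     # pulse, pause, pulse
        "turn-right": [400],               # single long pulse
    }

    def vibrate(pattern):
        """Stand-in for a platform vibration API (e.g. a smartphone's)."""
        for i, ms in enumerate(pattern):
            print(("vibrate" if i % 2 == 0 else "pause") + " for %d ms" % ms)

    vibrate(PATTERNS["turn-left"])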
Supporting deaf children's reading skills: the many challenges of text simplification BIBAFull-Text 283-284
  Chiara Vettori; Ornella Mich
Deaf children have great difficulties in reading comprehension. In our contribution, we illustrate how we have collected, simplified and presented some stories in order to render them suitable for young Italian deaf readers, from both a linguistic and a formal point of view. The aim is to stimulate their pleasure in reading. The experimental data suggest that the approach is effective and that enriching the stories with static and/or animated drawings significantly improves text readability. However, the data also clearly indicate that textual simplification alone is not enough to meet the needs of the target group, and that the story structure itself and its presentation have to be carefully planned.
TapBeats: accessible and mobile casual gaming BIBAFull-Text 285-286
  Joy Kim; Jonathan Ricaurte
Conventional video games today rely on visual cues to drive user interaction, and as a result, there are few games for blind and low-vision people. To address this gap, we created an accessible and mobile casual game for Android called TapBeats, a musical rhythm game based on audio cues. In addition, we developed a gesture system that utilizes text-to-speech and haptic feedback to allow blind and low-vision users to interact with the game's menu screens using a mobile phone touchscreen. A graphical user interface is also included to encourage sighted users to play as well. Through this game, we aimed to explore how both blind and sighted users can share a common game experience.
The CHAMPION software project BIBAFull-Text 287-288
  Suzanne Prior; Annalu Waller; Thilo Kroll
A visit to hospital is traumatic for both a patient with disabilities and their family members, especially when the patient has no or limited functional speech [1]. For adults with Severe Speech and Physical Impairments (SSPI) being hospitalized presents particular challenges as hospital staff are often unaware of how the adult with SSPI communicates and what their basic care needs are.
   The CHAMPION project aimed to develop a piece of software which would allow an adult with SSPI to input multimedia information on their care needs and on the "person behind the patient". It was hoped that the system could be used by the person with SSPI as independently as possible. The aim would then be for the information to be accessed in hospital.
   The first stage of the process has now been completed with the input and output software developed using User Centred Design techniques. What is now required is an investigation into the efficacy of the software in the real life hospital setting.
The effect of hand strength on pointing performance of users for different input devices BIBAFull-Text 289-290
  Pradipta Biswas; Pat Langdon
We have investigated how hand strength affects the pointing performance of people with and without mobility impairment in graphical user interfaces, for four different input modalities. We have found that grip strength and active range of motion of the wrist are most indicative of pointing performance. We have used the study to develop a set of linear equations to predict pointing time for different devices.
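A minimal Python sketch of fitting such a linear prediction model by least squares; the numbers are placeholder values purely to make the example run, not data from the study:

    import numpy as np

    # Columns: grip strength (kg), active wrist range of motion (deg), bias.
    X = np.array([[22.0, 60.0, 1.0],
                  [30.0, 75.0, 1.0],
                  [12.0, 40.0, 1.0],
                  [18.0, 55.0, 1.0]])
    y = np.array([1.9, 1.4, 3.2, 2.3])  # pointing times (s) for one device

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("model coefficients:", coef.round(3))
    print("predicted times:", (X @ coef).round(2))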
Thought cubes: exploring the use of an inexpensive brain-computer interface on a mental rotation task BIBAFull-Text 291-292
  G. Michael Poor; Laura Marie Leventhal; Scott Kelley; Jordan Ringenberg; Samuel D. Jaffee
Brain-computer interfaces (BCI) allow users to relay information to a computer by capturing reactions to their thoughts via brain waves (or similar measurements). This "new" type of interaction allows users with limited motor control to interact with a computer without a mouse, keyboard or other physically manipulated interaction device. While this technology is in its infancy, there have been major strides in the area, allowing researchers to investigate potential uses. One of the first such interfaces to reach the commercial market at an affordable price is the Emotiv EPOC headset. This paper reports the results of a study exploring usage of the EPOC headset.
Toward 3D scene understanding via audio-description: Kinect-iPad fusion for the visually impaired BIBAFull-Text 293-294
  Juan Diego Gomez; Sinan Mohammed; Guido Bologna; Thierry Pun
Microsoft's Kinect 3-D motion sensor is a low-cost 3D camera that provides color and depth information for indoor environments. In this demonstration, the functionality of this entertainment-oriented camera, accompanied by an iPad's tangible interface, is targeted to the benefit of the visually impaired. A computer-vision-based framework for real-time object localization and audio description is introduced. First, objects are extracted from the scene and recognized using feature descriptors and machine learning. Second, the recognized objects are labeled with instrument sounds, while their positions in 3D space are conveyed by virtual spatialized sound sources. As a result, the scene can be heard and explored by finger-triggering the sounds on the iPad, onto which a top view of the objects is mapped. This enables blindfolded users to build a mental occupancy grid of the environment. The approach presented here promises efficient assistance and could be adapted as an electronic travel aid for the visually impaired in the near future.
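A minimal Python sketch of one way a detected object's position could drive a virtual sound source, reduced here to constant-power stereo panning with distance attenuation; the constants are assumptions, not the authors' audio pipeline:

    import math

    def spatialize(x, z, max_depth=4.0):
        """x: lateral offset in metres (negative = left); z: depth in metres.
        Returns (left_gain, right_gain), louder for nearer objects."""
        pan = max(-1.0, min(1.0, x / 2.0))            # clamp to [-1, 1]
        theta = (pan + 1.0) * math.pi / 4.0           # 0 .. pi/2
        attenuation = max(0.0, 1.0 - z / max_depth)   # fade with depth
        return (math.cos(theta) * attenuation, math.sin(theta) * attenuation)

    # An object 0.8 m to the right and 1.5 m away:
    print(tuple(round(g, 2) for g in spatialize(0.8, 1.5)))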
TypeInBraille: a braille-based typing application for touchscreen devices BIBAFull-Text 295-296
  Sergio Mascetti; Cristian Bernareggi; Matteo Belotti
Smartphones offer exciting new opportunities to visually impaired users because these devices can support new assistive technologies that cannot be deployed on desktops or laptops. Some devices, like the iPhone, are rapidly gaining popularity among the visually impaired because pre-installed screen-reader applications render them accessible. However, some operations still require more time or a higher mental workload for a visually impaired user to complete. In this contribution we present a novel text-entry application, called TypeInBraille, that is based on the Braille code and hence is specifically designed for blind users.
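For context, a minimal Python sketch of the 6-dot Braille decoding that underlies any Braille-based entry method (dots 1-3 down the left column, 4-6 down the right); the app's actual touch gestures are not reproduced here:

    BRAILLE = {
        frozenset([1]): "a",
        frozenset([1, 2]): "b",
        frozenset([1, 4]): "c",
        frozenset([1, 4, 5]): "d",
        frozenset([1, 5]): "e",
        # ... remaining letters follow the standard Braille code
    }

    def decode(dots):
        """Map a set of raised dots to its letter, or '?' if unknown."""
        return BRAILLE.get(frozenset(dots), "?")

    print(decode([1, 2]))   # -> b
    print(decode([1, 4]))   # -> c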
Use of serious games for motivational balance rehabilitation of cerebral palsy patients BIBAFull-Text 297-298
  Biel Moyà Alcover; Antoni Jaume-i-Capó; Javier Varona; Pau Martinez-Bueso; Alejandro Mesejo Chiong
Research shows that serious games help to motivate users in rehabilitation, and that therapy works better when users are motivated. In this work we experiment with serious games for cerebral palsy patients, who rarely show capacity increases with therapy, which demotivates them. For this reason, we have implemented balance-rehabilitation video games for this group of patients. The video games were developed using the prototype development paradigm, respecting the requirements indicated by physiotherapists and including desirable features for rehabilitation serious games reported in the literature. A group of patients who had abandoned therapy in the previous year due to loss of motivation tested the video games over a period of 6 months. While using the video games, no patient abandoned therapy, demonstrating the appropriateness of games for this group of patients.
Using a game controller for text entry to address abilities and disabilities specific to persons with neuromuscular diseases BIBAFull-Text 299-300
  Torsten Felzer; Stephan Rinderknecht
This paper proposes a poster about an alternative text entry method based on a commercially available game controller as the input device, as well as a demo of the accompanying software application. The system was originally intended for a particular gentleman with the neuromuscular disease Friedreich's Ataxia (FA), who several years ago asked us to help him by developing an optimal keyboard replacement. Our work focused on his impressions in an initial case study testing this newest attempt. Judging by the tester's comments, the outcome seems rather promising in meeting his needs, and it appears very probable that the system could help anyone with a similar condition.
Using accelerometers for the assessment of improved function due to postural support for individuals with cerebral palsy BIBAFull-Text 301-302
  Yu Iwasaki; Tetsuya Hirotomi; Annalu Waller
Proper seating and positioning are crucial to the performance of functional activities by individuals with neurological disabilities such as cerebral palsy. Subjective seating assessments are usually performed by physical and occupational therapists observing activities with different seating adaptations. Frequent assessments are required to maintain and adapt seating as individuals' physical characteristics change over time. We conducted a single case study with a 10-year-old boy with cerebral palsy to investigate the potential use of accelerometers for assessing improved function due to postural support in seating. The results suggest that the root-mean-square values of acceleration correspond well with the therapists' subjective assessment that a reduction in involuntary movements improves function.
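A minimal Python sketch of the root-mean-square measure referred to above:

    import math

    def rms(samples):
        """RMS of a window of acceleration samples (e.g. in m/s^2)."""
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    # Lower RMS over comparable windows would indicate fewer involuntary
    # movements under a given postural support.
    print(round(rms([0.10, -0.20, 0.15, -0.05]), 3))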
Using device models for analyzing user interaction problems BIBAFull-Text 303-304
  Matthias Schulz; Stefan Schmidt; Klaus-Peter Engelbrecht; Sebastian Möller
This paper presents work in progress that aims to analyze the origins of the interaction problems certain users experience when interacting with new technology. Our analysis is based on device models, which categorize classes of devices via a pre-defined set of features. We provide examples showing that usability problems are partially caused by an erroneous transfer of device features to new or unknown devices.
Voice banking and voice reconstruction for MND patients BIBAFull-Text 305-306
  Christophe Veaux; Junichi Yamagishi; Simon King
When the speech of an individual becomes unintelligible due to a degenerative disease such as motor neuron disease (MND), a voice output communication aid (VOCA) can be used. To fully replace all functions of speech communication -- communication of information, maintenance of social relationships and display of identity -- the voice must be intelligible, natural-sounding and retain the vocal identity of the speaker. Attempts have been made to capture the voice before it is lost, using a process known as voice banking. But for patients with MND, speech deterioration frequently coincides with or quickly follows diagnosis. Using model-based speech synthesis, it is now possible to retain the vocal identity of the patient with minimal recordings, even of deteriorating speech. The power of this approach is that the patient's recordings can be used to adapt existing voice models pre-trained on many speakers. When the speech has begun to deteriorate, the adapted voice model can be further modified to compensate for the disordered characteristics found in the patient's speech. We present here an ongoing project for voice banking and voice reconstruction based on this technology.
Web-based sign language synthesis and animation for on-line assistive technologies BIBAFull-Text 307-308
  Zdenek Krnoul
This article presents recent progress in the design of sign language synthesis and avatar animation adapted for the web environment. A new 3D rendering method is considered to enable the transfer of avatar animation to end users. Furthermore, the animation efficiency of facial expressions, as part of the non-manual component, is discussed. The designed web service ensures on-line accessibility and fluent animation of the 3D avatar model, does not require any additional software, and offers a wide range of uses for target users.

Student research competition

Analyzing visual questions from visually impaired users BIBAFull-Text 309-310
  Erin L. Brady
Many new technologies have been developed to assist people who are visually impaired in learning about their environment, but there is little understanding of their motivations for using these tools. Our tool VizWiz allows users to take a picture with their mobile phone, ask a question about the picture's contents, and receive an answer in near real time. This study investigates patterns in the questions that visually impaired users ask about their surroundings, and presents the benefits and limitations of responses from both human and computerized sources.
Brazilian sign language multimedia hangman game: a prototype of an educational and inclusive application BIBAFull-Text 311-312
  Renata Cristina Barros Madeo
This paper presents a prototype of an educational and inclusive application: the Brazilian Sign Language Multimedia Hangman Game. The application aims to encourage people, especially children, deaf or not, to learn a sign language, and to help deaf people improve their vocabulary in an oral language. The distinguishing feature of this game is that its input consists of videos of the user performing signs from Brazilian Sign Language corresponding to Latin alphabet letters, recorded through the game's graphical interface. These videos are processed by a computer vision module that recognizes the letter to which each sign corresponds, using a recognition strategy based on primitives -- hand configuration, movement and orientation -- and reaching 84.3% accuracy.
Developing for autism with user-centred design BIBAFull-Text 313-314
  Rachel Menzies
This paper describes the process undertaken to develop software that allows children with Autism Spectrum Disorders (ASD) to explore social situations, in particular the concept of sharing. The User-Centred Design (UCD) process is described, along with adaptations made to alleviate anxiety resulting from the reduced social skills seen in ASD.
Fashion for the blind: a study of perspectives BIBAFull-Text 315-316
  Michele A. Burton
Clothing is a universal aspect of life and a significant form of communication for both the wearer and the observer. However, clothing is almost exclusively perceived visually, raising the question: "How is beauty in fashion interpreted by those with vision impairments?" We conducted face-to-face interviews and a diary study with eight legally blind participants to gain the perspectives of those with vision impairments on what makes clothing attractive and appealing. Our primary focus was gathering their point of view on beauty in clothing, but all of the participants also discussed the accessibility challenges of clothing and fashion. We report our findings on the major aspects of clothing's appeal to blind wearers, as well as the challenges posed by lack of access and assistive technology. These findings have far-reaching implications for future research in fashion design, interaction design and assistive technology.
Improving public transit accessibility for blind riders: a train station navigation assistant BIBAFull-Text 317-318
  Markus Guentert
Blind people often depend on public transit for mobility. In interviews I learned that changing trains and finding one's way inside stations is a significant obstacle to spontaneous travel. Since GPS navigation typically cannot be used indoors, this paper focuses on building a tool to assist blind people in navigating inside train stations, designed for commodity hardware such as the Apple iPhone.
Kinerehab: a kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities BIBAFull-Text 319-320
  Jun-Da Huang
This study used Microsoft's Kinect motion sensor to develop an intelligent rehabilitation system. Through discussion with physical therapists at the Kaohsiung County Special Education School, researchers understood that students with physical disabilities typically lack enthusiasm for rehabilitation, hindering their recovery of limb function and ability to care for themselves. Because therapists must simultaneously care for numerous students, there is also a shortage of human resources. Using fieldwork and recommendations by physical therapists, this study applied the proposed system to students with muscle atrophy and cerebral palsy, and assisted them in physical therapy. The system increased their motivation to participate in rehabilitation and enhanced the efficiency of rehab activities, greatly contributing to the recovery of muscle endurance and reducing the workload of therapists.
Providing haptic feedback using the kinect BIBAFull-Text 321-322
  Brandon T. Shrewsbury
Interpreting one's surroundings often relies on visual channels for a full picture of the environment. Our approach substitutes for this use of vision by integrating haptic feedback with the sensory data from the depth camera system packaged within the Microsoft Kinect.
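A minimal Python sketch of the general depth-to-vibration mapping such an approach implies; the region grid and depth range are assumptions, not the author's design:

    def depth_to_vibration(depth_grid, max_depth_mm=4000):
        """depth_grid: rows of averaged depth readings (mm), one per region.
        Returns vibration intensities in [0, 1]; 1.0 means very close."""
        return [[round(max(0.0, 1.0 - d / max_depth_mm), 2) for d in row]
                for row in depth_grid]

    # Three regions across: a wall 1 m to the left, open space ahead and right.
    print(depth_to_vibration([[1000, 3500, 4000]]))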
StopFinder: improving the experience of blind public transit riders with crowdsourcing BIBAFull-Text 323-324
  Sanjana Prasain
I developed a mobile system that crowdsources landmarks around bus stops for blind transit riders. The main focus of my research is to develop a method of providing reliable and accurate information about landmarks around bus stops to blind transit riders. My research also focuses on understanding how access to such information affects their use of public transportation.
The Phonicstick: a joystick to generate novel words using phonics BIBAFull-Text 325-326
  Rolf Black
Current Voice Output Communication Aids (VOCAs) give little support for playing with sounds and blending them into words. This paper presents a joystick that can be used to access six different letter sounds (phonics) and blend them into short words. Seven children (five with some degree of physical and/or learning disability) demonstrated their ability to use the device after only one 20-minute introduction session.
The TaskTracker: assistive technology for task completion BIBAFull-Text 327-328
  Victoria E. Hribar
A cognitive impairment can restrict the independence an individual has over their own life. Attention deficits commonly affect individuals with cognitive impairments and make the completion of everyday tasks difficult. While many technologies exist to assist this group with memory storage and retrieval, these individuals could also benefit from technology focused on time management and assistance with task completion. The TaskTracker has been created for this purpose by incorporating several features focused on task completion into one useful Android smartphone application. A progress bar, alarm reminders and a motivational message have been combined to support task completion and time management rather than task memory alone.
Using a computer intervention to support phonological awareness development of nonspeaking adults BIBAFull-Text 329-330
  Ha Trinh
The present study investigates the effectiveness of a computer-based intervention to support adults with severe speech and physical impairments (SSPI) in developing their phonological awareness, an essential contributory factor in literacy acquisition. Three participants with SSPI undertook seven intervention sessions during which they were asked to play a training game on an iPad. The game was designed to enable learners to practice their phonological awareness skills independently, with minimal instruction from human instructors. Preliminary results of post-intervention assessments demonstrate generally positive effects of the intervention upon the phonological awareness and literacy skills of the participants. These results support the use of mainstream technologies to aid learning for individuals with disabilities.
ZigADL: an ADL training system enabling teachers to assist children with intellectual disabilities BIBAFull-Text 331-332
  Zhi-Zhan Lu
Assistive technology for children with intellectual disabilities is developed with the goal of enabling them to perform ADLs (Activities of Daily Living) independently. In special education under the guidance of a teacher, we used a ZigBee sensor network called ZigADL to assist a 9-year-old child. The study assessed the effectiveness of ZigADL for teaching waste disposal. A single-subject research design was used following a three-week monitoring period. Results indicate that for a child with severe intellectual disability, acquisition of ADLs may be facilitated by the use of ZigADL in conjunction with operant conditioning strategies.