
Eighth Annual ACM SIGACCESS Conference on Assistive Technologies

Fullname: Eighth International ACM SIGACCESS Conference on Assistive Technologies
Editors: Simon Harper
Location: Portland, Oregon, USA
Dates: 2006-Oct-23 to 2006-Oct-25
Publisher: ACM
Standard No: ISBN 1-59593-290-9; ACM Order Number 444060; hcibib: ASSETS06
Papers: 70
Pages: 304
  1. Keynote
  2. Motor input assistance
  3. Vision
  4. Design challenges
  5. Navigational assistance
  6. Cognition and emotion
  7. Mode transformations for vision
  8. Alternative modes for motor input
  9. Posters and demos
  10. Student research competition

Keynote

Technology and older adults: designing for accessibility and usability BIBAFull-Text 1
  Sara J. Czaja
Two major demographic trends underscore the importance of considering technology adoption by older adults: the aging of the population and the rapid dissemination of technology within most societal contexts. In the past decade, developments in computer and information technologies have occurred at an unprecedented rate, and technology has become an integral component of work, education, healthcare, communication and entertainment. At the same time that we are witnessing explosive developments in technology, the population is aging. In 2003, people aged 65+ yrs. in the United States numbered about 35 million and represented approximately 13% of the population. By 2030 this number is expected to increase to about 71 million, representing 20% of the population. Moreover, there will be a dramatic increase in those aged 85+ yrs., from about 4 million in 2000 to nearly 21 million by 2050. Recent data for the U.S. also indicate that although the use of technology such as computers and the Internet among older adults is increasing, there is still an age-based digital divide.
   Not having access to and being able to use technology may put older adults at a disadvantage in terms of their ability to live independently. For example, the Internet is rapidly becoming a major vehicle for communication and information dissemination about health, community and government services. Technology also offers the potential for enhancing the quality of life of older people by augmenting their ability to perform a variety of tasks and access information. To make technology useful to and usable by older adults a challenge for the research and design community is to "know thy user" and better understand the needs, preferences and abilities of older people. It is fairly well established that many technology products and systems are not easily accessible to older people. There are of course a myriad of reasons for this such as cost, lack of access to training programs, etc. However, to a large part it is because designers are unaware of the needs of users with varying abilities or do not know how to accommodate their needs in the design process.
   Although older adults today are healthier, more diverse and better educated than previous generations, there are age-related changes in functional abilities that have relevance to the design of technology systems. These include changes in sensory/perceptual processes, motor abilities, response speed, and cognitive processes. The likelihood of developing a disability increases with age, and many older people have at least one chronic condition such as arthritis or hearing and vision impairments. This presentation will discuss the implications of age-related changes in abilities that have relevance to system design and provide a summary of what is currently known about the adoption and use of technology by older people. Recommendations to accommodate these age-related changes in abilities will also be discussed. In addition, a brief discussion of strategies to include the needs of older people in the design process will be presented. It is hoped that this presentation will highlight some important issues and in doing so help bridge the existing age-related digital divide.

Motor input assistance

From letters to words: efficient stroke-based word completion for trackball text entry BIBAFull-Text 2-9
  Jacob O. Wobbrock; Brad A. Myers
We present a major extension to our previous work on Trackball EdgeWrite -- a unistroke text entry method for trackballs -- by taking it from a character-level technique to a word-level one. Our design is called stroke-based word completion, and it enables efficient word selection as part of the stroke-making process. Unlike most word completion designs, which require users to select words from a list, our technique allows users to select words by performing a fluid crossing gesture. Our theoretical model shows this word-level design to be 45.0% faster than our prior model for character-only strokes. A study with a subject with spinal cord injury comparing Trackball EdgeWrite to the on-screen keyboard WiViK, both using word prediction and completion, shows that Trackball EdgeWrite is competitive with WiViK in speed (12.09 vs. 11.82 WPM) and accuracy (3.95% vs. 2.21% total errors), but less visually tedious and ultimately preferred. The results also show that word-level Trackball EdgeWrite is 46.5% faster and 36.7% more accurate than our subject's prior peak performance with character-level Trackball EdgeWrite, and 75.2% faster and 40.2% more accurate than his prior peak performance with his preferred on-screen keyboard. An additional evaluation of the same subject over a two-month field deployment shows a 43.9% reduction in unistrokes due to stroke-based word completion in Trackball EdgeWrite.
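Text-entry results like those above are conventionally reported as words per minute and total error rate. As a generic point of reference (not the authors' code), the standard definitions can be sketched in Python; the five-characters-per-word convention and the error classification follow common text-entry evaluation practice:

```python
def words_per_minute(transcribed_text, seconds):
    """Standard text-entry speed metric: one 'word' is five characters
    (spaces included), so WPM = (|T| / 5) / minutes elapsed."""
    return (len(transcribed_text) / 5) / (seconds / 60)

def total_error_rate(incorrect_fixed, incorrect_not_fixed, correct):
    """Total error rate: corrected plus uncorrected erroneous characters
    over all characters entered (correct + fixed + not fixed)."""
    total = correct + incorrect_fixed + incorrect_not_fixed
    return (incorrect_fixed + incorrect_not_fixed) / total

# 25 characters transcribed in one minute -> 5.0 WPM
print(words_per_minute("a" * 25, 60))  # 5.0
```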
Alternative text entry using different input methods BIBAFull-Text 10-17
  Torsten Felzer; Rainer Nordmann
This paper deals with PC-based alternative (i.e., keyboard-free) text entry and the issues related to emulating keystrokes with only a limited number of input signals. The previously introduced HaMCoS tool tries to enable someone who cannot use their hands to enter text almost as fast as someone exclusively using a manual mouse. To achieve this rather ambitious goal, HaMCoS provides two different (but combinable) solutions. On the one hand, word completion is offered as a shortcut technique. On the other hand, in addition to a mere on-screen keyboard, a completely new application has been implemented in which selecting characters is loosely similar to entering Morse code (but with four 'bits' instead of dots and dashes only). In order to show the effect of these measures, the times needed to copy a moderately long text under various circumstances are reported.
Indirect text entry using one or two keys BIBAFull-Text 18-25
  Melanie Baljko; Andrew Tam
This paper introduces a new descriptive model for indirect text composition facilities that is based on the notion of a containment hierarchy. This paper also demonstrates a novel, computer-aided technique for the design of indirect text selection interfaces -- one in which Huffman coding is used for the derivation of the containment hierarchy. This approach guarantees the derivation of containment hierarchies that are optimal with respect to mean encoding length. This paper describes an empirical study of two two-key indirect text entry variants and compares them to one another and to the predictive model. The intended application of these techniques is the design of improved indirect text entry facilities for the users of AAC systems.
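Huffman coding, the basis of the derivation described above, assigns shorter codes (here, fewer key presses in a two-key interface) to more frequent symbols. A minimal sketch, using made-up letter frequencies rather than the paper's corpus:

```python
import heapq
import itertools

def huffman_codes(freqs):
    """Build a binary Huffman code over a symbol->frequency map.
    Frequent symbols receive shorter codewords, minimizing mean
    encoding length -- i.e., mean key presses per character."""
    counter = itertools.count()  # tie-breaker so heap tuples stay comparable
    heap = [(f, next(counter), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(counter), merged))
    return heap[0][2]

# Illustrative frequencies (percent), not the paper's data:
freqs = {"e": 12.7, "t": 9.1, "a": 8.2, "o": 7.5, "q": 0.1}
codes = huffman_codes(freqs)
mean_len = sum(freqs[s] * len(c) for s, c in codes.items()) / sum(freqs.values())
```

The resulting code is prefix-free, so each codeword maps unambiguously to one position in the containment hierarchy.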
Developing steady clicks: a method of cursor assistance for people with motor impairments BIBAFull-Text 26-33
  Shari Trewin; Simeon Keates; Karyn Moffatt
Slipping while clicking and accidental clicks are a source of errors for mouse users with motor impairments. The Steady Clicks assistance feature suppresses these errors by freezing the cursor during mouse clicks, preventing overlapping button presses and suppressing clicks made while the mouse is moving at a high velocity. Evaluation with eleven target users found that Steady Clicks enabled participants to select targets using significantly fewer attempts. Overall task performance times were significantly improved for the five participants with the highest slip rates. Blocking of overlapping and high velocity clicks also shows promise as an error filter. Nine participants preferred Steady Clicks to the unassisted condition. If used in conjunction with existing techniques for cursor positioning, all of the major sources of clicking errors observed in empirical studies would be addressed, enabling faster and more effective mouse use for those who currently struggle with the standard mouse.

Vision

A multi-domain approach for enhancing text display for users with visual aberrations BIBAFull-Text 34-39
  Miguel Alonso, Jr.; Armando Barreto; Julie A. Jacko; Malek Adjouadi
In this paper, we describe a multi-domain approach for enhancing text displayed on a computer screen for users with visual aberrations. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate text that, when displayed to this user, will counteract his/her visual aberration. The method described in this paper advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels.
Accommodating color blind computer users BIBAFull-Text 40-47
  Luke Jefferson; Richard Harvey
Important visual information often disappears when color documents are viewed by color blind people. The algorithm introduced here maps colors using the World Wide Web Consortium evaluation criteria so that detail is preserved for color blind viewers, especially dichromats. The algorithm has four parts: 1) select a representative set of colors from the source document; 2) compute target color distances using color and brightness differences; 3) solve an optimization step that preserves the target distances for a particular class of color blind viewer; and 4) interpolate the mapped colors across the remaining colors in the document. We demonstrate the efficacy of our method using simulations and critique our method in the context of earlier work.
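The W3C evaluation criteria the algorithm builds on define published brightness and color-difference formulas with recommended visibility thresholds. A small sketch of those formulas (the paper's optimization and interpolation steps are not reproduced here):

```python
def brightness(rgb):
    """W3C perceived-brightness formula for 0-255 RGB channels."""
    r, g, b = rgb
    return (299 * r + 587 * g + 114 * b) / 1000

def color_difference(c1, c2):
    """W3C color-difference formula: sum of per-channel distances."""
    return sum(abs(a - b) for a, b in zip(c1, c2))

def visible_pair(c1, c2):
    """W3C recommended thresholds: brightness difference above 125
    and color difference above 500 for legible color pairs."""
    return abs(brightness(c1) - brightness(c2)) > 125 and color_difference(c1, c2) > 500

print(visible_pair((0, 0, 0), (255, 255, 255)))  # black on white: True
```

Pure red on pure green, a classic problem pair for dichromats, fails the brightness-difference test even though the color difference is large.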
Lambda: a multimodal approach to making mathematics accessible to blind students BIBAFull-Text 48-54
  Alistair D. N. Edwards; Heather McCartney; Flavio Fogarolo
The study of mathematics is all but precluded to most blind students because of the reliance on visual notations. The Lambda System is an attempt to overcome this barrier to access through the development of a linear mathematical notation which can be manipulated by a multimodal mathematical editor. This provides access through braille, synthetic speech and a visual display. Initial results from a longitudinal study with prospective users are encouraging.
Measuring website usability for visually impaired people -- a modified GOMS analysis BIBAFull-Text 55-62
  Henrik Tonn-Eichstadt
Web designers regularly wonder which version of a design would best suit their target groups' needs. This becomes even more complicated if the design is to comply with accessibility rules. This paper describes an interaction model of blind users' interaction strategies. This model is based on GOMS (Goals, Operators, Methods, Selection rules) and can be used to measure aspects of website usability for blind users. The model evolved from findings of user observations and field studies. It can be applied to specific layouts in order to find the 'best' alternative. 'Classic' GOMS models lack functions which are necessary for the presented GOMS model. Thus, new structures to extend the classic GOMS notation are proposed. Finally, an example GOMS analysis is run on a modified version of the ASSETS '06 web page.
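The kind of prediction a GOMS analysis produces can be illustrated with the classic Keystroke-Level Model, which sums fixed operator times along a task's method; the paper extends this style of model with operators for screen-reader interaction. A sketch using the standard published KLM operator estimates (these values are from the classic sighted-user model, not the paper's extended one):

```python
# Classic Keystroke-Level Model operator times in seconds
# (Card, Moran & Newell estimates for an average user):
KLM_TIMES = {
    "K": 0.28,  # press a key or button (average typist)
    "P": 1.10,  # point with a mouse to a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def predict_time(operator_sequence):
    """Estimate task time by summing operator times along a method."""
    return sum(KLM_TIMES[op] for op in operator_sequence)

# Mentally prepare, home to the keyboard, type a 3-key shortcut:
print(round(predict_time("MHKKK"), 2))  # 2.59
```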

Design challenges

Dynamically adapting GUIs to diverse input devices BIBAFull-Text 63-70
  Scott Carter; Amy Hurst; Jennifer Mankoff; Jack Li
Many of today's desktop applications are designed for use with a pointing device and keyboard. Someone with a disability, or in a unique environment, may not be able to use one or both of these devices. We have developed an approach for automatically modifying desktop applications to accommodate a variety of input alternatives as well as a demonstration implementation, the Input Adapter Tool (IAT). Our work is differentiated from past work by our focus on input adaptation (such as adapting a paint program to work without a pointing device) rather than output adaptation (such as adapting web pages to work on a cellphone). We present an analysis showing how different common interactive elements and navigation techniques can be adapted to specific input modalities. We also describe IAT, which supports a subset of these adaptations, and illustrate how it adapts different inputs to two applications, a paint program and a form entry program.
MobileASL: intelligibility of sign language video as constrained by mobile phone technology BIBAFull-Text 71-78
  Anna Cavender; Richard E. Ladner; Eve A. Riskin
For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and one user study with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eyetracking results that show high resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better quality, frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better quality frames for a fixed bit rate. These results show promise for real-time access to the current cell phone network through sign-language-specific encoding techniques.
American sign language recognition in game development for deaf children BIBAFull-Text 79-86
  Helene Brashear; Valerie Henderson; Kwang-Hyun Park; Harley Hamilton; Seungyon Lee; Thad Starner
CopyCat is an American Sign Language (ASL) game, which uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing a Wizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). Our data set is characterized by disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. The dataset consisted of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22 word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the back of their wrists. The hand shape information is combined with accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach by using leave-one-out validation; this technique iterates through each child, training on data from four children and testing on the remaining child's data. We achieved average word accuracies per child ranging from 73.73% to 91.75% for the user-independent models.
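The leave-one-out protocol described above (leave one *subject* out, to measure user independence) can be sketched generically; `train` and `evaluate` below are hypothetical stand-ins for the paper's HMM training and scoring:

```python
def leave_one_subject_out(data_by_child, train, evaluate):
    """For each child, train a model on the other children's samples
    and test on the held-out child's data, yielding one
    user-independent accuracy figure per child."""
    accuracies = {}
    for held_out, test_set in data_by_child.items():
        train_set = [sample
                     for child, samples in data_by_child.items()
                     if child != held_out
                     for sample in samples]
        model = train(train_set)
        accuracies[held_out] = evaluate(model, test_set)
    return accuracies

# Toy stand-ins (hypothetical, not the paper's HMM pipeline):
data_by_child = {"ana": [1, 2], "ben": [3], "cat": [4]}
results = leave_one_subject_out(
    data_by_child,
    train=lambda samples: set(samples),          # "model" = training set
    evaluate=lambda model, test: len(model),     # "score" = model size
)
```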
Are "universal design resources" designed for designers? BIBAFull-Text 87-94
  Young Sang Choi; Ji Soo Yi; Chris M. Law; Julie A. Jacko
Universal design (UD) is an approach to design that aims to make products usable by all people to the greatest extent possible. UD in information and communication technologies (ICTs) is of growing importance because standard ICTs have great potential to be usable by all people, including people with disabilities (PWDs). Currently, PWDs who need ICTs often have less access because the products have not been universally designed. We hypothesize that one of the reasons for the slow adoption of UD is that universal design resources (UDRs) are not adequate for facilitating designers' tasks. We investigated the usability of UDRs from designers' perspectives. A heuristic evaluation of eight selected UDRs was conducted, and the opinions of contributors to the content of these resources were collected through a web-based survey study. The results of the heuristic evaluation show that most of the investigated UDRs do not provide a clear central idea and fail to support the cognitive processes of designers. The results of the survey also confirmed that the content of these resources does not systematically address the needs of designers as end-users during the development process.

Navigational assistance

Indoor wayfinding: developing a functional interface for individuals with cognitive impairments BIBAFull-Text 95-102
  Alan L. Liu; Harlan Hile; Henry Kautz; Gaetano Borriello; Pat A. Brown; Mark Harniss; Kurt Johnson
Assistive technology for wayfinding will significantly improve the quality of life for many individuals with cognitive impairments. The user interface of such a system is as crucial as the underlying implementation and localization technology. We built a system using the Wizard-of-Oz technique that let us experiment with many guidance strategies and interface modalities. Through user studies, we evaluated various configurations of the user interface for accuracy of route completion, time to completion, and user preferences. We used a counter-balanced design that included different modalities (images, audio, and text) and different routes. We found that although users were able to use all types of modalities to find their way indoors, they varied significantly in their preferred modalities. We also found that timing of directions requires careful attention, as does providing users with confirmation messages at appropriate times. Our findings suggest that the ability to adapt indoor wayfinding devices for specific users' preferences and needs will be particularly important.
Where's my stuff?: design and evaluation of a mobile system for locating lost items for the visually impaired BIBAFull-Text 103-110
  Julie A. Kientz; Shwetak N. Patel; Arwa Z. Tyebkhan; Brian Gane; Jennifer Wiley; Gregory D. Abowd
Finding lost items is a common problem for the visually impaired and is something that computing technology can help alleviate. In this paper, we present the design and evaluation of a mobile solution, called FETCH, for allowing the visually impaired to track and locate objects they lose frequently but for which they do not have a specific strategy for tracking. FETCH uses devices the user already owns, such as their cell phone or laptop, to locate objects around their house. Results from a focus group with visually impaired users informed the design of the system. We then studied the usability of a laptop solution in a laboratory study and studied the usability and usefulness of the system through a one-month deployment and diary study. These studies demonstrate that FETCH is usable and useful, but there is still room for improvement.
Interactive tracking of movable objects for the blind on the basis of environment models and perception-oriented object recognition methods BIBAFull-Text 111-118
  Andreas Hub; Tim Hartter; Thomas Ertl
In previous work we have presented a prototype of an assistant system for the blind that can be used for self-localization and interactive object identification of static objects stored within 3D environment models. In this paper we present a new method for interactive tracking of various types of movable objects. The state of fixed movable objects, like doors, can be recognized by comparing the distance between sensor data and a 3D model. For the identification and model-based tracking of free movable objects, like chairs, we have developed an algorithm that is similar to human perception, based on shape and color comparisons to trained objects. Further, using a common face detection algorithm, our assistant system informs the user of the presence of people, and enables the localization of a real person based on interactive tracking of virtual models of humans.
Using an audio interface to assist users who are visually impaired with steering tasks BIBAFull-Text 119-124
  Robert F. Cohen; Valerie Haven; Jessica A. Lanzoni; Arthur Meacham; Joelle Skaff; Michael Wissell
In this paper we describe the latest results in our on-going study of techniques to present relational graphs to users with visual impairments. Our work tests the effectiveness of the PLUMB software package, which uses audio feedback and the pen-based Tablet PC interface to relay graphs and diagrams to users with visual impairments. Our study included human trials with ten participants without usable vision, in which we evaluated the users' ability to perform steering tasks under varying conditions.

Cognition and emotion

Networked reminiscence therapy for individuals with dementia by using photo and video sharing BIBAFull-Text 125-132
  Noriaki Kuwahara; Shinji Abe; Kiyoshi Yasuda; Kazuhiro Kuwabara
Reminiscence therapy, which is effective for increasing the self-esteem of and for reducing behavioral disturbances in individuals with dementia, is usually conducted in a group led by experienced staff. However, due to the shortage of care attendants, only a limited number of patients at home can receive the benefits of this therapy. To provide this therapy for patients anytime and anywhere, we have developed a networked reminiscence therapy system that combines IP videophones with a photo- and video-sharing mechanism based on Web technology. First, we prepared the experimental setup in a hospital and examined whether dementia patients could communicate with therapists by videophone. Then we conducted a field trial of networked reminiscence therapy in a more realistic situation, where remote volunteers communicated with dementia sufferers in the care home by IP videophones connected over a broadband network. In this paper, we describe the system we developed. Then, we present experimental results showing that dementia sufferers could communicate with therapists by videophone and that networked reminiscence sessions were generally as successful for individuals with dementia as face-to-face reminiscence sessions.
Attention analysis in interactive software for children with autism BIBAFull-Text 133-140
  A. Ould Mohamed; V. Courboulay; K. Sehaba; M. Menard
This work is part of an ongoing project that focuses on potential applications of an interactive system that helps children with autism. Autism is classified as a neurodevelopmental disorder that manifests itself in markedly abnormal social interaction, communication ability, patterns of interests, and patterns of behavior [1]. Children with autism are socially impaired and usually do not attend to the people around them. A notable characteristic of children with autism is that they are unable to judge which events are more or less important. As a consequence, they are often overwhelmed by too many stimuli and so adopt extremely repetitive, unusual, self-injurious, or aggressive behaviour. Recently, a new trend of using human computer interface (HCI) technology and computer science in the treatment of autism has emerged [2, 3]. The platform we developed helps children with autism focus their attention on a specific task. In this article, we present only the attention analysis system, which is part of a more general system that uses a multi-agent architecture [4]. Each task proposed in our system is fitted to each child, is reproducible, and evolves following a specific scenario defined by the expert. This scenario takes into account the age, ability, and degree of autism of each child. In order to focus a child's attention on the relevant object, our system displays or plays a specific stimulus; again, the stimulus is defined for each child. The symbol or sound represents an emotional and satisfaction value for the child. The major problem is to determine the correct moment at which the system has to display or play this signal. We tackle this problem by defining a robust measure of attention. This measure is defined by analyzing the gaze direction and the face orientation, and by incorporating the child's specific profile.
   Following expert directives, our system helps children to categorize elementary percepts (strong, smooth, quick, slow, big, small...). Our objective is that children re-use these classifications in other situations.
Understanding emotion through multimedia: comparison between hearing-impaired people and people with hearing abilities BIBAFull-Text 141-148
  Rumi Hiraga; Nobuko Kato
We conducted an experiment to determine the abilities of hearing-impaired and normal-hearing people to recognize intended emotions conveyed in four types of stimuli: a drum performance, a drum performance accompanied by a drawing expressing the same intended emotion, and a drum performance accompanied by one of two types of motion pictures. The recognition rate was the highest for a drum performance accompanied by a drawing even though participants in both groups found it difficult to identify the intended emotion because they felt the two stimuli sometimes conveyed different emotions. Visual stimuli were especially effective for performances whose intended emotions were not clear by themselves. The difference in ability to recognize intended emotions between the hearing-impaired and normal-hearing participants was insignificant. The results of this and a series of experiments will enable us to better understand the similarities and differences between how people with different hearing abilities encode and decode emotions in and from sound and visual media. We should then be able to develop a system that will enable hearing-impaired and normal-hearing people to play music together.
Determining the impact of computer frustration on the mood of blind users browsing the web BIBAFull-Text 149-156
  Jonathan Lazar; Jinjuan Feng; Aaron Allen
While previous studies have investigated the impact of frustration on computer users' mood as well as the causes of frustration, no research has examined the relationship between computer frustration and mood change for users with visual impairment. In this paper, we report on a study that examined the frustrating experiences and mood change of 100 participants, all with visual impairments, as they browsed the web. The results show that frustration does cause the participants' mood to deteriorate. However, the amount of time lost due to frustrating situations does not have a significant impact on users' mood, which differs markedly from previous research on users without visual impairment. Disruption to the user's work appears to have the greatest effect on mood.

Mode transformations for vision

Transforming flash to XML for accessibility evaluations BIBAFull-Text 157-164
  Shin Saito; Hironobu Takagi; Chieko Asakawa
Rich Internet content, such as Flash and DHTML, has been spreading across the Web, since it can provide rich and dynamic Web experiences for the sighted majority. It is obvious that this content is inaccessible for visually impaired people because of its visual richness. For Flash, many efforts have been made to address the issue, such as accessibility guidelines and best-practice documents. However, the amount of accessible content has not been increasing in spite of these efforts. One of the severe issues is the lack of tools to create accessible content. Current Web accessibility technologies are built on top of XML-based technology infrastructures. In contrast, there is no foundation for inspecting the internals of Flash content, since it is distributed in a binary format. This characteristic has prevented vendors from developing Flash accessibility technologies. In order to address this issue, this paper proposes a method to transform existing Flash content into XML structures. It combines two approaches for accessing the internal structures. One approach is to obtain MSAA output through the Flash Player, and the other is to acquire information by injecting ActionScript bridge code into the content. In this paper, we will first give an overview of the accessibility framework for Flash content, and then present our XML transformation and checking method. A prototype of the checker has been implemented, and some preliminary results of accessibility evaluations are discussed.
Analyzing visual layout for a non-visual presentation-document interface BIBAFull-Text 165-172
  Tatsuya Ishihara; Hironobu Takagi; Takashi Itoh; Chieko Asakawa
Presentation documents play important roles in many fields, such as business and education. The principal purpose of presentation documents is to convey information visually, so recognizing the visual layout is essential for understanding those documents. However, it is inherently difficult for blind people to recognize a visual layout, because there are numerous types of charts in presentation documents. As a first step toward solving these problems, this study focuses on diagrams in which objects or groups of objects are bound by arrows. Such diagrams usually show relationships among the objects. If such relationships could be recognized by screen readers, the diagrams would become accessible. However, presentation authoring applications do not have functions for embedding these relationships among objects. Therefore, this paper proposes a visual analysis method for diagram structure in presentation documents that automatically creates metadata describing the relationships of objects and the source-destination relationships of arrows. A novel interface utilizing the metadata was then prototyped to present the visual structure of presentation documents in a tree view. This allows blind users to understand presentation documents easily, because it represents the visual structure that current screen readers cannot expose. In addition, blind users are familiar with the tree view interface, so they can use it without training. Finally, an evaluation shows that our method for automatically creating the metadata can be applied to various types of diagrams in presentation documents.
Learning and perceiving colors haptically BIBAFull-Text 173-180
  Kanav Kahol; Jamieson French; Laura Bratton; Sethuraman Panchanathan
Color is an integral part of spatial perception, and there is a need to develop systems that render color information accessible to blind individuals. We propose a novel system, designed in consultation with focus groups of blind individuals, that allows learning, presentation and analysis of color information. Our system is based on a methodology that renders colors as textures through a haptic device. The aim of the proposed approach is to enable color perception and provide a basis for assessing color similarity. Initial testing of the system shows that both blind and sighted individuals can recognize colors through our approach and further assess similarity between colors. A space was obtained through multidimensional scaling performed on similarity scores between pairs of colors as presented through our system. This space showed high congruency with the chromaticity diagram and the hue-saturation color wheel, which supports the validity of our system for color visualization. A real-time system based on the proposed mapping is designed to allow real-time color perception.
WebInSight: making web images accessible BIBAFull-Text 181-188
  Jeffrey P. Bigham; Ryan S. Kaminsky; Richard E. Ladner; Oscar M. Danielsson; Gordon L. Hempton
Images without alternative text are a barrier to equal web access for blind users. To illustrate the problem, we conducted a series of studies that conclusively show that a large fraction of significant images have no alternative text. To ameliorate this problem, we introduce WebInSight, a system that automatically creates and inserts alternative text into web pages on-the-fly. To formulate alternative text for images, we present three labeling modules based on web context analysis, enhanced optical character recognition (OCR) and human labeling. The system caches alternative text in a local database and can add new labels seamlessly after a web page is downloaded, resulting in minimal impact to the browsing experience.
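The gap WebInSight targets (images served without alternative text) can be detected with a few lines of standard-library Python. This is an illustrative audit sketch, not part of the WebInSight system itself:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr = dict(attrs)
            if not attr.get("alt"):
                self.missing.append(attr.get("src", "(no src)"))

page = '<img src="logo.png"><img src="chart.png" alt="Sales chart">'
auditor = AltTextAuditor()
auditor.feed(page)
print(auditor.missing)  # images with no alternative text
```

A real system would crawl whole pages and, as WebInSight does, supply labels for the offending images rather than merely listing them.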

Alternative modes for motor input

Improvements in vision-based pointer control BIBAFull-Text 189-196
  Rick Kjeldsen
Vision-based head trackers have been around for some years and are even beginning to be commercialized, but problems remain with respect to usability. Users without the ability to use traditional pointing devices - the intended audience of such systems - have no alternative if the automatic bootstrapping process fails; there is room for improvement in face tracking; and the pointer movement dynamics do not support accurate and efficient pointing. This paper describes a novel head tracking pointer that addresses these problems.
The vocal joystick: evaluation of voice-based cursor control techniques BIBAFull-Text 197-204
  Susumu Harada; James A. Landay; Jonathan Malkin; Xiao Li; Jeff A. Bilmes
Mouse control has become a crucial aspect of many modern day computer interactions. This poses a challenge for individuals with motor impairments or those whose use of their hands is restricted by situational constraints. We present a system called the Vocal Joystick which allows the user to continuously control the mouse cursor by varying vocal parameters such as vowel quality, loudness and pitch. A survey of existing cursor control methods is presented to highlight the key characteristics of the Vocal Joystick. Evaluations were conducted to characterize expert performance capability of the Vocal Joystick, and to compare novice user performance and preference for the Vocal Joystick and two other existing speech-based cursor control methods. Our results show that Fitts' law is a good predictor of the speed-accuracy tradeoff for the Vocal Joystick, and suggest that its optimal performance may be comparable to that of a conventional hand-operated joystick. Novice user evaluations show that the Vocal Joystick can be used by people without extensive training, and that it presents a viable alternative to existing speech-based cursor control methods.
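Fitts' law, which the abstract above reports as a good predictor for the Vocal Joystick, models pointing movement time from target distance D and width W. A minimal sketch of the Shannon formulation, with illustrative coefficients rather than those fitted in the paper:

```python
import math

def fitts_mt(distance, width, a=0.3, b=0.25):
    """Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).
    The intercept a (seconds) and slope b (seconds/bit) are
    placeholder values, not the paper's measured coefficients."""
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# Doubling distance at fixed target width raises predicted movement time.
print(round(fitts_mt(256, 32), 3))
print(round(fitts_mt(512, 32), 3))
```

Fitting a and b to measured selection times for a given input device is what lets the law characterize that device's speed-accuracy tradeoff.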
A voice-activated syntax-directed editor for manually disabled programmers BIBAFull-Text 205-212
  Thomas J. Hubbell; David D. Langan; Thomas F. Hain
This paper discusses a research project targeted at the design and implementation of an interface intended to allow manually disabled people to more easily perform the task of programming. It proposes a Speech User Interface (SUI) targeted for this task. Voice was selected as the means of input as an alternative to the keyboard and mouse. Traditional programming IDEs tend to be character and line oriented. It is argued that this orientation is not conducive to voice input, and so a syntax-directed programming interface is proposed. To test the viability of this combination of voice with a syntax-directed approach, an editor named VASDE (Voice-Activated Syntax-Directed Editor) was implemented using Eclipse as the underlying platform for development. This paper describes the syntax-directed interface, VASDE, and some of the lessons learned from initial usability studies.
Non-speech input and speech recognition for real-time control of computer games BIBAFull-Text 213-220
  Adam J. Sporka; Sri H. Kurniawan; Murni Mahmud; Pavel Slavík
This paper reports a comparison of user performance (time and accuracy) when controlling the popular arcade game Tetris using speech recognition or non-speech (humming) input techniques. A preliminary qualitative study with seven participants showed that users were able to control the game with both methods but required more training and feedback for the humming control. The revised interface, which implemented these requirements, was received positively by users. A quantitative test with 12 other participants showed that humming excelled in both time and accuracy, especially over longer distances and at advanced difficulty levels.

Posters and demos

A general HCI framework of sonification applications BIBAFull-Text 221-222
  Ag Asri Ag Ibrahim; Andy Hunt
This paper proposes a general HCI framework for sonification applications. It is used to explain and understand sonification applications and how they might be interpreted by users. The framework emphasizes two models: the Sonification Application Model (SA Model) and the User Interpretation Construction Model (UIC Model).
A three-countries case study of older people's browsing BIBAFull-Text 223-224
  Prush Sa-nga-ngam; Sri Kurniawan
This paper presents quantitative data on the browsing activities of 63 respondents aged 55 years and over from three countries: the UK, the USA, and Thailand. The questionnaire explored frequently browsed topics, browser functions used, problems with standard browsers, and features that could make a standard browser more ageing-friendly. The study revealed country-related differences in various aspects of Internet use, including the topics accessed and places of access. However, no country-related difference was observed in the input devices used or the number of browsing windows opened.
Helping aphasic people process online information BIBAFull-Text 225-226
  Siobhan Devlin; Gary Unthank
In this paper, we describe the HAPPI (Helping Aphasic People Process Information) project, which aims to develop web-based systems to help aphasic people gain access to web-based information such as online news stories. It does this by simplifying the language and providing alternative means of jogging users' memories, thereby improving their comprehension of the online material.
Loudmouth: modifying text-to-speech synthesis in noise BIBAFull-Text 227-228
  Rupal Patel; Michael Everett; Eldar Sadikov
Current speech synthesis technology is difficult to understand in everyday noise situations. Although there is a significant body of work on how humans modify their speech in noise, the results have yet to be implemented in a synthesizer. Algorithms capable of processing and incorporating these modifications may lead to improved speech intelligibility of assistive communication aids and more generally of spoken dialogue systems. We describe our efforts in building the Loudmouth synthesizer which emulates human modifications to speech in noise. A perceptual experiment indicated that Loudmouth achieved a statistically significant gain in intelligibility compared to a standard synthesizer in noise.
Hardware-based text-to-braille translator BIBAFull-Text 229-230
  Xuan Zhang; Cesar Ortega-Sanchez; Iain Murray
This paper describes the hardware implementation of a text-to-Braille translator using Field-Programmable Gate Arrays (FPGAs). Unlike most commercial software-based translators, the circuit presented carries out text-to-Braille translation in hardware. The translator is based on the translation algorithm proposed by Paul Blenkhorn [1]. The VHSIC Hardware Description Language (VHDL) was used to describe the chip in a hierarchical way. The test results indicate that the hardware-based translator achieves the same results as software-based commercial translators.
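The table-driven flavor of text-to-Braille translation is easy to illustrate in software: each character indexes a dot-pattern table, and the pattern selects a Unicode Braille cell. This toy sketch covers only six letters of Grade 1 (uncontracted) Braille and is not Blenkhorn's full rule-based contraction algorithm:

```python
# Grade 1 (uncontracted) Braille: one cell per letter, encoded as a
# 6-bit dot pattern (dot 1 = least significant bit). Toy table only.
DOTS = {
    "a": 0b000001, "b": 0b000011, "c": 0b001001,
    "d": 0b011001, "e": 0b010001, "f": 0b001011,
}

def to_braille(text):
    """Translate text to Unicode Braille cells via the dot-pattern table,
    skipping characters the table does not cover."""
    base = 0x2800  # start of the Unicode Braille Patterns block
    return "".join(chr(base + DOTS[ch]) for ch in text.lower() if ch in DOTS)

print(to_braille("cab"))  # three Braille cells
```

The same lookup-table structure maps naturally onto an FPGA: the table becomes a ROM and the join becomes a streaming pipeline, which is essentially why a hardware realization can match software translators' output.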
A portable device for the translation of braille to text BIBAFull-Text 231-232
  Iain Murray; Andrew Pasquale
This paper presents the development of a portable device for the translation of embossed Braille to text. The device optically scans a Braille page and outputs the equivalent text in real time, thus acting as a written communications gateway between sighted and vision-impaired persons.
South African sign language machine translation project BIBAFull-Text 233-234
  Lynette van Zijl
We describe the South African Sign Language Machine Translation project, and point out the role that the project is playing in the larger context of South African Sign Language and accessibility for the South African Deaf community.
A cisco education tool accessible to the vision impaired BIBAFull-Text 235-236
  J. Hope; B. R. von Konsky; I. Murray; L. C. Chew; B. Farrugia
This paper describes iNetSim, a universally accessible network simulator created to allow vision-impaired and sighted users to complete Cisco Certified Network Associate level two (CCNA 2) laboratory sessions. Previously, software used in the CCNA course was not accessible to those with impaired vision because it relied on images of network topology, which are incompatible with screen reader software. In contrast, iNetSim is accessible to blind and vision-impaired users as well as those with normal vision. It is based on Mac OS X Tiger, an operating system with an integrated screen reader called VoiceOver.
Wireless headset communications for vision impaired persons in multi-user environments BIBAFull-Text 237-238
  Iain Murray; Andrew Pasquale
Wireless headsets are a great asset to vision-impaired persons (VIPs), as they prove to be easier to use and more reliable than wired equivalents. Radio-based wireless headsets are the most common and have many favorable characteristics; however, in environments with numerous wireless headset users, radio channels easily become congested, compromising audio quality and reliable operation. The research undertaken in this project attempts to sidestep the radio channel congestion problem and to produce a wireless headset tailored to the requirements of VIPs.
Picture planner: a cognitively accessible personal activity scheduling application BIBAFull-Text 239-240
  Thomas Keating
This paper describes design elements and field test results for an icon-driven, cognitively accessible personal activity scheduling application for use by individuals with disabilities and their assistants. Results showed that users with significant cognitive disabilities can learn to use and benefit from accessible computer-based self-management applications.
Designing a scripting language to help the blind program visually BIBAFull-Text 241-242
  Kenneth G. Franqueiro; Robert M. Siegfried
The vast proliferation of GUI-based applications, including graphical interactive development environments (IDEs), has placed blind programmers at a severe disadvantage in a profession that had previously been relatively accessible. Visual Basic is one such programming language and IDE, in which most programmers "point and click" to design the forms on which their applications rely. The goal of this project is to eliminate this barrier by introducing a scripting language that makes it possible to define Visual Basic GUI forms and their components verbally, while remaining easy to write.
Automatically generating custom user interfaces for users with physical disabilities BIBKFull-Text 243-244
  Krzysztof Z. Gajos; Jing Jing Long; Daniel S. Weld
Keywords: arnauld, automatic UI generation, optimization, physical disabilities, supple
A rollator-mounted wayfinding system for the elderly: a smart world perspective BIBAFull-Text 245-246
  Aliasgar Kutiyanawala; Vladimir Kulyukin; Edmund LoPresti
We will demonstrate the iWalker, a three-sensor, rollator-mounted wayfinding system for elderly users with cognitive and visual impairments. Unlike several previous and ongoing research efforts, iWalker emphasizes a smart world (SW) perspective. A SW is a physical space equipped with embedded sensors. One implication of the SW perspective is the simplification of the onboard computing machinery needed to make iWalker operational.
Facetop tablet: note-taking assistance for deaf persons BIBAFull-Text 247-248
  Dorian Miller; James Culp; David Stotts
Meetings comprise a vital part of participation in social activities. For a deaf or hard of hearing person who does not understand spoken language, following meetings can become confusing if there are too many simultaneous sources of information. When the person focuses on one source of information, she misses information from another; for example, while looking at a presenter's slides, the person misses information from the signing interpreter. The features of Facetop Tablet were iteratively designed according to feedback from members of the Deaf community. Through this feedback, we have refined and completed the implementation, and it is ready for evaluation through user studies. We are ready to recruit participants, whom we can reach through demonstrations.
Evaluating a pen-based computer interface for novice older users BIBAFull-Text 249-250
  Dante Arias Torres
To date, few efforts have been dedicated to the design of specialized graphical user interfaces (GUIs) for elderly people, despite the fact that they have problems interacting with the standard WIMP paradigm (Windows, Icons, Menus, and Pointers). This research goes a step further, proposing and evaluating a pen-based interaction technique, based on drawing simple lines, that improves computer usability for novice older users.
Using think aloud protocol with blind users: a case for inclusive usability evaluation methods BIBAFull-Text 251-252
  Sambhavi Chandrashekar; Tony Stockman; Deborah Fels; Rachel Benedyk
There is a need to assess the applicability of conventional Usability Evaluation Methods to users with disabilities, given the growing importance of involving these users in the usability evaluation process. We found that the conventional Think Aloud Protocol cannot be used as-is when evaluating websites with blind users and requires modification to be useful.
Accessibility evaluation based on machine learning technique BIBAFull-Text 253-254
  Daisuke Sato; Hironobu Takagi; Chieko Asakawa
Presentation documents are used in several situations. However, there is no tool to sufficiently check the accessibility level of a presentation document. Traditional rule-based checking has limitations in checking semantic criteria. This paper describes a new approach to evaluate the accessibility of presentation documents using machine learning with a model built from features of a presentation's appearance. A prototype system was implemented, and an exploratory experiment was conducted.
Designing auditory displays to facilitate object localization in virtual haptic 3D environments BIBAFull-Text 255-256
  Koen Crommentuijn; Fredrik Winberg
Five different auditory displays were designed to aid blind users in finding objects in a virtual haptic 3D environment. Each auditory display was based on a different principle and incorporated different methods for representing spatial information. Results from an evaluation with seven visually impaired persons reveal the extent to which these methods facilitate object localization in a virtual haptic 3D environment.
Lecture adaptation for students with visual disabilities using high-resolution photography BIBAFull-Text 257-258
  Gregory Hughes; Peter Robinson
Visual content in lectures can be enhanced for use by students with visual disabilities by using high-resolution digital still cameras. This paper presents a system which uses two high-resolution cameras: one to capture multiple sources of visual content and another to monitor the head pose of up to 20 audience members. This capture technique eliminates the need for multiple cameras or intrusive and distracting instrumentation, but it introduces some new problems, which were solved with an algorithm that distinguishes between two possible sources of visual interest.
SADIe: transcoding based on CSS BIBAFull-Text 259-260
  Simon Harper; Sean Bechhofer; Darren Lunn
Visually impaired users are hindered in their efforts to access the World Wide Web (Web) because their information and presentation requirements differ from those of sighted users. These requirements become problems as the Web grows ever more visually centric in its presentation and in its information order and layout; this can (and does) hinder users who need presentation-agnostic access to information. Finding semantic information already encoded directly into pages can help to alleviate these problems and support users who wish to understand the meaning, as opposed to the presentation and order, of the information. Our solution, Structural-Semantics for Accessibility and Device Independence (SADIe), involves building ontologies of Cascading Style Sheets (CSS) and using those ontologies to transform Web pages.
Linux screen reader: extensible assistive technology BIBAFull-Text 261-262
  Peter Parente; Brett Clippingdale
The Linux Screen Reader (LSR) project is an open source effort to develop an extensible assistive technology for the GNOME desktop environment. The goal of the project is to create a reusable development platform for building alternative and supplemental user interfaces in support of people with diverse disabilities. In this paper, we highlight some key features of LSR including cascading scripts that tailor the user experience to particular applications and tasks, support for novel methods of input and output (e.g. concurrent spatial audio) suited to the needs and preferences of the user, and the ease and flexibility of extension development.
PLUMB: an interface for users who are blind to display, create, and modify graphs BIBAFull-Text 263-264
  Matt Calder; Robert F. Cohen; Jessica Lanzoni; Yun Xu
We demonstrate the most recent version of our system for communicating graphs and relational information to blind users. We have developed a system called exPLoring graphs at UMB (PLUMB) that displays a drawn graph on a tablet PC and uses auditory cues to help a blind user navigate the graph. This work has applications in assisting blind individuals in Computer Science and other educational disciplines, as well as in navigation and map manipulation.
The personal portable profile project BIBAFull-Text 265-266
  Blaise W. Liffick; Gary Zoppetti; Shane Shearer
This presentation will demonstrate the Personal Portable Profile (P3) system, which captures the characteristics of a user profile from one computer and copies them to a second computer. This is accomplished through an auto-run program stored on a USB flash drive device. Such a system will be useful for those with disabilities by allowing them to easily set the interaction characteristics of any computer they encounter to their "home" settings.
A prototype of google interfaces modified for simplifying interaction for blind users BIBAFull-Text 267-268
  Patrizia Andronico; Marina Buzzi
In this study we present a software prototype developed within the framework of a research project aimed at improving the usability of search engines for blind users who interact via a screen reader and voice synthesizer. Following the eight guidelines we proposed for simplifying interaction with search engines through assistive technology, we redesigned the Google user interfaces (i.e., the simple search and result pages) using XSL Transformations, the Google APIs, and Perl. A remote test with 12 totally blind users was carried out to evaluate the prototype. The collected results highlight ways in which the Google interfaces could be modified to improve usability for the blind. In our demo we will show how interaction with the modified Google UIs is simplified and how the time to reach the most important elements (i.e., the first query result, the next result page, etc.) is shortened compared with the original Google UIs. The demo uses the JAWS screen reader to announce the UI contents.
Functional web accessibility techniques and tools from the university of Illinois BIBAFull-Text 269-270
  Jon Gunderson; Hadi Bargi Rangin; Nicholas Hoyt
To create functionally accessible web resources, web developers need more than general guidelines and tools that provide lists of manual accessibility checks. They need specific web accessibility techniques and tools that help them verify they have correctly implemented those techniques. The techniques also need to support the wider web concepts of interoperability and device independence. The CITES/DRES Functional Web Accessibility Best Practices provide developers with specific techniques and requirements for implementing Section 508 and W3C WCAG 1.0 requirements. The Functional Web Accessibility Evaluation (FAE) Tool and the Mozilla/Firefox accessibility extension are free and open source tools that allow developers to verify they have used the best practices.
Introduction to the talking points project BIBKFull-Text 271-272
  Scott Gifford; Jim Knox; Jonathan James; Atul Prakash
Keywords: RFID, location awareness, visual impairment
Improving non-visual web access using context BIBAFull-Text 273-274
  Jalal Mahmud; Yevgen Borodin; Dipanjan Das; I. V. Ramakrishnan
To browse the Web, blind people have to use screen readers, which process pages sequentially, making browsing time-consuming. We present a prototype system, CSurf, which provides all the features of a regular screen reader; however, when a user follows a link, CSurf captures the context of the link and uses it to identify relevant information on the next page. CSurf then rearranges the content of the next page so that the relevant information is read out first. A series of experiments has been conducted to evaluate the performance of CSurf.
VoxBoox: a system for automatic generation of interactive talking books BIBAFull-Text 275-276
  Aanchal Jain; Gopal Gupta
The VoxBoox system makes digital books accessible to visually impaired individuals via audio and voice. It automatically translates a book published in HTML to VoiceXML, and then further enhances this VoiceXML rendering of the book to enable listener-controlled dynamic aural navigation. The VoxBoox system has the following salient features: (i) it leverages existing infrastructure since the book that is to be made accessible need only be published digitally using HTML on the visual Web, (ii) it is based on accepted Web standards of HTML and VoiceXML and thus books can be made accessible inexpensively, and (iii) it is user-centered in that the listener (the user) has complete control over (aural) navigation of the book. In this paper, we present details of the technologies that make the VoxBoox system possible, as well as the details of the system itself. A prototype of the VoxBoox system is operational.
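The HTML-to-VoiceXML direction of such a pipeline can be sketched with the standard library: each block element of the source page becomes a spoken prompt inside a VoiceXML form. This is a hypothetical minimal mapping, not the VoxBoox translator, which also generates the listener-controlled navigation dialogs described above:

```python
import xml.etree.ElementTree as ET

# Toy HTML fragment standing in for a digitally published book chapter.
html = "<body><h1>Chapter 1</h1><p>It was a dark and stormy night.</p></body>"

def html_to_vxml(source):
    """Wrap each HTML block element's text in a VoiceXML prompt,
    all inside a single form. Sketch only: no navigation grammar."""
    vxml = ET.Element("vxml", version="2.0")
    form = ET.SubElement(vxml, "form", id="book")
    block = ET.SubElement(form, "block")
    for node in ET.fromstring(source):  # h1, p, ... in document order
        prompt = ET.SubElement(block, "prompt")
        prompt.text = node.text
    return ET.tostring(vxml, encoding="unicode")

print(html_to_vxml(html))
```

A full translator would additionally emit `<menu>`/`<field>` dialogs so the listener can jump between chapters rather than hearing the prompts straight through.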
Accessibility now!: teaching accessible computing at the introductory level BIBAFull-Text 277-278
  Brian J. Rosmaita
As ASSETS attendees, we are clearly interested in promoting accessibility in computing. One way to do this is to teach courses on the topic. Most such courses are aimed at upper-level students. But why wait? It's possible to teach accessibility immediately at the introductory level, thereby affecting a greater number of students. I offer a description of a course in computer science that accomplishes this.
A demonstration of the iCARE portable reader BIBAFull-Text 279-280
  Terri Hedgpeth; John A. Black, Jr.; Sethuraman Panchanathan
This demonstration will show the features and function of the portable iCARE Reader device, which allows people who are blind or visually impaired to read books (and other forms of printed text) in a more natural and convenient way than current tabletop flatbed scanner systems.
The impact of user research on product design case study: accessibility ecosystem for windows vista BIBAFull-Text 281-282
  Annuska Perkins; Tira Cohene
This paper describes the impact of user research on the accessibility features of the Windows Vista operating system. Conducting user research for a complex and widely used product requires assessing a wide range of users and experiences, and an ecosystem of PC hardware and software. Our user research for Windows XP gave us a greater understanding of users' self-perception of their abilities. We also uncovered three pivotal usability issues: awareness, discoverability, and learnability. To address these issues for Windows Vista, we iteratively researched the product while focusing on universal design. This research resulted in design changes in the following major accessibility areas: an enhanced entry point, a recommendation process that maps user needs to relevant accessibility components, and enhanced features of Windows Speech Recognition.
DuckCall: tackling the first hundred yards problem BIBAFull-Text 283-284
  Stephen Fickas; Craig Pataky; Zebin Chen
We describe a system that supports travel planning for a user with a cognitive impairment.
An extensible, scalable browser-based architecture for synchronous and asynchronous communication and collaboration systems for deaf and hearing individuals BIBAFull-Text 285-286
  Jonathan Schull
To facilitate face-to-face conversation between deaf and hearing team members, we created a cross-platform, browser-based, persistent-content, text-as-you-type system that aggregates each individual's utterances into revisable personal notes on a user-configurable multi-person workspace. The system increases the fluidity of real-time interaction, makes it easier to keep track of an individual's contributions over time, and supports new patterns of interaction. It has also become an interesting case of universal design: by rethinking web-based chat for deaf users, we have developed a platform with promise for the general population.
"Beyond Perceivability": critical requirements for universal design of information BIBAFull-Text 287-288
  Takashi Kato; Masahiro Hori
This paper addresses the importance of cognitive accessibility and cognitive usability as critical requirements for the universal design of information. Information should not be considered accessed unless its content is cognitively internalized, or understood, by the user. Accessibility of information, therefore, should be evaluated not only for its perceivability but also for its understandability. We propose a new cognitive walkthrough (CW) method whose CW questions were formulated on the basis of an extended HCI model that distinguishes between perceiving and understanding. Applied to a Web design evaluation study, the extended CW was shown to be more effective in identifying accessibility and usability problems while remaining as efficient as the currently practiced CW.

Student research competition

Task analysis for sonification applications usability evaluation BIBAFull-Text 289-290
  Ag Asri Ag Ibrahim
This paper proposes a task analysis of sonification applications for usability inspection. The analysis is based on a unified HCI Sonification Application Model. The tasks are derived from three perspectives on how data are transformed into sound representations, as well as from three points of view: users, interaction, and application. The inputs and outputs of the transformations, together with the final interfaces and sound representations of the applications, are also included in the analysis.
WISE: a wizard interface supporting enhanced usability BIBAFull-Text 291-292
  Joshua M. Hailpern
Current software targeting older adults' ability to use computers focuses on physical issues while largely ignoring cognitive ones. As a growing percentage of Americans are considered "old" (60+), the lack of a system tailored to the needs of this demographic has left part of the population disconnected from the rest of the world. This paper describes WISE, an alternative OS and application UI that specifically targets the cognitive deficits of older adults.
Designing assistive technology for blind users BIBAFull-Text 293-294
  Kristen Shinohara
This project reports on an observational and interview study of a non-sighted person, conducted to develop design insights for enhancing interactions between a blind person and everyday technological artifacts found in the home, such as wristwatches, cell phones, or software applications. Analyzing situations where work-arounds compensate for task failures reveals important insights for future artifact design for the blind, such as the value of socialization, tactile and audio feedback, and the facilitation of user independence.
A mixed method for evaluating input devices with older persons BIBAFull-Text 295-296
  Murni Mahmud
This exploratory study introduces a mixed method for evaluating common input devices. The method includes both quantitative and qualitative approaches and considers both subjective and objective measures. The study incorporates psychometric tests to measure user ability, introduces real tasks in the evaluation, and interviews users to elicit their opinions regarding the important qualities of preferred devices. A mouse, a tablet-with-stylus, and a touch screen were evaluated in two tasks: browsing a website and playing a card game. This paper shows that the mixed method makes possible a more nuanced understanding of the use of input devices by older persons.
Self-adapting user interfaces as assistive technology for handheld mobile devices BIBAFull-Text 297-298
  Robert Dodd
The accessibility of handheld mobile devices is a unique problem domain: their small form factor constrains display size and makes serious demands on user mobility. Existing assistive technology tackles these problems with bespoke solutions and text-to-speech augmentation, bulking out the device and forcing visual metaphors upon blind users. Stepping away from such "bolt-on" accessibility, this research revisits the processes by which user interfaces are designed, constructing a model of user interface development that allows dynamic adaptation of the interface to match individual user capability profiles. In doing so, it abstracts content meaning from presentation, mapping interaction metaphors both to categorized user capabilities within individual design spaces (visual, sonic, and haptic) and to relevant content meaning.
Usability and accessibility issues in the localization of assistive technology BIBAFull-Text 299-300
  Ira Jhangiani
People with disabilities face several barriers to computer usage. Various companies provide assistive software that makes computer usage possible for people with disabilities. While increased awareness of disability issues has led to guidelines for developing accessible software, such guidelines do not guarantee that the end product will be optimal for a person with a disability [5]. During the localization of software it is important to understand the needs, requirements, and target culture of users, which goes beyond mere translation of the interface language. This project was carried out to provide broad design guidelines driven by the usability and accessibility issues uncovered during the evaluation of an assistive technology software package.
A flexible VXML interpreter for non-visual web access BIBAFull-Text 301-302
  Yevgen Borodin
VoiceXML (VXML) is a W3C standard for specifying interactive dialogs, and it finds multiple uses in various Web applications, including non-visual Web browsing. However, there is no suitable, complete, open-source, flexible VXML interpreter for processing VXML dialogs. My project focuses on developing a VXML interpreter, VXMLSurf, that will be fully compliant with the VXML 2.0 specification and geared toward accessing Web content. VXMLSurf implements a number of extended features that give blind users more control over interactive browsing dialogs. VXMLSurf is part of the HearSay project for developing a non-visual Web browser, whose goal is to make the Web more accessible for blind people.