
Seventh Annual ACM SIGACCESS Conference on Assistive Technologies

Fullname: Seventh International ACM SIGACCESS Conference on Assistive Technologies
Editors: Enrico Pontelli
Location: Baltimore, MD, USA
Dates: 2005-Oct-09 to 2005-Oct-12
Publisher: ACM
Standard No: ISBN 1-59593-159-7; ACM Order Number 444050
Papers: 46
Pages: 221
  1. Panel
  2. Evaluating accessibility
  3. Designing for individuals with hearing impairment
  4. Cursor control and pointing devices
  5. Assistive technologies for individuals with visual impairments I
  6. Designing for individuals with memory and cognitive disabilities
  7. Assistive technologies for individuals with visual impairments II
  8. Posters & demos
Bridges for the mind: opportunities for research on cognitive disabilities BIBFull-Text 1
  Clayton Lewis

Panel

Universal designs versus assistive technologies: research agendas and practical applications BIBKFull-Text 2-3
  Chris Law; Julie Jacko; Bill Peterson; Jim Tobias
Keywords: assistive technologies, research and practice, universal design

Evaluating accessibility

What help do older people need?: constructing a functional design space of electronic assistive technology applications BIBAFull-Text 4-11
  Dennis Maciuszek; Johan Aberg; Nahid Shahmehri
In times of ageing populations and shrinking care resources, electronic assistive technology (EAT) has the potential to help guarantee frail older people a continued high quality of life. This paper provides users and designers of EAT with an instrument, in the form of a functional design space, for choosing and producing relevant and useful EAT applications. We present the field study that led to the design space, and give advice on using the tool.
An exploratory investigation of handheld computer interaction for older adults with visual impairments BIBAFull-Text 12-19
  V. Kathlene Leonard; Julie A. Jacko; Joseph J. Pizzimenti
This study explores factors affecting handheld computer interaction for older adults with Age-related Macular Degeneration (AMD). This is largely uncharted territory, as empirical investigations of human-computer interaction (HCI) concerning users with visual dysfunction and/or older adults have focused primarily on desktop computers. For this study, participants with AMD and visually-healthy controls used a handheld computer to search, select and manipulate familiar playing card icons under varied icon set sizes, inter-icon spacing and auditory feedback conditions. While all participants demonstrated a high rate of task completion, linear regression revealed several relationships between task efficiency and the interface, user characteristics and ocular factors. Two ocular measures, severity of AMD and contrast sensitivity, were found to be highly predictive of efficiency. The outcomes of this work reveal that users with visual impairments can effectively interact with GUIs on small displays in the presence of low-cost, easily implemented design interventions. This study presents a rich data set and is intended to inspire future work exploring the interactions of individuals with visual impairments with non-traditional information technology platforms, such as handheld computers.
Programmer-focused website accessibility evaluations BIBAFull-Text 20-27
  Chris Law; Julie Jacko; Paula Edwards
Suggested methods for conducting website accessibility evaluations have typically focused on the needs of end-users who have disabilities. However, programmers, not people with disabilities, are the end-users of the evaluation reports generated by accessibility specialists. Programmers' capacity and resource needs are seldom met by the voluminous reports and long lists of individual website fixes commonly produced using earlier methods. We present the rationale for considering the whole website development process, together with the social characteristics of programmers and project managers. A new programmer-centric Streamlined Evaluation and Reporting Process for Accessibility (SERPA) is described in detail.
The information-theoretic analysis of unimodal interfaces and their multimodal counterparts BIBAFull-Text 28-35
  Melanie Baljko
That multimodal interfaces have benefits over unimodal ones has often been asserted. Several such benefits have been described informally, but, to date, few have actually been formalized or quantified. In this paper, the hypothesized benefits of semantically redundant multimodal input actions are described formally and are quantified using the formalisms provided by Information Theory. A reinterpretation of Keates and Robinson's empirical data (1998) shows that their criticism of multimodal interfaces was, in part, unfounded.
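The hypothesized benefit of semantic redundancy can be made concrete with a toy calculation. The sketch below is illustrative only (it is not Baljko's formalism, and the channels and probabilities are invented): it measures, in bits, how much residual uncertainty about an intended command each input channel removes.

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical scenario: four possible commands, uniform prior.
prior = {c: 0.25 for c in "ABCD"}

# A speech channel (noiseless, in this toy) narrows the choice to {A, B} ...
after_speech = {"A": 0.5, "B": 0.5}

# ... and a semantically redundant gesture channel resolves it to A.
after_both = {"A": 1.0}

h0 = entropy(prior)          # 2.0 bits of initial uncertainty
h1 = entropy(after_speech)   # 1.0 bit remaining after speech
h2 = entropy(after_both)     # 0.0 bits: fully disambiguated

print(h0 - h1, h1 - h2)  # information contributed by each channel, in bits
```

The same bookkeeping extends to noisy channels, where the redundant modality reduces, rather than eliminates, the remaining entropy.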

Designing for individuals with hearing impairment

Wizard-of-Oz test of ARTUR: a computer-based speech training system with articulation correction BIBAFull-Text 36-43
  Olle Balter; Olov Engwall; Anne-Marie Oster; Hedvig Kjellstrom
This study was performed to test the human-machine interface of ARTUR, a computer-based speech training aid whose main feature is that it can give suggestions on how to improve articulation. Two user groups were involved: three children aged 9-14 with extensive experience of speech training, and three children aged 6. All children had general language disorders.
   The study indicates that the present interface is usable without prior training or instructions, even for the younger children, although it needs some improvement to fit illiterate children. The granularity of the mesh that classifies mispronunciations was satisfactory, but can be developed further.
Representing coordination and non-coordination in an American Sign Language animation BIBAFull-Text 44-51
  Matt Huenerfauth
While strings and syntax trees are used by the Natural Language Processing community to represent the structure of spoken languages, these encodings are difficult to adapt to a signed language like American Sign Language (ASL). In particular, the multichannel nature of an ASL performance makes it difficult to encode in a linear single-channel string. This paper will introduce the Partition/Constitute (P/C) Formalism, a new method of computationally representing a linguistic signal containing multiple channels. The formalism allows coordination and non-coordination relationships to be encoded between different portions of a signal. The P/C formalism will be compared to representations used in related research in gesture animation. The way in which P/C is used by this project to build an English-to-ASL machine translation system will also be discussed.
Visualizing non-speech sounds for the deaf BIBAFull-Text 52-59
  Tara Matthews; Janette Fong; Jennifer Mankoff
Sounds constantly occur around us, keeping us aware of our surroundings. People who are deaf have difficulty maintaining an awareness of these ambient sounds. We present an investigation of peripheral, visual displays to help people who are deaf maintain an awareness of sounds in the environment. Our contribution is twofold. First, we present a set of visual design preferences and functional requirements for peripheral visualizations of non-speech audio that will help improve future applications. Visual design preferences include ease of interpretation, glance-ability, and appropriate distractions. Functional requirements include the ability to identify what sound occurred, view a history of displayed sounds, customize the information that is shown, and determine the accuracy of displayed information. Second, we designed, implemented, and evaluated two fully functioning prototypes that embody these preferences and requirements, serving as examples for future designers and furthering progress toward understanding how to best provide peripheral audio awareness for the deaf.

Cursor control and pointing devices

Accuracy and frequency analysis of multitouch interfaces for individuals with Parkinsonian and essential hand tremor BIBAFull-Text 60-67
  Eric J. Frett; Kenneth E. Barner
In this study, the accuracy of an optical mouse, optical trackball, isotonic joystick, and a FingerWorks MultiTouch Surface (MTS) is compared for users suffering from Parkinsonian tremor and essential tremor. Using WinFitts, a data acquisition program created at the University of Oregon's HCI Lab, data collected from five subjects with Parkinsonian tremor, five with essential tremor, and eleven with no tremor are analyzed and compared. Both temporal and spatial analyses are obtained from all of the subject data. The time-based measures of performance for each device include Fitts' law and the Proximity Movement Time, while the spatially-based measures include the Deviation Accuracy and the Click Histogram. A statistical analysis using a t-test shows the differences between the resulting means of some of the measures. Using the MUSIC spectral estimation technique, an analysis of the frequency and amplitude of the tremor showed how well certain devices performed in hand tremor suppression.
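Fitts'-law-based throughput, one of the standard time-based pointing measures, is computed per trial as ID/MT with ID = log2(D/W + 1). A minimal sketch with invented trial values (not the study's data or its exact formulation):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s for a single pointing trial."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical trial: a 210 px movement to a 30 px target, completed in 0.9 s.
id_bits = index_of_difficulty(210, 30)  # log2(210/30 + 1) = log2(8) = 3.0 bits
print(id_bits, throughput(210, 30, 0.9))
```

Averaging such per-trial throughputs within each device and tremor group is one common way to compare devices on a single time-based scale.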
Effect of age and Parkinson's disease on cursor positioning using a mouse BIBAFull-Text 68-75
  Simeon Keates; Shari Trewin
Point-and-click tasks are known to present difficulties to users with physical impairments, particularly motor- or vision-based, and to older adults. This paper presents the results of a study to quantify and understand the effects of age and impairment on the ability to perform such tasks. Results from four separate user groups are presented and compared using metrics that describe the features of the movements made. Distinct differences in behaviour between all of the user groups are observed and the reasons for those differences are discussed.
The migratory cursor: accurate speech-based cursor movement by moving multiple ghost cursors using non-verbal vocalizations BIBAFull-Text 76-83
  Yoshiyuki Mihara; Etsuya Shibayama; Shin Takahashi
We present the migratory cursor, an interactive interface that enables users to move a cursor to any desired position quickly and accurately using voice alone. The migratory cursor combines discrete specification, which allows a user to specify a location quickly but approximately, with continuous specification, which allows the user to specify a location more precisely but slowly. The migratory cursor displays multiple ghost cursors that are aligned vertically or horizontally with the actual cursor. The user quickly specifies an approximate position by referring to the ghost cursor nearest the desired position, and then uses non-verbal vocalizations to move the ghost cursors continuously until one is on the desired position. The time spent in the slower continuous specification is short, since it is used only for fine refinement. In addition, the migratory cursor employs only two directional movements, vertical and horizontal, so the user can move it quickly to any desired position. Moreover, the user can easily and accurately stop cursor movements by falling silent when the cursor reaches the desired position. We tested the usefulness of the migratory cursor, and showed that users could move the cursor to a desired position quickly and accurately.
Toward Goldilocks' pointing device: determining a "just right" gain setting for users with physical impairments BIBAFull-Text 84-89
  Heidi Horstmann Koester; Edmund LoPresti; Richard C. Simpson
We designed and evaluated an agent that recommends a pointing device gain for a given user, with mixed success. Twelve participants with physical impairments used the Input Device Agent (IDA) to determine a recommended gain based on their performance over a series of target acquisition trials. IDA recommended a gain other than the Windows default for 9 of 12 subjects. Subsequent performance using the IDA gain showed no meaningful differences compared to the default setting or users' pre-study settings. Across all gains used by these subjects, however, gain did have a significant effect on throughput, percent of error-free trials, cursor entries, and overshoot. Linear models of gain's effect on performance showed that its effect on throughput is relatively small, with only a 13% difference from highest throughput (at gain = 10) to lowest throughput (at gain = 6). Cursor entries were more strongly affected, showing a steady increase with increasing gain.

Assistive technologies for individuals with visual impairments I

Gist summaries for visually impaired surfers BIBAFull-Text 90-97
  Simon Harper; Neha Patel
Anecdotal evidence suggests that Web document summaries provide the sighted reader with a basis for making decisions regarding the route to take within non-linear text, and additional research shows that sighted people use 'gist' summaries as decision points to bolster their browsing behaviour. Other studies have found that visually impaired users are hindered in their cognition of the content of Web pages because they must wait for an entire Web page to be read before deciding on its usefulness to their current task. In these cases, we draw similarities between sighted and visually impaired users, in that sighted users cannot see the target of a Web anchor and are therefore 'handicapped' by the technology. Previously, we investigated four simple summarisation algorithms against each other and a manually created summary, producing empirical evidence as a formative evaluation. This evaluation concluded that users prefer simple automatically generated 'gist' summaries, which reduce cognitive overload and increase awareness of the focus of the Web page under investigation. In this paper we focus on the development of a Firefox-based tool which creates a summary of a Web page on-the-fly. Based on the results of our formative evaluation, the tool automatically and dynamically annotates Web pages with the generated 'gist' summary. In this way visually impaired users are supported in their decisions as to the relevancy of the page at hand.
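One generic way to produce a 'gist' summary of the kind discussed above is frequency-based sentence scoring. The sketch below is illustrative only; it is not necessarily one of the four algorithms the authors evaluated, and the sample page text is invented.

```python
import re
from collections import Counter

def gist(text, max_sentences=1):
    """Score each sentence by the mean corpus frequency of its words and
    return the top-scoring sentence(s) as the 'gist'."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    return " ".join(sorted(sentences, key=score, reverse=True)[:max_sentences])

page = ("Accessibility matters on the web. "
        "Screen readers read pages aloud. "
        "Summaries help screen reader users judge pages quickly.")
print(gist(page))  # picks the sentence with the most frequent vocabulary
```

Because such scoring needs no rendering or remote service, a browser extension can run it on the page source and read the result before the full page.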
Talking braille: a wireless ubiquitous computing network for orientation and wayfinding BIBAFull-Text 98-105
  David A. Ross; Alexander Lightman
A ubiquitous computing network is being developed to assist persons with vision loss in finding their way around buildings and other indoor public spaces. It is based on the "Cyber Crumb" concept: the idea that tiny, inexpensive solar-powered digital chips can be used to store relevant pieces of information that can be placed along building walkways like a trail of crumbs to follow. A wireless network of "crumbs" provides access from any point in the building to a central server that provides orientation and wayfinding information. Initial hardware and consumer tests verify feasibility and benefit.
A wearable face recognition system for individuals with visual impairments BIBAFull-Text 106-113
  Sreekar Krishna; Greg Little; John Black; Sethuraman Panchanathan
This paper describes the iCare Interaction Assistant, an assistive device for helping individuals who are visually impaired during social interactions. The research presented here addresses the problems encountered in implementing real-time face recognition algorithms on a wearable device. Face recognition is the initial step towards building a comprehensive social interaction assistant that will identify and interpret facial expressions, emotions and gestures. Experiments conducted for selecting a face recognition algorithm that works despite changes in facial pose and illumination angle are reported. Performance details of the face recognition algorithms tested on the device are presented along with the overall performance of the system. The specifics of the hardware components used in the wearable device are mentioned and the block diagram of the wearable system is explained in detail.
Sparsha: a comprehensive Indian language toolset for the blind BIBAFull-Text 114-120
  Anirban Lahiri; Satya Jyoti Chattopadhyay; Anupam Basu
Braille and audio feedback based systems have vastly improved the lives of the visually impaired across much of the globe. However, more than 13 million visually impaired people in the Indian sub-continent have not been able to benefit much from such systems, primarily due to the difference in the technology required for Indian languages compared to other popular languages of the world. In this paper, we describe the Sparsha toolset. The contribution made by this research has enabled the visually impaired to read and write in Indian vernaculars with the help of a computer.

Designing for individuals with memory and cognitive disabilities

Semantic knowledge in word completion BIBAFull-Text 121-128
  Jianhua Li; Graeme Hirst
We propose an integrated approach to interactive word-completion for users with linguistic disabilities in which semantic knowledge combines with n-gram probabilities to predict semantically more-appropriate words than n-gram methods alone. First, semantic relatives are found for English words, specifically for nouns, and they form the semantic knowledge base. The selection process for these semantically related words is first to rank the pointwise mutual information of co-occurring words in a large corpus and then to identify the semantic relatedness of these words by a Lesk-like filter. Then, the semantic knowledge is used to measure the semantic association of completion candidates with the context. Those that are semantically appropriate to the context are promoted to the top positions in prediction lists due to their high association with context. Experimental results show a performance improvement when using the integrated model for the completion of nouns.
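The first stage described above, ranking co-occurring words by pointwise mutual information, can be sketched as follows. This is a generic PMI estimate over a toy corpus, not the authors' implementation, corpus, or filter.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_scores(sentences):
    """PMI(x, y) = log2(p(x, y) / (p(x) * p(y))), estimated from
    sentence-level co-occurrence counts."""
    word_counts = Counter()
    pair_counts = Counter()
    n = len(sentences)
    for sent in sentences:
        words = set(sent.split())
        word_counts.update(words)
        pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))
    scores = {}
    for pair, count in pair_counts.items():
        x, y = tuple(pair)
        p_xy = count / n
        scores[pair] = math.log2(p_xy / ((word_counts[x] / n) * (word_counts[y] / n)))
    return scores

# Toy corpus: "doctor" and "nurse" always co-occur; "doctor" and "car" never do.
corpus = ["doctor nurse hospital", "doctor nurse ward", "car road traffic"]
scores = pmi_scores(corpus)
print(scores[frozenset(("doctor", "nurse"))])  # positive PMI: log2(1.5)
```

High-PMI pairs would then pass through a relatedness filter before being admitted to the semantic knowledge base.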
Research-derived web design guidelines for older people BIBAFull-Text 129-135
  Sri Kurniawan; Panayiotis Zaphiris
This paper presents the development of a set of research-derived ageing-centred Web design guidelines. An initial set of guidelines was first developed through an extensive review of the HCI and ageing literature; a series of classification methods (card sorting and affinity diagrams) was then employed to obtain a revised and more robust set of guidelines. A group of older Web users were then involved in evaluating the usefulness of the guidelines. To provide evaluation context for these users, two websites targeted to older people were used. This study makes several contributions to the field. First, it is perhaps the first manuscript that proposes ageing-friendly guidelines that are for the most part backed by published studies. Second, the guidelines proposed in this study have been thoroughly examined through a series of expert and user verifications, which should give users of these guidelines confidence in their validity.
Autism/excel study BIBAFull-Text 136-141
  Mary Hart
Five high school students with ASD (autistic spectrum disorder) participating in the Excel/Autism study were able to demonstrate mastery of a set of Excel topics. The Excel curriculum covered approximately the same topics as the Excel portion of Computer Business Applications, a class for regular education students at Fox Chapel Area High School, a high school in suburban Pittsburgh. The students with ASD were provided with one-on-one tutoring support. Two of the five ASD participants self-initiated activities and engaged in generative thinking to a substantial degree over the course of the eight instructional sessions for which data were recorded. Two others demonstrated lesser amounts of this behavior, and one participant did not demonstrate any. Relative to a comparison group of three students with ASD who did not receive instruction in Excel, the experimental participants demonstrated a significant improvement in a multi-step planning task.
Requirements gathering with Alzheimer's patients and caregivers BIBAFull-Text 142-149
  Kirstie Hawkey; Kori M. Inkpen; Kenneth Rockwood; Michael McAllister; Jacob Slonim
Technology may be able to play a role in improving the quality of life for Alzheimer's patients and their caregivers. We are evaluating the feasibility of an information appliance with the goal of alleviating repetitive questioning behaviour, a contributing factor to caregiver stress. Interviews were conducted with persons with Alzheimer's disease and their caregivers to determine the nature of the repetitive questioning behaviour, the information needs of patients, and the interaction abilities of both the patients and the caregivers. We report results of these interviews and discuss the challenges of requirements gathering with persons with Alzheimer's disease and the feasibility of introducing an information appliance to this population.

Assistive technologies for individuals with visual impairments II

Automating tactile graphics translation BIBAFull-Text 150-157
  Richard E. Ladner; Melody Y. Ivory; Rajesh Rao; Sheryl Burgstahler; Dan Comden; Sangyun Hahn; Matthew Renzelmann; Satria Krisnandi; Mahalakshmi Ramasamy; Beverly Slabosky; Andrew Martin; Amelia Lacenski; Stuart Olsen; Dmitri Groce
Access to graphical images (bar charts, diagrams, line graphs, etc.) that are in a tactile form (representation through which content can be accessed by touch) is inadequate for students who are blind and take mathematics, science, and engineering courses. We describe our analysis of the current work practices of tactile graphics specialists who create tactile forms of graphical images. We propose automated means by which to improve the efficiency of current work practices.
   We describe the implementation of various components of this new automated process, which includes image classification, segmentation, simplification, and layout. We summarize our development of the tactile graphics assistant, which will enable tactile graphics specialists to be more efficient in creating tactile graphics both in batches and individually. We describe our unique team of researchers, practitioners, and student consultants who are blind, all of whom are needed to successfully develop this new way of translating tactile graphics.
SmartColor: disambiguation framework for the colorblind BIBAFull-Text 158-165
  Ken Wakita; Kenta Shimamura
Visual communication between an author and a colorblind reader fails when color effects that the author expects the reader to experience are not observed. The proposed framework allows the author to annotate a colored document with his/her intended color effects. These annotations are used to generate a repainted document that lets the colorblind reader experience color effects similar to those a person with normal color vision experiences with the original document. The annotations are formulated as a set of mathematical constraints that can describe several commonly used color effects. The constraints are defined over the normal-vision color space, then projected onto the restricted color space corresponding to the one the colorblind reader perceives. Finally, the projected constraints are resolved to find the repainting of the document that most successfully presents to the colorblind reader the color effects experienced by a person with normal vision viewing the original document. The effectiveness of the proposal is shown by colorblind simulation.
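The projection into a restricted color space can be illustrated with a toy computation. The sketch below is not the SmartColor framework: it uses a commonly cited linear approximation of protanopia (the matrix values follow widely used simulation approximations and should be treated as illustrative) to show how a color difference that is large for normal vision shrinks in the restricted space.

```python
import math

# Approximate linear-RGB protanopia simulation matrix (illustrative values).
PROTANOPIA = [
    [0.567, 0.433, 0.000],
    [0.558, 0.442, 0.000],
    [0.000, 0.242, 0.758],
]

def simulate(rgb):
    """Project a linear-RGB color into the protanope's restricted space."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in PROTANOPIA)

def distance(a, b):
    """Euclidean distance between two colors."""
    return math.dist(a, b)

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
orig_dist = distance(red, green)                     # large for normal vision
sim_dist = distance(simulate(red), simulate(green))  # much smaller after projection
print(orig_dist, sim_dist)
```

A constraint such as "these two regions must remain distinguishable" would then be checked, and the repainting optimized, against distances measured in the projected space rather than the original one.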
Automatic production of tactile graphics from scalable vector graphics BIBAFull-Text 166-172
  Stephen E. Krufka; Kenneth E. Barner
This paper presents a method to convert vector graphics into tactile representations for the blind. Generating tactile pictures from vector graphics is an important step toward making the WWW, as well as other means of communication, more accessible, since vector graphics are an increasing trend in web-based graphics. Prior research has investigated methods that extract object boundaries from images to produce raised-line tactile pictures. The proposed method extends this idea to vector graphics, producing tactile pictures in which important outlines are emphasized. Important outlines are determined using the hierarchical structure of a vector graphic. A Braille printer embosses raised dots for the outlining boundaries: important and detail boundaries are embossed with dots of larger and smaller height, respectively, while all other regions contain no raised dots. Results testing a person's ability to discriminate, identify, and comprehend tactile pictures show the proposed method's advantage over two other methods.
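The idea of ranking outlines by their position in a vector graphic's hierarchy can be sketched as follows. This is a minimal illustration, not the authors' system; the sample SVG document and the depth heuristic are invented for the example.

```python
import xml.etree.ElementTree as ET

# Invented sample document: a frame, a figure, and a nested facial detail.
SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect id="frame" width="100" height="100"/>
  <g id="figure">
    <circle id="head" r="10"/>
    <g id="face">
      <circle id="eye" r="1"/>
    </g>
  </g>
</svg>"""

def shape_depths(svg_text, shapes=("rect", "circle", "ellipse", "path", "line")):
    """Map each drawable shape's id to its depth in the <g> hierarchy;
    shallower shapes are candidates for emphasized (taller) tactile outlines."""
    root = ET.fromstring(svg_text)
    depths = {}
    def walk(elem, depth):
        for child in elem:
            tag = child.tag.split("}")[-1]  # strip the XML namespace prefix
            if tag in shapes:
                depths[child.get("id")] = depth
            walk(child, depth + 1)
    walk(root, 0)
    return depths

d = shape_depths(SVG)
print(d)  # {'frame': 0, 'head': 1, 'eye': 2}
```

Shallow outlines would then be embossed with taller dots and deep ones with shorter dots, matching the emphasis scheme described in the abstract.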
3D sound interactive environments for problem solving BIBAFull-Text 173-179
  Jaime Sanchez; Mauricio Saenz
Audio-based virtual environments have been increasingly used to foster cognitive and learning skills. A number of studies have also highlighted that the use of technology can help learners to develop affective skills such as motivation and self-esteem. This study presents the design and usability of 3D interactive environments for children with visual disabilities to help them solve problems related to Chilean geography and culture. We introduce AudioChile, a virtual environment that can be navigated through 3D sound to enhance spatiality and immersion throughout the environment. 3D sound is used to orientate, avoid obstacles, and identify the position of diverse characters and objects within the environment. We found during usability evaluation that sound can be fundamental for attention and motivation purposes during interaction. Learners clearly identified and differentiated environmental sounds when solving everyday problems involving spatial orientation and laterality.

Posters & demos

Online focus groups used as an accessible participatory research method BIBAFull-Text 180-181
  Ted L. Wattenberg
Participatory research methods are being used internationally to gather data on complex social, cultural, and political concerns that affect the use of technology [4]. Researchers have found it difficult to include people with disabilities in these studies [5, 6, 7]. The Accessible Learning Through Text-to-Speech Project will utilize online focus groups as a method of integrating people with disabilities into a participatory research project. The Alt-Learning Project will have three primary target populations: sighted users of screen readers, users of screen readers who are blind, and professionals responsible for the delivery of assistive technology. The online focus groups will allow the observation and collection of data as participants normally use their screen reader applications at home, school, or the workplace.
PLUMB: displaying graphs to the blind using an active auditory interface BIBAFull-Text 182-183
  Robert F. Cohen; Rui Yu; Arthur Meacham; Joelle Skaff
We present our ongoing research in the communication of graphs and relational information to blind users. We have developed a system called exPLoring graphs at UMB (PLUMB) that displays a drawn graph on a tablet PC and uses auditory cues to help a blind user navigate the graph. This work has applications to assist blind individuals in Computer Science education, navigation and map manipulation.
Gestural text entry on multiple devices BIBAFull-Text 184-185
  Jacob O. Wobbrock; Brad A. Myers
We present various adaptations of the EdgeWrite unistroke text entry method that work on multiple computer input devices: styluses, touchpads, displacement and isometric joysticks, four keys or buttons, and trackballs. We argue that consistent, flexible, multi-device input is important to both accessibility and to ubiquitous computing. For accessibility, multi-device input means users can switch among devices, distributing strain and fatigue among different muscle groups. For ubiquity, it means users can "learn once, write anywhere," even as new devices emerge. By considering the accessibility and ubiquity of input techniques, we can design for both motor-impaired users and "situationally impaired" able-bodied users who are on-the-go. We discuss the requirements for such input and the challenges of multi-device text entry, such as solving the segmentation problem. This paper accompanies a demonstration of EdgeWrite on multiple devices.
Interactive virtual client for teaching occupational therapy evaluative processes BIBAFull-Text 186-187
  Sharon Stansfield; Tom Butkiewicz; Evan Suma; Marilyn Kane
In this paper, we describe our current work in developing a computer-based educational tool for Occupational Therapy students learning client evaluation techniques. The software is dialog-based and allows the student to interact with a virtual client. Students carry out an evaluation, following the appropriate procedures and assessing both the client's physical and emotional state as they proceed. Students' actions are saved to a file for instructor and self evaluation of their performance. The software is being developed using the Source game engine SDK developed by Valve.
Touchable online braille generator BIBAFull-Text 188-189
  Wooseob Jeong
Using force feedback technology, which has been used in video games for years, a prototype of an online Braille generator was developed for visually impaired or blind users. Without any expensive devices, the prototype lets sightless persons use the information on the web by touching the output Braille displays on screen with a mouse. User studies will be conducted with blind people, and their data will provide valuable information about the optimal conditions for the online Braille display in the prototype, such as how strong the force should be and how big the Braille dots should be. The final product of this research will enable visually impaired people to enjoy all the library services and resources, as well as the enormous amount of information on the web, more freely.
Solo: interactive task guidance BIBAFull-Text 190-191
  Edmund LoPresti; Ned Kirsch; Richard Simpson; Debra Schreckenghost
Solo is a cognitive assistive device which provides scheduling support and interactive task guidance. Solo includes user interfaces for the person with a disability and the caregiver, as well as a Cognition Manager which manages schedules and responds to unplanned events.
An adaptive technologies course in a CS curriculum BIBAFull-Text 192-193
  Blaise W. Liffick
This poster describes part 2 of the 2-year project "Integrating Assistive Technology into an Undergraduate Computer Science Curriculum from an HCI Approach," funded by the National Science Foundation [3]. (Part I of this project is documented in [1, 2].) The intent of this phase of the project is to introduce the topic of computerized aids for the disabled (generally called assistive or adaptive technology (AT)) as an advanced elective course offered for senior Computer Science majors. This report will briefly describe some of the topics to be covered in this new course, how these topics fit within the CS curriculum, sample assignments, and the laboratory equipment used to support demonstrations and assignments. This course is more fully described in [4].
iSonic: interactive sonification for non-visual data exploration BIBAFull-Text 194-195
  Haixia Zhao; Catherine Plaisant; Ben Shneiderman
iSonic is an interactive sonification tool for vision impaired users to explore geo-referenced statistical data, such as population or crime rates by geographical regions. Users use a keyboard or a smooth surface touchpad to interact with coordinated map and table views of the data. The integrated use of musical sounds and speech allows users to grasp the overall data trends and to explore the data to get more details. Scenarios of use are described.
A system for creating personalized synthetic voices BIBAFull-Text 196-197
  Debra Yarrington; Chris Pennington; John Gray; H. Timothy Bunnell
We will be demonstrating the ModelTalker Voice Creation System, which allows users to create a personalized synthetic voice with an unrestricted vocabulary. The system includes a tool for recording a speech inventory and a program that converts the recorded inventory into a synthetic voice for the ModelTalker TTS engine. The entire system can be downloaded for use on a home PC or in a clinical setting, and the resulting synthetic voices can be used with any SAPI compliant system.
   We will demonstrate the recording process, and convert the recordings to a mini-database with a limited vocabulary for participants to hear.
How to operate a PC without using the hands BIBAFull-Text 198-199
  Torsten Felzer; Rainer Nordmann
We will give a demo of a biosignal interface that allows a user to operate a Windows PC without using the hands. The system, called HaMCoS (Hands-free Mouse Control System), enables its user to simulate mouse clicks and movements by issuing intentional contractions of a single muscle of choice. Thus, even a person with very severe physical disabilities can operate a PC with HaMCoS, provided all interaction relies exclusively on mouse input. The framework built around the system's Main Module is optimized in this respect: it offers a comfortable keyboard-free user interface (e.g., comprising large, easily clickable buttons).
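The single-muscle control style HaMCoS describes can be sketched as a contraction detector feeding a scanning menu. This is a minimal illustrative sketch only, not the HaMCoS implementation: the threshold value, action list, and function names are all assumptions.

```python
# Illustrative sketch of single-muscle PC control (not the actual HaMCoS code).
# A contraction is detected when the biosignal crosses a threshold; a scanning
# menu then maps when the contraction occurred to a simulated mouse action.

ACTIONS = ["move_up", "move_down", "move_left", "move_right", "click"]  # assumed menu

def detect_contractions(signal, threshold=0.5):
    """Return sample indices where the signal rises above the threshold."""
    events, above = [], False
    for i, sample in enumerate(signal):
        if sample >= threshold and not above:
            events.append(i)       # one event per upward crossing
            above = True
        elif sample < threshold:
            above = False
    return events

def scan_select(steps_elapsed):
    """Map the number of scanning steps elapsed before a contraction to an action."""
    return ACTIONS[steps_elapsed % len(ACTIONS)]
```

A contraction issued while the scanning highlight rests on `click` would then trigger a mouse click, so the whole interface remains reachable from one muscle.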
visiBabble demo BIBAFull-Text 200-201
  Harriet Fell; Joel MacAuslan; Jun Gong; Josh Ostrow
The visiBabble system responds with animations to an infant's syllable-like productions and records the acoustic-phonetic analysis. The system reinforces production of syllabic utterances associated with later language and cognitive development. This demo will show off new animated responses and recent improvements in acoustic-phonetic feature detection.
DHTML accessibility: solving the JavaScript accessibility problem BIBAFull-Text 202-203
  Becky Gibson; Richard Schwerdtfeger
This project demonstrates fully keyboard-accessible components on a web page working with a screen reader. By adding appropriate semantic data to web components and having user agents translate it to the platform accessibility application programming interfaces, the user interface of a web site can be made fully accessible to keyboard-only and visually impaired users. In addition, the web component interface will operate in the same manner as client application components.
MathPlayer: web-based math accessibility BIBAFull-Text 204-205
  Neil Soiffer
MathPlayer is a plug-in for Microsoft's Internet Explorer (IE) that renders MathML [11] visually. It also contains a number of features that make mathematical expressions accessible to people with print disabilities. MathPlayer integrates with many screen readers, including JAWS and Window-Eyes, and also works with a number of TextHELP!'s learning-disability products.
Multimodal user input patterns in a non-visual context BIBAFull-Text 206-207
  Xiaoyu Chen; Marilyn Tremaine
How do users choose between speech and hand input to perform tasks when a non-visual interface offers equivalent choices between the two modalities? This exploratory study investigates that question using AudioBrowser, a non-visual information access system for the visually impaired. Findings include: (1) Users chose between input modalities based on the type of operation undertaken. Navigation operations primarily used hand input on the touchpad, while non-navigation instructions primarily used speech input. (2) Surprisingly, multimodal error correction was not prevalent. Repeating a failed operation until it succeeded, and trying other methods within the same input modality, were the dominant error-correction strategies. (3) The modality learned first was not necessarily the primary modality used later, but a training-order effect existed. These empirical results have implications for designing non-visual multimodal input dialogues.
Emerging issues, solutions & challenges from the top 20 issues affecting web application accessibility BIBAFull-Text 208-209
  David Hoffman; Lisa Battle
We will describe emerging accessible design issues, based on a second in-depth analysis of hundreds of accessibility issues documented in real projects, and a comparison of those results to a prior study of 1000+ accessibility issues. This poster will demonstrate recent trends in the top 20 UI design situations that are likely to pose problems for users with disabilities; highlight several creative design solutions; and identify several challenges that lack adequate solutions.
Verification of computer display pre-compensation for visual aberrations in an artificial eye BIBAFull-Text 210-211
  Miguel Alonso, Jr.; Armando Barreto; Julie A. Jacko; Malek Adjouadi
The possibility of pre-compensating images in a computer display according to the visual aberrations previously assessed in an optical system (e.g., the computer user's eye) has been confirmed for a simple "artificial eye". This device was constructed from optical components, including a plano-convex lens, an adjustable aperture, and a Charge-Coupled Device (CCD) array that mimics the retina of a real eye. While the CCD array allows for the inspection of the image as it would form on the retina of a real eye, its specular reflection does not allow the resulting "artificial eye" to be measured appropriately in a wavefront analyzer (a necessary prerequisite for the image precompensation process). Therefore, an alternative, interchangeable CCD array covered with gray paint (i.e., disabled) was also created to provide the diffuse reflectivity that is presumed in the operation of the wavefront analyzer. Experiments with this system show that the visual aberrations in a properly characterized optical system can, in fact, be precompensated by the methods proposed by Alonso et al. [1]. These same experiments, however, reveal the need to adjust the precompensation method according to the effective pupil diameter in the system during viewing.
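The precompensation idea can be illustrated by inverse filtering: if the aberrated optics are modeled as a point-spread function (PSF), predistorting the displayed image by a regularized inverse of the PSF spectrum yields an image that the optics blur back to roughly the original. This is a generic sketch under that assumption, not the specific method of Alonso et al. [1]; the PSF, regularization constant `eps`, and function names are illustrative.

```python
import numpy as np

def blur(image, psf):
    """Simulate the aberrated optics: circular convolution with the PSF."""
    H = np.fft.fft2(psf, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

def precompensate(image, psf, eps=1e-6):
    """Wiener-style inverse filter: predistort the image so the optics undo it."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse of H
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

With a well-conditioned PSF, `blur(precompensate(img, psf), psf)` recovers the original almost exactly; the paper's observation that the effective pupil diameter matters corresponds here to the PSF, and hence its spectrum, changing with the aperture.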
A parametric approach to sign language synthesis BIBAFull-Text 212-213
  Amanda Irving; Richard Foulds
In this paper we discuss the progress made toward accurate synthesis of signs in American Sign Language (ASL) using a finite number of descriptive parameters. A sign editor produces elements of a sign inventory that can be used with a commercially available human avatar to allow the generation of signed sentences from written or spoken text.
Graphical arithmetic for learners with dyscalculia BIBAFull-Text 214-215
  Lena Pareto
We propose a model for arithmetic, based on graphical representations, to complement the symbolic language of mathematics. The focus is conceptual understanding of arithmetic. We argue that the graphical model supports understanding of concepts known to be difficult for learners with dyscalculia, such as number sense and the decimal system. The proposed graphical representation shares properties with the decimal system, but is closer to the semantic representation of numbers that is vital to number sense. The model has been evaluated with schoolchildren, but needs to be further tested with learners with dyscalculia.
iCARE interaction assistant: a wearable face recognition system for individuals with visual impairments BIBAFull-Text 216-217
  Sreekar Krishna; Greg Little; John Black; Sethuraman Panchanathan
This presentation demonstrates a working prototype of the iCare Interaction Assistant, a wearable assistive device based on research aimed at facilitating the social interactions of people who are blind or visually impaired. Using a tiny unobtrusive camera mounted inside the nose bridge of a pair of eyeglasses, this prototype is able to learn and recognize faces at distances of up to 10 feet, thus allowing the user to initiate conversations with persons in their vicinity without waiting for others to approach them. Ongoing work is aimed at facilitating the subsequent verbal interaction by recognizing and interpreting non-verbal communication, including eye contact, facial expressions, emotions, and gestures.
User modeling for individuals with disabilities: a pilot study of word prediction BIBAFull-Text 218-219
  Abhishek Agarwal; Richard Simpson
We are developing user models that predict how a word prediction system affects performance on a text entry task for individuals with disabilities. In this paper we describe the instrumentation, test-bed software and analytic methods that we are using to collect pilot data.
BlackBoardNV: a system for enabling non-visual access to the blackboard course management system BIBFull-Text 220-221
  Vineet Enagandula; Niraj Juthani; I. V. Ramakrishnan; Devashish Rawal; Ritwick Vidyasagar