Second Annual ACM SIGACCESS Conference on Assistive Technologies

Fullname: Second International ACM Conference on Assistive Technologies
Location: Vancouver, Canada
Dates: 1996-Apr-11 to 1996-Apr-12
Publisher: ACM
Standard No: ACM ISBN 0-89791-776-6; ACM Order Number 444960
Papers: 22
Pages: 146
  1. Keynote Address
  2. The User Interface -- I
  3. World Wide Web Issues
  4. Vision Impairments -- I
  5. Empirical Studies
  6. The User Interface -- II
  7. Panel Discussion
  8. Multimedia
  9. Vision Impairments -- II

Keynote Address

Beyond Assistive Technology: Universal Design Goes to School, p. 1
  David Rose
For centuries, print technologies have been the dominant means of learning and expression in our schools. However, print technologies are not equally accessible to all students: those with sensory, physical and learning disabilities face particular barriers in print. Schools have attempted to overcome these barriers with a variety of adaptations such as special classrooms, therapies, and assistive technologies. As schools move to a more inclusive multimedia platform, new opportunities arise to eliminate barriers for children with disabilities. These opportunities can be realized only through the universal design of educational media and materials. The prospects of universal design in education will be the focus of this presentation.

The User Interface -- I

Touching and Hearing GUI's: Design Issues for the PC-Access System, pp. 2-9
  Christophe Ramstein; Odile Martial; Aude Dufresne; Michel Carignan; Patrick Chasse; Philippe Mabilleau
PC-Access is a system combining hardware and software to provide multimodal feedback in the Microsoft Windows graphical interface and within its applications. We propose two versions of PC-Access: one which offers sound feedback with an enhanced drawing tablet, and another in which tactile stimuli are synthesized by a haptic pointing device. With the second version, the user is able to perceive interface objects (e.g., icons, menus, windows) as well as actions (e.g., moving, re-sizing). PC-Access thus offers auditory information (non-verbal sounds and voice synthesis), reinforced by the sense of touch, which in turn supports direct manipulation.
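The abstract gives no implementation detail; purely as a hedged sketch, the mapping it describes, in which each interface object or action is paired with a non-verbal sound, a haptic effect and optional speech, might look like the following (all names and effects are invented, not taken from PC-Access):

    # Hypothetical event-to-feedback table in the spirit of PC-Access:
    # each GUI object or action gets an auditory cue (version 1) and a
    # haptic cue (version 2), plus speech for the object's name.
    from dataclasses import dataclass

    @dataclass
    class GuiEvent:
        name: str    # e.g. "icon_enter", "window_resize"
        label: str   # spoken name of the object, "" if none

    FEEDBACK = {
        "icon_enter":    ("soft_click",  "small_bump"),
        "menu_open":     ("rising_tone", "light_detent"),
        "window_resize": ("ratchet",     "spring_resistance"),
    }

    def on_gui_event(ev, play, pulse, speak):
        sound, effect = FEEDBACK.get(ev.name, (None, None))
        if sound:
            play(sound)        # non-verbal sound (tablet version)
        if effect:
            pulse(effect)      # tactile stimulus (haptic-pointer version)
        if ev.label:
            speak(ev.label)    # voice synthesis

    # Dry run with print() standing in for the three output channels:
    on_gui_event(GuiEvent("icon_enter", "Trash"), print, print, print)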
Enhancing Scanning Input with Non-Speech Sounds, pp. 10-14
  Stephen A. Brewster; Veli-Pekka Raty; Atte Kortekangas
This paper proposes the addition of non-speech sounds to aid people who use scanning as their method of input. Scanning input is a temporal task: users must press a switch when a cursor is over the required target. However, it is usually presented as a spatial task, with the items to be scanned laid out in a grid. Research has shown that for temporal tasks the auditory modality is often better than the visual. This paper investigates this by adding non-speech sound to a visual scanning system. It also shows how our natural ability to perceive rhythms can be supported so that it can aid the scanning process. Structured audio messages called Earcons were used for the sound output. The results of a preliminary investigation were favourable, indicating that the idea is feasible and that further research should be undertaken.
Keywords: Non-speech sound, Earcons, Scanning input, Multimodal interaction
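As a minimal sketch of the scanning scheme the abstract describes (a grid of items, a switch press on the target, rhythmic non-speech cues), with earcon names, grid contents and timing all invented:

    import time

    GRID = [["yes", "no"], ["help", "stop"]]
    PERIOD = 0.8  # seconds per scan step; sets the rhythm the user learns

    def scan(grid, switch_pressed, play_earcon):
        """Step through the grid, sounding an earcon at each step, until
        the user's switch is pressed over the wanted item."""
        while True:
            for r, row in enumerate(grid):
                play_earcon("row_motif", r)      # marks the start of a row
                for c, item in enumerate(row):
                    play_earcon("cell_tick", c)  # per-cell rhythmic cue
                    time.sleep(PERIOD)
                    if switch_pressed():
                        return item

    # Dry run: a switch that fires immediately selects the first item.
    print(scan(GRID, lambda: True, lambda *a: None))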
A Study of Input Device Manipulation Difficulties, pp. 15-22
  Shari Trewin
People with a motor disability affecting their use of the keyboard and/or mouse often make unintentional input errors. Little or no quantified data exists on physical errors in the use of standard computer input devices, particularly with respect to motor disabilities.
   Such information, if available, could be used to develop techniques for the automatic recognition of specific difficulties. Once recognised, many of these difficulties can be reduced or eliminated by appropriate system and application configuration.
   This paper describes the pilot study for an experiment intended to gather detailed information about input errors made with keyboards and mice. This work is a step towards provision of dynamic, automatic support for the configuration of systems and applications to suit individual users.
   Some initial results from the pilot study are presented, including an assessment of the experiment design and a summary of some interesting characteristics of the data gathered so far.
Keywords: Keyboard, Mouse, Errors, Physical disability, Input devices, Input logging
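One concrete example of the kind of automatic recognition the abstract points toward (the rule, threshold and log format below are illustrative, not Trewin's): key presses held past the auto-repeat delay suggest raising that delay in the system configuration.

    # (key, press_ms, release_ms) triples as they might appear in an input log
    events = [("a", 0, 90), ("b", 400, 1450), ("c", 2000, 2110)]

    REPEAT_DELAY_MS = 500  # assumed default keyboard auto-repeat delay

    # Holds longer than the repeat delay produce unintended repeated letters.
    long_holds = [(k, up - down) for k, down, up in events
                  if up - down > REPEAT_DELAY_MS]
    if long_holds:
        print("possible unintended repeats:", long_holds)
        print("suggestion: increase the keyboard repeat delay")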

World Wide Web Issues

V-Lynx: Bringing the World Wide Web to Sight Impaired Users, pp. 23-26
  Mitchell Krell; Davor Cubranic
The World Wide Web (WWW) project merges the techniques of networked information and hypertext to create an easy-to-use but powerful global information system. A client program called a browser is used to access documents in the WWW system and present them as parts of a seamless hypertext information space.
   However, today's browsers are primarily graphics- or text-oriented, which makes the whole system inaccessible to sight-impaired users. In this project we wanted to extend an existing browser with voice output. This extension would allow the sight-impaired to use at least the textual data which, at present, forms the bulk of the information available over the Web. Our browser should be able to read a document a line or paragraph at a time, read only the first sentence of each paragraph for quick scanning of the document, convey the document structure (headings, emphasized text, lists, hyperlinks), and allow easy navigation within and between documents.
Keywords: WWW, Web browser, Audio, Voice, Lynx, URL, HTTP protocol, Web navigation, Hypertext
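The first-sentence skimming mode described above can be sketched as follows (a hedged illustration only; V-Lynx itself was an extension of the Lynx text browser, not Python):

    import re

    def skim(paragraphs, speak):
        """Speak only the first sentence of each paragraph."""
        for p in paragraphs:
            m = re.match(r".*?[.!?](?=\s|$)", p.strip(), re.DOTALL)
            speak(m.group(0) if m else p)

    skim(["First point. More detail here.",
          "Second point! Even more detail."], print)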
Computer Generated 3-Dimensional Models of Manual Alphabet Hand Shapes for the World Wide Web, pp. 27-31
  Sarah Geitz; Timothy Hanson; Stephen Maher
A teaching tool has been developed consisting of a collection of three-dimensional computer graphic models representing American Sign Language manual alphabet hand shapes in various locations and orientations. These models have been recorded in the Virtual Reality Modeling Language (VRML) [1] for display with World Wide Web browsers such as Netscape or Mosaic, in conjunction with VRML browsers such as WebSpace or WorldView.
Keywords: ASL, VRML, Virtual reality, World Wide Web
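The hand-shape models themselves are not reproduced here; as a rough illustration only, the following writes a minimal VRML 1.0 file of the kind 1996-era VRML browsers such as WebSpace could load, with a placeholder sphere standing in for a digitized hand shape:

    def write_vrml(path, translation=(0, 0, 0)):
        """Write a trivial VRML 1.0 scene (placeholder geometry)."""
        x, y, z = translation
        with open(path, "w") as f:
            f.write("#VRML V1.0 ascii\n")
            f.write("Separator {\n")
            f.write(f"  Transform {{ translation {x} {y} {z} }}\n")
            f.write("  Sphere { radius 1 }\n")  # stands in for a hand model
            f.write("}\n")

    write_vrml("handshape_placeholder.wrl")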

Vision Impairments -- I

Emacspeak -- Direct Speech Access, pp. 32-36
  T. V. Raman
Emacspeak is a full-fledged speech output interface to Emacs, and is being used to provide direct speech access to a UNIX workstation. The kind of speech access provided by Emacspeak is qualitatively different from what conventional screen-readers provide: Emacspeak makes applications speak, as opposed to speaking the screen.
   Emacspeak is the first full-fledged speech output system that allows someone who cannot see to work directly on a UNIX system. (Until now, the only option available to visually impaired users has been to use a talking PC as a terminal.) Emacspeak is built on top of Emacs. Once Emacs is started, the user gets complete spoken feedback.
   I currently use Emacspeak at work on my Sun SPARCstation and have also used it on a DEC Alpha workstation under Digital UNIX while at Digital's CRL. I also use Emacspeak as the only speech output system on my laptop running Linux.
   Emacspeak is available on the Internet:
   FTP ftp://crl.dec.com/pub/digital/emacspeak/
   WWW http://www.research.digital.com/CRL
Keywords: Direct speech access, Access to UNIX workstations
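Emacspeak's key idea, applications that speak rather than a reader that speaks the screen, can be loosely illustrated outside Emacs Lisp as advice wrapped around commands so that each command reports its own semantic result (a sketch of the concept, not Emacspeak's code):

    def speaks(describe):
        """Decorator: make a command speak a description of its effect."""
        def wrap(cmd):
            def run(*args, **kw):
                result = cmd(*args, **kw)
                print("SPEAK:", describe(result))  # stub for a synthesizer
                return result
            return run
        return wrap

    @speaks(lambda line: f"moved to line: {line}")
    def next_line(buffer, pos):
        return buffer[pos + 1]

    next_line(["first", "second"], 0)  # speaks "moved to line: second"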
Combining Haptic and Braille Technologies: Design Issues and Pilot Study, pp. 37-44
  Christophe Ramstein
This article describes design issues for a bi-dimensional single-cell braille display, called Pantobraille, which combines a standard braille cell with a force feedback device developed as part of the CITI's PC-Access project. The Pantobraille, with a 10 x 16 cm workspace, allows the user to position the pointer on a graphical interface, to perceive forms and textures through the sense of touch, and to read braille text on a bi-dimensional page. In order to determine the usability of such a device and to better understand the issues that arise when manipulating it in actual interactive tasks, two visually handicapped persons were asked to use the device to follow reading patterns with one or two hands. Reading performance and comfort with the Pantobraille remain inferior to standard braille displays, but significant improvements were observed when the complementary pointing and reading tasks were performed using both hands.
Keywords: Single cell braille display, Haptic interface, Force feedback device, Braille display
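A hedged sketch of the combined pointing-and-reading loop the article implies (the interface below is invented): as the force-feedback pointer crosses a line of text, the single braille cell is refreshed with the dot pattern of the character under it.

    # Standard literary braille dot numbers for a few letters.
    BRAILLE_DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}}

    def refresh_cell(text, pointer_x, char_width, set_pins):
        index = int(pointer_x // char_width)  # character under the pointer
        if 0 <= index < len(text):
            set_pins(BRAILLE_DOTS.get(text[index], set()))

    refresh_cell("abc", 25.0, 10.0, print)    # raises dots 1 and 4 ('c')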
Interactive Tactile Display System -- A Support System for the Visually Disabled to Recognize 3D Objects, pp. 45-50
  Yoshihiro Kawai; Fumiaki Tomita
We have developed an interactive tactile display system that allows visually disabled people to actively explore and recognize three-dimensional objects or environments. The display presents visual patterns via tactile pins arranged in a two-dimensional array. Each pin's height can be set to one of several levels, both to increase the information conveyed by touch and to display a three-dimensional surface shape. In addition, each pin has a tact switch at its base, so the user can report a position to the system by pressing the pin. This paper describes the hardware and software of the system.
Keywords: Tactile display, The visually disabled, Interactive interface, Stereo vision, 3D
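As a rough sketch of driving such a display (pin counts, level count and callback names are assumed, not taken from the paper): quantize a depth map to the pins' discrete height levels, and accept position reports from the tact switches.

    LEVELS = 4  # the abstract says "several levels"; four is assumed here

    def render(depth_map, set_pin):
        """depth_map: 2D list of values in [0, 1], one pin per entry."""
        for y, row in enumerate(depth_map):
            for x, d in enumerate(row):
                set_pin(x, y, round(d * (LEVELS - 1)))  # discrete pin height

    def on_tact_switch(x, y):
        print(f"user pressed pin ({x}, {y})")  # position fed back to system

    render([[0.0, 0.5], [1.0, 0.2]], lambda x, y, h: print(x, y, h))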
AudioGraf: A Diagram-Reader for the Blind, pp. 51-56
  Andrea R. Kennel
In technical reports and papers, interrelations are often represented as diagrams. With the aid of a touch panel and an auditory display, AudioGraf enables blind and visually impaired people to read such diagrams. The diagram is displayed on the touch panel, where a part can be selected with a finger. The selected part is then displayed auditorily. If the finger is moved, another part is selected and presented. In this way the whole diagram can be explored audio-tactually. A model of this audio-tactile exploration is presented, and based on this model it is explained how AudioGraf supports the user. Usability tests have shown that blind users can read simple diagrams within a relatively short time.
Keywords: Auditory user interfaces, Blind users, Usability, Diagram, Reading-aid
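The exploration loop described above amounts to hit-testing the finger position against diagram parts and speaking the part found; a minimal sketch, with an invented diagram:

    PARTS = [  # (x, y, width, height, spoken description)
        (0, 0, 100, 40, "box: Client"),
        (0, 80, 100, 40, "box: Server"),
        (40, 40, 20, 40, "arrow: Client sends request to Server"),
    ]

    def on_touch(fx, fy, speak):
        for x, y, w, h, label in PARTS:
            if x <= fx < x + w and y <= fy < y + h:
                speak(label)
                return
        speak("empty area")

    on_touch(50, 60, print)  # "arrow: Client sends request to Server"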

Empirical Studies

EVA, an Early Vocalization Analyzer: An Empirical Validity Study of Computer Categorization, pp. 57-63
  Harriet J. Fell; Linda J. Ferrier; Zehra Mooraj; Etienne Benson; Dale Schneider
Previous research indicates that infant vocalizations are effective predictors of later articulation and language abilities (Locke, 1989; Menyuk, Liebergott, Shultz, Chesnick & Ferrier, 1991; Oller & Seibert, 1988; Jensen, Boggild-Andersen, Schmidt, Ankerhus & Hansen, 1988). Intervention to encourage babbling activity in at-risk infants is frequently undertaken. Research and clinical diagnosis of delayed or reduced babbling have so far relied on time-consuming and unreliable perceptual analyses of recorded infant sounds. While acoustic analysis of infant sounds has provided important information on the early characteristics of infant vocalizations (Bauer, 1988; Stark, 1986), this information has yet to be used to carry out automatic, real-time analysis.
   We are developing a program, EVA, for the Macintosh computer that automatically analyzes digitized recordings of infant vocalizations. We describe the prototype and report on validity testing of the first stage of development. Our human judge and EVA agreed 92.8% on the number of utterances in the 20 minutes of recordings, commonly identifying 411 utterances. Their categorizations agreed 79.8% for duration and 87.3% for frequency, better than the human interjudge agreement reported in the literature. The authors hope that the final version of EVA will serve as a reliable standard for the analysis and evaluation of utterances of normal and at-risk infants with a variety of etiologies. The acoustic information gained from such analysis will allow us to develop a computer-based system to encourage early vocalization.
Keywords: Infants, Pre-speech vocalization, Acoustic analysis, Early intervention
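EVA's actual categorization method is not reproduced here; as a hedged sketch of the first step any such analyzer must perform, segmenting a recording into utterances, consider a simple energy threshold over fixed frames:

    def utterances(frame_energy, threshold=0.1, min_frames=3):
        """Return (start, end) frame indices of contiguous voiced runs."""
        runs, start = [], None
        for i, e in enumerate(frame_energy):
            if e >= threshold and start is None:
                start = i
            elif e < threshold and start is not None:
                if i - start >= min_frames:
                    runs.append((start, i))
                start = None
        if start is not None and len(frame_energy) - start >= min_frames:
            runs.append((start, len(frame_energy)))
        return runs

    print(utterances([0, .3, .4, .5, 0, 0, .2, .2, .2, .2, 0]))
    # -> [(1, 4), (6, 10)]: two utterances, whose count and duration could
    # then be compared against a human judge's, as in the study above.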
An Approach to the Evaluation of Assistive Technology, pp. 64-71
  Robert D. Stevens; Alistair D. N. Edwards
A valid criticism of many innovations in assistive technology is that they have not been evaluated. However, there are obstacles which make this form of technology difficult to evaluate according to conventional paradigms. The reasons behind this are discussed. A particular evaluation which endeavoured to circumvent those problems is described. The item evaluated was Mathtalk, a program to make mathematics accessible to blind people.
Keywords: Evaluation, Auditory interfaces, Earcons, Blind people, Mathematics

The User Interface -- II

Designing Interface Toolkit with Dynamic Selectable Modality, pp. 72-79
  Shiro Kawai; Hitoshi Aida; Tadao Saito
User interface systems need the flexibility to let users select their preferred modalities, since most modern applications use graphical user interfaces that force a fixed modality useful only to sighted users.
   However, the requirement for user interfaces with flexible, selectable modality is not a problem specific to disabled persons but a general problem for the next generation of interfaces, considering how rapidly the environments in which computers are used are widening.
   This paper discusses an architecture for a user interface toolkit that supports the flexibility required both by users with disabilities and by users in special environments, and proposes a model of semantic abstraction of user interaction, named abstract widgets. An experimental implementation of such a toolkit, named the Fruit system, is also described.
Keywords: Multi-modal interface, Graphical user interface, User interface management system
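A hedged sketch of the abstract-widget idea as the abstract presents it (class and function names are illustrative, not from the Fruit system): the application declares interaction semantics, such as choosing one of N options, and a per-user renderer binds them to a concrete modality.

    class AbstractChoice:
        """Semantic widget: 'choose one option'; no fixed modality."""
        def __init__(self, prompt, options):
            self.prompt, self.options = prompt, options

    def render_visual(w):
        print(w.prompt)
        for i, o in enumerate(w.options):
            print(f"  [{i}] {o}")                  # on-screen menu

    def render_speech(w):
        print(f"SPEAK: {w.prompt} Options: " + ", ".join(w.options))

    widget = AbstractChoice("Save changes?", ["yes", "no", "cancel"])
    for render in (render_visual, render_speech):  # modality chosen per user
        render(widget)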
Multimodal Input for Computer Access and Augmentative Communication, pp. 80-85
  Alice Smith; John Dunaway; Patrick Demasco; Denise Peischl
This paper describes the overall goals of a project that focuses on multimodal input for computer access and Augmentative and Alternative Communication (AAC) systems. In particular, the project explores the integration of speech recognition with head-pointing. The first part of the project addresses the use of speech and head-pointing to replace the traditional keyboard and mouse. While either of these technologies can emulate both keyboard and mouse functions, it is hypothesized that the most advantageous use will come from integrating them so that each device's strength is utilized appropriately.
   To test this hypothesis, a series of experiments is planned. The first experiment compares (quantitatively and qualitatively) each technology in the context of text generation. The second looks at typical pointing tasks (e.g., dragging) for each technology. The third will look at the technologies in an integrated context. Because each of the technologies is itself highly complex, significant time and effort has been devoted to pilot testing. Those results and their implications for our research methodology are presented in this paper.
Keywords: Multimodal input, Speech recognition, Head pointing, Assistive technology, Computer access, Augmentative and alternative communication
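The integration hypothesis can be restated as routing by task rather than by device emulation; a deliberately small sketch (function names invented):

    def handle(task, speech_text, head_xy, type_text, move_pointer):
        if task == "text":
            type_text(speech_text)   # speech recognition for text generation
        elif task == "point":
            move_pointer(*head_xy)   # head tracking for target acquisition

    handle("text", "hello world", None, print, None)
    handle("point", None, (120, 45), None, lambda x, y: print("pointer:", x, y))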
The Keybowl: An Ergonomically Designed Document Processing Device, pp. 86-93
  Peter J. McAlindon; Kay M. Stanney; N. Clayton Silver
This paper presents preliminary findings on a newly designed alphanumeric input device called the Keybowl. The Keybowl was designed and developed primarily as an alternative input device to allow users with various upper-extremity disabilities to type effectively and to interact with and navigate current computer interfaces. In addition, the Keybowl's ability to adapt to the user's needs may offer a solution to the multi-million-dollar-a-year problem of typing-related carpal tunnel syndrome (CTS). The Keybowl totally eliminates finger movement, minimizes wrist movement, and uses concurrent independent inputs (i.e., chording), in which two domes are moved laterally to type. Initial results indicated that Keybowl users reached an average of 52% of their regular QWERTY flat-keyboard keying speed in as little as five hours. With regard to ergonomic advantage, Keybowl typists' flexion/extension wrist movements were reduced by an average of 81.5% compared to typists using a QWERTY keyboard, and movements in the ulnar/radial plane were reduced by an average of 48%.
Keywords: Keyboard, Cumulative trauma, Handicap, Typing, Carpal tunnel syndrome
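The chording concept can be sketched as a lookup from the pair of lateral dome positions to a character (the mapping below is invented; the abstract does not give the real layout):

    CHORDS = {  # (left dome, right dome) -> character
        ("N", "N"): "e", ("N", "E"): "t", ("E", "N"): "a",
        ("S", "W"): "o", ("W", "S"): "i",
    }

    def keybowl_char(left_dome, right_dome):
        return CHORDS.get((left_dome, right_dome), "")

    print(keybowl_char("N", "E"))  # -> "t", typed with no finger movement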

Panel Discussion

Designing the World Wide Web for People with Disabilities: A User Centered Design Approach, pp. 94-101
  Lila F. Laux; Peter R. McNally; Michael G. Paciello; Gregg C. Vanderheiden
The emergence of the World Wide Web has made it possible for individuals with appropriate computer and telecommunications equipment to interact as never before. An explosion of next-generation information systems is flooding the commercial market. This cyberspace convergence of data, computers, networks, and multimedia presents exciting challenges to interface designers. However, this "new technology frontier" has also created enormous roadblocks and barriers for people with disabilities. This panel will discuss specific issues, suggest potential solutions, and solicit the contributions required to design an accessible Web interface that includes people with disabilities.
Keywords: Accessibility, Blindness, Deaf, Disabilities, Hypermedia, Mobility, People with disabilities, Special needs, Software development, User interfaces, User requirements

Multimedia

A Gesture Recognition Architecture for Sign Language, pp. 102-109
  Annelies Braffort
This paper presents a gesture recognition architecture dedicated to sign languages. Sign language gestures include five co-occurring parameters, which convey complementary, independent information. Some signs belong to a predefined lexicon that can be learned by the recognition system, but other signs may be created during discourse, depending on the context. The proposed recognition system is able to recognise both kinds of signs by using specific classification tools and a virtual scene for context storage. It is based on a study of French Sign Language (LSF).
Keywords: Sign language, Gesture recognition, Gesture interpretation, Data glove
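A hedged sketch of the two-path idea (data structures invented for illustration): a sign is matched against the learned lexicon when possible, and otherwise interpreted against a virtual scene that stores entities placed earlier in the discourse.

    def recognise(sign, lexicon, scene):
        # sign: dict of co-occurring parameters such as handshape,
        # location, orientation and movement
        for name, template in lexicon.items():
            if all(sign.get(k) == v for k, v in template.items()):
                return name                    # predefined lexical sign
        ref = scene.get(sign.get("location"))  # contextual, scene-anchored
        return f"refers-to:{ref}" if ref else "unknown"

    lexicon = {"HELLO": {"handshape": "flat", "movement": "wave"}}
    scene = {"left-space": "the car"}          # entity placed earlier
    print(recognise({"handshape": "flat", "movement": "wave"}, lexicon, scene))
    print(recognise({"handshape": "index", "location": "left-space"},
                    lexicon, scene))           # -> refers-to:the car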
'Composibility': Widening Participation in Music Making for People with Disabilities via Music Software and Controller Solutions, pp. 110-116
  Tim Anderson; Clare Smith
This paper discusses ways of enabling visually impaired and physically disabled people to compose and perform music.
   The use and adaptation of existing software-based composition systems are described in the context of education work undertaken by the Drake Music Project, a charity which aims to help disabled people make music through technology. Some of the problems faced are discussed, and a custom system is presented which aims to resolve some of these difficulties.
Keywords: Music, Physical disability, Visual impairment, Composition, Education, Software, MIDI, Adaptive technology

Vision Impairments -- II

A Generic Direct-Manipulation 3D-Auditory Environment for Hierarchical Navigation in Non-Visual Interaction, pp. 117-123
  Anthony Savidis; Constantine Stephanidis; Andreas Korte; Kai Crispien; Klaus Fellbaum
Auditory presentation methods can significantly enhance the quality of user-computer dialogue. Their impact is especially important in non-visual interaction, where audio is the primary output modality for direct perception. In a few cases, 3D-audio output techniques have been employed to provide interaction for blind users. Unfortunately, such developments have been too specialized and do not support re-use of the implemented approaches and techniques in other contexts where non-visual interaction must be realized. A generic re-usable environment has been implemented, based on 3D audio, 3D pointing, hand gestures and voice input, which is applicable wherever interactive, hierarchically structured selections from sets of alternatives must be handled. This environment has been used to implement the hierarchical navigation dialogue in a multimedia non-visual toolkit currently under development. It is composed of a set of modules implementing re-usable functionality with which non-visual hierarchical navigation can be realized within any non-visual interaction toolkit.
Keywords: Non-visual interaction, Auditory interfaces, Toolkits, 3D-audio, Re-usability
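The core presentation step can be sketched as spreading the current node's children across the auditory space (the interface and angles below are assumptions, not from the paper):

    def present_level(children, play_at):
        """Lay N alternatives out at distinct azimuths, left to right."""
        n = len(children)
        for i, child in enumerate(children):
            azimuth = -90 + 180 * i / max(n - 1, 1)  # degrees
            play_at(child, azimuth)                  # spatialized name/earcon

    present_level(["File", "Edit", "View"],
                  lambda name, az: print(f"{name} at {az:+.0f} deg"))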
Improving the Usability of Speech-Based Interfaces for Blind Users, pp. 124-130
  Ian J. Pitt; Alistair D. N. Edwards
Adaptations using speech synthesis provide a basic level of access to computer systems for blind users, but current systems pose a number of usability problems. A study was carried out in order to assess the impact of certain issues on the usability of a typical speech adaptation. The results suggest that much work needs to be done on the design of speech dialogues.
TDraw: A Computer-Based Tactile Drawing Tool for Blind People, pp. 131-138
  Martin Kurze
Considerations of blind people's relation to pictures of real-world objects lead to the conclusion that blind and sighted people have very similar mental models of the 3D world. Because perception works completely differently, however, the mapping of the 3D world onto a 2D picture differs significantly. A tool has been developed that allows blind people to draw pictures and, at the same time, lets us study their drawing process. A first evaluation shows interesting results, which will eventually lead to the design of a rendering tool for (tactile) pictures for blind people.
Keywords: Tactile drawings, Tactile rendering, Mental model, Tactile drawing tool, TDraw
Development of Dialogue Systems for a Mobility Aid for Blind People: Initial Design and Usability Testing, pp. 139-144
  Thomas Strothotte; Steffi Fritz; Rainer Michel; Andreas Raab; Helen Petrie; Valerie Johnson; Lars Reichert; Axel Schalt
This paper presents a new travel aid to increase the independent mobility of blind and elderly travellers. This aid builds on the technologies of geographical information systems (GIS) and the Global Positioning System (GPS). The MoBIC Travel Aid (MoTA) consists of two interrelated components: the MoBIC Pre-journey System (MoPS) to assist users in planning journeys and the MoBIC Outdoor System (MoODS) to execute these plans by providing users with orientation and navigation assistance during journeys. The MoBIC travel aid is complementary to primary mobility aids such as the long cane or guide dog. Results of a study of user requirements, the user interface designs, and the first field trial, currently being conducted in Berlin, are presented.
Keywords: Visually disabled users, Mobility and navigation, GPS, GIS, User trials
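As a hedged sketch of the outdoor component's basic loop (all names and units below are invented; the real MoODS design is in the paper): compare the GPS fix against the pre-planned route and speak distance and bearing to the next waypoint.

    import math

    def guide(position, route, speak):
        """position: (x, y) in metres; route: ordered [(x, y, name)]."""
        px, py = position
        x, y, name = route[0]                 # next waypoint in the plan
        dist = math.hypot(x - px, y - py)
        bearing = math.degrees(math.atan2(x - px, y - py)) % 360  # from north
        speak(f"{name}: {dist:.0f} metres, bearing {bearing:.0f} degrees")

    guide((0, 0), [(30, 40, "crossing at Main Street")], print)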