
First Annual ACM SIGACCESS Conference on Assistive Technologies

Fullname: First International ACM/SIGCAPH Conference on Assistive Technologies
Editors: Ephraim P. Glinert
Location: Marina del Rey, California
Dates: 1994-Oct-31 to 1994-Nov-01
Publisher: ACM
Standard No: ACM ISBN 0-89791-649-2; ACM Order Number 444940
Papers: 24
Pages: 158
  1. Keynote Address
  2. Hearing Impairments
  3. Augmentative Communication -- I
  4. Vision Impairments -- I
  5. Motor Impairments
  6. Vision Impairments -- II
  7. Augmentative Communication -- II
  8. New Directions
  9. Panel

Keynote Address

Counting Our Assets and Liabilities: A Balance Sheet for Computing's First Half Century, p. ix
  Randy W. Dipner
In this talk, I will attempt to assess the gains and losses to "technology" and to "disability" from several viewpoints. National and international policy issues will be examined, including initiatives such as the National Information Infrastructure and legislation such as the Americans with Disabilities Act and the Assistive Technology Act Reauthorization. The resulting concerns and desires arising within the disabled community will be considered, and I will share my perspectives on these issues as owner of a private assistive technology development company. Finally, I will touch on current efforts within ACM, and discuss their implications both for people with disabilities and for the community of computer professionals as a whole.

Hearing Impairments

Pattern Recognition and Synthesis for Sign Language Translation System, pp. 1-8
  Masaru Ohki; Hirohiko Sagawa; Tomoko Sakiyama; Eiji Oohira; Hisashi Ikeda; Hiromichi Fujisawa
Sign language is one means of communication for hearing-impaired people. Words and sentences in sign language are represented mainly by hand gestures. In this report, we describe a sign language translation system that we are developing. The system translates Japanese Sign Language into Japanese and vice versa. Hand shape and position data are input using a DataGlove; the input hand motions are recognized and translated into Japanese sentences. In the other direction, Japanese text is translated into sign language rendered as 3-D computer-graphics animation of the corresponding gestures.
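   As a rough illustration of the recognition step (which the abstract does not specify; the feature layout, sign names, and numbers below are invented), glove-based recognizers can be reduced to nearest-neighbour matching of a glove feature vector against stored sign templates:

      # Hypothetical sketch: nearest-neighbour matching of DataGlove-style
      # readings against stored sign templates. All data are invented.
      import math

      # sign label -> feature vector (e.g., finger flexions plus hand position)
      TEMPLATES = {
          "arigatou": [0.9, 0.8, 0.1, 0.1, 0.2, 0.5],
          "watashi":  [0.1, 0.1, 0.9, 0.9, 0.4, 0.3],
      }

      def classify(reading):
          """Return the label of the template closest to the glove reading."""
          def dist(a, b):
              return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
          return min(TEMPLATES, key=lambda label: dist(TEMPLATES[label], reading))

      print(classify([0.85, 0.75, 0.15, 0.1, 0.25, 0.45]))  # -> arigatou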
Multimedia Dictionary of American Sign Language, pp. 9-16
  Sherman Wilcox; Joanne Scheibman; Doug Wood; Dennis Cokely; William C. Stokoe
The Multimedia Dictionary of American Sign Language (MM-DASL) is a Macintosh application designed to function as a bilingual (ASL-English) dictionary. It presents ASL signs in full-motion digital video using Apple's QuickTime technology. Major functions include searching for ASL signs by entering English words; searching for ASL signs directly, by specifying formational features; and fuzzy searching in both ASL and English search modes. For each ASL lexical entry, the dictionary contains definitions of the sign, grammatical information, usage notes, successful English translations, and other information. In addition to serving as the core engine for the MM-DASL, the application can be localized to any signed language, allowing researchers and developers in other countries to use the MM-DASL to develop their own signed-language dictionaries.
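   As a sketch of what searching by formational features with fuzzy matching can look like (the feature set and entries below are invented, not the MM-DASL's data model), signs can be ranked by how many queried features they share:

      # Hypothetical sketch: rank signs by number of matching formational
      # features, keeping those at or above a threshold (fuzzy search).
      SIGNS = [
          {"gloss": "THANK-YOU", "handshape": "B", "location": "chin",  "movement": "forward"},
          {"gloss": "MOTHER",    "handshape": "5", "location": "chin",  "movement": "tap"},
          {"gloss": "PLEASE",    "handshape": "B", "location": "chest", "movement": "circle"},
      ]

      def fuzzy_search(query, threshold=2):
          def score(sign):
              return sum(1 for k, v in query.items() if sign.get(k) == v)
          hits = sorted(((score(s), s["gloss"]) for s in SIGNS), reverse=True)
          return [(n, g) for n, g in hits if n >= threshold]

      print(fuzzy_search({"handshape": "B", "location": "chin", "movement": "tap"}))
      # -> [(2, 'THANK-YOU'), (2, 'MOTHER')]  (near-matches, not just exact hits)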
A System for Teaching Speech to Profoundly Deaf Children using Synthesized Acoustic and Articulatory Patterns, pp. 17-22
  Elizabeth Keate; Hector Javkin; Norma Antonanzas-Barroso; Ranjun Zou
This paper describes a computer-assisted method of teaching profoundly deaf children to speak, whose unique feature is an integrated text-to-speech system (ITS). Our earlier speech training system [1] presented a series of speech parameters, derived from articulatory instruments and acoustic analysis, in visual form: a teacher's speech was input to the system and used as a model for the children to follow, and the children's speech was monitored to provide feedback. As with other computer-aided speech training systems (e.g., [2]), such teacher-assisted trainers are limited by the time students have with speech teachers. Several computer-based systems already exist that present the desired acoustic and articulatory patterns and give feedback showing what the children are producing. In our system, we have developed an articulatory component that synthesizes tongue-palate contact patterns for the children to follow.
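   For illustration only (the system's actual display and scoring are not described here), feedback on tongue-palate contact can be reduced to comparing a child's contact frame against the synthesized target, cell by cell:

      # Hypothetical sketch: an electropalatography-style frame modelled as
      # a small binary grid; feedback is the fraction of matching cells.
      TARGET = [
          [1, 1, 1, 1],
          [1, 0, 0, 1],
          [0, 0, 0, 0],
      ]

      def match_percent(frame, target=TARGET):
          cells = [(f, t) for fr, tr in zip(frame, target) for f, t in zip(fr, tr)]
          return 100.0 * sum(f == t for f, t in cells) / len(cells)

      child = [
          [1, 1, 0, 1],
          [1, 0, 0, 1],
          [0, 1, 0, 0],
      ]
      print(f"{match_percent(child):.0f}% of contacts match the target")  # -> 83%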

Augmentative Communication -- I

Iconic Language Design for People with Significant Speech and Multiple Impairments, pp. 23-30
  P. L. Albacete; S. K. Chang; G. Polese; B. Baker
We present an approach to iconic language design for people with significant speech and multiple impairments (SSMI), based upon the Theory of Icon Algebra and the theory of Conceptual Dependency (CD) for deriving the meaning of iconic sentences. An interactive design environment based upon this methodology is described.
The Application of Spatialization and Spatial Metaphor to Augmentative and Alternative Communication, pp. 31-38
  Patrick Demasco; Alan F. Newell; John L. Arnott
The University of Delaware and the University of Dundee are collaborating on a project investigating the application of spatialization and spatial metaphors to interfaces for Augmentative and Alternative Communication. This paper outlines the project's motivation, goals, and methodological considerations, and presents a number of design principles drawn from a review of the HCI literature. Finally, it describes progress on a demonstration application called VAL, which provides a computer-based word board that retains spatial equivalence to the user's paper-based system and gives the user access to an extended lexicon through an interface to the WordNet lexical database.
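   WordNet is the one lexical component the abstract names; as a modern stand-in for that extended-lexicon lookup (this uses today's NLTK interface, which postdates the project), a word board could expand a selected word into related choices like this:

      # Sketch using NLTK's WordNet interface as a stand-in lexicon.
      # Requires: pip install nltk, then nltk.download("wordnet").
      from nltk.corpus import wordnet as wn

      def related_words(word, limit=10):
          """Collect synonyms and broader terms (hypernyms) for a word."""
          found = []
          for synset in wn.synsets(word):
              found.extend(l.name() for l in synset.lemmas())        # synonyms
              for hyper in synset.hypernyms():
                  found.extend(l.name() for l in hyper.lemmas())     # broader terms
          unique = []
          for w in found:
              if w != word and w not in unique:
                  unique.append(w)
          return unique[:limit]

      print(related_words("drink"))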

Vision Impairments -- I

Screen Reader/2: Access to OS/2 and the Graphical User Interface, pp. 39-46
  Jim Thatcher
Screen Reader/2 is IBM's access system for OS/2, providing blind users access to the graphical user interface (GUI) of Presentation Manager, to Windows programs running under OS/2, and to text mode DOS and OS/2 programs. Screen Reader/2 is a completely redesigned and rewritten follow-on to IBM's Screen Reader Version 1.2 for DOS.
   There has been considerable discussion about the technical challenges, difficulties, and inherent obstacles presented by the GUI. Far less attention has been devoted to the successes in GUI access, in part because the developers of GUI access software have had their hands full solving very difficult problems.
   This paper will describe how IBM Screen Reader makes the GUI accessible.
Providing Access to Graphical User Interfaces -- Not Graphical Screens, pp. 47-54
  W. Keith Edwards; Elizabeth D. Mynatt; Kathryn Stockton
The 1990 paper "The Graphical User Interface: Crisis, Danger and Opportunity" [BBV90] summarized an overwhelming concern expressed by the blind community: a new type of visual interface threatened to erase the progress made by the innovators of screen reader software. Such software (as the name implies) could read the contents of a computer screen, allowing blind computer users equal access to the tools used by their sighted colleagues. Whereas ASCII-based screens were easily accessible, new graphical interfaces presented a host of technological challenges. The contents of the screen were mere pixel values, the on or off "dots" which form the basis of any bit-mapped display. The goal for screen reader providers was to develop new methods for bringing the meaning of these picture-based interfaces to users who could not see them.
   The crisis was imminent. Graphical user interfaces were quickly adopted by the sighted community as a more intuitive interface. Ironically, these interfaces were deemed more accessible by the sighted population because they seemed approachable for novice computer users. The danger was tangible in the forms of lost jobs, barriers to education, and the simple frustration of being left behind as the computer industry charged ahead.
   Much has changed since that article was published. Commercial screen reader interfaces now exist for two of the three main graphical environments. Some feel that the crisis has been averted, that the danger is now diminished. But what about the opportunity? Have graphical user interfaces improved the lives of blind computer users? The simple answer is: not very much.
   This opportunity has not been realized because current screen reader technology provides access to graphical screens, not graphical interfaces. In this paper, we discuss the historical reasons for this mismatch as well as analyze the contents of graphical user interfaces. Next, we describe one possible way for a blind user to interact with a graphical user interface, independent of its presentation on the screen. We conclude by describing the components of a software architecture which can capture and model a graphical user interface for presentation to a blind computer user.
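   A minimal sketch of that general idea, not the authors' architecture: once the interface is captured as a tree of semantic objects rather than pixels, a screen reader can traverse and voice it independently of its visual layout:

      # Hypothetical off-screen model: the interface as a tree of widgets
      # with roles and labels, walked in any order a listener chooses.
      class Widget:
          def __init__(self, role, label, children=()):
              self.role, self.label, self.children = role, label, list(children)

          def describe(self, depth=0):
              """Yield spoken-style descriptions of this widget and its children."""
              yield "  " * depth + f"{self.role}: {self.label}"
              for child in self.children:
                  yield from child.describe(depth + 1)

      dialog = Widget("dialog", "Print", [
          Widget("checkbox", "Collate"),
          Widget("button", "OK"),
          Widget("button", "Cancel"),
      ])
      print("\n".join(dialog.describe()))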
Increasing Access to Information for the Print Disabled through Electronic Documents in SGML, pp. 55-61
  Bart Bauwens; Jan Engelen; Filip Evenepoel; Chris Tobin; Tom Wesley
There is a growing conviction that the Standard Generalized Markup Language (SGML) can play an important role as an enabling technology to increase access to information for blind and partially sighted people. This paper reports on mechanisms that have been devised to build accessibility into SGML-encoded electronic documents, concentrating on the work done in the CAPS Consortium -- Communication and Access to Information for People with Special Needs, a European Union funded project in the Technology Initiative for Disabled and Elderly People (TIDE) Programme -- and by ICADD, the International Committee on Accessible Document Design.
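   A toy sketch of the general mapping idea (the tag names and rendering rules below are invented; they are not the ICADD-22 DTD): each structural element type in the markup is mapped onto a braille-oriented rendering rule, so access follows document structure rather than visual appearance:

      # Hypothetical tag-to-rendering mapping for structured documents.
      STYLE = {
          "h1":   lambda text: text.upper() + "\n",   # shout headings
          "para": lambda text: "  " + text + "\n",    # indent paragraphs
          "emph": lambda text: "_" + text + "_",      # mark emphasis inline
      }

      def render(element):
          """Render a (tag, children) tree; children are strings or elements."""
          tag, children = element
          text = "".join(c if isinstance(c, str) else render(c) for c in children)
          return STYLE[tag](text)

      doc = ("para", ["Access for ", ("emph", ["all"]), " readers."])
      print(render(("h1", ["Accessible Documents"])) + render(doc))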
Interactive Audio Documents, pp. 62-68
  T. V. Raman; David Gries
Communicating technical material orally is often hindered by the relentless linearity of audio; information flows actively past a passive listener. This is in stark contrast to communication through the printed medium where we can actively peruse the visual display to access relevant information.
   ASTER is an interactive computing system for audio formatting electronic documents (presently, documents written in (LA)TEX) to produce audio documents. ASTER can speak both literary texts and highly technical documents that contain complex mathematics. In fact, the effective speaking and interactive browsing of mathematics is a key goal of ASTER. To this end, a listener can browse both complete documents and complex mathematical expressions. ASTER thus enables active listening.
   This paper describes the browsing component of ASTER. The design and implementation of ASTER as a whole are beyond the scope of this paper; we focus on the browser, referring to other parts of the system only in passing, for the sake of completeness.
Keywords: Interactive audio renderings, Audio browsing, Browsing structure, In-context rendering and browsing, Spoken mathematics
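   A toy sketch of the browsing idea only, not ASTER itself: holding the expression as a tree lets a listener move a cursor through it and hear any subtree rendered on demand, instead of enduring one linear reading:

      # Hypothetical audio browsing of (a + b) / c as an expression tree.
      class Node:
          def __init__(self, text, children=()):
              self.text, self.children, self.parent = text, list(children), None
              for c in self.children:
                  c.parent = self

          def speak(self):
              """Linearize this subtree into words, marking grouped terms."""
              if not self.children:
                  return self.text
              inner = f" {self.text} ".join(c.speak() for c in self.children)
              return f"the quantity {inner} end quantity" if self.parent else inner

      expr = Node("divided by", [Node("plus", [Node("a"), Node("b")]), Node("c")])
      cursor = expr                 # start at the root: the whole expression
      print(cursor.speak())
      cursor = cursor.children[0]   # descend into the numerator only
      print(cursor.speak())         # -> the quantity a plus b end quantity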

Motor Impairments

An Overview of Programs and Projects at the Rehabilitation Research and Development Center, pp. 69-76
  David L. Jaffe
The mission of the Rehabilitation Research and Development Center is to improve the independence and quality of life for disabled veterans through the creation and application of emerging technologies. In support of this mission, the Center develops concepts, devices, and techniques for in-house testing, national evaluation, and technology transfer leading to commercial production. This presentation will detail the Center's design/development process and technology transfer strategies using examples drawn from its fifteen years of operation.
Using the Baby-Babble-Blanket for Infants with Motor Problems: An Empirical Study, pp. 77-84
  Harriet J. Fell; Linda J. Ferrier; Hariklia Delta; Regina Peterson; Zehra Mooraj; Megan Valleau
Children with motor problems often become passive, presumably because of an inability to communicate and to control their environment. The Baby-Babble-Blanket (BBB), a pad with pressure switches linked to a Macintosh computer, was developed to address this problem. Lying on the pad, infants use head-rolling, leg-lifting, and kicking to produce digitized sound, and the BBB software records the infant's switch activations. An empirical study was carried out on a five-month-old infant with club feet, hydrocephaly, and poor muscle tone to determine what movements the infant could use to activate the pad, whether activations would increase over a baseline in response to sound, and what level of cause and effect the infant would demonstrate. Videotapes and switch-activation data suggest that the infant:
  • 1) could activate the device by rolling his head and raising his legs;
  • 2) increased switch activations, over a no-sound baseline, in response to the sound of his mother's voice;
  • 3) was able to change from using his head to raising his legs in response to the reinforcer.
Keywords: Infants, Communication and environmental control, Sound, Motor problems, Single-case study, Pad
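   A hedged sketch of the measurement side of such a study (the field names and numbers are illustrative, not the paper's data): activations are logged with timestamps, and activation rates are compared across the baseline and reinforcement phases:

      # Hypothetical switch-activation log and rate comparison.
      def rate_per_minute(events, start, end):
          """Activations per minute within the [start, end) window, in seconds."""
          hits = [t for t, _ in events if start <= t < end]
          return 60.0 * len(hits) / (end - start)

      log = [(12.0, "head"), (47.5, "leg"), (130.0, "leg"), (150.2, "leg"), (170.9, "leg")]
      baseline = rate_per_minute(log, 0, 120)      # silent baseline phase
      reinforced = rate_per_minute(log, 120, 240)  # mother's-voice phase
      print(f"baseline {baseline:.1f}/min vs reinforced {reinforced:.1f}/min")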

Vision Impairments -- II

Personal Guidance System for the Visually Impaired, pp. 85-91
  Jack M. Loomis; Reginald G. Golledge; Roberta L. Klatzky; Jon M. Speigle; Jerome Tietz
We outline the design of a navigation system for the visually impaired and describe the progress we have made toward such a system. Our long-term goal is a portable, self-contained system that will allow visually impaired individuals to travel through familiar and unfamiliar environments without the assistance of guides. The system, as it exists now, consists of the following functional components: (1) a means of determining the traveler's position and orientation in space, (2) a Geographic Information System comprising a detailed database of the surrounding environment and functions for automatic route planning and for selecting the database information desired by the user, and (3) the user interface.
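   As a worked illustration of how component (1) can feed the user interface (a flat-earth approximation, not the authors' implementation; the coordinates are arbitrary), a position fix plus a GIS waypoint yields a distance and compass bearing the system could speak:

      # Hypothetical guidance step: distance and bearing to the next waypoint.
      import math

      def guidance(lat1, lon1, lat2, lon2):
          """Approximate distance (m) and bearing (deg) over short ranges."""
          k = 111_320.0                                    # metres per degree of latitude
          dx = (lon2 - lon1) * k * math.cos(math.radians(lat1))
          dy = (lat2 - lat1) * k
          return math.hypot(dx, dy), math.degrees(math.atan2(dx, dy)) % 360

      d, b = guidance(34.4208, -119.6982, 34.4215, -119.6970)
      print(f"Waypoint {d:.0f} metres away, bearing {b:.0f} degrees")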
Hyperbraille -- A Hypertext System for the Blind, pp. 92-99
  Thomas Kieninger; Norbert Kuhn
Reading documents is a process strongly driven by visual impressions. This is even more the case when the document of interest is not simply linear text but a hypertext, where links to other document parts are realized as highlighted or coloured text. Since blind people cannot perceive this visual information, there is a special need to enable them to navigate through such non-linear documents. In this paper we describe a set of new functions that enhance hypertext systems to ensure their accessibility to blind people. We identify the functions necessary to step through a hypertext document, as well as status-report functions that give access to information usually presented visually in common hypertext systems.
   Our goal of setting up a concrete office-workspace application makes clear that we not only want to enable a blind person to read hypertext documents; it must also be possible to edit them in an easy-to-use, on-line fashion. Beyond what conventional text processing systems offer, this means providing effective methods to build and edit links. Furthermore, we integrate document analysis techniques to build a bridge between paper documents and braille output devices.
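   A hypothetical sketch of the navigation and status-report functions described (all names invented): follow a link, back up along a history stack, and report position in a form a braille or speech device can present:

      # Hypothetical non-visual hypertext navigation.
      class HypertextBrowser:
          def __init__(self, pages, start="index"):
              self.pages, self.history, self.current = pages, [], start

          def follow(self, link):
              """Jump to a linked node, remembering where we came from."""
              if link in self.pages[self.current]["links"]:
                  self.history.append(self.current)
                  self.current = link

          def back(self):
              if self.history:
                  self.current = self.history.pop()

          def status(self):
              links = self.pages[self.current]["links"]
              return f"Node '{self.current}', {len(links)} links, depth {len(self.history)}"

      b = HypertextBrowser({"index": {"links": ["mail"]}, "mail": {"links": []}})
      b.follow("mail")
      print(b.status())  # -> Node 'mail', 0 links, depth 1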
Automatic Impact Sound Generation for Use in Non-Visual Interfaces, pp. 100-106
  A. Darvishi; E. Munteanu; V. Guggiana; H. Schauer; M. Motavalli; M. Rauterberg
This paper describes work in progress on the automatic generation of "impact sounds" based on purely physical modelling. These sounds can be used as non-speech audio presentations of objects and as interaction mechanisms in non-visual interfaces. Different approaches to synthesizing impact sounds, the process of recording them, and their analysis are introduced. A physical model describing the impact sounds of spherical objects hitting flat plates or beams is presented, and the spectra of sounds generated via this physical model are compared with those of real recorded sounds. The objective of this research project (a joint project of the University of Zurich and the Swiss Federal Institute of Technology) is to develop a concept, methods, and a prototype for an audio framework that describes sounds at a highly abstract semantic level. Every sound is to be described as the result of one or several interactions between one or several objects at a certain place and in a certain environment.
Keywords: Non-speech sound generation, Visual impairment, Auditory interfaces, Physical modelling, Auditive feedback, Human-computer interaction, Software ergonomics, Usability engineering, Material properties
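   The standard modal-synthesis recipe behind such physical models, sketched generically (the mode values below are invented, not the authors' model): an impact excites a set of modes, each an exponentially decaying sinusoid whose frequency, decay, and amplitude depend on material and geometry:

      # Generic modal synthesis of an impact sound.
      import math

      def impact_sound(modes, rate=44100, duration=0.5):
          """modes: list of (frequency_hz, decay_per_second, amplitude) tuples."""
          n = int(rate * duration)
          return [
              sum(a * math.exp(-d * t / rate) * math.sin(2 * math.pi * f * t / rate)
                  for f, d, a in modes)
              for t in range(n)
          ]

      # Rough values suggestive of a small struck metal plate (illustrative only)
      samples = impact_sound([(440.0, 6.0, 1.0), (1120.0, 9.0, 0.5), (2305.0, 14.0, 0.25)])
      print(len(samples), "samples; peak", round(max(samples), 2))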

Augmentative Communication -- II

A Communication Tool for People with Disabilities: Lexical Semantics for Filling in the Pieces, pp. 107-114
  Kathleen F. McCoy; Patrick W. Demasco; Mark A. Jones; Christopher A. Pennington; Peter B. Vanderheyden; Wendy M. Zickus
The goal of this project is to provide a communication tool for people with severe speech and motor impairments (SSMI) that facilitates the formation of syntactically correct sentences in the fewest number of keystrokes. Consider the situation where an individual is using a word-based augmentative communication system: each word is (basically) one keystroke, and morphological endings and the like require additional keystrokes. Our prototype system is intended to reduce the burden on the user by allowing him or her to select only the uninflected content words of the desired sentence. The system is responsible for adding the proper function words (e.g., articles and prepositions) and the necessary morphological endings. To accomplish this, the system attempts to generate a semantic representation of an utterance under circumstances where syntactic (parse-tree) information is not available, because the input is a compressed telegraphic message rather than a standard English sentence. This representation is then used to generate a full English sentence from the compressed input. The focus of the paper is on the knowledge and processing necessary to produce a semantic representation under these telegraphic constraints.
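   A deliberately tiny illustration of the input-output behaviour only (the lexicon and rules are invented; the actual system derives a semantic representation rather than applying surface rules like these):

      # Hypothetical expansion of telegraphic content words into a sentence.
      LEXICON = {
          "john":  {"pos": "name"},
          "go":    {"pos": "verb", "prep": "to"},
          "store": {"pos": "noun"},
      }

      def expand(words):
          """Expand e.g. ['john', 'go', 'store'] to 'John goes to the store.'"""
          out = []
          for w in words:
              entry = LEXICON[w]
              if entry["pos"] == "name":
                  out.append(w.capitalize())
              elif entry["pos"] == "verb":
                  out.append(w + "es")               # crude 3rd-person inflection
                  if "prep" in entry:
                      out.append(entry["prep"])      # add the function word
              else:
                  out.append("the " + w)             # add an article before nouns
          return " ".join(out) + "."

      print(expand(["john", "go", "store"]))  # -> John goes to the store.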
Validation of a Keystroke-Level Model for a Text Entry System Used by People with Disabilities, pp. 115-122
  Heidi Horstmann Koester; Simon P. Levine
A keystroke-level model of user performance was developed to predict the improvement in text generation rate with a word prediction system relative to letters-only typing. Two sets of model simulations were tested against the actual performance of able-bodied and spinal-cord-injured subjects. For Model 1A, user parameter values were determined independently of subjects' actual performance. The percent improvements predicted by Model 1A differed from the actual improvements by 11 percentage points for able-bodied subjects and 53 percentage points for spinal-cord-injured subjects. Model 1B employed user parameter values derived from subjects' data and yielded more accurate simulations, with an average error of 6 percentage points across all subjects.
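   A worked sketch of the kind of keystroke-level arithmetic involved (the parameter values are invented, not the paper's): the predicted improvement balances the keystrokes saved against the time spent searching the prediction list:

      # Keystroke-level comparison: letters-only typing vs word prediction.
      def percent_improvement(chars_per_word, keys_with_prediction, t_key, t_search):
          base = chars_per_word * t_key                        # letters-only time per word
          predicted = keys_with_prediction * t_key + t_search  # typed keys + list search
          return 100.0 * (base - predicted) / base

      # 6-letter words, 3 keys per word with prediction,
      # 0.4 s per keystroke, 1.0 s to scan the prediction list
      print(f"{percent_improvement(6, 3, 0.4, 1.0):.0f}% predicted improvement")  # ~8%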

New Directions

An Experimental Sound-Based Hierarchical Menu Navigation System for Visually Handicapped Use of Graphical User Interfaces, pp. 123-128
  Arthur I. Karshmer; Pres Brawner; George Reiswig
The use of modern computers by the visually handicapped has become more difficult over the past few years. Earlier systems presented a simple character-based environment, in which aids like screen readers, braille output, and speech synthesizers were effective. Current systems run Graphical User Interfaces (GUIs), which have rendered these simple aids almost useless.
   In the current work we are developing a tonally based mechanism that allows the visually handicapped user to navigate the same complex hierarchical menu structures used in the GUI. The software can be easily and cheaply incorporated into modern user interfaces, making them available to visually handicapped users. In the remainder of this paper we describe the sound-based interfaces and the techniques we have developed to test them.
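   One hypothetical tonal scheme (not necessarily the authors'): map menu depth and item position to pitch, so a listener can hear roughly where they are in the tree:

      # Hypothetical depth- and position-to-pitch mapping for menu items.
      def item_pitch(depth, index, base_hz=220.0):
          """Each level down shifts up an octave; items step up about two semitones."""
          return base_hz * (2 ** depth) * (1.122 ** index)

      menu = {"File": ["Open", "Save"], "Edit": ["Undo"]}
      for i, (name, items) in enumerate(menu.items()):
          print(f"{name}: {item_pitch(0, i):.0f} Hz")
          for j, item in enumerate(items):
              print(f"  {item}: {item_pitch(1, j):.0f} Hz")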
A Rule-Based System that Suggests Computer Adaptations for Users with Special Needs, pp. 129-135
  William W. McMillan; Michael Zeiger; Lech Wisniewski
A rule-based program was written in Prolog to give advice about how to configure a computing system for users who have special needs. It employs a simple user model describing visual, cognitive, motor, and other abilities. Recommendations are made about appropriate input and output devices, including screens, keyboards, speech devices, and many others. The program was tested against professionals in this field and was shown to agree with them about as well as they agree with one another. Potential uses include advising those who configure computer systems, serving as a teaching tool, and driving intelligent human-computer interaction.
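   A minimal sketch of the rule-based idea in Python (the original system is written in Prolog, and the rules below are invented for illustration):

      # Hypothetical user model and adaptation rules.
      USER = {"vision": "low", "motor": "fine", "hearing": "normal"}

      RULES = [
          (lambda u: u["vision"] == "low",   "large-print display or screen magnifier"),
          (lambda u: u["vision"] == "none",  "speech or braille output instead of a screen"),
          (lambda u: u["motor"] == "gross",  "keyguard or expanded keyboard"),
          (lambda u: u["hearing"] == "none", "visual alerts in place of beeps"),
      ]

      def recommend(user):
          return [advice for applies, advice in RULES if applies(user)]

      print(recommend(USER))  # -> ['large-print display or screen magnifier']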
LVRS: The Low Vision Research System, pp. 136-140
  Mitchell Krell
This paper describes the Low Vision Research System (LVRS), a computer-based research tool for vision researchers developing vision enhancement systems. The LVRS comprises three components: warping software, interactive filtering software, and a digital video editing package.
EEG as a Means of Communication: Preliminary Experiments in EEG Analysis using Neural Networks, pp. 141-147
  Charles W. Anderson; Saikumar V. Devulapalli; Erik A. Stolz
EEG analysis has played a key role in modeling the brain's cortical dynamics, but relatively little effort has been devoted to developing EEG as a limited means of communication. If several mental states can be reliably distinguished by recognizing patterns in EEG, then a paralyzed person could communicate with a device such as a wheelchair by composing sequences of these mental states. EEG pattern recognition is a difficult problem, and it hinges on finding representations of the EEG signals in which the patterns can be distinguished. In this article, we report on a study comparing three EEG representations: the raw signals, a reduced-dimensional representation using the K-L transform, and a frequency-based representation. Classification is performed with a two-layer neural network implemented on a CNAPS server (a 128-processor SIMD architecture) from Adaptive Solutions, Inc. The best classification accuracy on untrained samples is 73%, obtained with the frequency-based representation.
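   A simplified sketch of the frequency-based pipeline (the band choices are conventional EEG bands, and the network weights are untrained placeholders; the study trained its network on labelled mental tasks):

      # Band-power features from an FFT, fed to a two-layer network.
      import numpy as np

      rng = np.random.default_rng(0)

      def band_powers(signal, rate=250, bands=((4, 8), (8, 13), (13, 30))):
          """Mean spectral power in theta, alpha, and beta bands."""
          freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)
          power = np.abs(np.fft.rfft(signal)) ** 2
          return np.array([power[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])

      def two_layer_net(x, hidden=10, classes=2):
          w1 = rng.normal(size=(len(x), hidden))   # placeholder weights, not trained
          w2 = rng.normal(size=(hidden, classes))
          return int(np.argmax(np.tanh(x @ w1) @ w2))

      eeg = rng.normal(size=500)                   # stand-in for 2 s of one channel
      print("predicted mental-state index:", two_layer_net(band_powers(eeg)))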
Audio Formatting of a Graph, pp. 148-152
  Sophie H. Zhang; Mukkai Krishnamoorthy
The Audio Formatting of a Graph (AFG) software package is designed primarily to let people who are visually challenged study graph theory. It is menu-driven, so a user can obtain information about a graph easily and conveniently.
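   A guessed illustration of the kind of menu query such a package can answer (the graph below is invented): spoken-style answers about a graph held as an adjacency list:

      # Hypothetical spoken description of a vertex in an adjacency list.
      GRAPH = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

      def describe(vertex):
          neighbours = GRAPH[vertex]
          return (f"Vertex {vertex} has degree {len(neighbours)}; "
                  f"adjacent to {', '.join(neighbours)}.")

      print(describe("C"))  # -> Vertex C has degree 2; adjacent to A, B.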
Disabilities, Opportunities, Internetworking and Technology (DO-IT) on the Electronic Highway, pp. 153-156
  Sheryl Burgstahler; Dan Comden
The United States needs citizens trained in science, engineering, and mathematics, including individuals from traditionally underrepresented groups such as women, racial minorities, and people with disabilities. The National Science Foundation has funded a project through the College of Engineering at the University of Washington whose purpose is to recruit and retain students with disabilities in science, engineering, and mathematics academic programs and careers. DO-IT (Disabilities, Opportunities, Internetworking and Technology) makes extensive use of computers, adaptive technology, and the Internet.

Panel

Interface Modeling Issues in Providing Access to GUIs for the Visually Impaired, p. 157
  A. D. N. Edwards; E. D. Mynatt; J. Thatcher
Research in providing access to graphical interfaces for people who are blind has been ongoing for a number of years. After significant work, screen readers for three commercial graphical environments (Macintosh, Windows, OS/2) are now available, and steps to make X Windows accessible are underway. But many issues about how a blind person might want to interact with an accessible graphical interface remain unresolved. Are concepts such as drag-and-drop, iconified windows, and direct manipulation appropriate for nonvisual interfaces? If so, how can they be effectively conveyed to people who have never experienced working with graphical interfaces? At the heart of the matter is the question: what is the model of the user interface to which the screen reader is providing access? Even the name "screen reader" implies a certain way of thinking about the graphical interface.
   A number of different approaches have been taken in various commercial screen access systems and research prototypes. These systems have opened some doors for nonvisual interaction with a graphical interface, but other doors remain closed. In this session, we will not discuss underlying implementation strategies, although these are interesting in their own right. Rather, we will focus on the designer's conceptual model of the graphical interface and how this model is conveyed to the blind user of the nonvisual interface.