| Assets: where do we go from here? | | BIB | Full-Text | 1-3 | |
| Alan F. Newell | |||
| From assistive technology to a web accessibility service | | BIBAK | Full-Text | 4-8 | |
| Peter G. Fairweather; Vicki L. Hanson; Sam R. Detweiler; Richard S. Schwerdtfeger | |||
| This paper considers different ways to enhance access to the World Wide Web
for persons with sensory, cognitive, or motor limitations. Paradoxically, while
complex Web architectures may seem to have inhibited accessibility, they have
broadened the range of points where we can try to improve it. This paper
identifies these points and evaluates the advantages and disadvantages of each.
In particular, it describes a project to develop a strategy to enhance access
that can be distributed across multiple control points and implemented as an
aggregation of Web services. Keywords: Web services, World Wide Web, accessibility, adaptive interfaces | |||
| Improving the accessibility of aurally rendered HTML tables | | BIBAK | Full-Text | 9-16 | |
| Robert Filepp; James Challenger; Daniela Rosu | |||
| Current techniques employed to aurally render HTML tables often result in
output that is very difficult for sight-impaired users to understand. This
paper proposes TTPML, an XML-compliant markup language, which facilitates the
generation of prose descriptions of tabular information. The markup language
enables content creators to specify contextual reinforcement of, and linear
navigation through, tabular information. The markup language may be applied to
pre-existing Web content and is reusable across multiple tables. TTPML may be
interpreted by origin servers, proxy servers, or browsers. We believe that our
approach benefits sight-impaired users by improving accessibility to tabular
information. Keywords: Web accessibility, XML, aural interfaces, tables | |||
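The abstract does not reproduce TTPML's actual markup vocabulary, but the idea of contextual reinforcement can be sketched independently of it: render each cell as prose together with its row and column headers, so a listener who cannot scan the grid visually never loses orientation. A minimal Python illustration, with invented table data and phrasing:

```python
# Sketch of "contextual reinforcement" for aural table rendering: each data
# cell is spoken together with its row and column headers. The data layout
# and phrasing here are illustrative; they are not the actual TTPML vocabulary.

def linearize_table(col_headers, rows):
    """Yield one prose sentence per cell, suitable for speech output."""
    for row_header, cells in rows:
        for col_header, cell in zip(col_headers, cells):
            yield f"{row_header}, {col_header}: {cell}."

col_headers = ["Monday", "Tuesday"]
rows = [("Opening time", ["9 am", "10 am"]),
        ("Closing time", ["5 pm", "4 pm"])]

for sentence in linearize_table(col_headers, rows):
    print(sentence)
# Opening time, Monday: 9 am. ... Closing time, Tuesday: 4 pm.
```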
| Web accessibility for low bandwidth input | | BIBAK | Full-Text | 17-24 | |
| Jennifer Mankoff; Anind Dey; Udit Batra; Melody Moore | |||
| One of the first, most common, and most useful applications that today's
computer users access is the World Wide Web (web). One population of users for
whom the web is especially important is those with motor disabilities, because
it may enable them to do things that they might not otherwise be able to do:
shopping; getting an education; running a business. This is particularly
important for low bandwidth users: users whose motor and speech abilities are
so limited that they can only produce one or two signals when communicating
with a computer. We present requirements for low bandwidth web accessibility,
and two
tools that address these requirements. The first is a modified web browser, the
second a proxy that modifies HTML. Both work without requiring web page authors
to modify their pages. Keywords: WWW, low bandwidth input, motor impairment, web proxy | |||
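The two tools are not described in detail in the abstract; a common pattern for one-signal ("low bandwidth") input is switch scanning, in which a highlight cycles through a page's links and the single signal selects the current one. A hypothetical sketch under that assumption, with invented dwell time and input callback:

```python
# Hypothetical single-switch scanning loop: the highlight advances through
# the page's links automatically, and the user's one signal selects the
# current item. Dwell time, links, and got_signal are all assumptions.
import time

def scan_and_select(links, got_signal, dwell=1.5):
    """Cycle a highlight through links until the one available signal fires."""
    while True:
        for link in links:
            print("highlighting:", link)
            deadline = time.monotonic() + dwell
            while time.monotonic() < deadline:
                if got_signal():        # the user's single input signal
                    return link
                time.sleep(0.05)

# Demo with a randomly firing "switch" standing in for the real input device:
import random
chosen = scan_and_select(["Home", "News", "Search"],
                         got_signal=lambda: random.random() < 0.02)
print("selected:", chosen)
```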
| Navigation of HTML tables, frames, and XML fragments | | BIBAK | Full-Text | 25-32 | |
| E. Pontelli; D. Gillan; W. Xiong; E. Saad; G. Gupta; A. I. Karshmer | |||
| In this paper, we provide a progress report on the development of technology
to support the non-visual navigation of complex HTML and XML structures. Keywords: Web Accessibility, visually impaired users | |||
| Sketching images eyes-free: a grid-based dynamic drawing tool for the blind | | BIBAK | Full-Text | 33-40 | |
| Hesham M. Kamel; James A. Landay | |||
| In this paper we describe one method of transforming a mouse-based graphical
user interface into a navigable, grid-based auditory interface. We also report
the results of an experiment that tested the effectiveness of a drawing tool
for the blind called IC2D that uses this interaction style. The experiment
included eight visually impaired participants and eight blindfolded sighted
participants. The results show that auditory interpretation of graphics is an
effective interface technique for visually impaired users. Further, the
experiment demonstrates that visually impaired users can develop meaningful
drawings when given adequate technological support. Keywords: IC2D, auditory user interfaces, drawing, graphical semantic enhancement,
graphics, grid, visually impaired | |||
| Design and implementation of virtual environments training of the visually impaired | | BIBAK | Full-Text | 41-48 | |
| D. Tzovaras; G. Nikolakis; G. Fergadis; S. Malasiotis; M. Stavrakis | |||
| This paper presents the virtual reality applications developed for the
feasibility study tests of the EU-funded IST project ENORASI. ENORASI aims at
developing a highly interactive and extensible haptic VR training system that
allows visually impaired people, especially those blind from birth, to study
and interact with various virtual objects. A number of custom applications have
been developed based on the interface provided by the CyberGrasp haptic device.
Eight test categories were identified and corresponding tests were developed
for each category. Twenty-six blind persons took the tests, and the
evaluation results have shown the degree of acceptance of the technology and
the feasibility of the proposed approach. Keywords: haptics, training, virtual environments, visually impaired | |||
| Multimodal feedback: establishing a performance baseline for improved access by individuals with visual impairments | | BIBAK | Full-Text | 49-56 | |
| Holly S. Vitense; Julie A. Jacko; V. Kathlene Emery | |||
| Multimodal interfaces have the potential to enhance a user's overall
performance, especially when one perceptual channel, such as vision, is
compromised. This research investigated how unimodal, bimodal, and trimodal
feedback affected the performance of fully sighted users. Limited research
exists that investigates how fully sighted users react to multimodal feedback
forms, and to-date even less research is available that has investigated how
users with visual impairments respond to multiple forms of feedback. A complex
direct manipulation task, consisting of a series of search, selection, and
drag-and-drop subtasks, was evaluated in this study. The multiple forms of
feedback investigated were auditory, haptic and visual. Each form of feedback
was tested alone and in combination. User performance was assessed through
measures of workload and time. Workload was measured objectively and subjectively,
through the physiological measure of pupil diameter and a portion of the NASA
Task Load Index (TLX) workload survey, respectively. Time was captured by a
measure of how long it took to complete a particular element of the task. The
results demonstrate that multimodal feedback improves the performance of fully
sighted users and offers great potential to users with visual impairments. As a
result, this study serves as a baseline to drive the research and development
of effective feedback combinations to enhance performance for individuals with
visual impairments. Keywords: auditory, feedback, haptic, human-computer interaction, multimodal, visual,
visual impairment | |||
| Multimodal virtual reality versus printed medium in visualization for blind people | | BIBAK | Full-Text | 57-64 | |
| Wai Yu; Stephen Brewster | |||
| In this paper, we describe a study comparing the strengths of a multimodal
Virtual Reality (VR) interface against traditional tactile diagrams in
conveying information to visually impaired and blind people. The multimodal VR
interface consists of a force feedback device (SensAble PHANTOM), synthesized
speech and non-speech audio. Potential advantages of the VR technology are well
known; however, its real usability in comparison with the conventional
paper-based medium is seldom investigated. We have addressed this issue in our
evaluation. The experimental results show benefits from using the multimodal
approach in terms of more accurate information about the graphs obtained by
users. Keywords: assistive technology, haptics, human computer interaction, multimodal
interface, virtual reality | |||
| Auditory and tactile interfaces for representing the visual effects on the web | | BIBAK | Full-Text | 65-72 | |
| Chieko Asakawa; Hironobu Takagi; Shuichi Ino; Tohru Ifukube | |||
| In this paper, we describe auditory and tactile interfaces to represent
visual effects nonvisually for blind users, allowing intuitive recognition of
visual content that appears on the Web. This research examines how visual
effects could be recognized by blind subjects using the senses of hearing and
touch, aiming at integrating the results into a practical system in the future.
As an initial step, two experiments were performed, one for sonification and
tactilization of a page overview based on color-based fragmented groupings
without speech, and one for sonification and tactilization of emphasized text
based on analyzing rich text information with speech. The subjects could
recognize visual representations presented by auditory and tactile interfaces
throughout the experiment, and were conscious of the importance of the visual
structures. We believe this shows our approach may become practical and widely
available in the future. We summarize our results and discuss what kind of
information is suitable for each sense, as well as the next planned experiment
and other future work. Keywords: auditory interface, blind, nonvisual, sonification, tactile interface,
tactilization | |||
| Planning, reasoning, and agents for non-visual navigation of tables and frames | | BIBAK | Full-Text | 73-80 | |
| Enrico Pontelli; Tran Cao Son | |||
| In this paper we demonstrate how the DSL for Table navigation [16] can be
reinterpreted in the context of an action theory [8]. We also show how this
generalization provides the ability to carry out more complex tasks such as (i)
allowing the user to describe the objective of his/her navigation as a goal and
let automatic mechanisms (i.e., a planner) develop (part of) the navigation
process; and (ii) allowing the semantic description to predefine not only
complete navigation strategies (as in [16]) but also partial skeletons, making
the remaining part of the navigation dependent on run-time factors, e.g., the
user's goals, specific aspects of the table's content, and the user's run-time
decisions. Keywords: agents, domain specific language, semantics, navigation | |||
| Site-wide annotation: reconstructing existing pages to be accessible | | BIBAK | Full-Text | 81-88 | |
| Hironobu Takagi; Chieko Asakawa; Kentarou Fukuda; Junji Maeda | |||
| The Web has become a new information resource for the blind. However, Web
accessibility is becoming worse, since page authors tend to care only about
the visual appearance. We have developed an Accessibility Transcoding System to
solve this problem. This system has the ability to transcode complete pages on
annotated sites into totally accessible pages without changing the original
pages. However, site-wide annotation authoring is an extremely tedious and
time-consuming task. This prevented us from applying our transcoding system to
a wide variety of sites. In order to overcome this difficulty, we developed a
new algorithm, "Dynamic Annotation Matching". By utilizing this algorithm, our
transcoding system can automatically determine appropriate annotations based on
each page's layout. We also developed a site-wide annotation-authoring tool,
"Site Pattern Analyzer." We evaluated the feasibility of creating site-wide
annotations by using the algorithm and the tool, and report on our success
here. Keywords: Web Accessibility, annotation, layout-based annotation matching, transcoding | |||
| Using handhelds to help people with motor impairments | | BIBAK | Full-Text | 89-96 | |
| Brad A. Myers; Jacob O. Wobbrock; Sunny Yang; Brian Yeung; Jeffrey Nichols; Robert Miller | |||
| People with Muscular Dystrophy (MD) and certain other muscular and nervous
system disorders lose their gross motor control while retaining fine motor
control. The result is that they lose the ability to move their wrists and
arms, and therefore their ability to operate a mouse and keyboard. However,
they can often still use their fingers to control a pencil or stylus, and thus
can use a handheld computer such as a Palm. We have developed software that
allows the handheld to substitute for the mouse and keyboard of a PC, and
tested it with four people (ages 10, 12, 27 and 53) with MD. The 12-year-old
had lost the ability to use a mouse and keyboard, but with our software, he was
able to use the Palm to access email, the web and computer games. The
27-year-old reported that he found the Palm so much better that he was using it
full-time instead of a keyboard and mouse. The other two subjects said that our
software was much less tiring than using the conventional input devices, and
enabled them to use computers for longer periods. We report the results of
these case studies, and the adaptations made to our software for people with
disabilities. Keywords: Muscular Dystrophy, Palm pilot, Pebbles, Personal Digital Assistants (PDAs),
assistive technologies, disabilities, hand-held computers, handicapped | |||
| Ongoing investigation of the ways in which some of the problems encountered by some dyslexics can be alleviated using computer techniques | | BIBAK | Full-Text | 97-103 | |
| Anna Dickinson; Peter Gregor; Alan F. Newell | |||
| This paper describes the ongoing development of a highly configurable word
processing environment developed using a pragmatic, obstacle-by-obstacle
approach to alleviating some of the visual problems encountered by dyslexic
computer users. The paper describes the current version of the software and the
development methodology, as well as the results of a pilot study which indicated
that a visual environment individually configured using the SeeWord software
improved reading accuracy as well as subjectively rated reading comfort. Keywords: configuration, dyslexia, user-centred design, word processing | |||
| Virtual environments for social skills training: the importance of scaffolding in practice | | BIBAK | Full-Text | 104-110 | |
| Steven J. Kerr; Helen R. Neale; Sue V. G. Cobb | |||
| Virtual Environments (VEs) offer the potential for users to explore social
situations and 'try out' different behaviour responses for a variety of
simulated social interactions. One of the challenges for the VE developer is
how to construct the VE to allow freedom of exploration and flexibility in
interactive behaviour, without the risk of users deliberately or inadvertently
missing important learning goals. Scaffolding embedded within the VE software
can aid the user's learning in different contexts, such as individual, tutored
or group learning situations. This paper describes two single-user VE scenarios
that have been developed within the AS interactive project and presents
observation results from initial trials conducted at a user school. Keywords: Virtual Environments, autism, scaffolding of learning, social skills
training | |||
| Modeling educational software for people with disabilities: theory and practice | | BIBAK | Full-Text | 111-118 | |
| Nelson Baloian; Wolfram Luther; Jaime Sanchez | |||
| Interactive multimedia learning systems are often unsuitable for people with
disabilities, since they tend to offer interfaces that are not accessible to
learners with visual or auditory disabilities. Modeling techniques are
necessary to map real world experiences to virtual worlds by using 3D auditory
representations of objects for blind people and visual representations for deaf
people. In this paper we describe common aspects and differences in the process
of modeling the real world for applications involving tests and evaluations of
cognitive tasks with people with reduced visual or auditory cues. To validate
our concepts, we examine two existing systems using them as examples: AudioDoom
and Whisper. AudioDoom allows blind children to explore and interact with
virtual worlds created with spatial sound. Whisper implements a workplace to
help people with impaired auditory abilities to recognize speech errors. The
new common model considers not only the representation of the real world as
proposed by the system but also the modeling of the learner's knowledge about
the virtual world. This can be used by the tutoring system to enable the
learner to receive relevant feedback. Finally, we analyze the most important
characteristics in developing such systems by comparing and evaluating them, and
propose some recommendations and guidelines. Keywords: modeling methodologies, sensory disabilities, tutoring systems, user adapted
interfaces | |||
| Zooming interfaces!: enhancing the performance of eye controlled pointing devices | | BIBAK | Full-Text | 119-126 | |
| Richard Bates; Howell Istance | |||
| This paper quantifies the benefits and usability problems associated with
direct interaction via eye-based pointing on a standard graphical user interface.
It shows where and how, with the addition of a second supporting modality, the
typically poor performance and subjective assessment of eye-based pointing
devices can be improved to match the performance of other assistive technology
devices. It shows that target size is the overriding factor affecting device
performance and that when target sizes are artificially increased by 'zooming
in' on the interface under the control of a supporting modality, then eye-based
pointing becomes a viable and usable interaction methodology for people with
high-level motor disabilities. Keywords: assistive technology, eye-tracking, graphical user interfaces, pointing
devices, zoom screen | |||
| HaWCoS: the "hands-free" wheelchair control system | | BIBAK | Full-Text | 127-134 | |
| Torsten Felzer; Bernd Freisleben | |||
| A system that allows an electrically powered wheelchair to be controlled
without using the hands is introduced. HaWCoS -- the "Hands-free" Wheelchair
Control System -- relies upon muscle contractions as input signals. The working
principle is as follows. The constant stream of EMG signals associated with an
arbitrary muscle of the wheelchair driver is monitored and reduced to a stream
of contraction events. The reduced stream affects an internal program state
which is translated into appropriate commands understood by the wheelchair
electronics. The feasibility of the proposed approach is illustrated by a
prototypical implementation for a state-of-the-art wheelchair. Operating a
HaWCoS-wheelchair requires extremely little effort, which makes the system
suitable even for people suffering from very severe physical disabilities. Keywords: EMG signal, electrical wheelchair, muscle control | |||
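The abstract states the pipeline explicitly: EMG samples are reduced to contraction events, and the events drive an internal state that maps to wheelchair commands. A minimal sketch of that pipeline; the threshold, signal values, and command cycle are illustrative assumptions, not the actual HaWCoS parameters:

```python
# Sketch of the described pipeline: a raw EMG stream is thresholded into
# discrete contraction events, and each event advances an internal state
# that is translated into a wheelchair command. All values are invented.

def contraction_events(samples, threshold=0.6):
    """Reduce an EMG amplitude stream to one event per contraction onset."""
    active = False
    for t, amplitude in samples:
        if amplitude >= threshold and not active:
            active = True
            yield t
        elif amplitude < threshold:
            active = False

COMMANDS = ["stop", "forward", "left", "right"]   # hypothetical state cycle

def drive(samples):
    """Each contraction event advances the state; the state is the command."""
    state = 0
    for _ in contraction_events(samples):
        state = (state + 1) % len(COMMANDS)
        print("command:", COMMANDS[state])

# Two contractions in a toy signal: "forward", then "left".
drive([(0.0, 0.1), (0.1, 0.9), (0.2, 0.8), (0.3, 0.2),
       (0.4, 0.1), (0.5, 0.9), (0.6, 0.3)])
```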
| Cursor measures for motion-impaired computer users | | BIBAK | Full-Text | 135-142 | |
| Simeon Keates; Faustina Hwang; Patrick Langdon; P. John Clarkson; Peter Robinson | |||
| "Point and click" interactions remain one of the key features of graphical
user interfaces (GUIs). People with motion impairments, however, often have
difficulty with accurate control of standard pointing devices. This paper
discusses work that aims to reveal the nature of these difficulties through
analyses that consider the cursor's path of movement. A range of potential
cursor measures was applied, and a number of them were found to be significant
in capturing the differences between able-bodied users and motion-impaired
users, as well as the differences between a haptic force feedback condition and
a control condition. Cursor measures found in the literature, however, do not
make up a comprehensive list, but provide a starting point for analysing cursor
movements more completely. Six new cursor characteristics for motion-impaired
users are introduced to capture aspects of cursor movement different from those
already proposed. Keywords: cursor studies, force feedback, motion-impaired users | |||
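The six new characteristics are not listed in the abstract; two classic path-based measures of the same genre illustrate what such analyses compute, here path efficiency and a crude submovement proxy:

```python
# Two illustrative path-based cursor measures of the kind the paper analyses:
# path efficiency (straight-line distance over distance travelled) and a count
# of x-velocity sign changes as a rough proxy for submovements. These are
# examples of the genre, not the paper's specific new characteristics.
import math

def path_efficiency(points):
    """1.0 for a perfectly straight movement, lower for meandering paths."""
    travelled = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return direct / travelled if travelled else 1.0

def direction_changes(points):
    """Count sign changes in x-velocity along the cursor path."""
    dxs = [b[0] - a[0] for a, b in zip(points, points[1:])]
    return sum(1 for a, b in zip(dxs, dxs[1:]) if a * b < 0)

path = [(0, 0), (3, 1), (5, 0), (4, 2), (8, 3)]
print(path_efficiency(path), direction_changes(path))
```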
| An invisible keyguard | | BIBAK | Full-Text | 143-149 | |
| Shari Trewin | |||
| Overlap errors, in which two keys are pressed down at once, are a common
typing error for people with motor disabilities. Keyguards are a commonly
suggested means of reducing overlap errors. However, they are also unpopular
with many users. We present an alternative to the keyguard, a software filter
which targets overlap errors. Basic, keystroke timing-based, and language-based
techniques for identifying and correcting overlap errors are described. Their
performance is compared using a corpus of typing data recorded by keyboard
users with motor disabilities. The best filter performance was obtained by
using keystroke timing characteristics to identify and filter out extra characters.
Accuracy of error identification was dependent on the typing style of the user.
The filter accurately corrected 80% of the overlap errors presented. Combining
the identification and correction techniques gave a 50-75% reduction in errors
for the three study participants with the highest error rates. Keywords: OverlapKeys, accessibility, keyboard, keyguard, motor disabilities, typing
errors | |||
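The best-performing approach uses keystroke timing to identify and remove the extra character of an overlap. A sketch of one plausible timing heuristic: when two keys are down at once, drop the one held only briefly. The overlap test and hold-time cutoff below are assumptions, not Trewin's exact rules:

```python
# Timing-based overlap filter sketch: when a second key goes down before the
# first comes up, the key held for much less time is treated as the accidental
# extra press and filtered out. Thresholds and data are illustrative.

def filter_overlaps(keystrokes, min_hold=0.08):
    """keystrokes: list of (char, press_time, release_time), in press order."""
    kept = []
    for curr, nxt in zip(keystrokes, keystrokes[1:] + [None]):
        overlapped = nxt is not None and nxt[1] < curr[2]  # next down before release
        hold = curr[2] - curr[1]
        if overlapped and hold < min_hold:
            continue                    # drop the brief, overlapped keystroke
        kept.append(curr)
    return "".join(k[0] for k in kept)

# "cart" typed with an accidental overlapped "x" held for only 30 ms:
strokes = [("c", 0.00, 0.12), ("a", 0.20, 0.31), ("x", 0.30, 0.33),
           ("r", 0.32, 0.45), ("t", 0.55, 0.66)]
print(filter_overlaps(strokes))   # -> "cart"
```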
| Designing for dynamic diversity: interfaces for older people | | BIBAK | Full-Text | 151-156 | |
| Peter Gregor; Alan F. Newell; Mary Zajicek | |||
| In this paper, we describe why designers need to look beyond the twin aims
of designing for the 'typical' user and designing "prostheses". Making
accessible interfaces for older people is a unique but many-faceted challenge.
Effective application and interface design needs to address the dynamic
diversity of the human species. We introduce a new design paradigm, Design for
Dynamic Diversity, and suggest a methodology to assist its achievement, User
Sensitive Inclusive Design. To support our argument for a new form of design we
report experimentation, which indicates that older people have significantly
different and dynamically changing needs. We also put forward initial solutions
for Designing for Dynamic Diversity, where memory, vision and confidence
provide the parameters for discussion, and illustrate the importance of User
Sensitive Inclusive Design in establishing a framework for the operation of
Design for Dynamic Diversity. Keywords: Design for Dynamic Diversity, HCI, User Sensitive Inclusive Design, aging,
design for all, older people, universal accessibility, usability engineering | |||
| A novel multi-stage approach to the detection of visuo-spatial neglect based on the analysis of figure-copying tasks | | BIBAK | Full-Text | 157-161 | |
| R. M. Guest; M. C. Fairhurst | |||
| This paper examines a computer-based technique for the detection of
visuo-spatial neglect from responses to a simple geometric shape-copying
task. By defining pass/fail criteria based on the presence of drawn components,
responses can be accurately and objectively assessed. More importantly, we show
that by analysing novel dynamic performance features detailing timing and
constructional aspects of each response, significant performance deficits can
be noted in drawings made by clinically diagnosed neglect subjects that would
have been classified as 'normal' using conventional static analysis, thus
improving the sensitivity of the assessment. Keywords: automated diagnosis, drawing analysis, visuo-spatial neglect | |||
| Assistive social interaction for non-speaking people living in the community | | BIBAK | Full-Text | 162-169 | |
| Nick Hine; John L. Arnott | |||
| The move from institution to community care has resulted in more disabled
and elderly people receiving care at home. For some, their disability or
frailty prevents them from being involved in social activities outside the
home, resulting in unacceptable social isolation. This problem is compounded if
the person has a speech or language impairment. In general, social interaction
is important for people, and they often use stories, pictures and other media
to present important events to others. In this paper, we describe a
communication service designed to provide non-speaking people with a means to
interact socially when living independently, based on the sharing of stories
using pictures and other media. Keywords: Internet, assistive communication, community care, social isolation,
videoconferencing | |||
| Older adults' evaluations of speech output | | BIBAK | Full-Text | 170-177 | |
| Lorna Lines; Kate S. Hone | |||
| Speech output is frequently used to provide access to interactive systems
for visually impaired users, many of whom are older adults. This paper
considers the use of speech output within the context of an Intelligent Home
System designed to allow older adults to remain living independently for
longer. The importance of user evaluations of the system 'voice' in this
context is discussed and an experiment is reported that investigated the effect
of voice gender and type (natural or synthetic) on older users' evaluations. A
within-subjects factorial design was used with sixteen participants over the
age of 65. The results show that male voices were preferred to female voices
overall and natural voices were preferred to synthetic voices. The implications
of these results for the choice of system voice characteristics for speech
output are discussed. Keywords: assistive technology, human-computer interaction, intelligent home systems,
older adults, speech output, visual impairment | |||
| Speech-based cursor control | | BIBAK | Full-Text | 178-185 | |
| Azfar S. Karimullah; Andrew Sears | |||
| Speech recognition can be a powerful tool for individuals with physical
disabilities that hinder their ability to use traditional input devices.
State-of-the-art speech recognition systems typically provide mechanisms for
both data entry and cursor control, but researchers continue to investigate
methods of improving these interactions. Numerous researchers are investigating
methods to improve the underlying technologies that make speech recognition
possible and others focus on understanding the difficulties users experience
using dictation-oriented applications, but few researchers have investigated
the issues involved in speech-based cursor control. In this article, we
describe a study that investigates the efficacy of two variations of a standard
speech-based cursor control mechanism. One employs the standard mouse cursor
while the second provides a predictive cursor designed to help users compensate
for the delays often associated with speech recognition. As expected, larger
targets and shorter distances resulted in shorter target selection times while
larger targets also resulted in fewer errors. Although there were no
differences between the standard and predictive cursors, a relationship emerged
between the delays associated with spoken input, the speed at which the cursor
moves, and the minimum size of targets that can be reliably selected; this
relationship can guide the application of similar speech-based cursor control
mechanisms as well as future research. Keywords: cursor, mouse cursor, navigation, predictive, speech recognition | |||
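The relationship the study surfaces is easy to state: with recognition delay d and cursor speed v, a spoken "stop" takes effect roughly v * d pixels late, so that product bounds the minimum reliably selectable target size, and a predictive cursor simply displays the position the real cursor will have reached once the command registers. A sketch with illustrative values:

```python
# Predictive-cursor sketch: show where the moving cursor will be when a spoken
# command finally takes effect. Speed and delay values are illustrative, not
# measurements from the study.

def predicted_position(x, y, vx, vy, delay):
    """Position the cursor will reach once the spoken command is recognised."""
    return x + vx * delay, y + vy * delay

speed = 100          # cursor speed, px/s (assumed)
delay = 0.8          # speech recognition latency, s (assumed)
print("minimum reliable target size ~", speed * delay, "px")
print(predicted_position(200, 150, vx=speed, vy=0, delay=delay))
```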
| A predictive Blissymbolic to English translation system | | BIBAK | Full-Text | 186-191 | |
| Annalu Waller; Kris Jack | |||
| This paper reports on the use of predictive techniques to translate
Blissymbol sentences into grammatically correct English. Evaluations of this
approach show that it is possible to translate short sentences by analysing the
likelihood of word tri-gram occurrences in English source texts. The
translation system is designed to be a component of a Blissymbol word processor
which allows users to convert Blissymbol sentences into grammatically correct
English. Keywords: AAC, Blissymbolics, natural language translation | |||
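The abstract names the core mechanism: score candidate English realisations of a Blissymbol sentence by the likelihood of their word tri-grams in English source texts. A toy sketch of that scoring, with invented candidate sets and log-probabilities:

```python
# Tri-gram selection sketch: each Blissymbol maps to candidate English words,
# and the realisation whose word tri-grams are most likely in English text
# wins. Candidate sets and log-probabilities are invented toy values.
import itertools

LOGP = {("i", "want", "a"): -5.0, ("want", "a", "drink"): -4.5,
        ("me", "want", "a"): -10.0, ("wants", "a", "drink"): -8.0,
        ("i", "wants", "a"): -11.0}

def score(words, floor=-15.0):
    """Sum tri-gram log-probabilities over the candidate sentence."""
    return sum(LOGP.get(g, floor) for g in zip(words, words[1:], words[2:]))

# One candidate word set per Blissymbol in the input sentence:
candidates = [["i", "me"], ["want", "wants"], ["a"], ["drink"]]
best = max(itertools.product(*candidates), key=score)
print(" ".join(best))   # -> "i want a drink"
```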
| Speech recognition in university classrooms: liberated learning project | | BIBAK | Full-Text | 192-196 | |
| Keith Bain; Sara H. Basson; Mike Wald | |||
| The LIBERATED LEARNING PROJECT (LLP) is an applied research project studying
two core questions: 1) Can speech recognition (SR) technology successfully
digitize lectures to display spoken words as text in university classrooms? 2)
Can speech recognition technology be used successfully as an alternative to
traditional classroom notetaking for persons with disabilities? This paper
addresses these intriguing questions and explores the underlying complex
relationship between speech recognition technology, university educational
environments, and disability issues. Keywords: accessibility, higher education, speech recognition | |||
| Voice over Workplace (VoWP): voice navigation in a complex business GUI | | BIBAK | Full-Text | 197-204 | |
| Frankie James; Jeff Roelands | |||
| Voice interfaces can be used to meet some accessibility requirements for
physically disabled users, but only if they address inherent usability
problems, namely, the trade-off between user efficiency and ambiguity handling.
This paper explores usability issues related to voice interfaces for complex
GUIs. We present two user studies on a series of interface designs to support
voice navigation within a complex business GUI, and discuss the findings as
they relate to efficiency and ambiguity handling. We conclude by discussing
future directions for this work, including the addition of data input
capabilities, which will be necessary to provide a truly accessible solution. Keywords: GUI, accessibility, physical disabilities, user studies, voice interface | |||
| Tessa, a system to aid communication with deaf people | | BIBAK | Full-Text | 205-212 | |
| Stephen Cox; Michael Lincoln; Judy Tryggvason; Melanie Nakisa; Mark Wells; Marcus Tutt; Sanja Abbott | |||
| TESSA is an experimental system that aims to aid transactions between a deaf
person and a clerk in a Post Office by translating the clerk's speech to sign
language. A speech recogniser recognises speech from the clerk and the system
then synthesizes the appropriate sequence of signs in British Sign Language
(BSL) using a specially-developed avatar. By using a phrase lookup approach to
language translation, which is appropriate for the highly constrained discourse
in a Post Office, we were able to build a working system that we could
evaluate. We summarise the results of this evaluation (undertaken by deaf users
and Post Office clerks), and discuss how the findings from the evaluation are
being used in the development of an improved system. Keywords: Aids for the Deaf, avatars, interactive systems, speech recognition,
translation systems | |||
| Capturing phrases for ICU-Talk, a communication aid for intubated intensive care patients | | BIBAK | Full-Text | 213-217 | |
| S. Ashraf; A. Judson; I. W. Ricketts; A. Waller; N. Alm; B. Gordon; F. MacAulay; J. K. Brodie; M. Etchels; A. Warden; A. J. Shearer | |||
| The need for intubated patients, within the intensive care setting, to
communicate more effectively led to the development of ICU-Talk, an
augmentative and alternative communication aid. The communication aid includes
a database containing both core and patient-specific vocabulary. Many users of
communication aids can provide direct input into the vocabulary, but intensive
care patients are not in this position. This paper discusses the methods chosen
to gather the vocabulary for an intensive care setting. Keywords: AAC, ICU, communication, vocabulary | |||
| A new generation of communication aids under the ULYSSES component-based framework | | BIBAK | Full-Text | 218-225 | |
| Georgios Kouroupetroglou; Alexandros Pino | |||
| In this paper, we introduce a new generation of computer-based communication
aids, designed and developed using state of the art software engineering models
and architectures. The communicators we present are based on a component-based
framework called ULYSSES that aims to simplify the integration of multi-vendor
components into low-cost products and to maximize modularity and reusability.
Following the ULYSSES approach, one can build up powerful and reliable
applications, adaptable to various user needs and requirements. For developers
of AAC components, ULYSSES provides an engineering-for-reuse environment with
guidelines and tools to build software modules, which can operate effectively
and interact with each other transparently, without even being aware of each
other's existence. Furthermore, ULYSSES grants a process of
engineering-with-reuse for AAC system integrators for the selection and
assembly of components on demand to build user-specific robust communicators
out of pre-fabricated software parts. Thus, adding or removing characteristics
and features as needed becomes an easy task for AAC system integrators.
Three complete Interpersonal Communication Aids are presented as
cases of ULYSSES application in this specific domain. Keywords: Augmentative and Alternative Communication (AAC), communication aids,
communicators, component based development, framework architecture | |||
| ICU-Talk, a communication aid for intubated intensive care patients | | BIBAK | Full-Text | 226-230 | |
| F. MacAulay; A. Judson; M. Etchels; S. Ashraf; I. W. Ricketts; A. Waller; J. K. Brodie; N. Alm; A. Warden; A. J. Shearer; B. Gordon | |||
| A multi-disciplinary project staffed by personnel from nursing, computer
science, and speech and language therapy developed a computer-based
communication aid called ICU-Talk. This device has been designed specifically
for intubated patients in hospital intensive care units. The ICU-Talk device
was trialled with real patients. This paper reports the challenges faced when
developing a device for this patient group and environment. A description of
the methods used to produce ICU-Talk and results from the trials will be
presented. Keywords: AAC, HCI, ICU, communication, usability | |||