| Using information technology to assist people with disabilities | | BIBK | Full-Text | 1-2 | |
| Rory Cooper | |||
Keywords: assistive technology, human-machine interfaces, remote monitoring | |||
| Comparing evaluation techniques for text readability software for adults with intellectual disabilities | | BIBAK | Full-Text | 3-10 | |
| Matt Huenerfauth; Lijun Feng; Noémie Elhadad | |||
| In this paper, we compare alternative techniques for evaluating a software
system for simplifying the readability of texts for adults with mild
intellectual disabilities (ID). We introduce our research on the development of
software to automatically simplify news articles, display them, and read them
aloud for adults with ID. Using a Wizard-of-Oz prototype, we conducted
experiments with a group of adults with ID to test alternative formats of
questions to measure comprehension of the information in the news articles. We
have found that some forms of questions work well at measuring the difficulty
level of a text: multiple-choice questions with three answer choices, each
illustrated with clip-art or a photo. Some types of questions do a poor job:
yes/no questions and Likert-scale questions in which participants report their
perception of the text's difficulty level. Our findings inform the design of
future evaluation studies of computational linguistic software for adults with
ID; this study may also be of interest to researchers conducting usability
studies or other surveys with adults with ID.
Keywords: assistive technology, intellectual disabilities, natural language
processing, text comprehension, text readability assessment | |||
| Designing judicious interactions for cognitive assistance: the acts of assistance approach | | BIBAK | Full-Text | 11-18 | |
| Jérémy Bauchet; Hélène Pigot; Sylvain Giroux; Dany Lussier-Desrochers; Yves Lachapelle; Mounir Mokhtari | |||
| The completion of complex activities of daily living (ADL) like meal
preparation is a key concept for achieving autonomous living. Due to cognitive
impairments, some people need to be supported when performing ADL. In this
paper, we present a technological approach to guide people with cognitive
impairments to complete complex activities. The assistance is provided in the
form of pervasive human-machine interactions (HMI) that encourage the person
to take actions that resolve the difficulty identified by the system.
These HMI are called "acts of assistance". This approach was implemented in
Archipel, a cognitive orthosis developed at the DOMUS Laboratory, University of
Sherbrooke (Canada). An evaluation of the prototype was conducted around meal
preparation activities, involving 12 people with intellectual disabilities.
This study demonstrates that our approach is promising.
Keywords: activity monitoring, assistance generation, cognitive assistance, pervasive
HMI, speech-acts theory | |||
| Context-aware prompting to transition autonomously through vocational tasks for individuals with cognitive impairments | | BIBAK | Full-Text | 19-26 | |
| Yao-Jen Chang; Wan Chih Chang; Tsen-Yung Wang | |||
| A challenge to individuals with cognitive impairments in workplaces is how
to remain engaged, recall task routines, and transition autonomously across
tasks while relying on limited cognitive capacity. A novel task prompting
system is presented that aims to increase workplace and life independence for
people with traumatic brain injury, cerebral palsy, intellectual disability,
schizophrenia, and Down syndrome. This paper describes an approach to
providing distributed cognition support of work engagement for persons with
cognitive disabilities. The unique strength of the system is the ability to
provide unique-to-the-user prompts that are triggered by context. As this
population is very sensitive to issues of abstraction (e.g. icons) and
requires the designer to tailor prompts to a 'universe-of-one', the system uses
picture or verbal cues specific to each user and context. The
key to the approach is to spread the context awareness across the system, with
the context being flagged by beacon sources and the appropriate response being
evoked by displaying the appropriate task prompting cues indexed by the
intersection of specific end-user and context ID embedded in the beacons. By
separating the context trigger from the pictorial or verbal response, responses
can be updated independently of the rest of the installed system, and a single
beacon source can trigger multiple responses in the PDA depending on the
end-user and their specific tasks. A prototype is built and tested in field
experiments involving eight individuals with cognitive impairments. The
experimental results show the task load of the human-device interface is low or
very low and the capabilities of helping with task engagement are high and
reliable.
Keywords: cognitively impaired, social services, task prompting, ubiquitous computing | |||
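The abstract's core mechanism is a lookup: the prompt shown on the PDA is indexed by the intersection of a specific end-user and the context ID embedded in a beacon. A minimal sketch of that indexing scheme follows; all IDs and cue contents are hypothetical illustrations, not data from the paper:

```python
# Minimal sketch of context-aware prompt lookup: prompts are indexed by the
# intersection of a specific end-user and a context ID broadcast by a beacon.
# All IDs and cue contents below are hypothetical examples.
PROMPTS = {
    ("user_a", "beacon_workbench"): "picture:assemble_parts.png",
    ("user_a", "beacon_packing"):   "verbal:Place the box on the shelf.",
    ("user_b", "beacon_workbench"): "picture:sort_screws.png",
}

def prompt_for(user_id: str, beacon_id: str):
    """Return the cue for this user in this context, if one is defined.

    Because the trigger (beacon) is separated from the response (cue),
    one beacon can evoke different prompts for different users, and a
    cue can be updated without touching the installed beacons.
    """
    return PROMPTS.get((user_id, beacon_id))
```

This separation is why, as the abstract notes, responses can be updated independently of the installed system and a single beacon can trigger different prompts per user.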
| Customizing directions in an automated wayfinding system for individuals with cognitive impairment | | BIBAK | Full-Text | 27-34 | |
| Alan L. Liu; Harlan Hile; Gaetano Borriello; Pat A. Brown; Mark Harniss; Henry Kautz; Kurt Johnson | |||
| Individuals with cognitive impairments would prefer to live independently;
however, difficulties with wayfinding prevent many from fully living, working, and
participating in their community. Our research has focused on designing,
prototyping, and evaluating a mobile wayfinding system to aid such individuals.
Building on the feedback gathered from potential users, we have implemented the
system's automated direction selection functionality. Using a
decision-theoretic approach, we believe we can create a better wayfinding
experience that helps users reach their destination more intuitively than
traditional navigation systems. This paper describes the system and results
from a study using system-generated directions that inform us of key
customization factors that would provide improved wayfinding assistance for
individual users.
Keywords: Markov decision process, cognitive impairments, user interface, wayfinding | |||
| Usability of a multimodal videogame to improve navigation skills for blind children | | BIBAK | Full-Text | 35-42 | |
| Jaime Sánchez; Mauricio Sáenz; Miguel Ripoll | |||
| This work presents an evaluation study on the usability of a haptic device
and a sound-based videogame for the development and use of orientation and
mobility (O&M) skills in closed, unfamiliar spaces by blind, school-aged
children. A usability evaluation was conducted for a haptic device especially
designed for this project (Digital Clock Carpet) and a 3D videogame (MOVA3D),
in order to redesign and improve their usability, and to learn about users'
acceptance of, and satisfaction with, these products for O&M purposes. The
results show that both the haptic
device and the videogame are usable, accepted and pleasant regarding their use
by blind children, and that they are ready to be used in the following stage,
which will determine their impact on the development and use of O&M skills.
Keywords: blind children, haptic and audio interfaces, navigation, virtual
environments | |||
| Instant tactile-audio map: enabling access to digital maps for people with visual impairment | | BIBAK | Full-Text | 43-50 | |
| Zheshen Wang; Baoxin Li; Terri Hedgpeth; Teresa Haven | |||
| In this paper, we propose an automatic approach, complete with a prototype
system, for supporting instant access to maps for local navigation by people
with visual impairment. The approach first detects and segments texts from a
map image and recreates the remaining graphical parts in a tactile form which
can be reproduced immediately through a tactile printer. Then, it generates an
SVG (Scalable Vector Graphics) file, which integrates both text and graphical
information. The tactile hardcopy and the SVG file together are used to provide
a user with interactive access to the map image through a touchpad, resulting
in a tactile-audio representation of the original input image. This supports
real-time access to the map without tedious conversion by a sighted
professional. Evaluations with six users who are blind show that the created
tactile-audio maps from our prototype system convey the most important map
information and are deemed potentially useful for local navigation.
Keywords: accessibility, multi-modal system, tactile map, visual impairment | |||
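The pipeline described above ends by merging the detected text and the recreated graphics into a single SVG file. A minimal sketch of that assembly step is below; the coordinates, labels, and element choices are hypothetical illustrations, not the paper's implementation:

```python
# Minimal sketch of the final assembly step: combining segmented text labels
# and recreated graphical parts into one SVG file. The graphics are rendered
# as <path> elements (printed in tactile form), while recovered text is kept
# as <text> elements (available for audio output via the touchpad).

def make_map_svg(paths, labels, width=400, height=300):
    """Build an SVG string holding both graphical and textual map content."""
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}" height="{height}">']
    for d in paths:                  # graphical parts, e.g. corridor outlines
        parts.append(f'<path d="{d}" fill="none" stroke="black"/>')
    for x, y, text in labels:        # text detected in the original map image
        parts.append(f'<text x="{x}" y="{y}">{text}</text>')
    parts.append('</svg>')
    return "\n".join(parts)

svg = make_map_svg(paths=["M 10 10 L 390 10"],
                   labels=[(20, 40, "Room 101")])
```

Keeping the two layers in one file is what lets the tactile hardcopy and the touchpad-driven audio stay registered to the same coordinates.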
| Rock Vibe: Rock Band® computer games for people with no or limited vision | | BIBAK | Full-Text | 51-58 | |
| Troy Allman; Rupinder K. Dhillon; Molly A. E. Landau; Sri H. Kurniawan | |||
| This paper reports on Rock Vibe, a modification of the Rock Band®
computer game that represents visual information using haptic and audio
feedback to allow people with no or limited vision to enjoy the game. We
modified the drumming activity of Rock Band® by providing users with
vibrations on the upper and lower arms to represent the drumhead cues and on
the ankle to represent the kick drum cue. Auditory information is used to
provide feedback on correct and timely hits (with various drumming sounds) or
errors (with a click sound). The computer's standard speech synthesizer is
used to read the menus, song titles, instructions, and scores. A series of
evaluations with people with various levels
of visual impairment were performed at different stages of the system
development. We found that users were able to master the system almost
immediately, with some users making no errors halfway through the first song.
Keywords: blindness, games, haptic, visual impairment | |||
| TextSL: a command-based virtual world interface for the visually impaired | | BIBAK | Full-Text | 59-66 | |
| Eelke Folmer; Bei Yuan; Dave Carr; Manjari Sapre | |||
| The immersive graphics, large amount of user-generated content, and social
interaction opportunities offered by popular virtual worlds, such as Second
Life, could eventually make for a more interactive and informative World Wide
Web. Unfortunately, virtual worlds are currently not accessible to users who
are visually impaired. This paper presents the work on developing TextSL, a
client for Second Life that can be accessed with a screen reader. Users
interact with TextSL through a command-based interface, which allows them to
perform a wide range of actions on large numbers of objects and
avatars, the characterizing features of such virtual worlds. User studies confirm
that a command-based interface is a feasible approach to making virtual
worlds accessible, as it allows screen reader users to explore Second Life,
communicate with other avatars, and interact with objects as sighted
users can. Command-based exploration and object interaction are significantly
slower, but communication can be performed with the same efficiency as in the
Second Life viewer. We further identify that at least 31% of the objects in
Second Life lack a descriptive name, which is a significant barrier towards
making virtual worlds accessible to users who are visually impaired.
Keywords: games, screen reader, virtual worlds, visual impairments | |||
| ClassInFocus: enabling improved visual attention strategies for deaf and hard of hearing students | | BIBAK | Full-Text | 67-74 | |
| Anna C. Cavender; Jeffrey P. Bigham; Richard E. Ladner | |||
| Deaf and hard of hearing students must juggle their visual attention in
current classroom settings. Managing many visual sources of information
(instructor, interpreter or captions, slides or whiteboard, classmates, and
personal notes) can be a challenge. ClassInFocus automatically notifies
students of classroom changes, such as slide changes or new speakers, helping
them employ more beneficial observing strategies. A user study of notification
techniques shows that students who liked the notifications were more likely to
visually utilize them to improve performance.
Keywords: classroom technology, deaf and hard of hearing users, multimedia
conferencing technology | |||
| Spatial and temporal pyramids for grammatical expression recognition of American sign language | | BIBAK | Full-Text | 75-82 | |
| Nicholas Michael; Dimitris Metaxas; Carol Neidle | |||
| Given that sign language is used as a primary means of communication by as
many as two million deaf individuals in the U.S. and as augmentative
communication by hearing individuals with a variety of disabilities, the
development of robust, real-time sign language recognition technologies would
be a major step forward in making computers equally accessible to everyone.
However, most research in the field of sign language recognition has focused on
the manual component of signs, despite the fact that there is critical
grammatical information expressed through facial expressions and head gestures.
We propose a novel framework for robust tracking and analysis of facial
expressions and head gestures, with an application to sign language
recognition. We then apply it to the recognition, with excellent accuracy
(≥95%), of two classes of grammatical expressions, namely wh-questions and
negative expressions. Our method is signer-independent and builds on the
popular "bag-of-words" model, utilizing spatial pyramids to model facial
appearance and temporal pyramids to represent patterns of head pose changes.
Keywords: expression recognition, face tracking, head pose estimation, kernel
codebooks, pyramid match kernel, sign language recognition, soft quantization,
spatio-temporal pyramids | |||
| Accessible motion-capture glove calibration protocol for recording sign language data from deaf subjects | | BIBAK | Full-Text | 83-90 | |
| Pengfei Lu; Matt Huenerfauth | |||
| Motion-capture recordings of sign language are used in research on automatic
recognition of sign language or generation of sign language animations, which
have accessibility applications for deaf users with low levels of
written-language literacy. Motion-capture gloves are used to record the
wearer's handshape. Unfortunately, these gloves require a time-consuming and
inexact manual calibration process each time they are worn. This paper
describes the design and evaluation of a new calibration protocol for
motion-capture gloves, which is designed to make the process more efficient and
to be accessible for participants who are deaf and use American Sign Language
(ASL). The protocol was evaluated experimentally; deaf ASL signers wore the
gloves, were calibrated (using the new protocol and using a calibration routine
provided by the glove manufacturer), and were asked to perform sequences of ASL
handshapes. A native ASL signer rated the correctness and understandability of
the collected handshape data. The new protocol received significantly higher
scores than the standard calibration. The protocol has been made freely
available online, and it includes directions for the researcher, images and
videos of how participants move their hands during the process, and directions
for participants (as ASL videos and English text).
Keywords: CyberGlove®, accessibility technology for people who are deaf, american sign
language, animation, calibration, motion-capture glove | |||
| The one-key challenge: searching for a fast one-key text entry method | | BIBAK | Full-Text | 91-98 | |
| I. Scott MacKenzie | |||
| A new one-key text entry method is presented. SAK, for "scanning ambiguous
keyboard", combines one-key physical input (including error correction) with
three virtual letter keys and a SPACE key. The virtual letter keys are
highlighted in sequence ("scanned") and selected when the key bearing the
desired letter receives focus. There is only one selection per letter.
Selecting SPACE transfers scanning to a word-selection region, which presents a
list of candidate words. A novel feature of SAK is multiple-letter-selection in
a single scanning interval. In an evaluation with 12 participants, average
entry speeds reached 5.11 wpm (all trials, 99% accuracy) or 7.03 wpm
(error-free trials). A modification using "timer restart on selection" allowed
for more time and more selections per scanning interval. One participant
performed extended trials (5 blocks x 5 phrases/block) with the modification
and reached an average entry speed of 9.28 wpm.
Keywords: ambiguous keyboards, assistive technologies, keyboards, mobile computing,
scanning keyboards, text entry | |||
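The entry speeds quoted above (5.11, 7.03, and 9.28 wpm) follow the standard text-entry convention of counting one "word" as five characters, spaces included. A small helper showing that computation, with an illustrative phrase and timing rather than data from the paper:

```python
# Entry speed in text-entry studies is conventionally computed as
# words per minute (wpm), where one "word" is defined as five
# characters, including spaces. Example numbers are illustrative.

def words_per_minute(text: str, seconds: float) -> float:
    """wpm = (characters / 5) / minutes."""
    return (len(text) / 5) / (seconds / 60)

# A 25-character phrase entered in 60 seconds gives 5.0 wpm.
speed = words_per_minute("the quick brown fox jumps", 60)
```

Under this convention, a session's accuracy is reported separately (e.g. the 99% figure above), since wpm alone says nothing about errors.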
| NavTap: a long term study with excluded blind users | | BIBAK | Full-Text | 99-106 | |
| Tiago Guerreiro; Hugo Nicolau; Joaquim Jorge; Daniel Gonçalves | |||
| NavTap is a navigational method that enables blind users to input text in a
mobile device by reducing the associated cognitive load.
In this paper, we present studies that go beyond a laboratory setting,
exploring the method's effectiveness and learnability as well as its influence
on the users' daily lives. Eight blind users participated in designing the
prototype (3 weeks), while five took part in the studies over 16 more weeks.
Results gathered in controlled weekly sessions and real-life usage logs
enabled us to better understand NavTap's advantages and limitations. The
method proved easy both to learn and to improve at. Indeed, from day one,
users were able to control their mobile devices in real-life settings to send
SMS messages and to perform other tasks that require text input, such as
managing a phonebook. While individual user profiles play an important role in
determining their evolution, even less capable users (those with age-induced
impairments or cognitive difficulties) were able to perform the assigned tasks
(SMS, directory) both in the laboratory and in everyday use, showing
continuous improvement in their skills. According to interviews, none were
able to input text before. NavTap dramatically changed their relationship with
mobile devices and noticeably improved their social interaction capabilities.
Keywords: blind, evaluation, mobile accessibility, text-entry | |||
| Haptic handheld wayfinder with pseudo-attraction force for pedestrians with visual impairments | | BIBAK | Full-Text | 107-114 | |
| Tomohiro Amemiya; Hisashi Sugiyama | |||
| When visually impaired pedestrians walk from one place to another by
themselves, they must update their orientation and position to find their way
and avoid obstacles and hazards. We present the design of a new haptic
direction indicator, whose purpose is to help blind pedestrians travel a path
and avoid hazards intuitively and safely by means of haptic navigation. The
haptic direction indicator uses a novel kinesthetic perception method called
the "pseudo-attraction force" technique, which exploits the nonlinear
relationship between perceived and physical acceleration to generate a force
sensation. In an experiment performed to evaluate the haptic direction
indicator, we found that visually impaired users could safely walk along a
predefined route at their usual walking pace, independent of the existence of
auditory information. These results demonstrate the utility and usability of
the haptic direction indicator, but there is still room for improvement.
Keywords: asymmetric oscillation, maze task, wayfinding | |||
| Freedom to roam: a study of mobile device adoption and accessibility for people with visual and motor disabilities | | BIBAK | Full-Text | 115-122 | |
| Shaun K. Kane; Chandrika Jayant; Jacob O. Wobbrock; Richard E. Ladner | |||
| Mobile devices provide people with disabilities new opportunities to act
independently in the world. However, these empowering devices have their own
accessibility challenges. We present a formative study that examines how people
with visual and motor disabilities select, adapt, and use mobile devices in
their daily lives. We interviewed 20 participants with visual and motor
disabilities and asked about their current use of mobile devices, including how
they select them, how they use them while away from home, and how they adapt to
accessibility challenges when on the go. Following the interviews, 19
participants completed a diary study in which they recorded their experiences
using mobile devices for one week. Our results show that people with visual and
motor disabilities use a variety of strategies to adapt inaccessible mobile
devices and successfully use them to perform everyday tasks and navigate
independently. We provide guidelines for more accessible and empowering mobile
device design.
Keywords: accessibility, blindness, diary study, low vision, mobile devices, mobile
phones, motor impairment | |||
| Enriching web information scent for blind users | | BIBAK | Full-Text | 123-130 | |
| Markel Vigo; Barbara Leporini; Fabio Paternò | |||
| Link annotation with the accessibility level of the target Web page is an
adaptive navigation support technique aimed at increasing blind users'
orientation in Web sites. In this work, the accessibility level of a page is
measured by exploiting data from evaluation reports produced by two automatic
assessment tools. These tools support evaluation of accessibility and usability
guideline-sets. As a result, links are annotated with a score that indicates
the conformance of the target Web page to blind user accessibility and
usability guidelines. A user test with 16 users was conducted in order to
observe the strategies they followed when links were annotated with these
scores. With annotated links, the navigation paradigm changed from sequential
reading to randomly browsing the subset of links with high scores. Although
there was no general agreement on the correspondence between the scores and
users' perception of accessibility, users found the annotations helpful when
browsing through links related to a given topic.
Keywords: adaptive navigation, blind users, information scent, web accessibility | |||
| Validity and reliability of web accessibility guidelines | | BIBAK | Full-Text | 131-138 | |
| Giorgio Brajnik | |||
| Although widely used, Web Content Accessibility Guidelines (WCAG) have not
been studied from the viewpoint of their validity and reliability. WCAG 2.0
explicitly claim that they are based on "testable" criteria, but no scientific
evidence exists that this is actually the case. Validity (how well all and only
the true problems can be identified) and reliability (the extent to which
different evaluations of the same page lead to same results) are key factors
for quality of accessibility evaluation methods. They need to be well studied
and understood for methods, and guidelines, that are expected to have a major
impact.
This paper presents an experiment aimed at finding out the validity and
reliability of different checkpoints taken from WCAG 1.0 and WCAG 2.0. The
experiment employed 35 young web developers with some knowledge of web
accessibility. Although this is a small-scale experiment, unlikely to provide
definite and general answers, the results unequivocally show that, for the
kind of evaluators chosen in the experiment, checkpoints in general fare very
low in terms of reliability, and that from this perspective WCAG 2.0 are not
an improvement over WCAG 1.0.
Keywords: accessibility evaluation, web accessibility guidelines | |||
| Evaluating prosodic cues as a means to disambiguate algebraic expressions: an empirical study | | BIBAK | Full-Text | 139-146 | |
| Ed Gellenbeck; Andreas Stefik | |||
| The automatic translation of written mathematical expressions to their
spoken equivalent is a difficult task. Written mathematics makes use of
specialized symbols and a 2-dimensional layout that is hard to translate into
clear and unambiguous spoken words. Our approach is to use prosody to help
listeners follow along to mathematical expressions spoken aloud with
text-to-speech synthesized voices. To achieve this, we developed and
empirically tested XSL transformation rules that automatically translate
mathematical expressions marked-up with Presentation MathML into corresponding
markup using the Speech Synthesis Markup Language (SSML). In this paper, we
report on the results from an empirical study we conducted that showed that the
simple insertion of pauses inside spoken mathematical expressions dramatically
improved subjects' ability to disambiguate between two similar algebraic
expressions. Results from our study should benefit designers of screen readers
and related audio-based tools that produce spoken renderings of mathematical
expressions.
Keywords: DAISY, MathML, SSML, accessibility, synthetic speech | |||
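SSML expresses pauses with the standard `<break>` element, and the abstract's key finding is that placing such pauses differently disambiguates otherwise identical spoken expressions. The sketch below illustrates the idea; the pause duration and the token scheme are hypothetical, not the paper's transformation rules:

```python
# Illustrative sketch: inserting SSML <break> pauses so that two algebraic
# expressions that would read identically without prosody become
# distinguishable. The 400 ms duration and the '|' pause marker are
# hypothetical choices, not taken from the paper's XSL rules.

def ssml_with_pauses(tokens):
    """Join spoken tokens into SSML, turning the marker '|' into a pause."""
    parts = []
    for tok in tokens:
        parts.append('<break time="400ms"/>' if tok == "|" else tok)
    return "<speak>" + " ".join(parts) + "</speak>"

# "(a + b) / c": pause after the grouped numerator
grouped = ssml_with_pauses(["a", "plus", "b", "|", "over", "c"])
# "a + b / c": pause before the fraction instead
flat = ssml_with_pauses(["a", "plus", "|", "b", "over", "c"])
```

The two renderings differ only in where the pause falls, which is exactly the cue listeners in the study used to tell the expressions apart.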
| Making Microsoft Excel: multimodal presentation of charts | | BIBAK | Full-Text | 147-154 | |
| Iyad Abu Doush; Enrico Pontelli; Dominic Simon; Tran Cao Son; Ou Ma | |||
| Several solutions, based on aural and haptic feedback, have been developed
to enable access to complex on-line information for people with visual
impairments. Nevertheless, there are several components of widely used software
applications that are still beyond the reach of screen readers and Braille
displays.
This paper investigates the non-visual accessibility issues associated with
the graphing component of Microsoft Excel™. The goal is to provide flexible
multi-modal navigation schemes which can help visually impaired users
comprehend Excel charts. The methodology identifies the need for three
interaction strategies: exploratory, guided, and summarization. Switching
between them supports the development of a mental model of a chart. Aural
cues and commentaries are integrated into a haptic presentation to aid
understanding of the presented chart. The methodology has been implemented
using the Novint Falcon haptic device.
Keywords: accessible graphs, assistive technology, haptic | |||
| Including accessibility within and beyond undergraduate computing courses | | BIBAK | Full-Text | 155-162 | |
| Annalu Waller; Vicki L. Hanson; David Sloan | |||
| This paper presents a unique approach to undergraduate teaching in which
accessibility topics are completely integrated throughout the curriculum,
treating accessibility not as a separate topic, but rather as an integral part
of design and development. Means of accomplishing this integration throughout
the entire four-year curriculum are presented. We also describe how our
expertise in accessible design has extended beyond the education of computer
science and engineering students to include Web content authors across campus.
Keywords: accessibility, disability, higher education, inclusion, older adults | |||
| Speaking through pictures: images vs. icons | | BIBAK | Full-Text | 163-170 | |
| Xiaojuan Ma; Jordan Boyd-Graber; Sonya Nikolova; Perry R. Cook | |||
| People with aphasia, a condition that impairs the ability to understand or
generate written or spoken language, are aided by assistive technology that
helps them communicate through a vocabulary of icons. These systems are akin to
language translation systems, translating icon arrangements into spoken or
written language and vice versa. However, these icon-based systems have little
vocabulary breadth or depth, making it difficult for people with aphasia to
apply them in varied real-world situations. Pictures from the web are
numerous, varied, and easily accessible, and thus could potentially address
the small vocabulary size of icon-based systems. We present results from two studies
that investigate this potential and demonstrate that images can be as effective
as icons when used as a replacement for English language communication. The
first study uses elderly subjects to investigate the efficacy of images vs.
icons in conveying word meaning; the second study examines the retention of
word-level meaning for both images and icons with a population of people with
aphasia. We conclude that images collected from the web are as functional as
icons in conveying information and thus are feasible to use in assistive
technology that supports people with aphasia.
Keywords: aphasia, computerized visual communication (C-VIC), visual communication
(VIC) | |||
| Better vocabularies for assistive communication aids: connecting terms using semantic networks and untrained annotators | | BIBAK | Full-Text | 171-178 | |
| Sonya Nikolova; Jordan Boyd-Graber; Christiane Fellbaum; Perry Cook | |||
| The difficulties of navigating vocabulary in an assistive communication
device are exacerbated for individuals with lexical access disorders like those
due to aphasia. We present the design and implementation of a vocabulary
network based on WordNet, a resource that attempts to model human semantic
memory, that enables users to find words easily. To correct for the sparsity of
links among words, we augment WordNet with additional connections derived from
human judgments of semantic similarity collected in an online experiment. We
evaluate the resulting system, the visual vocabulary for aphasia (ViVA), and
describe its potential to adapt to a user's profile and enable faster search
and improved navigation.
Keywords: adaptive tools, aphasia, assistive communication, semantic networks, visual
vocabularies | |||
| Let's stay in touch: sharing photos for restoring social connectedness between rehabilitants, friends and family | | BIBAK | Full-Text | 179-186 | |
| Margit Biemans; Betsy van Dijk; Pavan Dadlani; Aart van Halteren | |||
| A case study on the use of an existing photo sharing application in a spinal
cord lesion rehabilitation centre is presented. The study focuses on enhancing
social connectedness through sharing photos between rehabilitants and their
families and friends. Four rehabilitants participated in this study for 6-7
weeks. Most photos sent related to sharing things in everyday life and keeping
the rehabilitant informed about regular events. The combination of interviews
and content analysis reveals that only a minority of the photos led to
follow-up communication about their contents. Rehabilitants were
positively surprised by how spontaneous photo sharing simplified the way they
could reconnect with their friends and family, without the immediate need or
obligation to engage in a (phone) conversation.
Keywords: photo sharing, rehabilitation, social connectedness | |||
| Talking points: the differential impact of real-time computer generated audio/visual feedback on speech-like & non-speech-like vocalizations in low functioning children with ASD | | BIBAK | Full-Text | 187-194 | |
| Joshua Hailpern; Karrie Karahalios; Laura DeThorne; James Halle | |||
| Real-time computer feedback systems (CFS) have been shown to impact the
communication of neurologically typical individuals. Promising new research
appears to suggest the same for the vocalization of low functioning children
with Autistic Spectrum Disorder (ASD). The distinction between speech-like
versus non-speech-like vocalizations has rarely, if ever, been addressed in the
HCI community. This distinction is critical as we strive to most effectively
and efficiently facilitate speech development in children with ASD, while
simultaneously helping decrease vocalizations that do not facilitate positive
social interactions. This paper provides an extension of Hailpern et al.
(2009) by examining the influence of a computerized feedback system on both the
speech-like and non-speech-like vocalizations of five nonverbal children with
ASD. Results were largely positive, in that some form of computerized feedback
was able to differentially facilitate speech-like vocalizations relative to
nonspeech-like vocalizations in 4 of the 5 children. The main contribution of
this work is in highlighting the importance of distinguishing between
speech-like versus nonspeech-like vocalizations in the design of feedback
systems focused on facilitating speech in similar populations.
Keywords: accessibility, autism, children, feedback, speech, visualization,
vocalization | |||
| Collaborative web accessibility improvement: challenges and possibilities | | BIBAK | Full-Text | 195-202 | |
| Hironobu Takagi; Shinya Kawanaka; Masatomo Kobayashi; Daisuke Sato; Chieko Asakawa | |||
| Collaborative accessibility improvement has great potential to make the Web
more adaptive in a timely manner by inviting users into the improvement
process. The Social Accessibility Project is an experimental service for a new
needs-driven improvement model based on collaborative metadata authoring
technologies. In 10 months, about 18,000 pieces of metadata were created for
2,930 webpages through collaboration. We encountered many challenges as we
sought to create a new mainstream approach. The productivity of the volunteer
activities exceeded our expectations, but we identified a large and important
problem: screen reader users' lack of awareness of their own accessibility
problems. In this paper, we first introduce examples, analyze some statistics
from the pilot service and then discuss our findings and challenges. Three
future directions including site-wide authoring are considered. Keywords: accessibility, collaboration, metadata, social computing, web | |||
| How much does expertise matter?: a barrier walkthrough study with experts and non-experts | | BIBAK | Full-Text | 203-210 | |
| Yeliz Yesilada; Giorgio Brajnik; Simon Harper | |||
| Manual accessibility evaluation plays an important role in validating the
accessibility of Web pages. This role has become increasingly critical with the
advent of the Web Content Accessibility Guidelines (WCAG) 2.0 and their
reliance on user evaluation to validate certain conformance measures. However,
the role of expertise, in such evaluations, is unknown and has not previously
been studied. This paper sets out to investigate the interplay between expert
and non-expert evaluation by conducting a Barrier Walkthrough (BW) study with
19 expert and 51 non-expert judges. The BW method provides an evaluation
framework that can be used to manually assess the accessibility of Web pages
for different user groups, including people who are motor impaired, hearing
impaired, low vision, or cognitively impaired. We conclude that the level of expertise is an
important factor in the quality of accessibility evaluation of Web pages.
Expert judges spent significantly less time than non-experts; rated themselves
as more productive and confident than non-experts; and ranked and rated pages
differently against each type of disability. Finally, both effectiveness and
reliability of the expert judges are significantly higher than non-expert
judges. Keywords: evaluation, expertise, guideline, web accessibility | |||
| 3D sound for human-computer interaction: regions with different limitations in elevation localization | | BIBAK | Full-Text | 211-212 | |
| Armando Barreto; Kenneth John Faller; Malek Adjouadi | |||
| Spatialized ("3D") audio may be useful as an alternative sensory channel to
provide blind individuals with relevant spatial information in the physical
world or while interacting with computers. However, an important limitation of
this approach is the lower spatial resolution achievable through sound
localization. While this limitation is widely acknowledged, few empirical
evaluations of the sound localization achievable through audio spatialization
techniques have been performed, particularly with respect to elevation
localization. We performed such an empirical study and found quantitative
confirmation that the localization accuracy deteriorates as the virtual sound
position is set farther above or below the ear (height) level. This information
may be valuable to HCI designers planning to use 3D sound. Keywords: 3D sound, HRIR, HRTF, elevation | |||
| 3DScan: an environment control system supporting persons with severe motor impairments | | BIBAK | Full-Text | 213-214 | |
| Torsten Felzer; Stephan Rinderknecht | |||
| This poster presents a scanning-based software
system that aims to give persons with severe motor impairments a means to
interact with their immediate environment by issuing tiny contractions of an
arbitrary muscle. The implementation makes use of an extension of row-column
scanning, called three-dimensional scanning, which reduces the time required to
select a certain item by eliminating the need to scan long rows or columns. A
simple experiment comparing the resulting text entry capabilities to
conventional row-column scanning shows that the entry rate can be increased by
over 30%. Keywords: activities of daily life, environment control, human-computer interaction,
scanning | |||
| A cheap, portable haptic device for a method to relay 2-D texture-enriched graphical information to individuals who are visually impaired | | BIBAK | Full-Text | 215-216 | |
| David S. Burch; Dianne T. V. Pawluk | |||
| This paper considers the development of a haptic device needed for a method
of relaying 2-D texture enriched graphical information. The focus here is on
the conversion of the optical, color-based representation of the formatted
diagram, which is efficient for storage and distribution, to its haptic texture
enriched form. For this, the device has two main components, an RGB color
sensor and a piezoelectric actuator, mounted in a casing that is wrapped around
the finger. The resulting device has a spatial sensitivity (<2mm) comparable to
natural touch (1.0mm), a temporal frequency bandwidth of 1 to 200 Hz, and can
be used to output a variety of textures. The point contact device developed can
also easily be expanded to the use of multiple devices on multiple fingerpads,
while remaining affordable (<100) and portable (<500g). Keywords: haptic rendering, tactile devices and display, texture rendering | |||
| A reconfigurable augmentative communication device | | BIBAK | Full-Text | 217-218 | |
| Kris Schindler; Robert Dygert; Eric Nagler | |||
| In this demonstration we present a highly reconfigurable augmentative and
alternative communication (AAC) device. The people in need of these devices
often have a wide range of disabilities, and the setup required to start a
person using an AAC system is often time consuming and labor intensive. Our
device streamlines the setup process, allowing caregivers to quickly configure
the interface to best suit the user's needs. Setup times compared to previous
AAC devices are reduced from hours to minutes. This allows the caregiver to
maximize the effectiveness of the device for the end user. It also makes it
possible for the device to be adapted over time as the user's needs change. Keywords: assistive technology, augmentative communication, universal design | |||
| Accessibility: understanding attitudes of CS students | | BIBAK | Full-Text | 219-220 | |
| G. M. Poor; Laura M. Leventhal; Julie Barnes; Duke R. Hutchings | |||
| Accessibility and usability have become increasingly important in design and
development of technology. This poster briefly reviews how accessibility
concepts may be included in computer science courses as students are educated
to become practitioners. In a usability engineering course, the authors
included a group development project that included an accessibility component.
They conducted a survey of student attitudes toward these issues at the start
and end of the course. Results of the survey indicate that students' awareness
of issues related to usability and accessibility increased after taking the
course and completing the project. In particular, students showed a significant
increase in their rating of importance for the item "broadening the range of
technology users". The authors also performed a factor analysis of the survey
responses and determined that items fell into three factors, one of which was
concerned with accessibility and usability. Keywords: accessibility, computer science education, usability | |||
| Accessible video description on-demand | | BIBAK | Full-Text | 221-222 | |
| Claude Chapdelaine; Langis Gagnon | |||
| Providing blind and visually impaired people with the descriptions of key
visual elements can greatly improve the accessibility of video, film and
television. This project presents a Website platform for rendering
video description (VD) using an adapted player. Our goal is to test the
usability of an accessible player that provides end-users with various levels
of VD, on-demand. This paper summarizes the user evaluations covering 1) the
usability of the player and its controls, and 2) the quality and quantity of
the VD selected. The complete results of these evaluations, including the
accessibility of the Website, will be presented in the poster. Final results
show that 90% of the participants agreed on the relevancy of a multi-level VD
player. All of them rated the player easy to use. Some improvements were also
identified. We found that there is a great need to provide blind and visually
impaired people with more flexible tools to access rich media content. Keywords: audio description, blind and visual impairment, rich media, web
accessibility | |||
| An improved, low-cost tactile 'mouse' for use by individuals who are blind and visually impaired | | BIBAK | Full-Text | 223-224 | |
| Justin M. Owen; Julie A. Petro; Steve M. D'Souza; Ravi Rastogi; Dianne T. V. Pawluk | |||
| Although tactile mice, such as the VT Player by virTouch, have been
developed to enable access to 2-D graphical information by individuals who are
blind and visually impaired, they have yet to be widely adopted by the
community. We suggest that this is due to the significant lack of accuracy in
the haptic position information, which is critical for individuals to
haptically piece together a 2-D graphic. In addition, the VT Player suffers
from a noticeable lack of spatial and temporal concordance between the
kinesthetic and tactile information. In this paper, we present a low-cost
(<US$400) alternative that avoids these problems. Furthermore, the dynamic response
of the pins of our improved mouse can range from 0 to < 300Hz. This will
facilitate the use of vibration and texture, which our preliminary results show
improves the saliency of graphical information. Keywords: VT player, braille, embossed graphics, haptic mouse, raised line drawings,
tactile graphics, visually impaired | |||
| Approaches to locating areas of interest related to questions in a document for non-visual readers | | BIBAK | Full-Text | 225-226 | |
| Debra Yarrington; Kathleen McCoy | |||
| This poster describes an approach to creating a tool to aid blind,
low-vision, dyslexic, and other non-visual readers in skimming a document to
answer questions. The goal is to give them information similar to that obtained
by a visual reader's skimming experience. Towards this goal, we examine
approaches to locating areas of interest within a document that are related to
a question. The areas of interest in relation to a question were determined
through user studies in which visual readers skimmed through documents for
answers to questions while being tracked by an eye tracking system. We then
examined methods of automatically identifying the areas of interest using
keywords related to the content of the question. This poster focuses on the
effectiveness of these different methods in identifying the areas of interest. Keywords: assistive technology, natural language processing, open domain question
answering, text skimming | |||
| Assistive device for the blind based on object recognition: an application to identify currency bills | | BIBAK | Full-Text | 227-228 | |
| Rémi Parlouar; Florian Dramas; Marc M-J Macé; Christophe Jouffrais | |||
| We have developed a real-time portable object recognition system based on
bio-inspired image analysis software to increase the autonomy of blind people
by localizing and identifying surrounding objects. A working prototype of this
system has been tested on the issue of currency bill recognition, a problem
encountered by most blind people. Seven blind persons were involved in an experiment
which demonstrated that the usability of the system was good enough for such a
device to be used daily in real-life situations. Keywords: blindness, currency reader, low vision, object recognition | |||
| Don't listen! I am dictating my password! | | BIBAK | Full-Text | 229-230 | |
| Shaojian Zhu; Yao Ma; Jinjuan Feng; Andrew Sears | |||
| Speech recognition is a promising alternative input technology for
individuals with upper-body motor impairments that hinder the use of the
standard keyboard and mouse. A recent long-term field study found that the
users employed speech techniques for a variety of tasks beyond generating text
documents [1]. One challenge with hands-free speech-based interactions is user
authentication, which requires the users to speak their user IDs and passwords
character by character. Unfortunately, speaking a password presents both
security and privacy threats as well as usability problems. To address this
challenge, we propose a new speech-based authentication model. An initial
proof-of-concept prototype has been implemented and a pilot study was
conducted. Preliminary results suggest several problems for further
examination. Keywords: authentication, physical impairment, speech technology | |||
| Dundee user centre: a space where older people and technology meet | | BIBAK | Full-Text | 231-232 | |
| Paula Forbes; Lorna Gibson; Vicki L. Hanson; Peter Gregor; Alan F. Newell | |||
| In this paper, we describe the User Centre at the University of Dundee which
provides a space for older people and technology to come together for the
benefit of new learning opportunities, social interaction and research. Keywords: computing, digital inclusion, older people, technology | |||
| End-user moderation of cognitive accessibility in online communities: case study of brain fog in the lyme community | | BIBAK | Full-Text | 233-234 | |
| Kateryna Kuksenok; Jennifer Mankoff | |||
| With the advent of Web 2.0 technologies, more and more online content is
being generated by users. Even trained web developers often fail to take
accessibility issues into consideration, so it is no surprise that users may
fail to do so as well. In this paper, we examine two self-moderating
communities of individuals with Lyme disease who are affected by "brain fog".
Through qualitative analysis of over 100 discussion threads that deal with
issues of accessibility, we explore how the individuals in these communities
fail and succeed at establishing and enforcing, through moderation, the creation of
cognitively accessible content. Keywords: cognitive accessibility, user-generated content, web 2.0 | |||
| Evaluation of software tools with deaf children | | BIBAK | Full-Text | 235-236 | |
| Ornella Mich | |||
| Evaluating software applications with deaf or hard of hearing children
requires methods and procedures tuned to them. Indeed, they are unusual users
with special communication needs. This paper proposes a list of guidelines for
organizing effective evaluations of interactive tools with deaf children. The
novelty of this work is that such guidelines are not based on theoretical
thinking. Instead, they are built on data collected through questionnaires
administered to experts working with deaf children. The questionnaire data are
reinforced by the author's experience gained during usability tests with deaf
children. In future work, the effectiveness of these guidelines will be checked
during the evaluation of an e-learning tool for Italian deaf children. Keywords: deaf children, guidelines for testing, unusual users | |||
| EYECane: navigating with camera embedded white cane for visually impaired person | | BIBAK | Full-Text | 237-238 | |
| Jin Sun Ju; Eunjeong Ko; Eun Yi Kim | |||
| We demonstrate a novel assistive device, called the "EYECane", which can help
visually impaired or blind people gain safer mobility. The EYECane is a white
cane with an embedded camera and computer. It automatically detects obstacles
and recommends avoidance paths to the user through an acoustic interface. This
is performed in three steps: first, it extracts obstacles from the image stream
using online background estimation; it then generates an occupancy grid map,
which is given to a neural network; finally, the system notifies the user of
the paths recommended by machine learning. To assess the effectiveness of the
proposed EYECane, it was tested with 5 users, and the results show that it can
support safer navigation and diminish the practice and effort needed to become
adept at using the white cane. Keywords: EYECane, navigation, vision based navigation system, visually impaired | |||
| Eye-writing communication for patients with amyotrophic lateral sclerosis | | BIBAK | Full-Text | 239-240 | |
| Jang-Zern Tsai; Tsai-Shih Chen | |||
| The eye-writing method predefines a symbol set containing symbols with
distinct writing traces. A user of this method rotates his or her eyeballs to
"write" a symbol according to its designated writing trace. Meanwhile, the eye
movement is detected using a suitable technique such as
electro-oculography, which measures voltage differences on the skin around the
user's eyes. Distinct features of the acquired eye-movement signals are
extracted in order to determine which symbol, among those in the symbol set,
the user's eyes have just written. An eye-writing system has been implemented
in this study. Tests on subjects with no known disabilities have been conducted
and the performance has been evaluated. The study found that the eye-writing
system is potentially useful for facilitating communication for severe ALS
patients who have lost most of their speaking and handwriting abilities. Keywords: ALS, communication, electro-oculography, eye, writing | |||
| Fall and emergency detection with mobile phones | | BIBAK | Full-Text | 241-242 | |
| Hamed Ketabdar; Tim Polzehl | |||
| In this demo, we present an application for mobile phones which can monitor
physical activities of users and detect unexpected emergency situations such as
a sudden fall or accident. Upon detection of such an event, the mobile phone
can inform a designated center (by automatically calling or sending message)
about the incident and its location. This can facilitate and speed up the
recovery and help process, especially if the user is alone or the accident has happened
in a deserted place. Such an application can be particularly useful for elderly
people or people with physical and movement disabilities. The application
operates based on analysis of user movements using data provided by
accelerometers integrated in mobile phones. Keywords: acceleration sensors, automatic emergency call, emergency situations, fall
and accident, mobile phones, physical shock | |||
| Johar: a framework for developing accessible applications | | BIBAK | Full-Text | 243-244 | |
| James H. Andrews; Fatima Hussain | |||
| We describe the Johar framework, which supports the development of
applications that are accessible to users with a wide range of abilities. A
user of Johar applications chooses an "interface interpreter" that best suits
them, and then uses it to interact with all Johar applications.
Interface interpreters and applications are written by developers using the Johar package. An application consists of an application engine, and an interface definition file that describes the interface to interface interpreters. In this poster, we compare our work on Johar with previous research, and briefly describe the work completed so far and the further development planned for the near future. Keywords: accessible computing, application programs | |||
| Learning how older adults undertake computer tasks | | BIBAK | Full-Text | 245-246 | |
| Nic Hollinworth; Faustina Hwang | |||
| This paper describes a study that was conducted to learn more about how
older adults use the tools in a GUI to undertake tasks in Windows applications.
The objective was to gain insight into what people did and what they found most
difficult. File and folder manipulation, and some aspects of formatting
presented difficulties, and these were thought to be related to a lack of
understanding of the task model, the correct interpretation of the visual cues
presented by the interface, and the recall and translation of the task model
into a suitable sequence of actions. Keywords: cognition, file systems, older adults, task models | |||
| Naming practice for people with aphasia as a mobile web application | | BIBAK | Full-Text | 247-248 | |
| Skye Chandler; Jesse Harris; Alex Moncrief; Clayton Lewis | |||
| Bangagears is a new version of Banga, a smart phone application that
supports word finding practice, a form of therapy for people with aphasia [1].
While Banga was implemented as a native application, a program specific to a
particular kind of phone, Bangagears uses the emerging HTML5 technology to
operate, in principle, on many different kinds of phones and other Web
platforms, and to offer simpler development and deployment. Lessons from
Bangagears will be useful to other developers of applications for people with
disabilities. Keywords: aphasia, mobile platform, therapy, web applications, word finding | |||
| Providing synthesized audio description for online videos | | BIBAK | Full-Text | 249-250 | |
| Masatomo Kobayashi; Kentarou Fukuda; Hironobu Takagi; Chieko Asakawa | |||
| We describe an initial attempt to develop a common platform for adding an
audio description (AD) to an online video so that blind and visually impaired
people can enjoy such material. A speech synthesis technology allows content
providers to offer the AD at minimal cost. We exploit external metadata so that
the AD can be independent of the video format. The external approach also
allows external supporters to add ADs to any online videos. Our technology
includes an authoring tool for writing AD scripts, a Web browser add-on for
synthesizing ADs synchronized with original videos, and a text-based format to
exchange AD scripts. Keywords: audio description, external metadata, online videos, speech synthesis,
text-to-speech (tts), web accessibility | |||
| SmartKey: a multi purpose target expansion based virtual keyboard | | BIBAK | Full-Text | 251-252 | |
| Khaldoun Al Faraj; Nadine Vigouroux; Mustapha Mojahid | |||
| SmartKey is a new virtual keyboard animation for handheld devices designed
to make key selection easier for people with situational and motor impairments.
The key nearest to the cursor expands in four directions using the available space
of nearby keys. Target expansion is accomplished with an occlusion factor of 0%
and without any sideways motion. The two studies conducted with able-bodied and
motor-impaired users showed that subjects performed better with SmartKey than
with a traditional virtual keyboard. Keywords: accessibility, handheld devices, target expansion, text input | |||
| Tactile and visual alerts for deaf people by mobile phones | | BIBAK | Full-Text | 253-254 | |
| Hamed Ketabdar; Tim Polzehl | |||
| In this demo, we present an application for mobile phones which can analyse
audio context and issue tactile or visual alerts if an audio event happens.
This application can be useful especially for deaf or hard of hearing people to
be alerted of audio events happening around them. The audio context analysis
algorithm captures data using the mobile phone's microphone and checks for
changes in audio activity around the user. If such a change happens and certain
other conditions are met, the application issues visual or vibro-tactile
alerts (by vibrating the mobile phone) proportional to the change in audio context.
This informs the user about an event. The functionality of this algorithm can
be further enhanced by analysis of user movements. Keywords: audio events, change of audio pattern, deaf or hard of hearing people,
mobile phones, vibro-tactile and visual alerts | |||
| The one octave scale interface for graphical representation for visually impaired people | | BIBAK | Full-Text | 255-256 | |
| Ikuko Eguchi Yairi; Yoshiteru Azuma; Masamitsu Takano | |||
| Protecting the lives and rights of people with impairments and promoting
their social participation is a paramount principle today. For visually
impaired people especially, mobility is an important function for promoting
social participation. To support their mobility, improvements in map usage and
route recognition are indispensable, yet visually impaired people have many
difficulties reading maps and using spatial information in the field. Maps have
been made for visually impaired people in the past, but people felt
uncomfortable using them. We have therefore been developing a new method with
which visually impaired people can intuitively recognize maps, using the audio
and touch panels now commonly found in PCs and smart-phones. The method is
universally designed so that not only visually impaired people but also
non-impaired people can enjoy using interactive digital map content together.
This paper introduces our recent progress on the method, called the One Octave
Scale Interface. The effectiveness of the interface was confirmed through
experiments on graph and map recognition and a walking experiment conducted
after presenting a route guide map. Keywords: blind, map, recognition, touchscreen, universal design, visual impairment | |||
| Towards identifying distinguishable tactons for use with mobile devices | | BIBAK | Full-Text | 257-258 | |
| Huimin Qian; Ravi Kuber; Andrew Sears | |||
| This paper describes a study designed to identify salient tactile cues which
can be integrated with a cellular telephone interface, to provide non-visual
feedback to users when accessing mobile applications. A set of tactile icons
(tactons) have been developed by manipulating the pulse duration and interval
of vibrotactile signals. Participants were presented with pairs of tactons, and
asked to differentiate between each respective pair and rank their salience.
Results suggested that the combination of two static tactons is the most
effective way to convey tactile information, when compared with dynamic or
mixed tactile cues. Further studies will be conducted to refine feedback in
order to communicate the presence of graphical objects on a mobile device
interface, or to present events and alerts more effectively. The long term goal
is to improve access to an interface by using the tactile channel, thereby
freeing the visual and auditory channels to perform other tasks. Keywords: mobile and wearable devices, non-visual interaction, tactile sense,
usability, vibration pattern | |||
| VocaliD: personalizing text-to-speech synthesis for individuals with severe speech impairment | | BIBAK | Full-Text | 259-260 | |
| Camil Jreige; Rupal Patel; H. Timothy Bunnell | |||
| Speech synthesis options on assistive communication devices are very limited
and do not reflect the user's vocal quality or personality. Previous work
suggests that speakers with severe speech impairment can control prosodic
aspects of their voice, and often retain the ability to produce sustained
vowel-like utterances. This project leverages these residual phonatory
abilities in order to build an adaptive text-to-speech synthesizer that is
intelligible, yet conveys the user's vocal identity. Our VocaliD system
combines the source characteristics of the disordered speaker with the filter
characteristics of an age-matched healthy speaker using voice transformation
techniques, in order to produce a personalized voice. Usability testing
indicated that listeners were 94% accurate in transcribing morphed samples and
79.5% accurate in matching morphed samples from the same speaker. Keywords: assistive communication, dysarthria, speech generation devices, synthesis,
text-to-speech | |||
| A respite care information system for families with developmental delay children through mobile networks | | BIBAK | Full-Text | 261-262 | |
| Jyun-Yan Yang | |||
| Combining Internet technology and the humanities, a matching and
appraisal system for respite care services for families with Children of
Developmental Delay (CDD) is designed and implemented over mobile networks.
Volunteers and families with CDD form a mobile social network to share their
experiences, know-how and expertise in childcare. Moreover, a service
management system is also included. Results of half-year field trials
demonstrate the superiority and feasibility of the implemented system in
quality improvement of the respite care services. Keywords: appraisal system, matching system, mobile networks, respite care service,
service management | |||
| Defining virtualization based system abstractions for an indoor assistive living for elderly care | | BIBAK | Full-Text | 263-264 | |
| Nova Ahmed | |||
| We consider an indoor assistive living center for elderly care, which
requires a constant monitoring facility that does not hamper the privacy of the
individuals. We use a virtualization technique to provide assistance to
individuals and to monitor them as virtual units of location. We use a
distributed server as the information provider and handheld computational
devices for local assistance to users. Keywords: RFID technology, indoor assistive living, monitoring | |||
| Designing AAC interfaces for commercial brain-computer interaction gaming hardware | | BIBAK | Full-Text | 265-266 | |
| Stephen Steward | |||
| Augmentative and Alternative Communication devices strive to provide
improved independence for people with severe speech and motor impairments.
Recent advances in neural technology have led to devices that allow electrical
signals from the brain to be used as a means of interaction with
computers. In this paper we report on the application of a low-cost
implementation of this technology to allow for greater independence when using
computers. Keywords: brain actuated | |||
| Haptic user interface design for students with visual impairments | | BIBAK | Full-Text | 267-268 | |
| Hyung Nam Kim | |||
| Many students with visual impairments (VIs) study in public schools instead
of special education programs. They should have equal opportunities for learning
in competitive educational environments with the most appropriate assistive
technology. Yet, today's assistive technology is less likely to support
students with VIs in learning visually complex science concepts (e.g., heat and
temperature). There is a need for new means to facilitate their understanding.
Haptic technology is considered a useful tool for effectively obtaining
information because it enables students with VIs to directly interact with
virtual objects and also access sensory cues to feel the objects' various
characteristics. This paper aims to explore the haptic user interfaces (UIs)
that help students easily understand the concepts of heat and temperature.
Archival research and participatory design with teachers were performed to
develop haptic UIs and educational contents. Archival research resulted in 44
misconceptions related to heat and temperature. Multimodal interaction was
recommended to contribute to users' successful interaction with haptic UIs. Keywords: accessibility, haptic, visual impairments | |||
| iSET: enabling in situ & post hoc video labeling | | BIBAK | Full-Text | 269-270 | |
| Mish Madsen; Abdelrahman N. Mahmoud; Youssef Kashef | |||
| Video annotation is an important component of many behavioral interventions
for autistic populations. This demonstration presents the interactive
Social-Emotional Toolkit (iSET), a highly portable system for in situ video
recording and labeling. This tool enables the recording of event labels in a
variety of contexts, including behavioral interventions, usability assessments,
and interaction studies. With iSET, video can be easily collected and annotated
in situ with custom labels and reviewed on-site or later, with labels added or
removed to assist video analysis. We describe the current usage as a tool
enabling a social-behavioral intervention allowing persons with Autism Spectrum
Disorders to capture and review expressions of affect during social
interactions. Keywords: autism, user interaction studies, video annotation, video review | |||
| MGuider: mobile guiding and tracking system in public transit system for individuals with cognitive impairments | | BIBAK | Full-Text | 271-272 | |
| Wei-Hsun Chen | |||
| In collaboration with NGOs dedicated to supported employment programs, we
propose the use of MGuider in public transit systems: a novel mobile guiding
and tracking service, to increase work and life independence for cognitively
impaired individuals such as people with traumatic brain injury,
cerebral palsy, mental retardation, schizophrenia, dementia, and Alzheimer's
disease. Keywords: cognitively impaired, mobile social network services, social services,
ubiquitous computing | |||
| Real-time anomaly detection for traveling individuals | | BIBAK | Full-Text | 273-274 | |
| Tian-Shya Ma | |||
| We study real-time anomaly detection in a context that takes user
trajectories as input and tries to identify anomalies for users following normal
routes, such as taking public transportation from the workplace to home or vice
versa. Trajectories are modeled as a discrete-time series of axis-parallel
constraints ("boxes") in 2D space. The incremental comparison between two
trajectories, one capturing the current movement pattern and the other a norm,
is calculated from the similarity between corresponding boxes.
The proposed system was implemented and evaluated with eight individuals with
cognitive impairments. The experimental results showed that recall was 95.0%
and precision was 90.9% on average without false alarm suppression. False
alarms and false negatives dropped when axis rotation was applied. The
precision with axis rotation was 97.6% and the recall was 98.8%. The average
time used for sending locations, running anomaly detection, and issuing
warnings was in the range of 15.1 to 22.7 seconds. Keywords: assistive technology, deviation detection, emergency notification, location
awareness, ubiquitous computing | |||
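The abstract above does not specify its box-similarity measure; a minimal sketch of one plausible choice, intersection-over-union between axis-parallel boxes, is shown below. All names and the threshold value are illustrative assumptions, not the paper's actual implementation.

```python
def box_iou(a, b):
    """Similarity between two axis-parallel boxes in [0, 1].

    Each box is a tuple (xmin, ymin, xmax, ymax); the result is the
    intersection area divided by the union area.
    """
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_anomalous(current_box, norm_box, threshold=0.3):
    """Flag the current time step if its box diverges too far from the
    norm trajectory's box at the same step (threshold is assumed)."""
    return box_iou(current_box, norm_box) < threshold
```

Comparing the current box against the norm's box at each time step yields an incremental, per-step decision, which matches the abstract's description of comparing a live trajectory against a stored normal route.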
| Sensation augmentation to relieve pressure sore formation in wheelchair users | | BIBAK | Full-Text | 275-276 | |
| Raphael P. Rush | |||
| Pressure sores are a significant cause of injury and death in patients with
spinal cord lesions above the waist. Sores can be prevented by body movement.
However, patients with spinal cord lesions lose awareness of body parts and can
forget to move body areas frequently enough to prevent sore formation. The
SoreStop system unobtrusively encourages patients to move their limbs. It
consists of a vibrating armband, sensor inputs, and an Arduino microcontroller.
The armband vibrates if the user has not moved enough during the previous 15
minutes. The microcontroller responds to force-sensitive resistors attached to
the user's buttocks or position sensors mounted on paralyzed limbs. SoreStop
will allow for the development of further devices that may significantly reduce
the formation of deadly pressure sores in paraplegics and other patients with
sensory loss. Keywords: decubiti, decubitus ulcers, pressure sores, sensory augmentation,
wheelchairs | |||
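The SoreStop decision rule described above (vibrate if no movement is detected for 15 minutes) can be sketched as platform-independent logic. The actual system runs on an Arduino; this hypothetical Python version, with an assumed pressure-change threshold standing in for real force-sensitive-resistor calibration, only illustrates the timing logic.

```python
VIBRATE_AFTER_S = 15 * 60  # the abstract's 15-minute window, in seconds
PRESSURE_DELTA = 50        # assumed FSR change counted as "movement"

class SoreStopMonitor:
    """Tracks sensor readings and decides when to trigger the armband."""

    def __init__(self, now):
        self.last_movement = now   # timestamp of the last detected movement
        self.last_reading = None   # previous raw sensor value

    def update(self, reading, now):
        """Feed one force-sensitive-resistor reading at time `now` (seconds).

        Returns True if the armband should vibrate, i.e. no sufficiently
        large pressure change has occurred in the last 15 minutes.
        """
        if (self.last_reading is not None
                and abs(reading - self.last_reading) >= PRESSURE_DELTA):
            self.last_movement = now  # large pressure change => user moved
        self.last_reading = reading
        return (now - self.last_movement) >= VIBRATE_AFTER_S
```

On the real device the same comparison would run in the Arduino loop against `millis()`, with the vibration motor driven while `update` returns true.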