| Methodological Considerations for Involving SpLD Practitioners and Specialists in Designing Interactive Learning Systems | | BIBAK | Full-Text | 1-4 | |
| Latifa Al-Abdulkarim; Areej Al-Wabil; Maha Al-Yahya; Abeer Al-Humaimeedy; et al | |||
| User involvement in designing learning environments to support individuals
with Specific Learning Difficulties (SpLDs) is essential, particularly in
inadequately examined languages such as Arabic. Three interactive systems to
support students with SpLDs, two for students with dyslexia and one for
students with dyscalculia, were developed using a design-based research approach.
In this paper, we describe a number of user involvement issues that emerged in
the context of developing interactive learning systems for children with SpLDs
in Arabic-speaking target populations. Findings indicate that language, context
and culture emerge as challenges in creative and exploratory design approaches.
Some of the ways these challenges have been approached are outlined. Keywords: SpLD; Dyslexia; Arabic Software | |||
| Involving Users in the Design of ICT Aimed to Improve Education, Work, and Leisure for Users with Intellectual Disabilities | | BIBAK | Full-Text | 5-12 | |
| Emanuela Mazzone; Emmanuelle Gutiérrez y Restrepo; Carmen Barrera; Cecile Finat; et al | |||
| In this paper we describe different research projects involving users with
intellectual disabilities. All these projects aim to enhance daily life
activities and make their social integration possible, as the use of
interactive technologies plays a significant role in supporting their
independence. The users' participation throughout the design process was
essential for achieving a usable and accessible product. We conclude by
underscoring the importance of adopting a user-centred design methodology for the
specific user group and the potential of an adaptive system to improve their
learning experience. Keywords: Accessibility; Adaptability; Usability; e-learning; independent life | |||
| PDA Software Aimed at Improving Workplace Adaptation for People with Cognitive Disabilities | | BIBAK | Full-Text | 13-20 | |
| Alberto Ferreras; Juan-Manuel Belda; Ricard Barberà; Rakel Poveda; Miguel Urra; et al | |||
| This article presents the objectives, methodology and results of a project
aimed at improving personal autonomy and adaptation to the workplace of people
with cognitive disabilities. These people may have difficulty being
independent in various areas of life, especially at the workplace. Many of
these problems are related to issues such as time control, independence in
performing tasks, work habits, interpersonal communication, etc. To promote
autonomy in these areas, the tool GTT (Time and Task Manager) has been
developed. This is a software application running on a mobile device (a mobile
phone or PDA) aiming to ease the adaptation to the workplace of people with
cognitive disabilities through tutoring tasks, major event management and
strengthening of key aspects of work. Keywords: Workplace adaptation; personal autonomy; software tool; cognitive
disabilities; Personal Digital Assistant (PDA); mobile phone | |||
| EasyICT: A Framework for Measuring ICT-Skills of People with Cognitive Disabilities | | BIBA | Full-Text | 21-24 | |
| Jan Dekelver; Tim Vannuffelen; Joan De Boeck | |||
| Over the last decade, the basic skills needed to operate a computer (ICT skills) have become an essential requirement for participating in the current digital era. People who lack these skills are likely to miss out on access to education, entertainment, business and social life. For people with cognitive limitations in particular, this is a real threat. In the EasyICT project, we aimed at developing a training and assessment framework, supported by an online tool. Using this tool, youngsters with mental disabilities can be systematically tested on their ICT skills. As a result, they receive a report and additional training material to help them improve their skills. | |||
| Towards an Interactive Screening Program for Developmental Dyslexia: Eye Movement Analysis in Reading Arabic Texts | | BIBAK | Full-Text | 25-32 | |
| Areej Al-Wabil; Maha Al-Sheaha | |||
| In this paper, we describe how eyetracking has been used in exploratory
experiments to inform the design of screening tests for dyslexic students by
examining their eye gaze while reading Arabic texts. Findings reveal
differences in the intensity of eye gaze and reading patterns between dyslexic
readers and non-dyslexic controls. Dyslexics consistently exhibited longer
fixation durations, shorter saccades, and more regressions. Moreover, results
suggest that eye movement patterns reflect the cognitive processes that occur
while reading texts in both deep and shallow Arabic orthographies. The
applicability of eye movement analysis in investigating the
nature of the reading problems and tailoring interventions to the particular
needs of individuals with dyslexia is discussed. Keywords: Dyslexia; Eye tracking; Arabic; Learning Difficulties; SpLD | |||
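The reading measures named above (fixation duration, regressions) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code; the fixation-list format is an assumption. Note that Arabic is read right-to-left, so a regression is a jump back toward earlier (rightward) text.

```python
# Illustrative sketch (not the authors' implementation): two reading measures
# from a simplified fixation list -- mean fixation duration and regression
# count. For right-to-left scripts such as Arabic, a regression is a
# rightward saccade back to earlier text.

def reading_measures(fixations, right_to_left=True):
    """fixations: list of (duration_ms, x_position) in temporal order."""
    durations = [d for d, _ in fixations]
    mean_duration = sum(durations) / len(durations)

    regressions = 0
    for (_, x_prev), (_, x_next) in zip(fixations, fixations[1:]):
        # Forward reading moves leftward in right-to-left scripts;
        # a saccade in the opposite direction counts as a regression.
        if (x_next > x_prev) if right_to_left else (x_next < x_prev):
            regressions += 1
    return mean_duration, regressions

# Longer fixations and more regressions would be expected of dyslexic readers.
measures = reading_measures([(220, 900), (250, 700), (300, 780), (240, 500)])
print(measures)  # (252.5, 1)
```

Screening would compare these statistics between a reader and age-matched norms.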
| Developing a Multimedia Environment to Aid in Vocalization for People on the Autism Spectrum: A User-Centered Design Approach | | BIBAK | Full-Text | 33-36 | |
| Areej Al-Wabil; Hadeel Al-Shabanat; Rawan Al-Sarrani; Manal Al-Khonin | |||
| This paper outlines the analysis and design stages of an interactive
multimedia environment for encouraging vocalization by people with autism. We
describe the user-centered design approach for involving users in different
roles in the design of an Arabic interactive rehabilitation tool adapted to the
needs of speech therapy programs used in the local context. Needs analysis
included exploratory surveys and interviews conducted to understand how
technology can help in encouraging the vocalization of children with autism.
The design stage involved iterative development with prototype evaluations with
specialists to refine system functionality and ensure the accessibility and
usability of the system. Insights from involving users in different roles are
described. Keywords: Autism; User-Centered Design; Vocalization; Arabic; Multimedia | |||
| The Performance of Mouse Proficiency for Adolescents with Intellectual Disabilities | | BIBAK | Full-Text | 37-44 | |
| Ting-Fang Wu; Ming-Chung Chen; Chi-Fen Wu | |||
| Information and computer technology has grown rapidly and plays an
essential role in our education, vocations, and daily life. However, for
students with intellectual disabilities, effective cursor control remains a
challenge. The purpose of this study is to investigate the mouse-control
performance of 10 adolescents with intellectual disabilities compared with
their age-matched peers. Mouse proficiency assessment software was used to
collect the data. The results indicated that the adolescents with intellectual
disabilities who had experience using a mouse did not perform as efficiently as
their peers without disabilities, although they could use the mouse with high
accuracy rates. The adolescents with intellectual disabilities had shorter
reaction times but longer total and movement times, a larger PL/TA ratio, and
more movement units when completing pointing and clicking tasks. The results
provide an essential reference for designers of computer-assisted learning software when
developing e-learning material for adolescents with intellectual disabilities. Keywords: students with intellectual disabilities; pointing and clicking tasks; cursor
control | |||
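The PL/TA ratio mentioned above (actual path length over task-axis length) can be computed directly from cursor samples. A hypothetical sketch, not the assessment software's code:

```python
import math

# Hypothetical sketch of one cursor-efficiency measure: the ratio of actual
# cursor path length (PL) to the straight-line task-axis length (TA).
# A ratio near 1.0 indicates an efficient, direct movement.

def pl_ta_ratio(points):
    """points: list of (x, y) cursor samples from start to target."""
    path_length = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    task_axis = math.dist(points[0], points[-1])
    return path_length / task_axis

# A detour inflates PL relative to TA: two 5-unit legs over a 6-unit axis.
print(round(pl_ta_ratio([(0, 0), (3, 4), (6, 0)]), 3))  # 1.667
```

Movement units (sub-movements) would similarly be counted from zero-crossings of the velocity profile.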
| When Words Fall Short: Helping People with Aphasia to Express | | BIBAK | Full-Text | 45-48 | |
| Tom Koppenol; Abdullah Al Mahmud; Jean-Bernard Martens | |||
| In this paper we present the design of an application that helps people with
expressive aphasia to express their feelings and share information through
digital photographs. Our design is based on feedback from the therapist and
persons with aphasia and their partners. The preliminary prototypes are
evaluated with persons with aphasia and their partners. The concept was well
accepted by people with aphasia and could easily be used during therapy and the
post-therapy period. Keywords: Aphasia; design; storytelling; digital photos; communication | |||
| MarkerMouse: Mouse Cursor Control Using a Head-Mounted Marker | | BIBAK | Full-Text | 49-56 | |
| Rados Jovanovic; Ian Scott MacKenzie | |||
| We propose MarkerMouse, an inexpensive method for controlling the mouse
cursor using a web cam and a marker placed on the user's forehead. Two modes of
cursor control were compared: position-control and velocity-control. In
position-control mode the cursor is positioned where the user's head is
pointing. In velocity-control mode the mouse cursor moves at a constant speed
in the direction the user's head is pointing. In an experiment designed
according to ISO 9241-9, we found a mean throughput of 1.61 bps in
position-control mode. Throughput was 34% less, or 1.07 bps, in
velocity-control mode. We explain how we compute the mouse cursor position
from the marker image and reduce noise in our computations. Keywords: User interfaces; cursor control; web cam; marker tracking; head position
tracking; head-operated mouse; mouse emulation; ISO 9241-9 | |||
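The ISO 9241-9 throughput figure quoted above comes from Fitts'-law pointing trials. A simplified sketch using the nominal index of difficulty ID = log2(D/W + 1); the standard's effective-width adjustment (We = 4.133 × SDx) is omitted here for brevity:

```python
import math

# Sketch of an ISO 9241-9 style throughput computation from pointing trials.
# Throughput (bits/s) averages ID/MT over trials, where ID = log2(D/W + 1),
# D is target distance, W target width, and MT movement time.

def throughput(trials):
    """trials: list of (distance, width, movement_time_s); returns bits/s."""
    rates = [math.log2(d / w + 1) / t for d, w, t in trials]
    return sum(rates) / len(rates)

# Hypothetical trial set yielding ~1.6 bps, the range reported for
# position-control mode.
tp = throughput([(256, 16, 2.5), (512, 32, 2.6), (128, 16, 2.0)])
print(round(tp, 2))  # 1.6
```

Comparing this statistic across the two cursor-control modes gives the 34% difference the abstract reports.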
| Using Triaxial Acceleration Sensors as Input Devices for Disabled Persons | | BIBAK | Full-Text | 57-60 | |
| Matthias Söllner; Philipp Hofmann; Josef Pösl | |||
| We propose the use of triaxial acceleration sensors as input devices for
disabled persons. The use of accelerometers for interacting with computer
systems is explained. Different possible applications are shown. Experiments
with a commercially available sensor module are presented. Problems are
discussed and future improvements are suggested. Keywords: triaxial acceleration sensor; input device; disabled persons | |||
| Canceling Interference Caused by Tremors in Joystick Controller: Study Case in a Power Wheelchair | | BIBAK | Full-Text | 61-68 | |
| Ludmila Correa de Alkmin Silva; Fernanda Cristina Corrêa; et al | |||
| People with conditions such as Parkinson's disease or Holmes tremor have
difficulty operating conventional joysticks because of hand tremor. This paper
presents a study of joystick control aimed at minimizing this tremor. For this
purpose, different types of filters and controllers were studied and
compared. These controllers were tested on a laboratory wheelchair to observe
the behavior of the wheelchair with different inputs. Simulation results are
presented to show the tremor cancellation as well as the performance of the
controller developed. These results make it possible to develop new products
for people with special needs and better controls for people with hand tremor. Keywords: wheelchair; dynamics; vehicle model; tremor; control | |||
| Developing Rehabilitation Robots for the Brain Injured | | BIBAK | Full-Text | 69-76 | |
| Paul Gnanayutham; Jennifer George | |||
| Although rehabilitation robotics has been used to help disabled persons
in various areas of disability, such as stroke rehabilitation, very little
research has been done on robotics for brain-injured persons. This paper
discusses the implementation of a simple model, which consists of a brain-body
interface, a computer, an interface program and an electronic circuit connecting
the computer to a robotic arm. This exploratory research allowed a
brain-injured person to perform simple tasks using robotic arms. The paper also
reviews rehabilitation robotics past and present, and goes on to explore new
avenues for extending this exploratory research. In this paper, we take
brain-body interface communication a step further: brain-injured persons will
not only communicate but will also be able to perform simple tasks. Keywords: Rehabilitation; Robot; Brain Injury; Cyberlink™; Brain Body
Interface; Brain Computer Interface | |||
| A Rehabilitation Method with Visual and Haptic Guidance for Children with Upper Extremity Disability | | BIBAK | Full-Text | 77-84 | |
| Kup-Sze Choi; Chum-Ming Chow; King-Hung Lo | |||
| Activities of daily living present difficulties to children with upper
extremity disabilities. Among the activities, handwriting is considered
essential to the children, and occupational therapy is important for them to
adapt to the required motor skills. In this paper, a virtual-reality based
system is developed for training fine motor skills in the context of writing
Chinese characters. A haptic device equipped with a pen-like stylus is employed
as user interface. Haptic cues in the form of feedback forces are provided
through the device to drag the child's hand towards the correct path and
direction. Real-time quantitative measurements can be recorded to allow for
data analysis and performance tracking. Keywords: hand rehabilitation; motor skills; virtual reality; haptic feedback; Chinese
handwriting | |||
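The haptic cue described above, a feedback force dragging the child's hand toward the correct path, can be sketched as a simple proportional (spring-like) pull toward the nearest sampled point of the reference stroke. This is an assumed control scheme for illustration, not the paper's algorithm:

```python
import math

# Illustrative sketch (assumed, not the paper's algorithm): a spring-like
# haptic cue pulling the stylus toward the nearest sample point on the
# reference stroke path.

def guidance_force(pen, path, stiffness=0.5):
    """Return an (fx, fy) force pulling the pen toward the closest path point."""
    target = min(path, key=lambda p: math.dist(pen, p))
    return (stiffness * (target[0] - pen[0]),
            stiffness * (target[1] - pen[1]))

# A pen drifting below a horizontal stroke is pushed back up toward it.
stroke = [(0, 0), (1, 0), (2, 0), (3, 0)]
fx, fy = guidance_force((1.2, -0.8), stroke)
print(round(fx, 3), round(fy, 3))  # -0.1 0.4
```

A real device would also cap the force magnitude and render it at the haptic update rate (typically ~1 kHz).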
| SSVEP Based Brain-Computer Interface for Robot Control | | BIBAK | Full-Text | 85-90 | |
| Rupert Ortner; Christoph Guger; Robert Prueckl; Engelbert Grünbacher; Günter Edlinger | |||
| A brain computer interface (BCI) using steady state visual evoked potentials
(SSVEP) is presented. EEG was derived from 3 subjects to test the suitability
of SSVEPs for robot control. To calculate features and classify the EEG data,
the Minimum Energy method and the Fast Fourier Transform (FFT) with linear
discriminant analysis (LDA) were used. Finally, the change rate (fluctuation of the
classification result) and the majority weight of the analysis algorithms were
calculated to increase robustness and to provide a zero-class
classification. The implementation was tested with a robot that was able to
move forward, backward, to the left and to the right, and to stop. High
accuracy was achieved for all commands. Of special interest is that the robot
stopped with high reliability when the subject did not look at the stimulation
LEDs, demonstrating that zero-class recognition was successfully implemented. Keywords: SSVEP; BCI; robot control | |||
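The frequency-domain step above can be sketched as follows: estimate EEG power at each LED's flicker frequency and pick the strongest, with a threshold producing the zero-class when the user attends to no LED. This is a simplified illustration; the paper's Minimum Energy and LDA stages are omitted, and the threshold is an assumption:

```python
import math

# Simplified sketch of SSVEP classification: power of the discrete Fourier
# component at each stimulation frequency; the strongest wins, and a
# threshold yields the zero-class (no command) when nothing stands out.

def band_power(signal, freq, fs):
    """Power of the DFT component of `signal` at `freq` Hz (sample rate fs)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return (re * re + im * im) / n

def classify_ssvep(signal, stim_freqs, fs=256, threshold=1.0):
    powers = {f: band_power(signal, f, fs) for f in stim_freqs}
    best = max(powers, key=powers.get)
    return best if powers[best] >= threshold else None  # None = zero-class

# Synthetic EEG: a 13 Hz oscillation should select the 13 Hz LED's command.
fs = 256
sig = [math.sin(2 * math.pi * 13 * i / fs) for i in range(fs)]
print(classify_ssvep(sig, [8, 13, 15], fs))  # 13
```

Smoothing the per-window decisions over time (the "change rate" and "majority weight" the abstract mentions) is what makes the stop behaviour reliable.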
| Text Composing Software for Disabled People, Using Blink and Motion Detection | | BIBAK | Full-Text | 91-97 | |
| Philipp Hofmann; Matthias Söllner; Josef Pösl | |||
| We present PC software that gives disabled persons the possibility of writing
text without a mouse or keyboard. Single letters and words are presented,
scrolling across the screen. The suggested words are chosen from a self-learning
database by frequency of use. Although different kinds of binary input
signals can be used, we focused on blink detection for choosing letters and
words. Algorithms based on Haar cascades are used for finding the eyes. We aimed
at a low-cost realisation; therefore a state-of-the-art Windows-based computer
and a commercially available webcam are used. As an alternative input device we
used an acceleration sensor for motion detection. We successfully
demonstrated quick text writing with different test persons. Keywords: text composing; blink detection; acceleration sensor; OpenCV; disabled
persons | |||
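Downstream of the Haar-cascade eye detector (OpenCV's `CascadeClassifier`, not reproduced here), the selection signal reduces to a small state machine: a blink is registered when previously visible eyes vanish for a short run of frames and then reappear. The frame thresholds below are illustrative assumptions:

```python
# Sketch of blink-detection logic fed by per-frame eye-detector results.
# A blink = eyes disappear for a short run of frames, then reappear;
# longer gaps are treated as tracking loss rather than a selection.

def count_blinks(eyes_visible, min_closed=2, max_closed=15):
    """eyes_visible: per-frame booleans from the eye detector."""
    blinks, closed_run = 0, 0
    for visible in eyes_visible:
        if not visible:
            closed_run += 1
        else:
            # Eyes reappeared: count the gap as a blink only if plausible.
            if min_closed <= closed_run <= max_closed:
                blinks += 1
            closed_run = 0
    return blinks

frames = [True, True, False, False, True,   # one blink
          False, True,                      # too short: ignored
          True, False, False, False, True]  # second blink
print(count_blinks(frames))  # 2
```

Tuning `min_closed` distinguishes deliberate selection blinks from involuntary ones.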
| Augmented and Alternative Communication System Based on Dasher Application and an Accelerometer | | BIBAK | Full-Text | 98-103 | |
| Isabel Gómez; Pablo Anaya; Rafael Cabrera; Alberto Molina; Octavio Rivera; et al | |||
| This paper describes a system composed of predictive text input software
called "Dasher" and hardware used to connect an accelerometer to the
computer. The main goal of this work is to give people with motor disabilities
a flexible and cheap way to communicate. The accelerometer can be
placed on any body part depending on user preferences. For this reason,
calibration functionality has been added to the Dasher software. The calibration
process is easy and takes only a few minutes, but it is necessary to
allow the system to be used in these different ways. Tests have been carried out by
placing the accelerometer on the head. A rate of 26 words per minute was
reached. Keywords: dasher; accelerometer; open and flexible system; text input | |||
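The calibration idea above can be sketched as follows: because the sensor may sit on any body part, the user's resting pose and extreme tilts are recorded first, and raw readings are then normalised into a [-1, 1] control signal per axis. This is a hypothetical reconstruction, not the project's code:

```python
# Hypothetical per-axis calibration: map raw accelerometer values to [-1, 1]
# around a recorded rest pose, with separate spans above and below rest so
# asymmetric mounting positions still give full-range control.

def make_axis_mapper(rest, minimum, maximum):
    """Build a mapper from raw axis values to a [-1, 1] pointer signal."""
    def mapper(raw):
        span = (maximum - rest) if raw >= rest else (rest - minimum)
        value = (raw - rest) / span if span else 0.0
        return max(-1.0, min(1.0, value))
    return mapper

# Calibrated for a head-mounted sensor: rest=512, raw range 300..800.
tilt = make_axis_mapper(rest=512, minimum=300, maximum=800)
print(tilt(512), tilt(800), tilt(406))  # 0.0 1.0 -0.5
```

Two such mappers (one per axis) would drive Dasher's 2D pointer.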
| Evaluation of WordTree System with Motor Disabled Users | | BIBAK | Full-Text | 104-111 | |
| Georges Badr; Mathieu Raynal | |||
| In this paper we propose a novel interaction technique to use with a
prediction list. Our aim is to optimize the use of this assistive aid. A
prediction list is generally used to reduce effort by presenting a list of
candidate words that may continue the prefix of the user. However, this
technique presents a disadvantage: sometimes the word that the user is aiming
to write is not presented in the list. The user has to continue his typing
manually. Our technique allows the user to click on any letter in a word of the
list so that the substring from this letter is inserted in the text. We also
present the experiments carried out to compare our system to the classical
list. Keywords: Soft keyboard; interaction; word list prediction; motor disabled people | |||
| Evaluation of SpreadKey System with Motor Impaired Users | | BIBAK | Full-Text | 112-119 | |
| Bruno Merlin; Mathieu Raynal | |||
| We recently presented a new soft keyboard: SpreadKey. The keyboard is based
on a QWERTY layout and dynamically recycles needless characters. The
needless characters are determined by a predictive system as a function of the
previously input characters. The recycling aims at offering the characters
with a high probability of being typed in several places. This article summarizes
the concept of SpreadKey and explains the experiment we conducted to evaluate it. The
experimental results show that SpreadKey is an interesting solution for assisting
motor-impaired users (with motor impairment of the upper limbs) in the text
input task. Keywords: Text entry; Fitts' law; soft keyboard; assistive technology | |||
| An Integrated Text Entry System Design for Severe Physical Disabilities | | BIBAK | Full-Text | 120-127 | |
| Yun-Lung Lin; Ming-Chung Chen; Yao-Ming Yeh; Chih-Ching Yeh | |||
| Text entry is an important prerequisite skill for people to utilize
computer information technology; however, people with severe physical
disabilities need an appropriate text entry system to interact with a computer. The
authors designed a text entry system which integrates on-screen keyboarding and
an encoding technique. Furthermore, this integrated text entry system provides a
learning strategy and on-screen keyboard layout adaptation tools. This study also
examined the effect of applying this text entry system with two
clients with severe physical disabilities. The results demonstrated that the
text entry system benefited the clients. Keywords: text entry; severe physical disabilities; on-screen keyboard; encoding
technique | |||
| Qanti: A Software Tool for Quick Ambiguous Non-standard Text Input | | BIBAK | Full-Text | 128-135 | |
| Torsten Felzer; Ian Scott MacKenzie; Philipp Beckerle; Stephan Rinderknecht | |||
| This paper introduces a single-key text entry application for users with
severe physical impairments. The tool combines the idea of a scanning ambiguous
keyboard (which promises unusually high entry rates) with intentional muscle
contractions as input signals (which require much less physical effort compared
to key presses). In addition to the program architecture, the paper presents
the results of several evaluations with participants with and without
disabilities. An entry speed of 6.59 wpm was achieved. Keywords: Human-computer interaction; scanning; ambiguous keyboards; intentional
muscle contractions | |||
| A Prototype Scanning System with an Ambiguous Keyboard and a Predictive Disambiguation Algorithm | | BIBAK | Full-Text | 136-139 | |
| Julio Miró-Borrás; Pablo Bernabeu-Soler; Raul Llinares; Jorge Igual | |||
| This paper presents a prototype single-switch scanning system with a
four-key ambiguous keyboard. The keyboard consists of only three keys for
letters and an additional key for editing and extra functions. The
letter-to-key assignment is based on the shapes of lowercase consonants, and
vowels are alphabetically distributed among the keys. An adaptation of the
"one-key with disambiguation" algorithm has been used to increase text entry
speed. The system has been implemented for the Spanish language. Keywords: Ambiguous Keyboards; Scanning Systems; Text Entry; AAC | |||
| An Ambiguous Keyboard Based on "Character Graphical Association" for the Severely Physically Handicapped | | BIBAK | Full-Text | 140-143 | |
| Julio Miró-Borrás; Pablo Bernabeu-Soler; Raul Llinares; Jorge Igual | |||
| Ambiguous keyboards can be used instead of scan matrices in Scanning Systems
in order to increase the text entry rate. We present a novel alternative to
assigning letters to keys by taking into consideration the shapes of lowercase
letters in order to create simple layouts, leading to families of keyboards
with 2, 3 and 4 keys. We have chosen the best 3-key layout using a single
switch scanning system and a word level disambiguation algorithm with a 10,911
word dictionary and we have tested it. We predict 16.6 wpm for an expert user
with a 0.5 second scan period. Keywords: Ambiguous Keyboards; Scanning Systems; Text Entry; AAC | |||
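Word-level disambiguation for an ambiguous keyboard, as used in the two entries above, can be sketched as follows. The letter-to-key assignment here is a hypothetical round-robin mapping for the demo, not the shape-based layout the papers propose:

```python
# Illustrative word-level disambiguation for a 3-key ambiguous keyboard:
# each letter maps to one key, and a key sequence is resolved against a
# frequency-ranked dictionary ("one-key with disambiguation" style).

# Hypothetical letter-to-key assignment (NOT the papers' shape-based one).
KEY_OF = {c: i % 3 + 1 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}

def encode(word):
    """Map a word to its ambiguous key sequence."""
    return tuple(KEY_OF[c] for c in word)

def disambiguate(keys, dictionary):
    """dictionary: list of words ordered by descending frequency."""
    return [w for w in dictionary if encode(w) == tuple(keys)]

lexicon = ["the", "ten", "wet", "cat"]  # most to least frequent
# "the" encodes to (2, 2, 2), a code it shares with "ten" and "wet",
# so all three are offered, most frequent first.
print(disambiguate(encode("the"), lexicon))  # ['the', 'ten', 'wet']
```

With a single-switch scanning front end, the user scans to a key, selects it, and finally scans the candidate list this function returns.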
| The Deaf and Online Comprehension Texts, How Can Technology Help? | | BIBAK | Full-Text | 144-151 | |
| Simona Ottaviano; Gianluca Merlo; Antonella Chifari; Giuseppe Chiazzese; Luciano Seta; et al | |||
| This paper focuses on metacognition and reading comprehension processes in
hearing impaired students, as deafness causes a number of linguistic
difficulties, affecting the comprehension and production of written text and
reducing interest in reading. A web tool called Gym2Learn is then described, and
the results of a single-subject design study are discussed. In conclusion, the
effects of this system on deaf students' learning performance are highlighted. Keywords: Hearing impaired; Web tool; Metacognition | |||
| Extraction of Displayed Objects Corresponding to Demonstrative Words for Use in Remote Transcription | | BIBA | Full-Text | 152-159 | |
| Yoshinori Takeuchi; Hajime Ohta; Noboru Ohnishi; Daisuke Wakatsuki; Hiroki Minagawa | |||
| A previously proposed system for extracting target objects displayed during lectures by using demonstrative words and phrases and pointing gestures has now been evaluated. The system identifies pointing gestures by analyzing the trajectory of the stick pointer and extracts the objects to which the speaker points. The extracted objects are displayed on the transcriber's monitor at a remote location, thereby helping the transcriber to translate the demonstrative word or phrase into a short description of the object. Testing using video of an actual lecture showed that the system had a recall rate of 85.7% and precision of 84.8%. Testing using two extracted scenes showed that transcribers replaced significantly more demonstrative words with short descriptions of the target objects when the extracted objects were displayed on the transcriber's screen. A transcriber using this system can thus transcribe speech more easily and produce more meaningful transcriptions for hearing-impaired listeners. | |||
| E-Scribe: Ubiquitous Real-Time Speech Transcription for the Hearing-Impaired | | BIBAK | Full-Text | 160-168 | |
| Zdenek Bumbalek; Jan Zelenka; Lukas Kencl | |||
| Availability of real-time speech transcription anywhere, anytime, represents
a potentially life-changing opportunity for the hearing-impaired to improve
their communication capability. We present e-Scribe, a prototype web-based
online centre for real-time speech transcription for the hearing-impaired,
which provides ubiquitous access to speech transcription utilizing contemporary
communication technologies. Keywords: Hearing-Impaired; Real-Time Text; Voice over IP | |||
| SportSign: A Service to Make Sports News Accessible to Deaf Persons in Sign Languages | | BIBAK | Full-Text | 169-176 | |
| Achraf Othman; Oussama El Ghoul; Mohamed Jemni | |||
| Sports are important in the lives of deaf as well as hearing persons, on
physical, social and mental levels. However, although many deaf
sports organizations exist around the world, sports results and news
are still announced in written or spoken languages. In this context, we
propose a Web-based application to edit and broadcast sports news in sign
languages. The system uses avatar technology and is based on the WebSign
kernel developed by our Research Laboratory of Technologies of Information and
Communication (UTIC). Information is broadcast using MMS messages containing sign
language video and is also published on SportSign's Web site. Keywords: Sport; Deaf; Sign Language | |||
| Synote: Accessible and Assistive Technology Enhancing Learning for All Students | | BIBA | Full-Text | 177-184 | |
| Mike Wald | |||
| Although manual transcription and captioning can increase the accessibility of multimedia for deaf students, it is rarely provided in educational contexts in the UK due to the cost and shortage of highly skilled and trained stenographers. Speech recognition has the potential to reduce the cost and increase the availability of captioning if it can satisfy accuracy and readability requirements. This paper discusses how Synote, a web application for annotating and captioning multimedia, can enhance learning for all students, and how finding ways to improve the accuracy and readability of automatic captioning can encourage its widespread adoption and so greatly benefit disabled students. | |||
| Teaching English to Deaf Adults: "SignOnOne" -- An Online Course for Beginners | | BIBAK | Full-Text | 185-192 | |
| Marlene Hilzensauer | |||
| A basic knowledge of English is indispensable nowadays. While there are many
courses for hearing people, there are few aimed at deaf people who use a
national sign language as their first or preferred language. This paper
describes "SignOnOne", a two-year Grundtvig project which is a follow-up to
"SignOn!" (Internet English for the Deaf). "SignOnOne" teaches English for
beginners with sign language as language of instruction. This allows the deaf
users to access the course directly (the written language of their country is
often a first foreign language). The target group is deaf adults, who can use
the course in class but also explore English on their own. This online course
will be accessible free-of-charge via the Internet homepage of the project
without any log-in or download. Keywords: sign language; deaf and hearing-impaired people; e-learning; accessibility;
ICT; multimedia; deaf bilingualism; EFL | |||
| A Web Based Platform for Sign Language Corpus Creation | | BIBAK | Full-Text | 193-199 | |
| Davide Barberis; Nicola Garazzino; Elio Piccolo; Paolo Prinetto; Gabriele Tiotto | |||
| This paper presents the design and implementation issues of a tool for the
annotation of sign language based on speech recognition. It is in its first
stable version and was designed within the Automatic
Translation into Sign Languages (ATLAS) project. The tool can be used to gather
data from the language for corpus creation and stores information in a
Multilanguage Multimedia Archive. The objective is to create a corpus of
Italian Sign Language for supporting an automatic translation system that
visualizes signs by means of a virtual character. The tool allows the user to
search for media content related to signs and stores metadata about the
translation and annotation processes. Keywords: Accessibility; Sign Language; Annotation; Language Formal Representation;
Annotation Editor | |||
| Context Analysis of Universal Communication through Local Sign Languages Applying Multivariate Analysis | | BIBAK | Full-Text | 200-204 | |
| Naotsune Hosono; Hiromitsu Inoue; Yuji Nagashima | |||
| This paper discusses universal communication with icons or pictograms in the
field of assistive technology (AT), applying human centred design (HCD) and context
analysis with the Persona model. Two typical personas are created: a deaf person in
an emergency and a woman travelling from Hong Kong. Diary-like scenarios are then
written, and about 40 words are selected as a minimum communication vocabulary for
the dialogue. Several local sign languages for the selected words are
consulted in order to investigate universal signs. For this purpose a sensory
evaluation method with multivariate analysis is applied. The outcome is plotted
on one plane showing the relationships between subjects and samples from several
local sign languages. Through the proposed sensory evaluation method, the relationship
between fundamental words and local sign languages is initially explained. Keywords: assistive technology (AT); human centred design (HCD); context of use;
Persona model; sign language | |||
| Toward Automatic Sign Language Recognition from Web3D Based Scenes | | BIBAK | Full-Text | 205-212 | |
| Kabil Jaballah; Mohamed Jemni | |||
| This paper describes the development of a 3D continuous sign language
recognition system. Since many systems like WebSign[1], Vsigns[2] and eSign[3]
are using Web3D standards to generate 3D signing avatars, 3D signed sentences
are becoming common. Hidden Markov Models (HMMs) are the most widely used method
for recognizing sign language in video-based scenes, but in our case, since we are
dealing with well-formatted 3D scenes based on the H-Anim and X3D standards, an
HMM is a needlessly costly doubly stochastic process. We present a novel
approach to sign language recognition using the Longest Common Subsequence method.
Our recognition experiments were based on a 500-sign lexicon and reached 99%
accuracy. Keywords: Sign Language; X3D/VRML; Gesture recognition; Web3D scenes; H-Anim; Virtual
reality | |||
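Recognition by Longest Common Subsequence, as described above, can be sketched by treating each sign as a sequence of discrete posture symbols (an assumed encoding of the X3D/H-Anim keyframes) and matching the observed sequence to the lexicon entry with the highest LCS similarity:

```python
# Sketch of sign matching by Longest Common Subsequence (LCS). Each sign is
# a string of posture symbols; the observed sequence is matched against a
# lexicon by normalized LCS length.

def lcs_len(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def recognize(observed, lexicon):
    """lexicon: {sign_name: posture sequence}; return the best-matching sign."""
    def score(item):
        _, ref = item
        return lcs_len(observed, ref) / max(len(observed), len(ref))
    return max(lexicon.items(), key=score)[0]

# Hypothetical lexicon: an observation with one spurious posture "X" still
# matches HELLO, since LCS tolerates insertions.
lexicon = {"HELLO": "ABCD", "THANKS": "AXYD", "YES": "QQR"}
print(recognize("ABXCD", lexicon))  # HELLO
```

Unlike an HMM, this needs no training pass, which suits the deterministic, well-formatted scenes the abstract describes.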
| Introducing Arabic Sign Language for Mobile Phones | | BIBAK | Full-Text | 213-220 | |
| Hend S. Al-Khalifa | |||
| With the wide spread of smart mobile phones equipped with advanced
processing power among deaf people, utilizing these devices to
serve deaf people's requirements is becoming an evolving research trend. In
this paper we present the idea of a portable Arabic sign language translator
for mobile phones that converts typed Arabic text into sign language animation using
a 3D signing avatar. This will empower people with hearing disabilities to use
their mobile phones to bridge the divide between them and hearing people. Keywords: Sign Language; Arabic Language; Mobile phones; Hearing Impaired; Usability | |||
| Sign Language Interpreter Module: Accessible Video Retrieval with Subtitles | | BIBAK | Full-Text | 221-228 | |
| Primoz Kosec; Matjaz Debevc; Andreas Holzinger | |||
| In this paper, we introduce a new approach to the integration of sign
language on the Web. Written information is presented by a Sign Language
Interpreter Module (SLI Module). The improvement in comparison to
state-of-the-art solutions on the Web is that our sign language video has a
transparent background and is shown over the existing web page. The end user
can activate the video playback by clicking on an interactive icon. The
mechanism also provides a simplified approach to enable accessibility
requirements of existing web pages. In addition, the subtitles are stored
externally in the Timed Text Authoring Format (TTAF), which is a candidate for
recommendation by the W3C community. Empirical results from our evaluation
study showed that the prototype was well accepted and was pleasant to use. Keywords: Web; Multimedia; Video; Sign Language; Subtitles; Deaf; Hard of hearing;
Human-Computer Interaction (HCI); Usability Engineering | |||
| Real-Time Walk Light Detection with a Mobile Phone | | BIBAK | Full-Text | 229-234 | |
| Volodymyr Ivanchenko; James Coughlan; Huiying Shen | |||
| Crossing an urban traffic intersection is one of the most dangerous
activities of a blind or visually impaired person's travel. Building on past
work by the authors on the issue of proper alignment with the crosswalk, this
paper addresses the complementary issue of knowing when it is time to cross. We
describe a prototype portable system that alerts the user in real time once the
Walk light is illuminated. The system runs as a software application on an
off-the-shelf Nokia N95 mobile phone, using computer vision algorithms to
analyze video acquired by the built-in camera to determine in real time if a
Walk light is currently visible. Once a Walk light is detected, an audio tone
is sounded to alert the user. Experiments with a blind volunteer subject at
urban traffic intersections demonstrate proof of concept of the system, which
successfully alerted the subject when the Walk light appeared. Keywords: blindness; visual impairment; traffic intersection; pedestrian signals | |||
| An Ultrasonic Blind Guidance System for Street Crossings | | BIBAK | Full-Text | 235-238 | |
| Satoshi Hashino; Sho Yamada | |||
| This paper addresses the technical feasibility of a guidance system based on
ultrasonic sensors to help visually impaired people cross a road easily and
safely. A computer processes ultrasonic signals emitted by a transmitter
carried by the user and provides real-time information on direction and
distance to keep the user on the correct track. Instead of time of flight,
the system estimates the user's position from the order in which the ultrasonic
signals arrive at multiple receivers. Experimental results are presented to discuss
the feasibility of this method. Keywords: guidance system; ultrasonic sensor; cross-correlation | |||
| Pedestrian Navigation System Using Tone Gradient and Robotic GIS | | BIBAK | Full-Text | 239-246 | |
| Takafumi Ienaga; Yukinobu Sugimura; Yoshihiko Kimuro; Chikamune Wada | |||
| In this study, we propose a new type of pedestrian navigation system for the
visually impaired that uses "grid maps of accumulated costs" and "tone
gradient" techniques. We developed the navigation system using a platform
called RT-middleware. The developed system was demonstrated at an
exhibition of welfare equipment and devices in Japan. We also conducted
an experiment to evaluate our navigation method. The results of the
experiment suggested that our method was more comprehensible than oral
explanations, and that pedestrians felt safer while walking with our method
than with the oral explanation method. Keywords: Tone Gradient; R-GIS; Visually Impaired; RFID; RT-middleware | |||
| MOST-NNG: An Accessible GPS Navigation Application Integrated into the MObile Slate Talker (MOST) for the Blind | | BIBA | Full-Text | 247-254 | |
| Norbert Márkus; András Arató; Zoltán Juhász; Gábor Bognár; László Késmárki | |||
| Over recent years, GPS navigation has attracted growing attention among the visually impaired. Assistive technologies can now be based on commercially available solutions, as GPS-capable hand-held devices have entered the size range of ordinary mobile phones and are available at ever more affordable prices, providing a real choice for a wider audience. For many, an accessible GPS navigator has even become an indispensable tool, an integral part of their everyday life. Since the most appropriate (or at least the most favored) device type for GPS navigation is the PDA, whose user interface is dominated by a touch screen and usually lacks any keyboard, accessibility for the blind remains an issue. This issue has been successfully tackled by the MOST-NNG project, in which the MObile Slate Talker's blind-friendly user interface has been combined with Hungary's leading iGO navigator. | |||
| Improving Computer Vision-Based Indoor Wayfinding for Blind Persons with Context Information | | BIBAK | Full-Text | 255-262 | |
| YingLi Tian; Chucai Yi; Aries Arditi | |||
| There are more than 161 million visually impaired people in the world today,
of whom 37 million are blind. Camera-based computer vision systems have the
potential to assist blind persons to independently access unfamiliar buildings.
Signs with text play a very important role in identification of bathrooms,
exits, office doors, and elevators. In this paper, we present an effective and
robust method of text extraction and recognition to improve computer
vision-based indoor wayfinding. First, we extract regions containing text
information from indoor signage with multiple colors and complex background and
then identify text characters in the extracted regions by using the features of
size, aspect ratio and nested edge boundaries. Based on the consistency of
distances between neighboring characters in a text string, the identified
text characters are normalized before being recognized by off-the-shelf
optical character recognition (OCR) software products and output
as speech for blind users. Keywords: Indoor navigation and wayfinding; indoor; computer vision; text extraction;
optical character recognition (OCR) | |||
| Computer Vision-Based Door Detection for Accessibility of Unfamiliar Environments to Blind Persons | | BIBA | Full-Text | 263-270 | |
| Yingli Tian; Xiaodong Yang; Aries Arditi | |||
| Doors are significant landmarks for indoor wayfinding and navigation to assist blind people accessing unfamiliar environments. Most camera-based door detection algorithms are limited to familiar environments where doors demonstrate known and similar appearance features. In this paper, we present a robust image-based door detection algorithm based on doors' general and stable features (edges and corners) instead of appearance features (color, texture, etc). A generic geometric door model is built to detect doors by combining edges and corners. Furthermore, additional geometric information is employed to distinguish doors from other objects with similar size and shape (e.g. bookshelf, cabinet, etc). The robustness and generalizability of the proposed detection algorithm are evaluated against a challenging database of doors collected from a variety of environments over a wide range of colors, textures, occlusions, illuminations, scale, and views. | |||
| Development and Installation of Programmable Light-Emitting Braille Blocks | | BIBAK | Full-Text | 271-274 | |
| Makoto Kobayashi; Hiroshi Katoh | |||
| A set of programmable light-emitting Braille blocks comprising LEDs was
developed. The blocks were paved along the approach to the main building on the
campus of Tsukuba University of Technology. Braille blocks are primarily
useful for blind people, but they are also helpful for people with low vision. With
light-emitting functions, Braille blocks can serve as a good walking guide for them
even at night. Each block has an ID number, and its luminance can be
controlled in eight levels. The most remarkable feature is that the program which
controls illumination timing and brightness is rewritable. This feature allows
the blocks to serve as an emergency guide system by changing the animation
patterns. Keywords: Braille blocks; LED; Low-vision; emergency guide system | |||
| FLIPPS for the Blind -- An Object Detector for Medium Distances Using Concerted Fingertip Stimulation | | BIBAK | Full-Text | 275-281 | |
| Hans-Heinrich Bothe; Sermed Al-Hamdani | |||
| The idea of FLIPPS is to design, implement, and evaluate a vibro-tactile
device that is composed of a photo diode, vibro-tactile stimulator,
microcontroller and power supply. The stimulator is attached to the fingertip
and activated when light illumination exceeds a defined threshold. The subject
(blind person) receives the reflected light from the objects and, based on
brain plasticity principles, can interactively learn to construct a mental
image of the objects and of the scenery. The FLIPPS idea is based on sensory
substitution theory; here, substituting visual inputs by haptic vibrations. Keywords: Haptic stimulation; vibro-tactile stimulation; sound source localization;
sensory substitution; scene analysis; visually impaired | |||
| Camera Based Target Acquisition Augmented with Phosphene Sensations | | BIBA | Full-Text | 282-289 | |
| Tatiana G. Evreinova; Grigori Evreinov; Roope Raisamo | |||
| This paper presents the results of evaluation of the user performance in the target acquisition task using camera-mouse real time face tracking technique augmented with phosphene-based guiding signals. The underlying assumption was that during non-visual inspection of the virtual workspace (screen area), the transcutaneous electrical stimulation of the optic nerve can be considered as alternative feedback when the visual ability is low or absent. The performance of the eight blindfolded subjects was evaluated. The experimental findings show that the camera-based target acquisition augmented with phosphene sensations is an efficient input technique when visual information is not available. | |||
| A Mobile Phone Application Enabling Visually Impaired Users to Find and Read Product Barcodes | | BIBA | Full-Text | 290-295 | |
| Ender Tekin; James M. Coughlan | |||
| While there are many barcode readers available for identifying products in a supermarket or at home on mobile phones (e.g., Red Laser iPhone app), such readers are inaccessible to blind or visually impaired persons because of their reliance on visual feedback from the user to center the barcode in the camera's field of view. We describe a mobile phone application that guides a visually impaired user to the barcode on a package in real-time using the phone's built-in video camera. Once the barcode is located by the system, the user is prompted with audio signals to bring the camera closer to the barcode until it can be resolved by the camera, which is then decoded and the corresponding product information read aloud using text-to-speech. Experiments with a blind volunteer demonstrate proof of concept of our system, which allowed the volunteer to locate barcodes which were then translated to product information that was announced to the user. We successfully tested a series of common products, as well as user-generated barcodes labeling household items that may not come with barcodes. | |||
| A Model to Develop Videogames for Orientation and Mobility | | BIBAK | Full-Text | 296-303 | |
| Jaime Sánchez; Luis Guerrero; Mauricio Sáenz; Héctor Flores | |||
| There is a real need to have systems for people with visual disabilities to
be able to improve their orientation and mobility skills, and especially for
children to be able to improve their autonomy in the future. However, these
systems must be designed according to available objectives, methodologies and
resources, as well as by taking the interests and ways of interacting of the
end users into account. This work presents a model for the development of
videogame-based applications, which includes differing levels of abstraction
and different stages in the design and development of systems that allow for
the improvement of orientation and mobility skills for people with visual
disability. The feasibility of the model was studied by modeling two videogames
for children with visual disabilities. Keywords: Software engineering model; serious videogames; orientation and mobility;
audiogames | |||
| Underestimated Cerebral Visual Impairment Requires Innovative Thinking When Using AT | | BIBAK | Full-Text | 304-307 | |
| Michael Cyrus; Frank Lunde | |||
| The impact of cerebral visual impairment (CVI) is evidently widely
underestimated, and knowledge about CVI is not widespread. This article
illustrates how dysfunction of visually guided hand movements, as
well as of simultaneous attention, demands a rethinking of hardware design, especially of
input devices. CVI dysfunctions are manifold, and special software is needed.
The challenges of some aspects of CVI may be met by everyday hardware such as modern
smartphones, which provide numerous software applications that may prove useful, even if
designed for purposes other than CVI rehabilitation. Useful
software may already exist (or may yet be created) when an open system invites
creative program development by the public. The role of user-centered design
should therefore be given more attention. Keywords: CVI (cerebral visual impairment); visually guided hand movements;
simultaneous attention; writing difficulties; user centered design; alternative
input devices; iPod | |||
| Eye Tracking for Low Vision Aids -- Toward Guiding of Gaze | | BIBAK | Full-Text | 308-315 | |
| Yasuyuki Murai; Masaji Kawahara; Hisayuki Tatsumi; Iwao Sekita; Masahiro Miyakawa | |||
| In a previous report, we introduced eye tracking techniques into the study of
visibility for people with low vision, examining how easily they find public
signs on the streets and in the interior of buildings. We concluded that they
hardly notice these signs. In this report we continue our research in this
direction. We describe details of the eye tracking technology applied to low
vision and devise a calibration method for low vision. We analyze the eye
tracking data on the basis of a simplified gaze-circle model of low-vision
sight, concluding that people with low vision can also locate regions of
interest (ROI) by applying the classical method of scanpath analysis. We also
show a preliminary result of public sign recognition in the view using a fast
pattern matching technology called "boosting," which links to a future system
for guiding the gaze of a low vision user to a missed public sign and zooming
into it. Keywords: Low vision; Eye tracking; Visual aids; Public signs; View image segmenting | |||
| Assistive Technologies as Effective Mediators in Interpersonal Social Interactions for Persons with Visual Disability | | BIBAK | Full-Text | 316-323 | |
| Sreekar Krishna; Sethuraman Panchanathan | |||
| In this paper, we discuss the use of assistive technologies for enriching
the social interactions of people who are blind and visually impaired with
their sighted counterparts. Specifically, we describe and demonstrate two
experiments with the Social Interaction Assistant for a) providing
rehabilitative feedback to reduce stereotypic body mannerisms, which are
known to impede social interactions, and b) providing an assistive technology for
accessing facial expressions of interaction partners. We highlight the
importance of these two problems in the everyday social interactions of the
visually disabled community. We propose a novel use of wearable computing
technologies (both sensing and actuating technologies) for augmenting sensory
deficiencies of the user population, while ensuring that their cognitive
faculties are not compromised in any manner. Computer vision, motion sensing
and haptic technologies are combined in the proposed platform towards enhancing
social interactions of the targeted user population. Keywords: Assistive Technology; Social Interactions; Dyadic Interpersonal Interaction;
Computer Vision; Haptic Technology; Motion Sensors | |||
| Clothes Matching for Blind and Color Blind People | | BIBAK | Full-Text | 324-331 | |
| Yingli Tian; Shuai Yuan | |||
| Matching clothes is a challenging task for blind people. In this paper, we
propose a new computer vision-based technology of clothes matching to help
blind or color blind people by using a pair of images of two different
garments captured by a camera. A mini-laptop or a PDA can be used to perform the
texture and color matching process. The proposed method can handle clothes in
uniform color without any texture, as well as clothes with multiple colors and
complex texture patterns. Furthermore, our method is robust to variations of
illumination, clothes rotation, and clothes wrinkles. The proposed method is
evaluated on a challenging database of clothes. The matching results are
displayed as audio outputs (sound or speech) to the users for "match (for both
color and texture)", "color match, texture not match", "texture match, color
not match", or "not match (for both color and texture)". Keywords: Computer Vision; Clothes Matching; Color Matching; Texture Matching; Blind;
Color Blind | |||
| A Basic Inspection of Wall-Climbing Support System for the Visually Challenged | | BIBAK | Full-Text | 332-337 | |
| Makoto Kobayashi | |||
Wall climbing has become an increasingly popular sport among visually
challenged people in recent years. They can play and enjoy it together with sighted
people without any additional rules. However, severely visually impaired
climbers still have a problem: it is difficult to know in advance where the
climbing hold after the next one is. A champion visually impaired climber pointed
out that knowing the positions of holds two or three steps ahead is very
important, and that this information is useful for forming a strategy. To solve this
problem, a basic inspection of support methods was conducted: a web camera, a pair
of ultrasonic devices, and a bone conduction headphone with Bluetooth
technology were tested. The results of these tests and comments from climbers
suggest that a bouldering support system built from general-purpose
equipment is feasible for visually challenged climbers. Keywords: wall climbing; visually challenged; bouldering | |||
| Makeup Support System for Visually Impaired Persons: Overview of System Functions | | BIBAK | Full-Text | 338-345 | |
| Akihiko Hanafusa; Shuri Terada; Yuuri Miki; Chiharu Sasagawa; Tomozumi Ikeda; et al | |||
| A questionnaire survey carried out among 25 visually impaired women
indicates that they feel uncertain about their makeup. There is a need for a
system that teaches them how to apply makeup and checks their makeup. We have
been developing a prototype of a makeup support system for visually impaired
persons. The system provides information on makeup and imparts knowledge on
how to apply it. Further, for checking the current makeup condition, an image
processing system that can recognize the face and its parts and check for
excess lipstick and the shape of the eyebrows has been developed. From a
series of input images, the best image for checking can be selected on the
basis of the sum of squared residual errors after an affine transformation.
Further, a usability assessment was carried out with eight visually impaired
women, and the system achieved a high score for the content of its
information on makeup. Keywords: Makeup support system; Visually impaired persons | |||
| WebTrax: Visualizing Non-visual Web Interactions | | BIBAK | Full-Text | 346-353 | |
| Jeffrey P. Bigham; Kyle Murray | |||
| Web accessibility and usability problems can make evaluation difficult for
non-experts who may be unfamiliar with assistive technology. Developers often
(i) lack easy access to the diversity of assistive technology employed by
users, and (ii) are unaware of the different access patterns and browsing
strategies that people familiar with a specific assistive technology tool might
use. One way to overcome this problem is to observe a person with a disability
using their tools to access content, but this can often be confusing because
developers are not familiar with assistive technology and the tools are not
built to support this use. In this paper we introduce WebTrax, a tool that we have
developed to support developers who engage blind web users as part of their
accessibility evaluation or education strategy. WebTrax helps visualize the
process that screen reader users employ to access content, helping to make
problems more obvious and understandable to developers. Keywords: web accessibility; visualization; screen reader; web trails | |||
| A Context-Based Grammar Generation in Mixed Initiative Dialogue System for Visually Impaired | | BIBAK | Full-Text | 354-360 | |
| Jaromír Plhák | |||
| In this paper, we present some principles for designing mixed-initiative
dialogue systems suited to the needs of visually impaired users, related to
dialogue-based web page generation. These principles have been implemented in
the online BrowserWebGen system, which allows users to generate their own web pages.
The primary user interface is implemented as a dialogue system with implicit
confirmation. Users are able to enter one piece of semantic information at
each step of a dialogue interaction. The second part of the application allows the
user to activate a mixed-initiative dialogue interface in each dialogue state.
This interface, called the Dialogue Prompt Window, provides additional
support to users as well as control over the application in natural
language. Keywords: web page creation; dialogue system; grammar; accessibility; visually
impaired | |||
| Design and Development of Spoken Dialog Systems Incorporating Speech Synthesis of Viennese Varieties | | BIBAK | Full-Text | 361-366 | |
| Michael Pucher; Friedrich Neubarth; Dietmar Schabus | |||
| This paper describes our work on the design and development of a spoken
dialog system that uses synthesized speech of various Viennese
varieties. In a previous study we investigated the usefulness of synthesizing
such varieties. The spoken dialog system was especially designed for the
different personas that can be realized with multiple varieties. This brings
more realistic and fun-to-use spoken dialog systems to the end user and can
serve as a speech-based user interface for blind users and users with visual
impairment. The benefits for this group of users are the increased
acceptability and comprehensibility that come about when the synthesized
speech reflects the user's linguistic and/or social identity. Keywords: Spoken dialog system; speech synthesis; dialect | |||
| Methods for the Visually Impaired to Retrieve Audio Information Efficiently | | BIBAK | Full-Text | 367-372 | |
| Atsushi Imai; Nobumasa Seiyama; Tohru Takagi; Tohru Ifukube | |||
| Visually impaired persons are able to obtain large amounts of information
through sound, especially speech, but because sound information is
time-sequential, it is difficult to gain an overall understanding of content in
a short time. This paper describes our proposed framework of fast listening
methods (e.g., three times normal speed or more) that are the equivalent of
visually "skimming" textual reading materials. We have completed a basic study,
and preliminary tests indicate positive results. Keywords: blind user; speech rate conversion; adaptive speech rate conversion; fast
listening; recording book; scanning; skimming | |||
| Binocular Vision Impairments Therapy Supported by Contactless Eye-Gaze Tracking System | | BIBAK | Full-Text | 373-376 | |
| Lukasz Kosikowski; Andrzej Czyzewski | |||
| Binocular vision impairments often result in partial or total loss of
stereoscopic vision. The lack of binocular vision is a serious vision
impairment that deserves more attention; an important consequence of binocular
vision impairments is impaired binocular depth perception. This paper describes a
concept for a measurement and therapy system for binocular vision
impairments based on an eye-gaze tracking system. Keywords: Amblyopia; lazy-eye syndrome; eye-gaze tracking system | |||
| Realization of Direct Manipulation for Visually Impaired on Touch Panel Interface | | BIBAK | Full-Text | 377-384 | |
| Tatuso Nishizawa; Daisuke Nagasaka; Hiroshi Kodama; Masami Hashimoto; Kazunori Itoh; et al | |||
| Touch panel interfaces are rapidly expanding in application; their great
flexibility and GUIs allow direct manipulation and easy
interaction for sighted people. Various approaches have been tried to make
touch panels accessible to the visually impaired through a combination of tactile
sense and audio feedback. In this paper, we report the real benefit of direct
manipulation of a touch panel for the visually impaired using only typical finger
gestures such as flicks and taps. Implementations of DAISY digital
talking book navigation by up/down and left/right finger flick operations
were compared with conventional up/down and left/right button operations. As a
result, finger flicks proved as fast as or faster than button operations. Keywords: touch panel; visually impaired; finger gesture; mobile phone; DAISY | |||
| Inspired by Sharp Vision: The Full HD Desktop Video Magnifier "Eagle" | | BIBAK | Full-Text | 385-388 | |
| Maria Schiewe; Thorsten Völkel; Dirk Kochanek | |||
| This paper describes Eagle. Eagle marks the beginning of a new video desktop
magnifier generation. Equipped with a high-definition camera, it offers a
resolution of up to 1920×1200 pixels, supporting the 16:10 aspect ratio by
default. We show how Eagle fulfils well-known requirements of desktop
magnifiers combining state-of-the-art technologies. We also emphasise in which
way users benefit from this novel development. Keywords: video magnifier; closed-circuit television; electronic vision enhancement
system; high-definition camera; low vision; visual impairment | |||
| Non-sequential Mathematical Notations in the LAMBDA System | | BIBAK | Full-Text | 389-395 | |
| Cristian Bernareggi | |||
| Blind people are used to reading mathematical notations through sequential
representations (e.g. most Braille codes represent fractions, superscripts and
underscripts in a sequential pattern). In mathematics, many notations cannot be
represented through a sequential pattern without a loss of suggestivity and
expressiveness (e.g. matrices, polynomial division, long division, automata,
etc.). This paper introduces a multimodal interaction paradigm which aims to
enable blind people to explore and edit mathematical notations which cannot be
represented through a sequential pattern. So far, this interaction paradigm has
been implemented in the LAMBDA system. Keywords: Accessibility; mathematics; blind | |||
| MathML to ASCII-Braille and Hierarchical Tree Converter | | BIBA | Full-Text | 396-402 | |
| Silvia Fajardo-Flores; Maria Andrade-Arechiga; Alfonso Flores-Barriga; et al | |||
| Mathematics is a complex subject of study for most people. People with disabilities, specifically those who are blind, face additional challenges: a lack of study material and difficulty accessing digital math notation. Significant efforts have been made to improve the quality of access to materials; however, a complete solution has not been provided. In addition, most software applications support European and North American mathematical Braille codes, but very few cater for Spanish and Latin American users with blindness. The present paper describes a prototype that converts equations in MathML into two alternative representation formats: an ASCII-Braille file ready for embossing, and a navigable tree structure conveying the hierarchy of equations that can be read with a screen reader. The prototype uses the Unified Mathematical Code. The current version of the prototype is functional and can be used for producing material for people with blindness; however, it still requires testing with end users. | |||
| Tactile Graphic Tool for Portable Digital Pad | | BIBAK | Full-Text | 403-406 | |
| Thamburaj Robinson; Atulya K. Nagar | |||
| Hand exploratory movements are an important aspect of the pedagogical
exercise of teaching and learning diagrammatic representations among vision
impaired students. The Tactile Graphic Tool is one such device, allowing hand
exploratory movement in making tactile diagrams of graphical and geometric
constructions. In this paper the same device is modified for use with a
portable digital pad, making the tactile picture drawn accessible as a digital
picture on a computer through its interface. Illustrations of tactile geometrical
constructions in tactile and digital formats are provided. Keywords: Tactile diagrams; visual impairments; mathematics; accessibility | |||
| Spoken Mathematics Using Prosody, Earcons and Spearcons | | BIBAK | Full-Text | 407-414 | |
| Enda Bates; Dónal Fitzpatrick | |||
| Printed notation provides a highly succinct and unambiguous description of
the structure of mathematical formulae in a manner which is difficult to
replicate for the visually impaired. A number of different approaches to the
verbal presentation of mathematical material have been explored, however, the
fundamental differences between the two modalities of vision and audition are
often ignored. The use of additional lexical cues, spatial audio or complex
hierarchies of non-speech sounds to represent the structure and scope of
equations may be cognitively demanding to process, and this can detract from
the perception of the mathematical content. In this paper, a new methodology is
proposed which uses the prosodic component found in spoken language, in
conjunction with a limited set of spatialized earcons and spearcons, to
disambiguate the structure of mathematical formulae. This system can
potentially represent this information in an intuitive and unambiguous manner
which takes advantage of the specific strengths and capabilities of audition. Keywords: Math; auditory interfaces; visual impairment; earcons; spearcons | |||
| On Necessity of a New Method to Read Out Math Contents Properly in DAISY | | BIBAK | Full-Text | 415-422 | |
| Katsuhito Yamaguchi; Masakazu Suzuki | |||
| The necessity of a new method for defining how to read out technical terms
and mathematical formulas properly in mathematical DAISY content is discussed.
Two concepts for enabling reading aloud, "Ruby" and "Yomi," are introduced. A
tentative way to assign pronunciation based on the "Ruby tag," which will be
adopted in the next version of DAISY, is proposed. Our new version of DAISY
software, in which the Ruby and Yomi functions are implemented, is presented.
Combining this software with our optical character recognition system for
mathematical documents allows both sighted and print-disabled people to easily
produce and author mathematical DAISY books. Furthermore, people with various
print disabilities can browse mathematical content with enhanced navigation. Keywords: DAISY; mathematics; print disability; aloud reading; inclusion | |||
| Innovative Methods for Accessing Mathematics by Visually Impaired Users | | BIBAK | Full-Text | 423-430 | |
| Giuseppe Nicotra; Giovanni Bertoni; Enrico Bortolazzi; Luciana Formenti | |||
| Blind people have always considered the study of mathematics a difficult
problem, which has hindered generations of visually impaired people from
approaching scientific studies. At present, computer use is very widespread among
blind students, who increasingly appreciate its advantages (speed, efficiency,
access to almost limitless papers); yet in the field of mathematics there are
still few benefits, due to its complex symbols and because mathematical
notation is graphical, whereas Braille notation is linear, to suit the
characteristics of the sense of touch.
The LAMBDA project team designed a system based on the functional integration of a linear mathematical code and an editor for the visualization, writing and manipulation of texts. Keywords: Braille; Mathematics; blind people | |||
| ICCHP Keynote: Designing Haptic Interaction for a Collaborative World | | BIBAK | Full-Text | 431-438 | |
| Gerhard Weber | |||
| The design of haptic interaction for blind users of a large tactile display
capable of multitouch input may face a design barrier. Visual media and their
modalities have to be mapped to tactile graphics and audio-haptic modalities due
to the low resolution. We present the work of the Hyperbraille project on
understanding the limitations of current screenreaders, implementation of a
modular Braille window system merging multiple tactile views through Add-Ins,
and report about the main findings of some of the evaluations related to
reading Braille and gestural input. Keywords: tactile display; screen reader; audio-haptic interaction; gestures | |||
| Improving the Accessibility of ASCII Graphics for the Blind Students | | BIBAK | Full-Text | 439-442 | |
| Karin Müller; Angela Constantinescu | |||
| Graphics are pervasively used in the natural sciences. Making graphics
accessible to blind people is difficult because blind students have to know
what the layout looks like, understand what it means, and have a way to create
graphics themselves. We therefore suggest using ASCII graphics. Our approach
improves the accessibility of ASCII graphics by automatically extracting and
transforming them so that they can be printed with a tactile embosser. Keywords: accessibility of graphics; ASCII graphics; blind | |||
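The extraction step described in this abstract could plausibly rely on a simple heuristic. The following is a minimal illustrative sketch, not the authors' implementation: the character set, threshold, and all function names are assumptions. It flags runs of lines dominated by typical ASCII-art characters and groups them into blocks that could then be handed to an embosser pipeline.

```python
# Hypothetical sketch of ASCII-graphics extraction (assumed heuristic,
# not the method from the paper).
ART_CHARS = set("|/\\-_+*=<>^().oO ")

def is_art_line(line, threshold=0.8):
    """Heuristic: a line counts as 'graphic' if most of its characters
    come from a set of typical ASCII-art symbols."""
    stripped = line.rstrip("\n")
    if not stripped.strip():
        return False
    hits = sum(1 for c in stripped if c in ART_CHARS)
    return hits / len(stripped) >= threshold

def extract_blocks(lines):
    """Group consecutive 'graphic' lines into candidate ASCII graphics."""
    blocks, current = [], []
    for line in lines:
        if is_art_line(line):
            current.append(line.rstrip("\n"))
        elif current:
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks

sample = [
    "The figure below shows a simple tree:\n",
    "    +---+\n",
    "    | A |\n",
    "    +---+\n",
    "which we discuss in the text.\n",
]
print(extract_blocks(sample))
```

A real system would of course need a more robust classifier and a transformation stage that adapts line spacing and symbol shapes to the resolution of the tactile embosser.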
| Development of a Musical Score System Using Body-Braille | | BIBAK | Full-Text | 443-446 | |
| Satoshi Ohtsuka; Nobuyuki Sasaki; Sadao Hasegawa; Tetsumi Harakawa | |||
| We have developed a musical score system for visually impaired and deaf-blind
people using Body-Braille. The normal musical score is very convenient for
non-disabled people, but it is difficult for disabled people to use. To resolve
this issue, nine micro vibrators are used to express the note and the duration
of each sound, a scheme we call the "vibration score system". This system is
well suited not only to studying a melody but also as a reference for disabled
people while playing. We performed several experiments with a subject and
obtained successful results in each case. Keywords: Body-Braille; score system; vibration; visually impaired people; deaf-blind
people | |||
| Analysis of Awareness while Touching the Tactile Figures Using Near-Infrared Spectroscopy | | BIBAK | Full-Text | 447-450 | |
| Takayuki Shiose; Yasuhiro Kagiyama; Kentaro Toda; Hiroshi Kawakami; Kiyohide Ito; et al | |||
| In this paper, we propose a method for measuring how blind people touch
tactile images, using Near-Infrared Spectroscopy (NIRS). Tactile images are a
powerful tool for conveying visual information to blind people. However, it is
difficult to estimate how well a blind person has understood a tactile image,
because hand trajectories alone are too vague to indicate when the person
recognizes the target image. Our experimental results indicate that the brain
is more strongly activated at the beginning of a task than during the rest of
it. NIRS is thus confirmed to be useful for analyzing the process of
understanding tactile figures by touch. Keywords: assessment; design for all; blind people; neural science | |||
| Development of Directly Manipulable Tactile Graphic System with Audio Support Function | | BIBA | Full-Text | 451-458 | |
| Shigenobu Shimada; Haruka Murase; Suguru Yamamoto; Yusuke Uchida; Makoto Shimojo; et al | |||
| A basic device combining a tactile graphic display function with a touch-position sensing function is proposed. The trial device consists of two major components: a tactile graphic display and a force sensor. The force sensor measures the center of gravity generated by a touch on the display surface, and a PC estimates the touch point from these data. The fundamental function for achieving interactive communication is detecting the location where the user is touching the tangible surface. By applying this function, click and scroll operations with a bare hand are realized. In addition, an audio-tactile graphic system, used mainly to overcome tactile cognitive limitations, is implemented as an application of the system. Moreover, the area of the tactile display is expanded to improve usability, and we propose a tactile scrollbar with which users can recognize their position in the plane. The validity of the developed tactile graphic system has been confirmed through experiments with subjects. | |||
| AutOMathic Blocks: Supporting Learning Games for Young Blind Students | | BIBAK | Full-Text | 459-465 | |
| Arthur I. Karshmer; Ken Paap | |||
| The AutOMathic Blocks system has been developed to help young blind students
acquire elementary math skills. Using Braille-labeled blocks, a plastic grid, a
touchpad device and any computer, the system is designed to aid young students
in this most important domain of education. Without these basic skills,
students have a much higher probability of being unable to enter math-related
professions. Keywords: Mathematics; Learning; Blind | |||
| Audio-Haptic Browser for a Geographical Information System | | BIBAK | Full-Text | 466-473 | |
| Limin Zeng; Gerhard Weber | |||
| People who are blind or have low vision currently lack an effective solution
for accessing map applications. Although several paper-based tactile map
projects exist, most of them require additional processing whenever a map of a
new area is produced. Moreover, because of the size limitations of paper, such
maps fail to provide detailed information. To improve the accessibility of
geographic data, we developed an audio-haptic map browser that accesses
geo-data from an off-the-shelf GIS through a large-scale Braille display. The
browser not only maintains a lively haptic sensation via raised pins, but also
speaks detailed information about each map element stored in the GIS database.
Furthermore, it can in principle render maps worldwide without additional
processing, provided the GIS database covers the area. We employ a novel
method, blinking pins, aimed at locating map elements quickly during map search
operations. Besides introducing our methodology, we evaluated the system in two
phases with four blind participants and report the results. Keywords: accessible geographic data; audio-haptic interaction; GIS; the visually
impaired | |||
| A Graphical Tactile Screen-Explorer | | BIBAK | Full-Text | 474-481 | |
| Martin Spindler; Michael Kraus; Gerhard Weber | |||
| A graphical screen explorer, as developed in the HyperBraille project, has
different demands for representing information than conventional screen
readers; new concepts for interaction, for the representation of widgets, and
for the synchronization of multimodal operations therefore become necessary. We
describe a concept for complex user interaction with tactile widgets and show
how the screen explorer can be adapted to the requirements of third-party
applications through the use of Add-Ins. Keywords: accessibility; blind; tactile graphics display; Braille display; screen
explorer; HyperBraille; interaction; extensibility; Add-In | |||
| Reading Braille and Tactile Ink-Print on a Planar Tactile Display | | BIBAK | Full-Text | 482-489 | |
| Denise Prescher; Oliver Nadig; Gerhard Weber | |||
| Reading characters with the fingers depends on the tactile features of the
medium in use; Braille displays have often been found to be slower than
Braille on paper. We study reading Braille on a novel planar tactile display by
conducting three reading tests. The first study compares reading speed on four
different devices: paper, a 40-cell Braille display, and two varied conditions
on the planar pin device. In a second study we isolate the factor of
'equidistance', which is due to the design of our planar tactile display; our
intention is to find out whether equidistant Braille can be read as fast as
standard Braille. Because of its two-dimensionality, the pin device can also
show graphics and tactile ink-print; the latter was evaluated with blind
subjects in the third study. Keywords: equidistant Braille; tactile ink-print; blind users; planar tactile display;
reading speed | |||
| Enhancing Single Touch Gesture Classifiers to Multitouch Support | | BIBAK | Full-Text | 490-497 | |
| Michael Schmidt; Gerhard Weber | |||
| Recent progress in touch-sensitive hardware has accelerated a trend of
rethinking classical user interfaces, and applications controlled by gestures
have become of special interest. System-defined gestures, however, may prove
not to be as intuitive and flexible as eager users would prefer, and the same
flexibility would be handy for offering special input methods suited to users'
abilities and skills. The work presented here shows an alternative to
system-defined gestures, even for multitouch input. Extended by a few feature
comparisons, existing single-touch classifiers are enhanced to support a much
wider range of input. The main goal is to provide a concept for multitouch
recognition that is easy to use and to implement. Along with the algorithmic
design, test results on the classifiers' performance and proposals for
applications are given. Keywords: gesture; recognition; classifier; multitouch; recognition rates; user
interfaces | |||
| User-Interface Filter for Two-Dimensional Haptic Interaction | | BIBA | Full-Text | 498-505 | |
| Wiebke Köhlmann; Francis Zinke; Maria Schiewe; Helmut Jürgensen | |||
| General principles for the design of user interfaces in systems software are discussed that take into account the constraints imposed by special user groups, such as visually impaired or blind persons, and by special displays, such as haptic displays or very small screens. | |||
| Improving Screen Magnification Using the HyperBraille Multiview Windowing Technique | | BIBAK | Full-Text | 506-512 | |
| Christiane Taras; Michael Raschke; Thomas Schlegel; Thomas Ertl; Denise Prescher; et al | |||
| Screen magnification is an important means of supporting visually impaired
people when working with computers. Many improvements have been made to the
relevant software, but unfortunately, in recent years those improvements have
mainly been in realization details; a number of problems remain that, to our
minds, need conceptual rethinking. In this paper, we present a new concept for
magnification software. It uses different views to support the user efficiently
in different situations, and it makes it possible to build upon current
magnification methods and thus retain the features of current magnification
software. The described views are derived from a concept that was originally
developed for a tactile graphics display; we found that the two topics,
rendering for a tactile graphics display and screen magnification, have very
much in common. Initial user feedback confirms the usefulness of the concept. Keywords: Screen Magnification | |||
| Three-Dimensional Tactile Models for Blind People and Recognition of 3D Objects by Touch: Introduction to the Special Thematic Session | | BIBAK | Full-Text | 513-514 | |
| Yoshinori Teshima | |||
| Blind people can recognize three-dimensional shapes through tactile
sensations; effective models are therefore useful in tactile learning. In our
special thematic session, we focus on three-dimensional tactile models
developed using the most advanced technologies. Another focus of the session is
the recognition mechanisms of 3D objects based on tactile information,
including the problem of recognizing the shapes of 3D objects from 2D tactile
plane figures. Keywords: tactile sensation; tactile learning; tactile teaching; teaching materials;
3D tactile models in science; 3D tactile models in mathematics; 3D tactile
models in art; recognition mechanisms of 3D objects; tactile plane figures | |||
| Models of Mathematically Defined Curved Surfaces for Tactile Learning | | BIBAK | Full-Text | 515-522 | |
| Yoshinori Teshima; Tohru Ogawa; Mamoru Fujiyoshi; Yuji Ikegami; Takeshi Kaneko; et al | |||
| Several accurate models of mathematically defined curved surfaces were
constructed for use in tactile learning. Exact shape data were generated on a
personal computer (PC) using mathematical or computer-aided design (CAD)
software. Then tactile models were constructed by layered manufacturing, which
is well suited for curved surfaces. This method is flexible in that the
equation parameters and model scale can be changed easily. A recognition test
performed on several models showed their potential usefulness for tactile
learning. Keywords: tactile 3D model; mathematically defined curved surface; layered
manufacturing | |||
| Enlarged Skeleton Models of Plankton for Tactile Teaching | | BIBAK | Full-Text | 523-526 | |
| Yoshinori Teshima; Atsushi Matsuoka; Mamoru Fujiyoshi; Yuji Ikegami; Takeshi Kaneko; et al | |||
| Blind people can learn about real objects through tactile sensations.
However, some objects are too small to observe by touch, and enlarged
three-dimensional (3D) models of such tiny objects are useful for tactile
teaching. This paper presents the first exact models of radiolaria and
foraminifera. Their 3D shape data were measured using micro X-ray CT, and exact
3D models were constructed using layered manufacturing. A recognition test
showed the great usefulness of these enlarged models for blind students in
learning the shapes of micro-organisms. Keywords: microorganism; plankton; radiolarian; foraminifera; enlarged model; 3D
model; X-ray CT; layered manufacturing; tactile teaching; recognition test | |||
| Reproduction of Tactile Paintings for Visual Impairments Utilized Three-Dimensional Modeling System and the Effect of Difference in the Painting Size on Tactile Perception | | BIBAK | Full-Text | 527-533 | |
| Susumu Oouchi; Kenji Yamazawa; Lorreta Secchi | |||
| It is difficult for blind persons to appreciate paintings. To facilitate
appreciation through tactile perception, the Francesco Cavazza Institute for
the Blind in Italy is developing three-dimensional tactile paintings for blind
persons. In cooperation with the Cavazza Institute, we are developing a system
that produces tactile paintings using three-dimensional model-making
technology. This method enables us to manufacture paintings of varied sizes,
and in this study we examined the possibility of down-sizing the products. We
found such materials to be a useful means for blind persons to reaffirm the
painting image. Keywords: tactile painting; three-dimensional modeling; blind student | |||
| Tactile Map Automated Creation System to Enhance the Mobility of Blind Persons -- Its Design Concept and Evaluation through Experiment | | BIBAK | Full-Text | 534-540 | |
| Kazunori Minatani; Tetsuya Watanabe; Toshimitsu Yamaguchi; Ken Watanabe; Joji Akiyama; et al | |||
| The authors have developed a tactile map creation system (TMACS) intended to
assist blind persons' independent mobility. For this purpose, the system was
designed to produce tactile maps that can be manipulated by a blind person, and
to support producing tactile maps of arbitrary locations in Japan. Through
group interviews with blind persons, the authors collected information on what
kinds of strategies are useful for independent walking and what kinds of
objects can serve as landmarks; TMACS was developed to make good use of this
information. A walking experiment confirmed some of the authors' assumptions.
On the other hand, some unexpected or contradictory results were observed
concerning the usefulness of landmarks and the causes of losing the right
route. Keywords: Blind person; Visually Impaired; Tactile Map; Independent Mobility;
Orientation and Mobility | |||
| Development of Tactile Graphics Production Software for Three-Dimensional Projections | | BIBAK | Full-Text | 541-547 | |
| Mamoru Fujiyoshi; Takeshi Kaneko; Akio Fujiyoshi; Susumu Oouchi; Kenji Yamazawa; et al | |||
| In this paper, we introduce tactile graphics production software for
three-dimensional projections. A blind person can use this software, without
assistance from sighted people, to produce tactile graphics of
three-dimensional projections. With this software, we want to study the
limitations of the tactile recognition of projections and improve the
guidelines for teaching projections. Keywords: tactile graphics; the blind; three-dimensional projections; universal design | |||
| Comprehending and Making Drawings of 3D Objects by Visually Impaired People: Research on Drawings of Geometric Shapes by Various Methods of Projection | | BIBAK | Full-Text | 548-555 | |
| Takeshi Kaneko; Mamoru Fujiyoshi; Susumu Oouchi; Yoshinori Teshima; Yuji Ikegami; et al | |||
| In this study, we investigated whether the early and late blind can
comprehend and produce tactile drawings made by oblique, axonometric or
perspective projection, which have been considered difficult for them,
especially the early blind, to understand. For this purpose, an experiment
analyzed the following issues: 1) how early- and late-blind high school
students draw four geometric 3D shapes; 2) how they rank tactile drawings of
such shapes produced by various methods (development and orthogonal projection,
in addition to the three projection methods above); 3) how they explain the way
drawings made with these methods are produced. The results demonstrate that
both groups understand drawings made with the latter two methods well, and
that, in addition to the late blind, even the early blind at least partially
understand drawings made with the former three projection methods, which should
lead to better understanding. Keywords: Visually Impaired People; Drawing of 3D Object; Tactile Recognition;
Educational Material | |||
| Human-Computer Interaction and Usability Engineering for Elderly (HCI4AGING): Introduction to the Special Thematic Session | | BIBAK | Full-Text | 556-559 | |
| Andreas Holzinger; Martina Ziefle; Carsten Röcker | |||
| In most countries, demographic developments tend towards more and more
elderly people living in single households. Improving the quality of life of
elderly people is an emerging issue within our information society, and good
user interfaces have tremendous implications for accessibility. However, user
interfaces should not only be easily accessible; they should also be useful,
usable and, most of all, enjoyable and of benefit to people. Traditionally,
Human-Computer Interaction (HCI) bridges the natural sciences (psychology) and
engineering (informatics/computer science), whilst Usability Engineering (UE)
is anchored in software technology and supports the actual implementation.
Together, HCI and UE have a powerful potential to help make technology a little
more accessible, useful, usable and enjoyable for everybody. Keywords: Human-Computer Interaction; Usability Engineering; User Interfaces; Elderly
People; Older Adults | |||
People; Older Adults | |||
| A Device to Evaluate Broadcast Background Sound Balance Using Loudness for Elderly Listeners | | BIBAK | Full-Text | 560-567 | |
| Tomoyasu Komori; Tohru Takagi; Koichi Kurozumi; Kiyotake Shoda; Kazuhiro Murakawa | |||
| Elderly people complain that they sometimes have a hard time hearing the
narration of broadcast TV programs because the background sounds (background
music, sound effects) are too loud. We conducted subjective evaluations to
determine the relationship between TV volume and the loudness of background
sounds as it affects elderly subjects' comprehension of programs. On the basis
of objectively measured loudness levels, we confirmed two conditions under
which elderly listeners perceive background sounds as too loud: when the
narration is less than 6 phon louder than the background sounds, and when the
background sounds are more than 2.5 phon louder than the program's average
narration. Based on these findings, we constructed a prototype system for
objectively evaluating loudness. The device features a meter with seven
color-coded levels that clearly shows the best sound balance for elderly
listeners. The evaluation system was tested at a broadcasting station. Keywords: Loudness level; elderly listeners; broadcast program speech; subjective
evaluation | |||
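The two loudness conditions reported in this abstract can be stated as a simple decision rule. The sketch below is a hypothetical illustration only: the function and parameter names are assumptions, and the paper's actual evaluation device measures loudness objectively rather than taking phon values as inputs.

```python
# Hypothetical encoding of the two reported conditions under which
# elderly listeners perceive background sounds as too loud.
# All inputs are loudness levels in phon (names are assumptions).
def background_too_loud(narration_phon, background_phon, avg_narration_phon):
    # Condition 1: narration exceeds the background by less than 6 phon.
    too_close = (narration_phon - background_phon) < 6
    # Condition 2: background exceeds the program's average narration
    # level by more than 2.5 phon.
    above_average = background_phon > avg_narration_phon + 2.5
    return too_close or above_average

# Narration at 70 phon, background at 66 phon: only 4 phon apart,
# so condition 1 applies.
print(background_too_loud(70, 66, 68))  # prints True
```

A meter like the one described would map the margin between the two conditions onto its seven color-coded levels rather than returning a single boolean.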
| Automatic Live Monitoring of Communication Quality for Normal-Hearing and Hearing-Impaired Listeners | | BIBA | Full-Text | 568-575 | |
| Jan Rennies; Eugen Albertin; Stefan Goetze; Jens-E. Appell | |||
| This contribution presents a system, which allows for a continuous monitoring of speech intelligibility from a single microphone signal. The system accounts for the detrimental effects of environmental noise and reverberation by estimating the two relevant parameters signal-to-noise ratio and reverberation time, and feeding them to a speech intelligibility model. Due to its real-time functionality and the fact that no reference signal is required, the system offers a wide range of opportunities to monitor communication channels and control further signal enhancement mechanisms. A priori knowledge of the individual hearing loss can be used to make the system applicable also for hearing-impaired users. | |||
| Portrait: Portraying Individuality | | BIBAK | Full-Text | 576-583 | |
| Gemma Webster; Deborah I. Fels; Gary Gowans; Norman Alm | |||
| People with dementia who live in care homes can have very little social
interaction. Care staff have limited time to spend with each person, and
communication difficulties can make it hard to get to know the person with
dementia as an individual. This paper presents Portrait, a software tool that
enables care staff to get to know a person with dementia quickly. An initial
usability study was carried out to evaluate the system with inexperienced
computer users. The study was conducted in two iterations, collecting data on
ease of use, preference of features, level of training required and how
engaging Portrait was to use. Overall, Portrait was very positively received,
with no major usability issues; all participants rated the system as engaging
or very engaging and fun to use. Keywords: Multimedia; Dementia; Personality; Person Centred; Care Aid | |||
| Mental Models of Menu Structures in Diabetes Assistants | | BIBAK | Full-Text | 584-591 | |
| André Calero Valdez; Martina Ziefle; Firat Alagöz; Andreas Holzinger | |||
| Demographic change towards an aging population with an increasing number of
diabetes patients will strain the financial sustainability of health care in
all modern societies. Electronic living assistants for diabetes patients might
help lift the burden on taxpayers, if they are usable for this heterogeneous
user group. Research has shown that correct mental models of device menu
structures may help users handle electronic devices. This exploratory study
investigates the construction and facilitation of spatial mental models of the
menu structure of a diabetes living assistant and relates them to performance
in using the device. Furthermore, the impact of age, domain knowledge and
technical expertise on the complexity and quality of the mental model is
evaluated. Results indicate that even a simplified spatial representation of
the menu structure increases navigation performance. Interestingly, it was not
the overall correctness of the model that mattered for task success, but rather
the amount of route knowledge within the model. Keywords: HCI; Mental Models; eHealth; Diabetes; User Performance; Menu Navigation;
Aging; Assisted Living | |||
Aging; Assisted Living | |||
| Touch Screen User Interfaces for Older Subjects | | BIBAK | Full-Text | 592-599 | |
| Guillaume Lepicard; Nadine Vigouroux | |||
| This study investigated the optimal numbers of blocks and tactile targets for
touch screen user interfaces intended for use by all (a reference and an older
population). Three independent variables (number of targets, number of
interaction blocks on the touch screen, and number of hands used) were studied
in our experiment, and a large amount of data was collected. In this paper, we
report statistical analyses of two variables only: time needed to complete the
test and error rate. Each variable is analyzed in two ways: for the whole
population (reference and older) and as a comparison between the two
populations. Keywords: older people; touch screen; interface design; bi-manual | |||
| Using a Wearable Insole Gait Analyzing System for Automated Mobility Assessment for Older People | | BIBAK | Full-Text | 600-603 | |
| Johannes Oberzaucher; Harald Jagos; Christian Zödl; Walter Hlauschek; Wolfgang Zagler | |||
| Falls among the older population are one of the most common causes of injury,
frailty and morbidity. Fall incidents have various causes and are often related
to decreased mobility; hence, an increasing fall risk could be detected in
time. The objective of this paper is to present results and future prospects of
the funded project "vitaliSHOE", and in particular of an automated
multi-sensor-based method for determining fall-risk indicators in older
people's gait and body movements. Keywords: wearable; fall risk assessment; gait analysis; accelerometer; ambient
assisted living | |||
assisted living | |||