| Universal Access and Inclusion in Design | | BIB | Full-Text | 1-2 | |
| Julie A. Jacko; Vicki L. Hanson | |||
| Design for older and disabled people - where do we go from here? | | BIBAK | Full-Text | 3-7 | |
| A. F. Newell; P. Gregor | |||
| The significant changes in the social, legal, demographic, and economic
landscape over the past 10-15 years present enormous opportunities for the
human-computer interface design community. These changes will have a
significant impact on the design and development of systems for older and
disabled people. This paper brings together a number of proposals to improve
both specialist and mainstream design methods in the field as a contribution to
the debate about design for older and disabled people and the concept of
universal usability. Keywords: Gerontechnology - User centered design - Universal design - Older and
disabled people - Design methodologies | |||
| A multimedia social interaction service for inclusive community living: Initial user trials | | BIBAK | Full-Text | 8-17 | |
| N. Hine; J. L. Arnott | |||
| The move from institution to community care has resulted in many people
receiving care at home. For some, disability or frailty restricts their
involvement in social activities outside the home, resulting in unacceptable
social isolation. This problem is compounded if the person has a speech or
language impairment. In this paper, we will describe a communication service
designed to provide nonspeaking people with a means to interact socially when
living independently, based on the sharing of stories using pictures and other
media. Initial exploration of the usability of the system by a pair of
representative users will be described. Keywords: Assistive communication - Social isolation - Videoconferencing - Internet | |||
| The use of cursor measures for motion-impaired computer users | | BIBAK | Full-Text | 18-29 | |
| S. Keates; F. Hwang; P. Langdon; P. J. Clarkson; P. Robinson | |||
| "Point and click" interactions remain one of the key features of graphical
user interfaces (GUIs). People with motion impairments, however, can often have
difficulty with accurate control of standard pointing devices. This paper
discusses work that aims to reveal the nature of these difficulties through
analyses that consider the cursor's path of movement. A range of cursor
measures was applied, and a number of them were found to be significant in
capturing the differences between able-bodied users and motion-impaired users,
as well as the differences between a haptic force feedback condition and a
control condition. The cursor measures found in the literature, however, do not
make up a comprehensive list, but provide a starting point for analysing cursor
movements more completely. Six new cursor characteristics for motion-impaired
users are introduced to capture aspects of cursor movement different from those
already proposed. Keywords: Cursor studies - Motion-impaired users - Force feedback | |||
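To make the notion of cursor-path measures concrete, the following is a minimal sketch that computes two simple measures of the general kind discussed in this line of work, a path-length ratio and a count of horizontal direction reversals, from a recorded cursor trajectory. The function name, sampling format, and the specific measures are illustrative assumptions, not the measures defined in the paper.

```python
# Illustrative sketch only: simple cursor-path measures of the general kind
# discussed above (not the specific measures defined in the paper).
import math

def path_measures(points, target):
    """points: list of (x, y) cursor samples; target: (x, y) goal position."""
    # Total distance actually travelled by the cursor.
    travelled = sum(math.dist(points[i], points[i + 1])
                    for i in range(len(points) - 1))
    # Straight-line (task axis) distance from start to target.
    task_axis = math.dist(points[0], target)
    # Ratio > 1 indicates extra, possibly erratic, movement.
    path_ratio = travelled / task_axis if task_axis else float("inf")
    # Count of direction reversals along the x axis (one crude smoothness cue).
    dx = [points[i + 1][0] - points[i][0] for i in range(len(points) - 1)]
    reversals = sum(1 for a, b in zip(dx, dx[1:]) if a * b < 0)
    return {"path_ratio": path_ratio, "x_reversals": reversals}

print(path_measures([(0, 0), (40, 5), (35, 8), (80, 10)], target=(80, 10)))
```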
| Speech-based cursor control: understanding the effects of target size, cursor speed, and command selection | | BIBAK | Full-Text | 30-43 | |
| A. Sears; M. Lin; A. S. Karimullah | |||
| Speech recognition can be a powerful tool when physical disabilities,
environmental factors, or the tasks in which an individual is engaged hinder
the individual's ability to use traditional input devices. While
state-of-the-art speech-recognition systems typically provide mechanisms for
both data entry and cursor control, speech-based interactions continue to be
slow when compared to similar keyboard- or mouse-based interactions. Although
numerous researchers continue to investigate methods of improving speech-based
interactions, most of these efforts focus on the underlying technologies or
dictation-oriented applications. As a result, the efficacy of speech-based
cursor control has received little attention. In this article, we describe two
experiments that provide insight into the issues involved when using
speech-based cursor control. The first compares two variations of a common
speech-based cursor-control mechanism. One employs the standard mouse cursor
while the second provides a predictive cursor designed to help users compensate
for the delays often associated with speech recognition. As expected, larger
targets and shorter distances resulted in shorter target selection times, while
larger targets also resulted in fewer errors. Interestingly, there were no
differences between the standard and predictive cursors. The second experiment
investigates the delays associated with spoken input, explains why the original
predictive-cursor implementation failed to provide the expected benefits, and
provides insight that guided the design of a new predictive cursor. Keywords: Speech recognition - Navigation - Mouse - Cursor control - Predictive cursor | |||
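As a rough illustration of what a predictive cursor might do, the sketch below extrapolates the cursor position by an assumed recognition delay, so that the displayed predictive cursor approximates where the real cursor will be once a spoken command has been processed. The function name, the constant 0.5 s delay, and the linear extrapolation are assumptions for illustration only, not the implementation evaluated in these experiments.

```python
# Hypothetical sketch of the general idea behind a "predictive" cursor:
# extrapolate where the moving cursor will be once the spoken command has
# been recognised, given an assumed recognition delay. The names and the
# constant delay are assumptions, not the paper's implementation.
def predicted_position(pos, velocity, recognition_delay_s=0.5):
    """pos, velocity: (x, y) in pixels and pixels/second."""
    return (pos[0] + velocity[0] * recognition_delay_s,
            pos[1] + velocity[1] * recognition_delay_s)

# If the cursor moves right at 100 px/s, the predictive cursor is drawn
# 50 px ahead of the standard cursor for a 0.5 s recognition delay.
print(predicted_position((200, 150), (100, 0)))
```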
| Extending keyboard adaptability: An investigation | | BIBAK | Full-Text | 44-55 | |
| S. Trewin | |||
| One common typing error is the overlap error, in which two keys are pressed
at once. The existing keyboard accessibility filters do not directly address
overlap errors. Several techniques for automatic correction of overlap errors
are compared in an offline analysis. Leveraging keystroke timing
characteristics is shown to achieve a 50% to 75% reduction in errors for study
participants with relatively high error rates. A simple heuristic for
estimating the accuracy improvement for an individual using this filter is
presented and considerations for live implementation and further work are
discussed. Keywords: Keyboard - Accessibility - Typing errors - Motor disabilities | |||
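A minimal sketch of the timing-based idea, assuming a simple event format: a key that goes down while the previous key is still held, and is released almost immediately, is treated as an accidental overlap and discarded. The 60 ms threshold and function name are illustrative, not Trewin's filter.

```python
# Minimal sketch of a timing-based overlap filter of the kind evaluated above:
# if a second key goes down before the first is released and is held only very
# briefly, treat it as an accidental overlap and discard it. The threshold and
# event format are assumptions for illustration.
def filter_overlaps(events, max_overlap_ms=60):
    """events: list of (key, down_ms, up_ms) sorted by down time."""
    accepted = []
    for key, down, up in events:
        if accepted:
            _, prev_down, prev_up = accepted[-1]
            overlapped = down < prev_up            # pressed while previous key held
            brief = (up - down) < max_overlap_ms   # and released almost immediately
            if overlapped and brief:
                continue  # likely an accidental second key: drop it
        accepted.append((key, down, up))
    return [k for k, _, _ in accepted]

# "h" overlaps "t" and is held for only 30 ms, so it is discarded.
print(filter_overlaps([("t", 0, 120), ("h", 80, 110), ("e", 200, 290)]))
```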
| Intelligent non-visual navigation of complex HTML structures | | BIBAK | Full-Text | 56-69 | |
| E. Pontelli; D. Gillan; G. Gupta; A. Karshmer; E. Saad; W. Xiong | |||
| This paper provides an overview of a project aimed at using knowledge-based
technology to improve accessibility of the Web for visually impaired users. The
focus is on the multi-dimensional components of Web pages (tables and frames);
our cognitive studies demonstrate that spatial information is essential in
comprehending tabular data, and this aspect has been largely overlooked in the
existing literature. Our approach addresses these issues by using explicit
representations of the navigational semantics of the documents and using a
domain-specific language to query the semantic representation and derive
navigation strategies. Navigational knowledge is explicitly generated and
associated with the tabular and multi-dimensional HTML structures of documents.
This semantic representation provides the blind user with an abstract
representation of the layout of the document; the user is then allowed to issue
commands from the domain-specific language to access and traverse the document
according to its abstract layout. Keywords: Non-visual Web - Universal accessibility - Domain-specific languages | |||
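The sketch below illustrates the general flavour of traversing a table through explicit commands over a semantic representation of its layout; the class and command names are hypothetical and far simpler than the domain-specific language described in the paper.

```python
# Hypothetical sketch of command-driven table navigation over a semantic
# representation; not the project's actual domain-specific language.
class TableNavigator:
    def __init__(self, headers, rows):
        self.headers, self.rows = headers, rows
        self.r, self.c = 0, 0  # current cell position

    def where(self):
        # Announce the cell together with its column header context.
        return f"row {self.r + 1}, {self.headers[self.c]}: {self.rows[self.r][self.c]}"

    def right(self):
        self.c = min(self.c + 1, len(self.headers) - 1)
        return self.where()

    def down(self):
        self.r = min(self.r + 1, len(self.rows) - 1)
        return self.where()

nav = TableNavigator(["City", "Population"], [["Oslo", "700000"], ["Bergen", "290000"]])
print(nav.where()); print(nav.right()); print(nav.down())
```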
| Distributed accessibility control points help deliver a directly accessible Web | | BIBAK | Full-Text | 70-75 | |
| Peter G. Fairweather; John T. Richards; Vicki L. Hanson | |||
| This paper describes a set of interfaces and mechanisms to enhance access to
the World Wide Web for persons with sensory, cognitive, or motor limitations.
Paradoxically, although complex Web architectures are often accused of impeding
accessibility, their layers expand the range of points where interventions can
be staged to improve it. This paper identifies some of these access control
points and evaluates the particular strengths and weaknesses of each. In
particular, it describes an approach to enhance access that is distributed
across multiple control points and implemented as an aggregation of services. Keywords: World Wide Web - Adaptive interfaces - Accessibility - Seniors -
Architecture | |||
| Foundation for improved interaction by individuals with visual impairments through multimodal feedback | | BIBAK | Full-Text | 76-87 | |
| H. S. Vitense; J. A. Jacko; V. K. Emery | |||
| Through an investigation of how the performance of people who have normal
visual capabilities is affected by unimodal, bimodal, and trimodal feedback,
this research establishes a foundation for presenting effective feedback to
enhance the performance of individuals who have visual impairments. Interfaces
that employ multiple feedback modalities, such as auditory, haptic, and visual,
can enhance user performance for individuals with barriers limiting one or more
channels of perception, such as a visual impairment. Results obtained
demonstrate the effects of different feedback combinations on mental workload,
accuracy, and performance time. Future similar studies focused on participants
with visual impairments will be grounded in this work. Keywords: Multimodal - Visual impairment - Feedback - Auditory - Visual - Haptic | |||
| Multimodality: a step towards universal access | | BIB | Full-Text | 89-90 | |
| Noelle Carbonell | |||
| Flexible and robust multimodal interfaces for universal access | | BIBAK | Full-Text | 91-95 | |
| S. Oviatt | |||
| Multimodal interfaces are inherently flexible, which is a key feature that
makes them suitable for both universal access and next-generation mobile
computing. Recent studies also have demonstrated that multimodal architectures
can improve the performance stability and overall robustness of the
recognition-based component technologies they incorporate (e.g., speech,
vision, pen input). This paper reviews data from two recent studies in which a
multimodal architecture suppressed errors and stabilized system performance for
accented speakers and during mobile use. It concludes with a discussion of key
issues in the design of future multimodal interfaces for diverse user groups. Keywords: Multimodal interfaces - Robustness - Mutual disambiguation of recognition
errors - Error suppression - Accented speakers | |||
| Universal multimedia information access | | BIBAK | Full-Text | 96-104 | |
| Mark T. Maybury | |||
| Efficient, effective and intuitive access to multimedia information is
essential for business, education, government and leisure. Unfortunately,
interface design typically does not account for users with disabilities,
estimated at 40 million in America alone. Given broad societal needs, our
community has a social responsibility to provide universal designs that ensure
efficient and effective access for all to heterogeneous and ever-growing
repositories of global information. This article describes information
access functions, discusses associated grand challenges, and outlines potential
benefits of technologies that promise to increase overall accessibility and
success of interaction with multimedia. The article concludes by projecting the
future of multimodal technology via a roadmap of multimodal resources, methods,
and systems from 2003 through 2006. Keywords: Multimodal information access - Disabilities - Multimedia retrieval -
Information extraction - Summarization | |||
| Evaluation of multimodal graphs for blind people | | BIBAK | Full-Text | 105-124 | |
| Wai Yu; Stephen Brewster | |||
| This paper introduces the development of a multimodal data visualisation
system and its evaluations. This system is designed to improve blind and
visually impaired people's access to graphs and tables. Force feedback,
synthesized speech and non-speech audio are utilised to present graphical data
to blind people. Through the combination of haptic and audio representations,
users can explore virtual graphs rendered by a computer. Various types of
graphs and tables have been implemented, and a three-stage evaluation has been
conducted. The experimental results have proven the usability of the system and
the benefits of the multimodal approach. The paper presents the details of the
development and experimental findings, as well as the changing role of
haptics in the evaluation. Keywords: Haptics - Multimodal interaction - Assistive technology - Human computer
interaction | |||
| Multimodality and interactional differences in older adults | | BIBAK | Full-Text | 125-133 | |
| Mary Zajicek; Wesley Morrissey | |||
| Many age-associated impairments such as loss of memory and vision make
computer use difficult for older adults. This paper is concerned with interface
design in a voice Web browser, which compensates for age-associated
impairments, particularly loss of memory and vision. It describes a special
Voice Help facility talking to older adults through their browser interaction,
and reports experiments to establish the mixes of output media (text and
speech) that are most effective for information transfer. In particular, the
paper demonstrates that older adults' retention of spoken output is different from
that of younger people. The paper provides information on absorption rates for
different media for older adults, which supports the design of multimodal
systems suited to older adults. This is important for the development of
systems that enable older adults to absorb information easily. Keywords: Older adults - Web access - Interface design - Output mode - Information
absorption | |||
| Designing for pen and speech input in an object-action framework: the case of email | | BIBAK | Full-Text | 134-142 | |
| David V. Keyson; Marc de Hoogh; Jans Aasman | |||
| This study presents a user interface that was intentionally designed to
support multimodal interaction by compensating for the weaknesses of speech
compared with pen input and vice versa. The test application was email using a
web pad with pen and speech input. In the case of pen input, information was
represented as visual objects, which were easily accessible. Graphical
metaphors were used to enable faster and easier manipulation of data. Speech
input was facilitated by displaying the system speech vocabulary to the user.
All commands and accessible fields with text labels could be spoken in by name.
Commands and objects that the user could access via speech input were shown on
a dynamic basis in a window. Multimodal interaction was further enhanced by
creating a flexible object-action order such that the user could utter or
select a command with a pen, followed by the object to be acted upon, or the
other way round (e.g., New Message or Message New). The flexible
action-object interaction design combined with voice and pen input led to eight
possible action-object-modality combinations. The complexity of the multimodal
interface was further reduced by making generic commands such as New applicable
across corresponding objects. Use of generic commands led to a simplification
of menu structures by reducing the number of instances in which actions
appeared. In this manner, more content information could be made visible and
consistently accessible via pen and speech input. Results of a controlled
experiment indicated that the shortest task completion times for the eight
possible input conditions were when speech-only was used to refer to an object
followed by the action to be performed. Speech-only input with action-object
order was also relatively fast. In the case of pen input-only, the shortest
task completion times were found when an object was selected first followed by
the action to be performed. In multimodal trials in which both pen and speech
were used, no significant effect was found for object-action order, suggesting
benefits of providing users with a flexible action-object interaction style in
multimodal or speech-only systems. Keywords: Multimodal interaction - generic actions - objects - email - pen input -
speech input - navigation - Webpad - user interface design | |||
| Towards the design of usable multimodal interaction languages | | BIBAK | Full-Text | 143-159 | |
| Noelle Carbonell | |||
| This paper presents novel recommendations for the design of usable
multimodal command or query languages. These recommendations have been inferred
from the results of three empirical studies focused on the use of spontaneous
speech (first study) and the synergic use of spontaneous versus controlled
speech and gestures for interacting with current application software (second
and third studies). In particular, we propose a method for designing multimodal
languages that can be considered as an appropriate substitute for direct
manipulation in all contexts precluding the use of mouse and keyboard, and for
all standard categories of users, especially the general public. Keywords: Speech user interfaces - Gesture human-computer interaction - Multimodal
human-computer interaction - Usability studies - Ergonomic design
recommendations | |||
| How do colors influence the haptic perception of textured surfaces? | | BIBAK | Full-Text | 160-172 | |
| Zhaowu Luo; Atsumi Imamiya | |||
| Multimodality is considered a promising approach for universal access, and
haptic interaction has the potential to constitute an added dimension to
multimodal interfaces. This paper describes the influence of colors on the
haptic perception of textured surfaces, based on 8 experiments. Our results
show that (1) colors do have an influence on haptic perception, but they do not
make the perception error rate higher than when no color is used; (2) up to 6
different types of colors can be used in haptic interfaces without worsening
the haptic perception; (3) yellow has an error rate that is statistically
significantly lower than that of 3 other color conditions, and can be used
without worsening the haptic perception; (4) our finding of two special orders
for haptic perception demonstrates that human haptic perception is very
sensitive to continuously increasing or decreasing changes of roughness, but
has difficulty discerning randomly changed roughness. Keywords: Haptics - Texture perception - Color perception - Multimodal interfaces -
Human computer interaction | |||
| Loosely-coupled approach towards multi-modal browsing | | BIBAK | Full-Text | 173-188 | |
| Jan Kleindienst; Ladislav Seredi; Pekka Kapanen; Janne Bergman | |||
Within the concept of universal access, multi-modal browsing is one of the
emerging killer technologies that promise broader and more flexible access to
information, faster task completion, and an enhanced user experience.
Inheriting the best from GUI and speech, based on the circumstances, hardware
capabilities, and environment, multi-modality's great advantage is to provide
application developers with a scalable blend of input and output channels that
may accommodate any user, device, and platform. This article describes a
flexible multi-modal browser architecture, named Ferda the Ant, which reuses
uni-modal browser technologies available for VoiceXML, WML, and HTML browsing.
A central component, the Virtual Proxy, acts as a synchronization coordinator.
This browser architecture can be implemented in either a single client
configuration, or by distributing the browser components across the network. We
have defined and implemented a synchronization protocol to communicate the
changes occurring in the context of a component browser to the other browsers
participating in the multi-modal browser framework. Browser wrappers implement
the required synchronization protocol functionality at each of the component
browsers. The component browsers comply with existing content authoring
standards, and we have designed a set of markup-level authoring conventions
that facilitate maintaining the browser synchronization. Keywords: Multi-modal - Browser - VoiceXML - HTML - WML - MM, multi-modal - DOM,
Document Object Model - VP, Virtual Proxy - GUI, Graphical User Interface -
NLU, Natural Language Understanding - WML, Wireless Markup Language - HTML,
HyperText Markup Language - WWW, World-Wide Web - WAP, Wireless Application
Protocol - W3C, World-Wide Web Consortium - VoiceXML, Voice eXtensible Markup
Language - COM, Component Object Model - HTTP, HyperText Transfer Protocol -
API, Application Programming Interface - UI, User Interface - FIA, Form
Interpretation Algorithm | |||
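As a loose illustration of the synchronisation idea, the sketch below builds the kind of event a coordinating proxy might relay between component browsers when one of them changes a field; the message fields are assumptions for illustration, not the protocol defined for Ferda the Ant.

```python
# Illustrative sketch only: a synchronisation event a coordinating proxy
# might relay between co-operating uni-modal browsers (e.g., a field filled
# by voice being reflected in the GUI view). Fields are assumptions.
import json

def make_sync_event(source_browser, field, value):
    return json.dumps({
        "type": "field-update",    # what changed in the source browser
        "source": source_browser,  # e.g. "voicexml" or "html"
        "field": field,
        "value": value,
    })

# The proxy would forward this to every other registered component browser.
print(make_sync_event("voicexml", "destination_city", "Prague"))
```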
| Accessing information through multimodal 3D environments: towards universal access | | BIBAK | Full-Text | 189-204 | |
| Fabio Pittarello | |||
| 3D environments represent a great opportunity for universal access to
information, as they offer an intuitive interaction paradigm, similar to what
is experienced by humans in their everyday lives. In spite of that, several 3D
interfaces are characterized by poor structures and are hard to navigate. This
paper presents the multimodal concept of the Interaction Locus (IL) as a means
to give structure to 3D scenes, helping the user to interact with and access
information inside them. The concept was initially developed with particular
reference to desktop virtual reality (2.5D virtual reality), but it is general
enough to be extended to other contexts, such as real 3D scenes. The final part
of this work shows how the IL concept addresses the need for a unified
authoring methodology, capable of allowing access to different target user
groups from a variety of different devices. Keywords: 3D environments - Earcons - Interaction locus - Multimodality - Universal
access | |||
| Guest editorial | | BIB | Full-Text | 205-206 | |
| Simeon Keates; P. John Clarkson | |||
| A commercial perspective on universal access and assistive technology: towards implementation | | BIBAK | Full-Text | 207-214 | |
| J. Coy | |||
| This paper presents one company's perspective on the implementation and
provision of universal access (UA) and assistive technology in an industrial
setting. The paper addresses the need to provide accessible work-places and
also accessible customer services, from legal, commercial and ethical
standpoints. The company in question, Royal Mail, is one of the UK's largest
employers and service providers and so has been able to gather employee and
customer data often unavailable to smaller organisations. Keywords: Inclusive design - Assistive technology - Industrial perspective | |||
| Countering design exclusion: bridging the gap between usability and accessibility | | BIBAK | Full-Text | 215-225 | |
| S. Keates; P. J. Clarkson | |||
| It is known that many people are being excluded unnecessarily from using
products, services and environments that are essential for supporting
independence and quality of life. Such exclusion often arises from designers
taking inadequate account of the end users' functional capabilities when making
design decisions. This paper addresses how traditional usability techniques can
be extended to include accessibility issues by considering the spread of user
functional capabilities across the population. A series of measures for
evaluating the level of design exclusion based on those capabilities is also
presented. Keywords: Inclusive design cube - Inclusive merit - Inclusive design knowledge loop | |||
| Issues surrounding the user-centred development of a new interactive memory aid | | BIBAK | Full-Text | 226-234 | |
| E. A. Inglis; A. Szymkowiak; P. Gregor; A. F. Newell; N. Hine; P. Shah; B. A. Wilson; J. Evans | |||
| Memory problems are often associated with the ageing process and are one of
the commonest effects of brain injury. Electronic memory aids have been
successfully used as a compensatory approach to provide reminders to
individuals with prospective memory problems. This paper describes the
usability issues surrounding the development of a new memory aid rendered on a
personal digital assistant (PDA); in addition, it discusses the importance of a
user-centred design process for the development of the memory aid and
preliminary qualitative findings from interviews and focus groups of disabled
or elderly users. Keywords: Elderly users - Brain-injury - Usability - Personal digital assistant -
User-centred design | |||
| Designing assistive technologies for medication regimes in care settings | | BIBAK | Full-Text | 235-242 | |
| K. Cheverst; K. Clarke; G. Dewsbury; T. Hemmings; S. Kember; T. Rodden; M. Rouncefield | |||
| This paper presents some early design work of the Care in the Digital
Community research project begun under the EPSRC IRC Network project Equator.
Gaining a comprehensive understanding of user requirements in care settings
poses interesting methodological challenges. This paper details some
methodological options for working in the domestic domain and documents the
translation of research into design recommendations. We report on the
importance of medication issues in a hostel for former psychiatric patients and
present an early prototype of a medication manager designed to be sensitive to
the particular requirements of the setting. Keywords: Assistive technology - Universal design - Ubiquitous computing - Ethnography | |||
| Bridging the educational divide | | BIBAK | Full-Text | 243-254 | |
| M. Pieper; H. Morasch; G. Piela | |||
| The sharpest visible divide in Internet utilisation, which has deepened in
recent years, is an educational one. Especially with regard to the learning
disabled, the educational digital divide requires the improvement of inclusive
didactical measures to promote media competence. A major prerequisite, which
determines systems design as a basic architectural principle, is support for
evolutionary learning through tutorial learning systems designed as guidance
systems that accord closely with the individual pupil's evolutionary
process. Keywords: Digital divide - Learning disability - Assistive technology - Universal
access - Tutorial systems for the learning disabled - Evolutionary learning | |||
| Design issues encountered in the development of a mobile multimedia augmentative communication service | | BIBAK | Full-Text | 255-264 | |
| N. Hine; J. L. Arnott; D. Smith | |||
| Augmentative and alternative communication (AAC) systems can be mounted on a
range of different hardware platforms, from custom-designed units to desktop or
laptop personal computers and hand-held and palmtop systems. Palmtop devices
such as personal data assistants (PDAs) offer great advantages of portability.
The small display size and limited storage and processing capacity of a PDA
compared to larger systems are likely to impose some limitations on the range
of AAC applications which can be supported, however, particularly when
multimedia-based applications are considered. This paper addresses issues
involved in migrating a multimedia AAC application onto a palm-top PDA and
discusses the user involvement in the re-engineering of the system for that
environment. Outcomes from an initial practical trial with a person who uses
AAC are reported. Keywords: Augmentative and alternative communication - Conversation - Communication
impairment - Story-telling - Mobile interaction | |||
| Online help for the general public: specific design issues and recommendations | | BIBA | Full-Text | 265-279 | |
| A. Capobianco; N. Carbonell | |||
This paper addresses the issue of how to design online help that will really prove effective, accessible, and usable for all categories of users in the coming Information Society and, most of all, that will actually be used by novice users. The paper demonstrates the intrinsic necessity of online help and the actual failure of approaches claiming that transparent user interfaces eliminate the need for online support chiefly on the grounds that they encourage exploration. Empirical results in the literature or stemming from analyses of data we collected are put forward in the discussion. Based on a brief survey of the relevant literature, the major specific design issues that designers of online help systems are confronted with are presented, existing design approaches that might contribute to solving these issues are discussed, and a realistic short-term approach for improving the accessibility, effectiveness, and usability of online help systems is recommended. Our recommendation is mainly based on the results of a recently performed experimental study. These results led us to advise, at least for the near future, the design of noncontextual help systems for improving the accessibility, effectiveness, and usability of online help, rather than the implementation of dynamic adaptation to the current user's cognitive profile or the development of contextual help systems that generate the information content of help messages dynamically according to the user's current intention and goal. We assume that it is possible, within the framework of universal design principles, to significantly enhance the effectiveness and usability of standard noncontextual help systems, mainly by making the most of the recent advances in research on multimodal interaction, especially on the integration of speech into input modalities. | |||
| Why are eye mice unpopular? A detailed comparison of head and eye controlled assistive technology pointing devices | | BIBAK | Full-Text | 280-290 | |
| R. Bates; H. O. Istance | |||
| This paper examines and compares the usability problems associated with
eye-based and head-based assistive technology pointing devices when used for
direct manipulation on a standard graphical user interface. It discusses and
examines the pros and cons of eye-based pointing in comparison to the
established assistive technology technique of head-based pointing and
illustrates the usability factors responsible for the apparent low usage or
unpopularity of eye-based pointing. It shows that user experience and target
size on the interface are the predominant factors affecting eye-based pointing
and suggests that these could be overcome to enable eye-based pointing to be a
viable and available direct manipulation interaction technique for the
motor-disabled community. Keywords: Eye tracking - Eye mouse - Head mouse - Assistive technology - Computer
input devices | |||
| Intelligent agents for the management of complexity in multimodal biometrics | | BIBAK | Full-Text | 293-304 | |
| F. Deravi; M. C. Fairhurst; R. M. Guest; N. J. Mavity; A. M. D. Canuto | |||
| Current approaches to personal identity authentication using a single
biometric technology are limited, principally because no single biometric is
generally considered both sufficiently accurate and user-acceptable for
universal application. Multimodal biometrics can provide a more adaptable
solution to the security and convenience requirements of many applications.
However, such an approach can also lead to additional complexity in the design
and management of authentication systems. Additionally, complex hierarchies of
security levels and interacting user/provider requirements demand that
authentication systems are adaptive and flexible in configuration.
In this paper we consider the integration of multimodal biometrics using intelligent agents to address issues of complexity management. The work reported here is part of a major project designated IAMBIC (Intelligent Agents for Multimodal Biometric Identification and Control), aimed at exploring the application of the intelligent agent metaphor to the field of biometric authentication. The paper provides an introduction to a first-level architecture for such a system, and demonstrates how this architecture can provide a framework for the effective control and management of access to data and systems where issues of privacy, confidentiality and trust are of primary concern. Novel approaches to software agent design and agent implementation strategies required for this architecture are also highlighted. The paper further shows how such a structure can define a fundamental paradigm to support the realisation of universal access in situations where data integrity and confidentiality must be robustly and reliably protected. Keywords: Multimodal biometrics - Intelligent software agents - Universal access | |||
| Elderly Japanese computer users: assessing changes in usage, attitude, and skill transfer over a one-year period | | BIBAK | Full-Text | 305-314 | |
| Hiroyuki Umemuro; Yoshiko Shirokane | |||
| Changes and interrelations among computer usage, computer attitude, and
skill transfer of elderly Japanese computer users were investigated over a
one-year period. Each participant, aged 60 to 76 years, was provided with one
touchscreen-based computer specialized for e-mail handling for 12 months.
Participants' usage of the computer, mouse and/or keyboard, and computer
attitudes were investigated. The results showed that the Liking factor of the
computer attitude scale was a possible predictor of computer usage. The results
suggested the existence of four different types of users' adaptation to
computers, according to a combination of the Liking and Confidence dimensions
of computer attitude. Keywords: Elderly - Computer attitude - Computer anxiety - Self efficacy - Touchscreen | |||
| Social capital and access | | BIBAK | Full-Text | 315-330 | |
| Mark Warschauer | |||
| Physical access to computers does not guarantee access to the information
society. To help ensure that the first type of access translates into the
second, it is necessary to pay attention to how computer and Internet use can
enhance social capital. Drawing on examples from technology projects in India
and other countries, this paper examines the concept of social capital and its
relationship to information and communication technology, focusing on the role
of both micro-level and macro-level social capital. Keywords: Access - Social capital - Community informatics - Community development -
Social development | |||
| Web accessibility in the Mid-Atlantic United States: a study of 50 homepages | | BIBAK | Full-Text | 331-341 | |
| Jonathan Lazar; Patricia Beere; Kisha-Dawn Greenidge; Yogesh Nagappa | |||
| This paper reports on a study of 50 homepages in the Mid-Atlantic United
States to determine what accessibility problems exist. The 50 homepages were
evaluated using both the U.S. government's Section 508 guidelines and the
Web Accessibility Initiative's (WAI) Priority Level 1 of the Web Content
Accessibility Guidelines (WCAG). According to both sets of guidelines, 49 out
of 50 sites were found to have accessibility problems, although some of the
accessibility problems were minor and easy to fix. There are two troubling
findings from this study. The Web sites that had the most accessibility
problems were organizations in the Web development and information technology
field, which ideally should be the leaders in making the Web more accessible.
The Web accessibility software testing tools, which are available to assist
people in making their Web sites more accessible, are flawed and inconsistent
and require large numbers of manual checks, which many developers may not be
able to do. More people need to become aware of the topic of Web accessibility,
and the testing tools need to be improved so that once people are aware, it is
easier for them to move their sites toward full accessibility. Keywords: Web accessibility - Section 508 - Accessible sites - Usability - Automated
tools | |||
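One of the checks such testing tools can automate is detecting images without text alternatives. The sketch below shows that single check using Python's standard HTML parser; it is an illustration of the category of problem, not one of the tools evaluated in the study.

```python
# A minimal sketch of one automatable WCAG/Section 508 check discussed above:
# flagging <img> elements without an alt attribute. Real testing tools perform
# many more checks, most of which still require manual judgement.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing += 1  # image with no text alternative

checker = AltTextChecker()
checker.feed('<p><img src="logo.png"><img src="map.png" alt="Campus map"></p>')
print(f"{checker.missing} image(s) missing alt text")
```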
| Using consensus methods to construct adaptive interfaces in multimodal web-based systems | | BIBAK | Full-Text | 342-358 | |
| Ngoc Thanh Nguyen; Janusz Sobecki | |||
| This paper presents a concept of adaptive development of user interfaces in
multimodal web-based systems. Today, it is crucial for general access web-based
systems that the user interface is properly designed and adjusted to user needs
and capabilities. It is believed that adaptive interfaces could offer a
possible solution to this problem. Here, we introduce the notion of the user
profile for classification, the interface profile for describing the system
interface, and the compound usability measure for evaluation of the interface.
Consensus-based methods are applied for constructing the interface profiles
appropriate to classes of users. Keywords: Web-based systems - Multimodal interaction - Consensus-based interface
adaptation | |||
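A very reduced illustration of the consensus idea: for each interface attribute, choose the value most common among the interface profiles of users in the same class. The data structures and simple majority rule below are assumptions; the paper's consensus methods and compound usability measure are considerably richer.

```python
# Minimal sketch of a consensus-style choice over interface profiles:
# per attribute, take the most common value among profiles of similar users.
# Illustrative only; not the paper's consensus algorithm.
from collections import Counter

def consensus_profile(profiles):
    """profiles: list of dicts mapping interface attributes to chosen values."""
    attributes = {key for p in profiles for key in p}
    return {attr: Counter(p[attr] for p in profiles if attr in p).most_common(1)[0][0]
            for attr in attributes}

same_class_users = [
    {"font_size": "large", "modality": "speech+gui"},
    {"font_size": "large", "modality": "gui"},
    {"font_size": "medium", "modality": "speech+gui"},
]
print(consensus_profile(same_class_users))
```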
| Communication via eye blinks and eyebrow raises: video-based human-computer interfaces | | BIBAK | Full-Text | 359-373 | |
| K. Grauman; M. Betke; J. Lombardi; J. Gips; G. R. Bradski | |||
| Two video-based human-computer interaction tools are introduced that can
activate a binary switch and issue a selection command. BlinkLink, as the first
tool is called, automatically detects a user's eye blinks and accurately
measures their durations. The system is intended to provide an alternate input
modality to allow people with severe disabilities to access a computer.
Voluntary long blinks trigger mouse clicks, while involuntary short blinks are
ignored. The system enables communication using blink patterns: sequences of
long and short blinks which are interpreted as semiotic messages. The second
tool, EyebrowClicker, automatically detects when a user raises his or her
eyebrows and then triggers a mouse click. Both systems can initialize
themselves, track the eyes at frame rate, and recover in the event of errors.
No special lighting is required. The systems have been tested with interactive
games and a spelling program. Results demonstrate overall detection accuracy of
95.6% for BlinkLink and 89.0% for EyebrowClicker. Keywords: Computer vision - Assistive technology - Camera-computer interface | |||
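The voluntary/involuntary distinction rests on blink duration. The sketch below applies a single duration threshold to measured blink lengths; the 250 ms value and the event format are illustrative assumptions rather than the calibration used by BlinkLink.

```python
# Sketch of the duration-threshold idea described above: voluntary long blinks
# become click commands, involuntary short blinks are ignored. The threshold
# value and event format are illustrative assumptions.
def classify_blinks(blink_durations_ms, click_threshold_ms=250):
    """Return a command ('click' or None) for each measured blink duration."""
    return ["click" if d >= click_threshold_ms else None
            for d in blink_durations_ms]

# A 120 ms spontaneous blink is ignored; a 400 ms deliberate blink clicks.
print(classify_blinks([120, 400, 90, 310]))
```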
| How level and type of deafness affect user perception of multimedia video clips | | BIBAK | Full-Text | 374-386 | |
| S. R. Gulliver; G. Ghinea | |||
| Our research investigates the impact that hearing has on the perception of
multimedia, with and without captions, by discussing how hearing loss, captions
and deafness type affect user quality of perception (QoP). QoP encompasses both
the users' level of satisfaction and their ability to assimilate informational
content of multimedia.
Experimental results show that hearing has a significant effect on participants ability to assimilate information, independent of video type or use of captions. It is shown that captioned video does not necessarily provide deaf users with a greater level of information but changes user QoP, providing a greater level of video contextualisation. Keywords: Quality - Perception - Multimedia - Video - Deafness | |||