
Universal Access in the Information Society 6

Editors: Constantine Stephanidis
Dates: 2007/2008
Volume: 6
Publisher: Springer Verlag
Standard No: ISSN 1615-5289 (print); 1615-5297 (electronic)
Papers: 35
Links: Table of Contents
  1. UAIS 2007 Volume 6 Issue 1
  2. UAIS 2007 Volume 6 Issue 2
  3. UAIS 2007 Volume 6 Issue 3
  4. UAIS 2008 Volume 6 Issue 4

UAIS 2007 Volume 6 Issue 1

A user experience-based approach to home atmosphere control BIBAFull-Text 1-13
  Martijn Vastenburg; Philip Ross; David Keyson
The complex control problem of creating home atmospheres using light, music, and projected wall-art can be reduced by focusing on desired experience, rather than product functions and features. A case study is described in which subjective interpretations of living room atmospheres were measured and embedded into a prototype display system. A personalization mechanism is proposed to manage individual differences in atmosphere ratings, enabling a user model to evolve over time. To create a meaningful and simple control mechanism for a wide range of users, three interfaces were developed and studied, ranging from concrete to abstract control and from structured to exploratory navigation.
Educational software and low vision students: evaluating accessibility factors BIBAFull-Text 15-29
  Silvia Dini; Lucia Ferlino; Anna Gettani; Cristina Martinoli; Michela Ott
The aim of this paper is to draw a few guidelines for the evaluation of the accessibility and usability of educational software programs from the point of view of low vision students. The presented findings are based on the results of a long term research project carried out by the Italian National Research Council's Institute for Educational Technology (ITD-CNR) and the David Chiossone Institute for the Blind, both based in Genoa, Italy. The educational project, whose general aims and results are not a matter of discussion here, involves a significant number of visually impaired students from primary to upper secondary school; in such a context, the researchers have the opportunity to assess and evaluate whether, and to what extent, the selected educational software products meet the needs of low vision students. In this perspective, the paper takes into account the features which can be considered significant from an educational point of view: general readability, working field extension and position, menu location and coherence, character dimension, colour brightness, etc. Bearing in mind the ultimate goal of providing children with appropriate, effective educational tools, an educational software accessibility checklist is proposed which is meant to be used by teachers with no, or scarce, experience of low vision, and not by professionals; it has already proved to be an effective tool for helping teachers select suitable educational software products "usable" by low vision students.
Mobile computer Web-application design in medicine: some research based guidelines BIBADOI 31-41
  Andreas Holzinger; Maximilian Errath
Designing Web-applications is considerably different for mobile computers (handhelds, Personal Digital Assistants) than for desktop computers. The screen size and system resources are more limited and end-users interact differently. Consequently, detecting handheld-browsers on the server side and delivering pages optimized for a small client form factor is inevitable. The authors discuss their experiences during the design and development of an application for medical research, which was designed for both mobile and personal desktop computers. The investigations presented in this paper highlight some ways in which Web content can be adapted to make it more accessible to mobile computing users. As a result, the authors summarize their experiences in design guidelines and provide an overview of those factors which have to be taken into consideration when designing software for mobile computers. "The old computing is about what computers can do, the new computing is about what people can do" (Leonardo's laptop: human needs and the new computing technologies, MIT Press, 2002).
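The server-side detection step mentioned in this abstract can be illustrated with a rough sketch. The code below is not the authors' implementation: the user-agent substrings and template names are invented for illustration, and real detection lists are far longer.

    # Sketch of server-side handheld detection: inspect the User-Agent header
    # and choose a page variant sized for the client. Markers are illustrative.
    MOBILE_MARKERS = ("windows ce", "palm", "symbian", "blackberry", "pocket pc", "opera mini")

    def is_handheld(user_agent: str) -> bool:
        """Rough check for handheld browsers of the PDA era."""
        ua = user_agent.lower()
        return any(marker in ua for marker in MOBILE_MARKERS)

    def select_template(user_agent: str) -> str:
        """Return a narrow, low-bandwidth layout for handhelds, the full desktop layout otherwise."""
        return "report_mobile.html" if is_handheld(user_agent) else "report_desktop.html"

    if __name__ == "__main__":
        print(select_template("Mozilla/4.0 (compatible; MSIE 4.01; Windows CE; PPC)"))    # handheld
        print(select_template("Mozilla/5.0 (Windows NT 5.1) Gecko/20070219 Firefox/2.0")) # desktop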
Internet use and non-use: views of older users BIBAFull-Text 43-57
  Anne Morris; Joy Goodman; Helena Brading
This paper reports the results of two connected surveys of computer and Internet use among the older population in the UK. One hundred and twenty questionnaires and interviews were completed with participants aged over 55 in Derbyshire and 353 questionnaires and interviews with over 50s in Scotland. Rates of use, computer and Internet activities, and reasons for use and non-use were investigated. These were backed up by four semi-structured interviews with IT trainers, describing experiences and issues of training this age group. The results indicate a "grey" digital divide, with many older people missing out on the benefits that computers and the Internet can provide. They also indicate some of the reasons why older people do not use computers and the Internet more. These suggest some practical ways forward, highlighting the importance of changing older people's misconceptions about computers, better informing them about what they are, what they can do and how they can be of real practical use.
A systematic approach to the development of research-based web design guidelines for older people BIBAFull-Text 59-75
  Panayiotis Zaphiris; Sri Kurniawan; Mariya Ghiawadwala
This paper presents a systematic approach to the development of a set of research-based ageing-centred web design guidelines (SilverWeb Guidelines). The approach included an initial extensive literature review in the area of human-computer interaction and ageing, the development of an initial set of guidelines based on the reviewed literature, a card sorting exercise for their classification, an affinity diagramming exercise for the reduction and further finalisation of the guidelines, and finally a set of heuristic evaluations for the validation and test of robustness of the guidelines. The 38 final guidelines are grouped into eleven distinct categories (target design, use of graphics, navigation, browser window features, content layout design, links, user cognitive design, use of colour and background, text design, search engine, user feedback and support).
Planning effective HCI to enhance access to educational applications BIBAFull-Text 77-85
  Elspeth McKay
Information and communications technologies (ICT) are widely believed to offer new options for Web-mediated courseware design. Multimedia and online courseware development accentuates a belief that highly graphical (or visual) delivery media will meet the individualised instructional requirements of diverse student cohorts. While most electronic courseware may allow the user to proceed at their own pace, two assumptions are commonly made by courseware designers. Firstly, to facilitate learning, all users are assumed capable of assimilating the graphical content with their current experiential knowledge; there is little or no consideration of different cognitive styles, although understanding learner attributes is essential to increasing accessibility to computerised information. Secondly, learning is assumed rather than demonstrated. To deal with this issue, data analysis techniques can be used to differentiate what an individual knows from what they do not. This paper presents two research projects that demonstrate the importance of awareness of the human dimension of human-computer interaction (HCI) in designing effective online experiential learning for special education.
Modeling decisions under uncertainty in adaptive user interfaces BIBAFull-Text 87-101
  Vasilios Zarikas
The present work proposes a methodological approach for modeling adaptation decisions and for solving the problem of integrating existing as well as acquired knowledge in the decision module of an adaptive interface. So far, most applications do not fully exploit the value of data originating from user models or knowledge acquisition engines that monitor the user and the context. The proposed decision-theoretic model is represented through specifically structured influence diagrams. It provides designers and developers with a specific method to encode user and context information, as well as other crucial decision factors, to be subsequently used in the decision-making process regarding user interface adaptation actions. Such a process is driven by the definition of relevant utilities referring to the design of a user interface. The proposed model guides designers and developers of an adaptive or intelligent interface to integrate, without conflicts and incoherence, design strategies, design goals, user goals, alternative constituents, user profile, context and application domain knowledge. An illustrative example of the analyzed modeling method is presented.
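The influence diagrams themselves are not reproduced in the abstract; the toy sketch below only illustrates the underlying decision-theoretic step, choosing the adaptation action with the highest expected utility under a belief over user and context states. The states, actions and utility values are invented for illustration, not taken from the paper.

    # Toy expected-utility decision for an adaptive interface (values invented).
    # Belief over uncertain user/context states, e.g. supplied by a user model.
    belief = {"novice_on_the_move": 0.5, "novice_at_desk": 0.2, "expert_at_desk": 0.3}

    # Designer-specified utility of each adaptation action in each state.
    utility = {
        "show_wizard":    {"novice_on_the_move": 0.9, "novice_at_desk": 0.7, "expert_at_desk": 0.2},
        "show_shortcuts": {"novice_on_the_move": 0.1, "novice_at_desk": 0.3, "expert_at_desk": 0.9},
        "do_nothing":     {"novice_on_the_move": 0.4, "novice_at_desk": 0.5, "expert_at_desk": 0.6},
    }

    def expected_utility(action: str) -> float:
        return sum(p * utility[action][state] for state, p in belief.items())

    best_action = max(utility, key=expected_utility)
    print(best_action, round(expected_utility(best_action), 3))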
Research on Internet use by Spanish-speaking users with blindness and partial sight BIBAFull-Text 103-110
  Rafael Romero Zúnica; Vicenta Ávila Clemente
This work presents the results of an online questionnaire for Spanish-speaking Internet users with visual impairment. The research was carried out during 2002. The data collected show that the Internet is a very valuable tool for visually impaired people, as it allows them to access written information in an autonomous and instantaneous manner and to communicate with others through individual email and discussion lists. However, all users encountered frequent and significant accessibility problems in many web pages, which prevented them from having complete access to the information and functionalities of those websites.
Is there design-for-all? BIBFull-Text 111-113
  Simon Harper
Web usability: a user-centered design approach by Jonathan Lazar (2006) BIBFull-Text 115
  Panayiotis Zaphiris

UAIS 2007 Volume 6 Issue 2

Designing accessible technology BIBFull-Text 117-118
  Patrick Langdon; John Clarkson; Peter Robinson
Characterising user capabilities to support inclusive design evaluation BIBADOI 119-135
  Umesh Persad; Patrick Langdon; John Clarkson
Designers require knowledge and data about users to effectively evaluate product accessibility during the early stages of design. This paper addresses this problem by setting out the sensory, cognitive and motor dimensions of user capability that are important for product interaction. The relationship between user capability and product demand is used as the underlying conceptual model for product design evaluations and for estimating the number of people potentially excluded from using a given product.
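The capability-demand comparison lends itself to a simple exclusion estimate: a user is counted as potentially excluded if the product's demand on any dimension exceeds that user's capability. The sketch below is a toy version of such a calculation; the dimensions, scales and data are invented and do not come from the paper.

    # Toy capability-demand exclusion estimate (all numbers invented).
    # A user is potentially excluded if demand exceeds capability on any dimension.
    product_demand = {"vision": 3, "hearing": 1, "dexterity": 4, "cognition": 2}

    users = [
        {"vision": 5, "hearing": 5, "dexterity": 5, "cognition": 5},
        {"vision": 2, "hearing": 4, "dexterity": 5, "cognition": 4},  # excluded on vision
        {"vision": 4, "hearing": 3, "dexterity": 3, "cognition": 5},  # excluded on dexterity
    ]

    def is_excluded(capability: dict, demand: dict) -> bool:
        return any(capability[dim] < level for dim, level in demand.items())

    excluded = sum(is_excluded(u, product_demand) for u in users)
    print(f"{excluded}/{len(users)} users potentially excluded ({100 * excluded / len(users):.0f}%)")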
Towards a design tool for visualizing the functional demand placed on older adults by everyday living tasks BIBAFull-Text 137-144
  A. Macdonald; D. Loudon; P. Rowe; D. Samuel; V. Hood; A. Nicol; M. Grealy; B. Conway
This paper discusses the development of a design tool using data calculated from the biomechanical functional demand on joints in older adults during activities of daily living, portrayed using a visual 'traffic-light' system. Whole-body movements of 84 older adults were analysed using a 3D motion capture system, reaction forces were measured by force platforms, and the data were translated into a 3D software model. Although the tool was originally intended for designers, early evaluation of this method of visualizing the data suggests that it may also be of value to those involved in the professional care of older adults.
The narrative construction of our (social) world: steps towards an interactive learning environment for children with autism BIBADOI 145-157
  Megan Davis; Kerstin Dautenhahn; Chrystopher Nehaniv; Stuart Powell
Children with autism exhibit a deficit in narrative comprehension which adversely impacts upon their social world. The authors' research agenda is to develop an interactive software system which promotes an understanding of narrative structure (and thus the social world) while addressing the needs of individual children. This paper reports the results from a longitudinal study, focussing on 'primitive' elements of narrative, presented as proto-narratives, in an interactive software game which adapts to the abilities of individual children. A correlation was found with a real-world narrative comprehension task, and, for most children, a clear distinction emerged in their understanding of narrative components.
Introducing COGAIN: communication by gaze interaction BIBAFull-Text 159-166
  Richard Bates; M. Donegan; H. Istance; J. Hansen; K.-J. Räihä
This paper introduces the work of the COGAIN "communication by gaze interaction" European Network of Excellence that is working toward giving people with profound disabilities the opportunity to communicate and control their environment by eye gaze control. It shows the need for developing eye gaze based communication systems, and illustrates the effectiveness of newly developed COGAIN eye gaze control systems with a series of case studies, each showing differing aspects of the benefits offered by gaze control. Finally, the paper puts forward a strong case for users, professionals and researchers to collaborate towards developing gaze based communication systems to enable and empower people with disabilities.
Non-formal therapy and learning potentials through human gesture synchronised to robotic gesture BIBAFull-Text 167-177
  E. Petersson; A. Brooks
Children with severe physical disabilities have limited possibilities for joyful experiences and interactive play. Physical training and therapy to improve such opportunities for these children are often long, tedious and boringly repetitive -- and this is often the case for both the patient and the facilitator or therapist. The aim of the study reported in this paper was to explore how children with a severe physical disability could use an easily accessible robotic device that enabled control of projected images towards achieving joyful experiences and interactive play, so as to give opportunities for use as a supplement to traditional rehabilitation therapy sessions. The process involves the capturing of gesture data through an intuitive, non-intrusive interface. The interface is invisible to the naked eye and offers a direct and immediate association between the child's physical feed-forward gesture and the physical reaction (feedback) of the robotic device. Results from multiple sessions with four children with severe physical disability suggest that non-intrusive interaction with a multimedia robotic device capable of giving a synchronized physical response offers additional opportunities and a motivating, non-formal potential for therapy and learning that supplements the field.
The effects of prior experience on the use of consumer products BIBADOI 179-191
  Patrick Langdon; Tim Lewis; John Clarkson
Many products today are laden with a host of features which, for the majority of users, remain unused and often obscure the use of the simple features for which the product was devised (Norman in The design of everyday things. Basic Books, 2002; Keates and Clarkson in Countering design exclusion: an introduction to inclusive design. Springer, 2004). Since the cognitive capabilities of the marketed target group are largely not affected by age-related impairment, the intellectual demands of such products are frequently high (Rabbitt in Quart J Exp Psychol 46A(3):385-434, 1993). In addition, the age and technology generation of a product user will colour their expectations of the product interface and affect the range of skills they have available (Docampo in Technology generations handling complex User Interfaces. Ph. D. thesis, 2001). This paper addresses the issue of what features of products make them easy or difficult to learn to use, for the wider population as well as the older user, and whether and in what way individual prior experience affects the learning and use of a product design. To achieve the above, the interactions of users of varying ages and capabilities with two different everyday products were recorded in detail as they performed set tasks. Retrospective verbal protocols were then analysed using a category scheme based on an analysis of types of learning and cognition errors. This data was then compared with users' performance on individual detailed experience questionnaires and a number of tests of general and specific cognitive capabilities. The principal finding was that similarity of prior experience to the usage situation was the main determinant of performance, although there was also some evidence for a gradual, age-related capability decline. Users of all ages adopted a means-end or trial-and-error interaction when faced with unfamiliar elements of the interaction. There was a strong technology generation effect such that older users were reluctant or unable to complete the required tasks for a digital camera.
Attitudes to telecare among older people, professional care workers and informal carers: a preventative strategy or crisis management? BIBADOI 193-205
  Julienne Hanson; John Percival; Hazel Aldred; Simon Brownsell; Mark Hawley
This paper reports findings from an attitudinal survey towards telecare that emerged from 22 focus groups comprising 92 older people, 55 professional stakeholders and 39 carers. These were convened in three different regions of England as a precursor to telecare service development. The results from this study suggest that informants' views were shaped by prior knowledge of conventional health and social care delivery in their locality, and the implication is that expectations and requirements with respect to telecare services in general are likely to be informed by wider perceptions about the extent to which community care should operate as a preventative strategy or as a mechanism for crisis management.
Designing technology with older people BIBAFull-Text 207-217
  Guy Dewsbury; Mark Rouncefield; Ian Sommerville; Victor Onditi; Peter Bagnall
Designing applications to support older people in their own homes is increasing in both popularity and necessity. The growth in supporting older people in the community means that limited resources must be used in the most effective manner, which often favours technology deployment over human deployment, as the former is perceived as more cost-effective. The concern therefore arises as to how technology can be designed in a way that is inclusive and acceptable to the people who are to receive it. This paper discusses this design issue and how these concerns have been addressed in a series of projects aimed at directly supporting people in the community.

UAIS 2007 Volume 6 Issue 3

Building the Mobile Web: rediscovering accessibility? BIBDOI 219-220
  Simon Harper; Yeliz Yesilada; Carole Goble
Capability survey of user agents with the UAAG 1.0 test suite and its impact on Web accessibility BIBAFull-Text 221-232
  Takayuki Watanabe; Masahiro Umegaki
To determine how well user agents conform to UAAG 1.0, the capabilities of user agents were investigated with the UAAG 1.0 Test Suite. It was found that 20 Priority 1 checkpoints were met by all the user agents, while 12 Priority 1 checkpoints, relating to multimedia control and time-dependent interactions, were failed by all of them. The results showed that two major Japanese user agents did not have sufficient functions to navigate through the Web, whereas the latest ones did have those functions. These results show that although there are user agents which meet many requirements of UAAG 1.0, Web authors still have to pay attention to the capabilities of the user agents that are likely to be used to browse their content.
GraSSML: accessible smart schematic diagrams for all BIBAFull-Text 233-247
  Z. Fredj; D. Duce
Graphical representation is a powerful way of conveying information. Its use has made life much easier for most sighted users, but people with disabilities, or users who work in environments where visual representations are inappropriate, cannot access information contained in graphics unless alternative descriptions are included. This paper describes an approach called Graphical Structure Semantic Markup Languages (GraSSML), which aims at defining high-level diagram description languages capturing the structure and the semantics of a diagram, and enabling the generation (by transformation) of accessible and "smart" presentations in different modalities such as speech, text, graphics, etc. The structure and the semantics of a diagram are made available at the creation stage. This offers new possibilities for allowing Web graphics to become "smart".
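GraSSML's actual languages are not shown in the abstract; the sketch below only illustrates the general idea of describing a diagram's structure in markup and generating a textual presentation from it by transformation. The element and attribute names are hypothetical, not GraSSML syntax.

    # Hypothetical structural diagram markup transformed into a text presentation.
    import xml.etree.ElementTree as ET

    DIAGRAM = """
    <diagram title="Order processing">
      <node id="start" label="Receive order"/>
      <node id="check" label="Check stock"/>
      <node id="ship"  label="Ship goods"/>
      <edge from="start" to="check" label="then"/>
      <edge from="check" to="ship"  label="if in stock"/>
    </diagram>
    """

    def to_text(xml_source: str) -> str:
        """Generate a plain-text rendering of the diagram's structure."""
        root = ET.fromstring(xml_source)
        labels = {n.get("id"): n.get("label") for n in root.findall("node")}
        lines = [f"Diagram: {root.get('title')}."]
        for e in root.findall("edge"):
            lines.append(f"'{labels[e.get('from')]}' leads to '{labels[e.get('to')]}' ({e.get('label')}).")
        return "\n".join(lines)

    print(to_text(DIAGRAM))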
Using RSS feeds for effective mobile web browsing BIBAFull-Text 249-257
  John Garofalakis; Vassilios Stefanis
As mobile phones become more popular, wireless communication vendors and device manufacturers are seeking new applications for their products. Access to the large corpus of Internet information is a very prominent field. However, the technical limitations of mobile devices pose many challenges: browsing the Internet using a mobile phone is a considerable scientific and cultural challenge, and Web content must be adapted before it can be accessed by a mobile browser. This work presents a new methodology that uses Really Simple Syndication (RSS) feeds for the adaptation of web content for use on mobile phones. The methodology is based on concrete design guidelines and supports different viewing modes. The resulting mobile tool built on RSS feeds is assessed through a user-centered evaluation, and the results are presented.
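The adaptation pipeline is not spelled out in the abstract; the sketch below shows only the core step such an approach relies on, parsing an RSS 2.0 feed and reducing each item to a short title/link pair that fits a small screen. The feed is embedded so the example runs offline; fetching, paging, and the authors' design guidelines and viewing modes are omitted.

    # Minimal sketch: parse an RSS 2.0 feed and keep only what a phone screen needs.
    import xml.etree.ElementTree as ET

    RSS = """
    <rss version="2.0"><channel>
      <title>Example News</title>
      <item><title>First headline</title><link>http://example.com/1</link>
            <description>Long article body not needed on a phone...</description></item>
      <item><title>Second headline</title><link>http://example.com/2</link>
            <description>Another long body...</description></item>
    </channel></rss>
    """

    def to_mobile_listing(rss_source, max_items=10):
        """Return (title, link) pairs for at most max_items feed entries."""
        channel = ET.fromstring(rss_source).find("channel")
        return [(item.findtext("title", ""), item.findtext("link", ""))
                for item in channel.findall("item")[:max_items]]

    for title, link in to_mobile_listing(RSS):
        print(f"{title} -> {link}")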
A web browsing system for cellular-phone users based on adaptive presentation BIBAFull-Text 259-271
  Yuki Arase; Takuya Maekawa; Takahiro Hara; Toshiaki Uemukai; Shojiro Nishio
Cellular phones are widely used to access the Web. However, most available Web pages are designed for desktop PCs, and it is inconvenient to browse these large Web pages on a cellular phone with a small screen and poor interfaces. Users who browse a Web page on a cellular phone have to scroll through the whole page to find the desired content, and must then search and scroll within that content in detail to get useful information. This paper describes the design and implementation of a novel Web browsing system for cellular phones. This system includes a Web page overview to reduce scrolling operations when finding objective content within the page. Furthermore, it adaptively presents content according to its characteristics to reduce burdensome operations when searching within content.
Browsing shortcuts as a means to improve information seeking of blind people in the WWW BIBADOI 273-283
  Christos Kouroupetroglou; Michail Salampasis; Athanasios Manitsaris
This paper presents a "Semantic Web application framework" which allows different applications to be designed and developed for improving the accessibility of the World Wide Web (WWW). The framework promotes the idea of creating a community of people federating into groups (ontology creators, annotators, user-agent developers, end-users), each playing a specific role, without the coordination of any central authority. The use of a specialised voice web browser for blind people, called SeEBrowser, is presented and discussed as an example of an accessibility tool developed based on the framework. SeEBrowser utilises annotations of web pages and provides browsing shortcuts. Browsing shortcuts are mechanisms that help blind people move efficiently through various elements of a web page (e.g. functional elements such as forms, navigational aids, etc.) during the information-seeking process, hence operating effectively as a vital counterbalance to low accessibility. Finally, an experimental user study is presented and discussed which evaluates SeEBrowser with and without the use of browsing shortcuts.
Personalizable edge services for Web accessibility BIBADOI 285-306
  Ugo Erra; Gennaro Iaccarino; Delfina Malandrino; Vittorio Scarano
The Web Content Accessibility Guidelines by the W3C (W3C Recommendation, May 1999, http://www.w3.org/TR/WCAG10/) provide several suggestions for Web designers regarding how to author Web pages in order to make them accessible to everyone. In this context, this paper proposes the use of edge services as an efficient and general solution to promote accessibility and to break down the digital barriers that prevent users with disabilities from participating actively in all aspects of society. The idea behind edge services mainly concerns the advantages of personalized navigation, in which contents are tailored according to different factors, such as the capabilities of the client's device, communication systems and network conditions and, finally, the preferences and/or abilities of the growing number of users that access the Web. To meet these requirements, Web designers have to provide efficient content adaptation and personalization mechanisms in order to guarantee universal access to Internet content. The so far dominant paradigm of communication on the WWW, due to its simple request/response model, cannot efficiently address such requirements. Therefore, it must be augmented with new components that attempt to enhance the scalability, the performance and the ubiquity of the Web. Edge servers, acting on the HTTP data flow exchanged between client and server, allow on-the-fly content adaptation as well as other complex functionalities beyond the traditional caching and content replication services. These value-added services are called edge services and include personalization and customization, aggregation from multiple sources, geographical personalization of the navigation of pages (with insertion/emphasis of content that can be related to the user's geographical location), translation services, group navigation and awareness for social navigation, advanced services for bandwidth optimization such as adaptive compression and format transcoding, mobility, and ubiquitous access to Internet content. This paper presents Personalizable Accessible Navigation (Pan), a set of edge services designed to improve Web page accessibility, developed and deployed on top of a programmable intermediary framework. The characteristics and the location of the services, i.e., provided by intermediaries, as well as the personalization and the ability to select multiple profiles, make Pan a platform that is especially suitable for accessing the Web seamlessly, including from mobile terminals.
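Pan's intermediary framework is not described in implementable detail in the abstract; the fragment below is only a generic illustration of the kind of on-the-fly inspection an edge service could apply to HTML flowing between server and client, here flagging images that lack alt text so a downstream service could repair or annotate them. It uses the standard-library HTMLParser and is not Pan's implementation.

    # Generic edge-service-style pass over HTML in transit: report images
    # without alt text. Illustrative only; not Pan's actual code.
    from html.parser import HTMLParser

    class MissingAltAuditor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.missing = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attrs = dict(attrs)
                if not attrs.get("alt"):
                    self.missing.append(attrs.get("src", "<unknown source>"))

    PAGE = '<html><body><img src="logo.png"><img src="chart.png" alt="Sales chart"></body></html>'

    auditor = MissingAltAuditor()
    auditor.feed(PAGE)
    print("Images missing alt text:", auditor.missing)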
How people use presentation to search for a link: expanding the understanding of accessibility on the Web BIBAFull-Text 307-320
  Caroline Jay; Robert Stevens; Mashhuda Glencross; Alan Chalmers; Cathy Yang
It is well known that many Web pages are difficult to use for visually disabled people. Without access to a rich visual display, the intended structure and organisation of the page is obscured. To fully understand what is missing from the experience of visually disabled users, it is pertinent to ask how the presentation of Web pages on a standard display makes them easier for sighted people to use. This paper reports on an exploratory eye tracking study that addresses this issue by investigating how sighted readers use the presentation of the BBC News Web page to search for a link. The standard page presentation is compared with a "text-only" version, demonstrating both qualitatively and quantitatively that the removal of the intended presentation alters "reading" behaviours. The demonstration that the presentation of information assists task completion suggests that it should be re-introduced to non-visual presentations if the Web is to become more accessible. The conducted study also explored the extent to which algorithms that generate maps of what is perceptually salient on a page match the gaze data recorded in the eye tracking study. The correspondence between a page's presentation, knowledge of what is visually salient, and how people use these features to complete a task might offer an opportunity to re-model a Web page to maximise access to its most important parts.

UAIS 2008 Volume 6 Issue 4

Special issue: "Emerging Technologies for Deaf Accessibility in the Information Society" BIBDOI 321-322
  Eleni Efthimiou; Evita Fotinea; John Glauert
Recent developments in visual sign language recognition BIBADOI 323-362
  Ulrich von Agris; Jörg Zieren; Ulrich Canzler; Britta Bauer; Karl-Friedrich Kraiss
Research in the field of sign language recognition has made significant advances in recent years. The present achievements provide the basis for future applications with the objective of supporting the integration of deaf people into the hearing society. Translation systems, for example, could facilitate communication between deaf and hearing people in public situations. Further applications, such as user interfaces and automatic indexing of signed videos, become feasible. The current state in sign language recognition is roughly 30 years behind speech recognition, which corresponds to the gradual transition from isolated to continuous recognition for small vocabulary tasks. Research efforts were mainly focused on robust feature extraction or statistical modeling of signs. However, current recognition systems are still designed for signer-dependent operation under laboratory conditions. This paper describes a comprehensive concept for robust visual sign language recognition, which represents the recent developments in this field. The proposed recognition system aims for signer-independent operation and utilizes a single video camera for data acquisition to ensure user-friendliness. Since sign languages make use of manual and facial means of expression, both channels are employed for recognition. For mobile operation in uncontrolled environments, sophisticated algorithms were developed that robustly extract manual and facial features. The extraction of manual features relies on a multiple hypotheses tracking approach to resolve ambiguities of hand positions. For facial feature extraction, an active appearance model is applied which allows identification of areas of interest such as the eyes and mouth region. In the next processing step, a numerical description of the facial expression, head pose, line of sight, and lip outline is computed. The system employs a resolution strategy for dealing with mutual overlapping of the signer's hands and face. Classification is based on hidden Markov models which are able to compensate time and amplitude variances in the articulation of a sign. The classification stage is designed for recognition of isolated signs, as well as of continuous sign language. In the latter case, a stochastic language model can be utilized, which considers uni- and bigram probabilities of single and successive signs. For statistical modeling of reference models, each sign is represented either as a whole or as a composition of smaller subunits -- similar to phonemes in spoken languages. While recognition based on word models is limited to rather small vocabularies, subunit models open the door to large vocabularies. Achieving signer-independence constitutes a challenging problem, as the articulation of a sign is subject to high interpersonal variance. This problem cannot be solved by simple feature normalization and must be addressed at the classification level. Therefore, dedicated adaptation methods known from speech recognition were implemented and modified to consider the specifics of sign languages. For rapid adaptation to unknown signers, the proposed recognition system employs a combined approach of maximum likelihood linear regression and maximum a posteriori estimation.
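The system summarized above is far richer than anything that can be shown here; as a minimal illustration of the HMM classification stage alone, the sketch below scores a discrete observation sequence against one toy HMM per sign with the forward algorithm and picks the most likely sign. Feature extraction, continuous recognition, subunit models and signer adaptation are all omitted, and every parameter is invented.

    # Toy isolated-sign classification: one discrete HMM per sign, scored with
    # the forward algorithm; the highest-likelihood model wins. Parameters invented.
    import math

    def forward_log_likelihood(obs, pi, A, B):
        """log P(obs | model) for a discrete HMM with start probabilities pi,
        transition matrix A and emission matrix B."""
        alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
        for o in obs[1:]:
            alpha = [sum(alpha[s] * A[s][t] for s in range(len(pi))) * B[t][o]
                     for t in range(len(pi))]
        return math.log(sum(alpha))

    # Two 2-state models over a 3-symbol alphabet (e.g. quantised hand positions).
    models = {
        "HELLO":  ([0.9, 0.1], [[0.7, 0.3], [0.2, 0.8]], [[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]),
        "THANKS": ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.1, 0.8, 0.1], [0.3, 0.4, 0.3]]),
    }

    observation = [0, 0, 2, 2, 2]  # a quantised feature sequence
    best = max(models, key=lambda sign: forward_log_likelihood(observation, *models[sign]))
    print("Recognised sign:", best)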
Facial movement analysis in ASL BIBAFull-Text 363-374
  Christian Vogler; Siome Goldenstein
In the age of speech and voice recognition technologies, sign language recognition is an essential part of ensuring equal access for deaf people. To date, sign language recognition research has mostly ignored facial expressions that arise as part of a natural sign language discourse, even though they carry important grammatical and prosodic information. One reason is that tracking the motion and dynamics of expressions in human faces from video is a hard task, especially with the high number of occlusions from the signers' hands. This paper presents a 3D deformable model tracking system to address this problem, and applies it to sequences of native signers, taken from the National Center for Sign Language and Gesture Resources (NCSLGR), with a special emphasis on outlier rejection methods to handle occlusions. The experiments conducted in this paper validate the output of the face tracker against expert human annotations of the NCSLGR corpus, demonstrate the promise of the proposed face tracking framework for sign language data, and reveal that the tracking framework picks up properties that ideally complement human annotations for linguistic research.
Linguistic modelling and language-processing technologies for Avatar-based sign language presentation BIBADOI 375-391
  R. Elliott; J. Glauert; J. Kennaway; I. Marshall; E. Safar
Sign languages are the native languages of many pre-lingually deaf people and must be treated as genuine natural languages worthy of academic study in their own right. For such pre-lingually deaf people, whose familiarity with their local spoken language is that of a second-language learner, written text is much less useful than is commonly thought. This paper presents research into sign language generation from English text at the University of East Anglia, which has involved sign language grammar development to support synthesis and visual realisation of sign language by a virtual human avatar. One strand of research in the ViSiCAST and eSIGN projects has concentrated on the real-time generation of sign language performance by a virtual human (avatar), given a phonetic-level description of the required sign sequence. A second strand has explored the generation of such a phonetic description from English text. The utility of the conducted research is illustrated in the context of sign language synthesis by a preliminary consideration of plurality and placement within a grammar for British Sign Language (BSL). Finally, ways in which the animation generation subsystem has been used to develop signed content on public sector Web sites are also illustrated.
Sign language applications: preliminary modeling BIBADOI 393-404
  Annelies Braffort; Patrice Dalle
For deaf persons to have ready access to information and communication technologies (ICTs), the latter must be usable in sign language (SL), i.e., include interlanguage interfaces. Such applications will be accepted by deaf users if they are reliable and respectful of SL specificities -- use of space and iconicity as the structuring principles of the language. Before developing ICT applications, it is necessary to model these features, both to enable analysis of SL videos and to generate SL messages by means of signing avatars. This paper presents a signing space model, implemented within a context of automatic analysis and automatic generation, which are currently under development.
A knowledge-based sign synthesis architecture BIBAFull-Text 405-418
  Stavroula-Evita Fotinea; Eleni Efthimiou; George Caridakis; Kostas Karpouzis
This paper presents the modules that comprise a knowledge-based sign synthesis architecture for Greek sign language (GSL). Such systems combine natural language (NL) knowledge, machine translation (MT) techniques and avatar technology in order to allow for dynamic generation of sign utterances. The NL knowledge of the system consists of a sign lexicon and a set of GSL structure rules, and is exploited in the context of typical natural language processing (NLP) procedures, which involve syntactic parsing of linguistic input as well as structure and lexicon mapping according to standard MT practices. The coding of linguistic strings relevant to GSL provides instructions for the motion of a virtual signer that performs the corresponding signing sequences. Dynamic synthesis of GSL linguistic units is achieved by mapping written Greek structures to GSL, based on a computational grammar of GSL and a lexicon that contains lemmas coded as features of GSL phonology. This approach allows for robust conversion of written Greek to GSL, which is an essential prerequisite for access to e-content by the community of native GSL signers. The developed system is sublanguage oriented and performs satisfactorily as regards its linguistic coverage, allowing for easy extensibility to other language domains. However, its overall performance is subject to current well-known MT limitations.
Generating American Sign Language animation: overcoming misconceptions and technical challenges BIBAFull-Text 419-434
  Matt Huenerfauth
Misconceptions about the English literacy rates of deaf Americans, the linguistic structure of American Sign Language (ASL), and the suitability of traditional machine translation (MT) technology to ASL have slowed the development of English-to-ASL MT systems for use in accessibility applications. This article traces the progress of a new English-to-ASL MT project targeted to translating texts important for literacy and user-interface applications. These texts include ASL phenomena called "classifier predicates." Challenges in producing classifier predicates, novel solutions to these challenges, and applications of this technology to the design of user-interfaces accessible to deaf users will be discussed.
Universal access to communication and learning: the role of automatic speech recognition BIBAFull-Text 435-447
  Mike Wald; Keith Bain
This communication discusses how automatic speech recognition (ASR) can support universal access to communication and learning through the cost-effective production of text synchronised with speech and describes achievements and planned developments of the Liberated Learning Consortium to: support preferred learning and teaching styles; assist those who for cognitive, physical or sensory reasons find notetaking difficult; assist learners to manage and search online digital multimedia resources; provide automatic captioning of speech for deaf learners or when speech is not available or suitable; assist blind, visually impaired or dyslexic people to read and search material; and, assist speakers to improve their communication skills.