| Rebuilding the Babel Tower | | BIBA | Full-Text | 1 | |
| Chris Johnson | |||
| The Tower of Babel was constructed so that its builders could climb to heaven. As a lesson against such impudence, God destroyed the tower. To ensure that such a project would never again be attempted, its builders were condemned to speak different languages as they dispersed throughout the world. This workshop addresses the consequences of the Tower's collapse. We are attempting to reduce the communications barriers that have arisen as people scatter in search of global markets and global sources of production. | |||
| Usability and Mobility; Interactions On the Move | | BIBA | Full-Text | 2 | |
| Peter Johnson | |||
| The developments in wireless communication and distributed systems, together
with increases in the power and interactive capabilities of hand-held and
portable devices, provide us with the possibility of wide-ranging and continual
access to computing resources in a variety of contexts. These technological
changes make increasing demands on the quality of the user interface and offer
the potential to further extend the functionality of computing devices.
However, they also make human-computer interaction all the more central to the
design and development of such mobile systems. The case remains that
functionality does not exist for the user if that functionality is not usable.
This paper considers aspects of mobile systems from an HCI perspective and, in doing so, reflects upon how well equipped HCI is to support the design and development of mobile systems. Four areas of concern for HCI are raised and briefly discussed, with example scenarios in which mobile systems might be developed, to illustrate how these design situations present HCI researchers and practitioners with new challenges. | |||
| Exploiting Context in HCI Design for Mobile Systems | | BIB | Full-Text | 3 | |
| Tom Rodden; Keith Cheverst; Nigel Davies; Alan Dix | |||
| Ubiquitous Input for Wearable Computing: Qwerty Keyboard without a Board | | BIBA | Full-Text | 4 | |
| Mikael Goldstein; Robert Book; Gunilla Alsio; Silvia Tessa | |||
| A different, yet familiar, kind of input interface is proposed for the mobile
cellular phone user who has acquired the skill of touch-typing. By picking up
each finger's muscular contractions when touch-typing, and combining them with
a language model, it is possible to reduce the Qwerty keyboard to a truly
ubiquitous interface: a "Qwerty keyboard without a board". Each finger covers a
certain number of letters when touch-typing (between 3 and 6 letters). | |||
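The disambiguation step described above can be sketched in a few lines: each finger identifies only a group of letters, so a tap sequence maps to several candidate words, and a language model ranks them. This is a minimal illustration only; the finger-to-letter assignment, the tiny lexicon, and all names below are invented here and are not the authors' actual system.

```python
# Sketch: decode finger-tap sequences into ranked candidate words,
# assuming a simple unigram word-frequency model (all data hypothetical).

# Touch-typing letter groups per finger (a simplified assignment).
FINGER_GROUPS = {
    0: "qaz", 1: "wsx", 2: "edc", 3: "rfvtgb",
    4: "yhnujm", 5: "ik", 6: "ol", 7: "p",
}

LETTER_TO_FINGER = {ch: f for f, group in FINGER_GROUPS.items() for ch in group}

# Tiny stand-in lexicon with relative word frequencies.
LEXICON = {"the": 100, "tie": 20, "toe": 15, "rye": 1}

def fingers_for(word):
    """Map a word to the sequence of fingers that would type it."""
    return tuple(LETTER_TO_FINGER[ch] for ch in word)

def decode(tap_sequence):
    """Return candidate words matching the taps, most frequent first."""
    candidates = [w for w in LEXICON if fingers_for(w) == tuple(tap_sequence)]
    return sorted(candidates, key=lambda w: -LEXICON[w])
```

With 3-6 letters per finger, many tap sequences are ambiguous (here "the" and "rye" share the sequence 3, 4, 2), which is exactly where the language model's ranking does the work.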
| Using Non-Speech Sounds in Mobile Computing Devices | | BIBA | Full-Text | 5 | |
| Stephen Brewster; Gregory Leplatre; Murray Crease | |||
| One of the main problems with output from small, hand-held mobile computing
devices is the lack of screen space. As the device must be small to fit into
the user's hand or pocket there is no space for a large screen. Much of the
work on presentation in standard, desktop interfaces relies on a large,
high-resolution screen. The whole desktop metaphor, in fact, relies on such a
method of presentation. This means that much of the research on effective
screen design and information output cannot be generalised to mobile devices.
This has resulted in devices that are hard to use, with small text that is hard
to read, cramped graphics and little contextual information.
Lack of screen space is not a problem that can easily be solved by technological advances: the screen must fit on the device, the device must be small, and so screen space will always be in short supply. Another problem is that whatever screen there is becomes unusable on a mobile telephone once the device is put to the user's ear to make or receive a call.
There is one output channel that has, as yet, been little used to improve interaction in mobile devices (in fact very few systems of any type have used it effectively): sound. Speech sounds are of course used in mobile phones when calls are being made, but they are not used by the telephone to aid interaction with the device. Non-speech sounds are used for ringing tones or alarms (often in a quite sophisticated way) but again do not help the user interact with the system.
There is now considerable evidence to suggest that sound can improve interaction and may be very powerful in limited-display devices. We suggest that sound, particularly non-speech sound, can be used to overcome some of the limitations due to the lack of screen space. Non-speech sounds have advantages over speech in that they are faster and language-independent. Research we are undertaking has shown that using non-speech sound can significantly increase usability without the need for more screen space. The rest of this position paper outlines some of the work we are doing with non-speech sounds to improve the usability of human-computer interfaces. More details can be found on the web site above. | |||
| Design Lifecycles and Wearable Computers for Users with Disabilities | | BIBA | Full-Text | 6 | |
| Helen Petrie; Stephen Furner; Thomas Strothotte | |||
| As with all technological artifacts, wearable and mobile computers need to
be well designed if they are to serve their functions appropriately. We are
working towards an appropriate iterative user-centred design lifecycle for the
development of wearable and mobile computers for people with visual
disabilities, taking many ideas from mainstream HCI but adapting them both for
the particular characteristics of wearable and mobile computer systems and the
particular characteristics of our user group. This process has led us to
conclude that methodologies developed for the evaluation of static interfaces
will need to be adapted and extended if they are to capture the critical
features and peculiarities of wearable and mobile computers, whether they are
for able-bodied or disabled users.
Wearable computers have enormous potential to assist people with disabilities. For example, for people with sensory disabilities such as blindness or deafness, they could provide substitute sensory information. Very cumbersome laboratory systems have been developed which provide substitute visual information for blind people by projecting a simple image in tactile form on the back or stomach, and these have been shown to have some utility [3]. Such systems would be far more useful as a wearable technology, although the appropriate miniaturization is still in the future. However, it is already possible to provide disabled people with useful information via wearable systems, even if this is not complete sensory substitution. For example, the Low Vision Enhancement System [13] is an augmented reality headset which helps the wearer make more effective use of any remaining vision by magnifying images and increasing light/dark contrast. | |||
| Developing Scenarios for Mobile CSCW | | BIBA | Full-Text | 7 | |
| Steinar Kristoffersen; Jo Herstad; Fredrik Ljungberg; Frode Løbersli; Jan R. Sandbakken; Kari Thoresen | |||
| This paper presents a scenario-based approach to designing mobile applications. Based on empirical studies of consultants in a maritime classification company, a set of scenarios was developed. The scenarios are used for assessing current mobile platforms, as well as pointing to new design possibilities for the organisation concerned. Appraising some solutions for different scenarios, we found that the current trend of simply making the desktop smaller is not sufficient. Mobile computing and wireless networks cannot match the performance of stationary technology. At the same time, work is usually organised according to the capabilities of the desktop. Thus, new metaphors and human-computer interaction techniques are needed to improve the design of mobile computing. | |||
| Human-Computer-Giraffe Interaction: HCI in the Field | | BIBA | Full-Text | 8 | |
| Jason Pascoe; Nick Ryan; David Morse | |||
| This paper presents some findings and proposals for new research that have arisen from our work on the "Mobile Computing in Fieldwork Environments" project at the University of Kent at Canterbury [1]: a project that is sponsored by JTAP (JISC Technology Applications Programme) [2]. Our main research interest is in the development of novel software tools for the mobile fieldworker that exploit existing handheld computing and sensor technology. The work described in this paper concentrates on examining the special needs and environment of the fieldworker, reflecting on the HCI features required for a successful PDA (Personal Digital Assistant) for use in the field. | |||
| Some Lessons for Location-Aware Applications | | BIBA | Full-Text | 9 | |
| Peter J. Brown | |||
| There are a number of different technologies that can detect the user's
current location: GPS, DGPS, mobile phones [1], PARCTabs [2], active badges
[3], tags, and, for cases where the user actively records their location,
barcodes placed at defined locations. The field of location-sensing is
blossoming so much that it has been suggested that in the future it will be
standard for every computer operating system to know the location of the
computer it is running on, just as at present every operating system knows the
time (albeit perhaps modulo 100 years).
For portable devices, such as a PDA coupled to a GPS system, this opens the way to location-aware applications, and in particular to applications that automatically trigger information that is relevant to the user's current location. Simple applications are in tourism, where information is given about sights that the user is passing, and in maintenance, where the user is automatically given information about nearby equipment. In some applications, e.g. those concerned with the logging of events, the act of triggering causes a program to be run. We have been working in this field for the past five years -- the original inspiration coming from Xerox Research Centre Europe in Cambridge -- and the purpose of this paper is to present some of the lessons learned. In fact our work has been in context-aware applications in general: thus we are not just interested in location, but other elements of the user's context that may be detected by sensors, e.g. time, orientation, current companions, nearby equipment, temperature, the relative position of the nearest public transport vehicle, etc. A good deal of power, in terms of relating triggered information closely to the user's needs, comes from bringing together several contextual elements. As an example, information presented to a tourist might depend not only on the location, but on the time of day, the season of the year and the current temperature. Nevertheless, although our interest goes beyond just location, location is often a sine qua non of the applications that we have worked on. Our prime aim is to make context-aware applications easy to create and to use: to move them from the research laboratory, where most of them still reside, to the marketplace [4]. Specifically we aim to make authorship of applications simply a creative process, akin to creating web pages, rather than a programming challenge. 
To accomplish our aim, we have confined ourselves to discrete context-aware applications, which involve triggering discrete pieces of information attached to discrete contexts, e.g. a piece of information attached to the context of the cathedral's location combined with a time in winter. We do not cover continuous applications, where the user interface is continually changing as the user changes context. Continuous applications require programming effort, and present a bigger authorship challenge. | |||
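The discrete triggering model described above can be sketched as follows. The field names and example notes are hypothetical, and a real system would obtain the current context from sensors rather than from a literal dictionary.

```python
# Sketch: discrete context-aware triggering. Pieces of information are
# attached to discrete contexts; a piece is triggered when every field of
# its attached context matches the currently sensed context.

def matches(attached, current):
    """An attached context matches when each of its fields agrees with
    the sensed context; fields it does not mention act as wildcards."""
    return all(current.get(k) == v for k, v in attached.items())

def triggered(notes, current):
    """Return the information whose attached context matches `current`."""
    return [text for ctx, text in notes if matches(ctx, current)]

# Hypothetical authored content: (attached context, information) pairs.
notes = [
    ({"location": "cathedral"}, "The cathedral was founded in 597."),
    ({"location": "cathedral", "season": "winter"},
     "The cathedral closes early in winter."),
    ({"location": "castle"}, "The castle keep is Norman."),
]

# Hypothetical sensed context (location plus other contextual elements).
current = {"location": "cathedral", "season": "winter", "time": "14:05"}
```

Authoring then amounts to writing (context, information) pairs, akin to writing web pages, rather than programming: combining several contextual elements (location plus season here) narrows the triggered information, as the paper suggests.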
| Developing a Context Sensitive Tourist Guide | | BIB | Full-Text | 10 | |
| Nigel Davies; Keith Mitchell; Keith Cheverst; Gordon Blair | |||
| On the Importance of Translucence for Mobile Computing | | BIBA | Full-Text | 11 | |
| Maria R. Ebling; M. Satyanarayanan | |||
| Mobile clients experience a wide range of network characteristics, and this
situation is likely to continue for the foreseeable future. This range of
characteristics includes fast, reliable, and cheap networks at one extreme and
slow, intermittent, and expensive ones at the other. The demand for mobile
connectivity has created an active area of research. As new technologies become
available, mobile clients will have to choose between competing network
providers offering different levels of service. In fact, mobile clients will
eventually be capable of seamlessly switching from one network to another
depending on current needs. Thus, mobile clients will need to choose a network
provider dynamically.
The decision regarding which network provider to use involves trade-offs that include both energy and financial components. Our position is that mobile clients cannot balance these trade-offs well without assistance from the user. The challenge is how to gain enough assistance to make wise choices without imposing an undue burden on users. We believe that a key to meeting this challenge will be translucence. A translucent system exposes critical details of the system to the user in order to improve the system's ability to service the user's needs, while hiding non-critical details from the user to minimize the imposed burden. | |||
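A minimal sketch of such a translucent provider choice, assuming the user supplies just one preference weight (the critical detail exposed) while the raw per-provider figures stay hidden; all provider names and numbers here are invented for illustration.

```python
# Sketch: pick a network provider by weighing energy cost against
# financial cost, with one user-supplied preference weight.

from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    joules_per_mb: float   # energy cost of transferring 1 MB (hypothetical)
    cents_per_mb: float    # monetary cost of transferring 1 MB (hypothetical)

def choose(providers, energy_weight=0.5):
    """Pick the provider minimising a weighted sum of normalised costs.

    energy_weight in [0, 1] is the one piece of user assistance the
    system requests; everything else is handled automatically.
    """
    max_j = max(p.joules_per_mb for p in providers) or 1.0
    max_c = max(p.cents_per_mb for p in providers) or 1.0

    def score(p):
        return (energy_weight * p.joules_per_mb / max_j
                + (1 - energy_weight) * p.cents_per_mb / max_c)

    return min(providers, key=score)

providers = [
    Provider("wlan", joules_per_mb=2.0, cents_per_mb=0.0),
    Provider("cellular", joules_per_mb=8.0, cents_per_mb=5.0),
]
```

The point of the sketch is the division of labour: the user states a preference once, and the client switches networks dynamically as availability changes, without surfacing every cost figure.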
| Developing Interfaces for Collaborative Mobile Systems | | BIBA | Full-Text | 12 | |
| Keith Cheverst; Nigel Davies; Adrian Friday | |||
| This paper describes the issues encountered when developing user interfaces for collaborative multimedia applications designed for operation in unreliable mobile networking environments. To provide end-users with some degree of dependability, applications need to provide increased levels of user awareness so that users can adapt their style of interaction to match the current quality of communications. The application described in this paper achieves this by presenting users with graphical feedback when the constraints imposed by the network violate the collaborating groups' various communications requirements. Because traditional distributed development platforms tend to mask detailed network information from the application, the development platform was enhanced to enable the flow of information between the network-level and application-level services and vice versa. | |||
| Wireless Markup Language as a Framework for Interaction with Mobile Computing and Communication Devices | | BIBA | Full-Text | 13 | |
| Jo Herstad; Do Van Thanh; Steinar Kristoffersen | |||
| The Wireless Application Protocol (WAP) is the result of ongoing work to define an industry-wide standard for developing applications and services over wireless communication networks. The scope of the WAP working group is to define a set of standards to be used by service applications. This document gives a general overview of the Wireless Markup Language (WML), which is one element of the WAP architecture. This work in progress is driven by the WAP Forum, a non-profit organisation established by Motorola, Nokia, Unwired Planet and Ericsson. | |||
| Giving Users the Choice between a Picture and a Thousand Words | | BIB | Full-Text | 14 | |
| Malcolm McIlhagga; Ann Light; Ian Wakeman | |||
| User Needs for Mobile Communication Devices: Requirements Gathering and Analysis through Contextual Inquiry | | BIBA | Full-Text | 15 | |
| Kaisa Vaananen-Vainio-Mattila; Satu Ruuska | |||
| A major problem in exploring user requirements for mobile communication and personal organisation devices is the variety of usage patterns and of the contexts in which usage takes place. Traditional means such as user interviews or usability testing in a laboratory environment are not capable of revealing insights into users' activities and needs in "real life". This paper describes an example user needs study at Nokia and concludes that ethnographic methods such as the Contextual Inquiry method can, despite numerous practical challenges, be successfully applied in the development of mobile communication devices. | |||