
Proceedings of OZCHI'07, the CHISIG Annual Conference on Human-Computer Interaction

Fullname: Proceedings of OZCHI'07, the CHISIG Annual Conference
Note: Entertaining User Interfaces
Editors: Mark Billinghurst
Location: Adelaide, Australia
Dates: 2007-Nov-28 to 2007-Nov-30
Publisher: CHISIG
Standard No: ISBN 1-59593-872-9, 978-1-59593-872-5
Papers: 54
Pages: 323
  1. Interaction design
  2. Interaction design tools
  3. Visualization
  4. Collaboration
  5. Industry applications
  6. Play
  7. Collaboration + emotion
  8. Gesture & gaze
  9. Ambience & 3D interfaces
  10. Potpourri
  11. Location & mobility
  12. Advanced interfaces

Interaction design

Seeing like an interface, pp. 1-8
  Paul Dourish
Mobile and ubiquitous computing systems are increasingly of interest to HCI researchers. Often, this has meant considering the ways in which we might migrate desktop applications and everyday usage scenarios to mobile devices and mobile contexts. However, we do not just experience technologies in situ -- we also experience everyday settings through the technologies we have at our disposal. Drawing on anthropological research, I outline an alternative way of thinking about the relationship between technology and "seeing" everyday life and everyday space.
Alleviating communication challenges in film scoring: an interaction design approach, pp. 9-16
  Julien Phalip; Matthew Morphett; Ernest Edmonds
Film scoring is a creative and collaborative activity that involves several practitioners, in particular music specialists (film composers) and non-specialists (filmmakers). These practitioners face recurrent communication challenges, primarily because they do not share the same musical language. In this paper we present the results of research into the communication process between filmmakers and composers, with particular focus on the challenges experienced by the two parties. We then propose and discuss an interaction design approach for progressing towards appropriate computer-based solutions.
Probing communities: study of a village photo display, pp. 17-24
  Nick Taylor; Keith Cheverst; Dan Fitton; Nicholas J. P. Race; Mark Rouncefield; Connor Graham
In this paper we describe a technology probe aiming to aid understanding of how digital displays can help support communities. Using a simple photo gallery application, deployed in a central social point in a small village and displaying user-generated photos and videos, we have been able to gain an understanding of this setting, field test our device and inspire new ideas directly from members of the community. We explore the process of deploying this display, the response from residents and how the display has taken a place within the community.
Designing and evaluating Buster: an indexical mobile travel planner for public transportation, pp. 25-28
  Jesper Kjeldskov; Eva Andersen; Lars Hedegaard
This paper elaborates on previous research into the design and use of mobile information systems for supporting the use of public transportation. Contributing to this domain of HCI research, we describe the design and evaluation of a mobile travel planner, Buster, for the public city bus system of a large regional city in Denmark. Carrying on from earlier research activities, we conducted contextual interviews, acted out future scenarios in situ, and used iterative paper prototyping to extend previous design ideas and further explore the principle of indexicality in interface design for context-aware mobile systems. We then implemented a functional prototype application and evaluated it in the field.

Interaction design tools

How probes work, pp. 29-37
  Connor Graham; Mark Rouncefield; Martin Gibbs; Frank Vetere; Keith Cheverst
'Cultural probes', since first being proposed and described by Bill Gaver and his colleagues, have been adapted and appropriated for a range of purposes within a variety of technology projects. In this paper we critically review different uses of Probes and discuss common aspects of different Probe variants. We also present and critique some of the debate around Probes by describing the detail of their use in two studies: the Digital Care Project (Lancaster University) and the Mediating Intimacy Project (University of Melbourne). We then reorient the discussion around Probes towards how they work: both as interpretative fodder for social scientists and as a resource for 'designers'. Finally, we discuss possible new directions for Probes and some of the challenges confronting them as an approach.
On the use of mobile tools in everyday life, pp. 39-47
  Jason Pascoe; Kirsten Thomson
This paper explores how mobile tools are used in everyday life and investigates the issues surrounding their usage, or indeed, their lack of usage. Personal computers support us in a wide range of our desk-bound activities, but there is still relatively little use of computer-based tools in other parts of our day-to-day lives. We hypothesise that this is because certain barriers-to-use exist that discourage their use in everyday environments, namely that the tools are not readily to hand at the time of need and/or that they distract the user's attention too much from the surrounding environment or main task at hand.
   We briefly present our concept of a Smartwatch -- a wrist-worn form of a general-purpose wearable computer -- that aims to overcome these barriers. However, we strongly believe that a prerequisite to the successful development of this, and other types of mobile devices, is a better understanding of the use, or disuse, of mobile tools in everyday life (including traditional and paper-based solutions). To gain this understanding we conducted a diary study in which a group of twelve volunteers recorded their usage of mobile tools over a period of 2 days. With the large volume of data that was collected we performed a qualitative analysis based on grounded theory techniques, resulting in a comprehensive and detailed picture of the use of mobile tools in everyday life. From this understanding we have drawn out nine key themes which we present in some detail in this paper, including: situational versus portability strategies, the mobility of information, new behaviours derived from mobile phone usage, the importance of creative expression, concern over privacy and security issues, and the demonstrated existence of the barriers-to-use.
SVSb: simple and visual storyboards: developing a visualisation method for depicting user scenarios, pp. 49-56
  Niina Kantola; Timo Jokela
Presenting scenarios, or user tasks, in the form of storyboards is a common technique in the field of HCI. A storyboarding approach, called SVSb ('simple and visual storyboards'), is proposed. The aim is to create storyboards that are simple and easy to construct and modify, but still communicative and descriptive enough. The storyboards include elements such as context, user, user goals, plans, evaluation, user actions and system events. The rationale of SVSb and experiences of using the approach in a case project are described.
Potential speech features for cognitive load measurement, pp. 57-60
  M. Asif Khawaja; Natalie Ruiz; Fang Chen
Intelligent user interfaces with an awareness of a user's experienced level of cognitive load have the potential to change the way output strategies are implemented and executed. However, current methods of measuring cognitive load are intrusive and unsuitable in real-time scenarios. Certain speech features have been shown to change under high levels of load. We present a dual-task speech based user study in which we explore three speech features: pause length, pause frequency and latency to response. These features are evaluated for their diagnostic capacity. Pause length and latency to response are shown to be useful indicators of high load versus low load speech.
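The abstract does not describe the feature-extraction procedure, so the sketch below only illustrates how pause length and pause frequency might be derived from a speech recording using a simple energy threshold; the frame size, silence threshold and minimum pause duration are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only: frame size, thresholds and the energy criterion
# are assumptions chosen for clarity, not the authors' method.
import numpy as np

def pause_features(signal, sr, frame_ms=20, silence_db=-35, min_pause_ms=200):
    """Estimate pause length and pause frequency from a mono speech signal."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)

    # Frame energy in dB relative to the loudest frame.
    rms = np.sqrt(np.mean(frames.astype(float) ** 2, axis=1)) + 1e-12
    db = 20 * np.log10(rms / rms.max())
    silent = db < silence_db

    # Group consecutive silent frames into pauses and keep the long ones.
    pauses, run = [], 0
    for s in silent:
        if s:
            run += 1
        elif run:
            pauses.append(run * frame_ms)
            run = 0
    if run:
        pauses.append(run * frame_ms)
    pauses = [p for p in pauses if p >= min_pause_ms]

    duration_s = n_frames * frame_ms / 1000
    return {
        "mean_pause_ms": float(np.mean(pauses)) if pauses else 0.0,
        "pauses_per_minute": 60 * len(pauses) / duration_s if duration_s else 0.0,
    }
```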

Visualization

Writing blocks: a visualization to support global revising, pp. 61-68
  Sheng Xu; Hirohito Shibata
This paper focuses on cognitive support for the revising activities of writing, especially global revising. Most previous writing support systems emphasize support for content-generating activities. As for revising, some of these systems provide automatic spell-check and grammar-check features. However, revising should not be confined to these local, superficial changes. In this paper, we aim to support revising activities by providing a new visualization and interaction that encourages writers to revise globally. We have built a system called Writing Blocks to confirm the feasibility of our approach. Our preliminary user study, conducted with three subjects, gave us a perspective on the effectiveness of our approach, especially in revising scenarios such as revising scripts for time-restricted presentations and revising the global usage of terms in long documents according to their contexts.
Transient visualizations, pp. 69-76
  Mikkel Rønne Jakobsen; Kasper Hornbæk
Information visualizations often make permanent changes to the user interface with the aim of supporting specific tasks. However, a permanent visualization cannot support the variety of tasks found in realistic work settings equally well. We explore interaction techniques that transiently visualize information near the user's focus of attention. Transient visualizations support specific contexts of use without permanently changing the user interface, and aim to integrate seamlessly with existing tools and to decrease distraction. Examples of transient visualizations for document search, map zoom-outs, fisheye views of source code, and thesaurus access are presented. We provide an initial validation of transient visualizations by comparing a transient overview for maps to a permanent visualization. Among 20 users of these visualizations, all but four preferred the transient visualization, although differences in time and error rates were not significant. Against this background, we discuss the potential of transient visualizations and future directions.
The influence of spatial ability on the use of web sitemaps, pp. 77-82
  C. J. Pilgrim
One challenge confronting web site designers is to provide effective navigational support. Supplemental navigation tools such as sitemaps are frequently included on web sites to support navigation. However, there is a lack of empirically based guidelines for designers of such tools. This paper reports an empirical investigation into the factors influencing the decision by users to select sitemaps or search tools. The study establishes a relationship between the user's level of spatial ability and their tendency to select Web sitemaps, providing a basis for the design of further investigations into the usability and design of such tools.
Mind-modulated music in the mind attention interface, pp. 83-86
  Ben Swift; James Sheridan; Yang Zhen; Henry J. Gardner
A recent study of electroencephalogram (EEG) activity associated with musical cognition has suggested a correlate for the amount of active musical processing taking place in the brains of musicians. Using a version of this measure, we have built a new brain computer interface which harnesses the "natural" brain activity of musicians to mold and modulate music as it is being composed and played. This computer music instrument is part of a system, the Mind Attention Interface, which provides an interface to a virtual reality theatre using measures of a participant's EEG, eye-gaze and head position. The theatre itself, and its spatialised sound system, closes a feedback loop through the mind of the participant.

Collaboration

Analysis of hand gestures in remote collaboration: some design recommendations, pp. 87-93
  Aiden Wickey; Leila Alem
This paper reports on a qualitative analysis of gestures performed during a remote collaboration in which two people work together with physical objects. CSCW researchers have established the importance of supporting gesture when sharing and interacting from a distance. Recent work reports on a corpus of gesture phrases [3] and a set of gesture functions and roles [2] observed in remote collaboration on physical tasks. While advances are being made in identifying these gestures, the design implications for remote gesture systems are still unclear. In this paper we describe a set of gesture phrases that we have observed, each composed of a number of individual gestures. We describe a specific gesture performed by the helper that is indicative of participants' natural, intuitive and preferred interaction practices. We also describe gestures performed by the helper that go beyond pointing or showing a movement or a shape, gestures that suggest the helper is acting as if manipulating the remote physical objects. We discuss the implications of supporting such elaborate gesture phrases and present a set of design recommendations for remote gesture systems.
Implementing three-party desktop videoconferencing, pp. 95-102
  Jian Sun; Holger Regenbrecht
We describe the implementation and test of a novel three-party desktop videoconferencing system. To allow for gaze and workspace awareness between the participating partners a special quasi-spatial arrangement of cameras and graphical user interface elements is chosen. We informally tested the system setup with a usability evaluation presented at the end of this paper.
   Our prototypical solution is a customizable off-the-shelf, affordable way of supporting mutual awareness in three-party videoconferencing.
Annotating with light for remote guidance, pp. 103-110
  Doug Palmer; Matt Adcock; Jocelyn Smith; Matthew Hutchins; Chris Gunn; Duncan Stevenson; Ken Taylor
This paper presents a system that will support a remote guidance collaboration, in which a local expert guides a remotely located assistant to perform physical, three-dimensional tasks. The system supports this remote guidance by allowing the expert to annotate, point at and draw upon objects in the remote location using a pen and tablet-based interface to control a laser projection device. The specific design criteria for this system are drawn from a tele-health scenario involving remote medical examination of patients and the paper presents the software architecture and implementation details of the associated hardware. In particular, the algorithm for aligning the representation of the laser projection over the video display of the remote scene is described. Early evaluations by medical specialists are presented, the usability of the system in laboratory experiments is discussed and ideas for future developments are outlined.
A descriptive screenshot analysis in a mixed presence setting, pp. 111-114
  Cara Stitzlein; Anja Wessels
In mixed presence settings (MPS), where individuals and co-located groups interact through computer-mediated channels, particular technical challenges (video transmission of a large co-located group) and perceptual challenges (the ability to perceive video of remote people) require attention. This paper presents the results of a descriptive study within industry based on in vivo screen snapshots. The results illustrate how MPS are complicated by the size of co-located groups. We present considerations for using video in these settings, informing engineers, evaluators and users of such potentially complex videoconferencing spaces.

Industry applications

Stories with emotions and conflicts drive development of better interactions in industrial software projects, pp. 115-121
  Georg Strom
An earlier study shows that stories with dialogue, emotions and conflicts -- similar to fiction writing -- give a better understanding of user needs and the situations in which an interface is used when compared to conventional scenarios. This paper describes how stories with emotions and conflicts were accepted as inputs to the definition of requirements in two industrial software projects, and how managers regarded stories as more credible than concise reports. The paper describes how it is possible to use stories with emotions and conflicts in industrial software projects, characteristics of the most useful stories, and how stories can be used to facilitate a dialogue between users and developers.
Fun and usable: augmented reality instructions in a hospital setting, pp. 123-130
  Susanna Nilsson; Björn Johansson
The differences between Augmented Reality (AR) systems and computer-display-based systems create a need for an approach to the design and development of AR systems that differs from "traditional" HCI. This paper presents theoretical and empirical work which takes a holistic approach rather than following traditional human-computer interaction guidelines. The paper includes a usability study in which AR was used to give instructions to professional users in a hospital. The theoretical stance of Cognitive Systems Engineering is suggested as a possible approach to the design of AR systems. The analysis shows that users in the context of medical care are positive towards AR systems as a technology and as a tool for instructions. This indicates that AR technology may become an accepted part of everyday work in such a context.
An assessment of the impact of ELEXSA visualizations on operator situation awareness, pp. 131-137
  Keith Mason; Catherine Howard; Jeff Sturm; Craig Keogh
Decision making in the tactical military domain is a time-critical, high-stress, high-stakes activity. This paper presents the results of a human-in-the-loop experiment conducted to assess the impact of novel visualizations of radar detection ranges and safe areas on operator situation awareness of the tactical environment. The experiment involved comparing the performance of two groups of eight operators attempting to achieve the same tactical goals within the same scenario using two different, but functionally equivalent, situation awareness environments; one environment incorporated the novel visualizations outlined in this paper and one did not. The experiment showed that the novel visualizations: (1) increased operator survivability; (2) shortened mission duration; and (3) reduced the time the operator spent vulnerable to radar detection compared to the standard environment. Analysis of operator perceptions elicited via post-simulation interviews showed that operators using the visualizations experienced lower workload and stress levels and had more accurate perceptions of their vulnerability to radar detection than those operators using the standard environment.
Decisional style and eParticipation, pp. 139-141
  J. G. Phillips; M. Jory; N. Mogford
Decisional style may predict eParticipation tendencies. The vigilance, procrastination, buck-passing, and hypervigilance of 77 undergraduates were measured using the Melbourne Decision Making Scale and related to their use of WebCT. Decisional style predicted grades, participation in discussion groups and course evaluations.

Play

Evaluating a distributed physical leisure game for three players, pp. 143-150
  Florian 'Floyd' Mueller; Martin R. Gibbs
Physical leisure activities such as table tennis provide healthy exercise and can offer a means to connect with others socially; however, players have to be in the same physical location to play. We have developed a networked table tennis-like game that is played with a real paddle and ball, augmented with a large-scale videoconference. Unlike existing commercial console games that encourage physical activity, our system supports social interaction through an audio and video communication channel, offers a familiar gaming interface comparable to a traditional leisure game, provides non-virtual force feedback and can be enjoyed by players in three geographically separate locations simultaneously. We are presenting results from an empirical evaluation of "Table Tennis for Three" with 41 participants. The players reported that they had fun, used the game to build social rapport and experienced a sense of playing "together". Some participants did not enjoy the game, and we present informed opinions to explain their reactions. With our work, we provide other HCI researchers with a further example of an evaluation of a novel type of experience that lies in the realms of physical activity, fun and social interactions. We hope we can inspire designers to consider our results in their future game designs by looking at the characteristics of traditional physical leisure games to promote similar benefits such as exercise, enjoyment and bringing people together to socialize.
Virtual box: supporting mediated family intimacy through virtual and physical play, pp. 151-159
  Hilary Davis; Mikael B. Skov; Malthe Stougaard; Frank Vetere
Mediated intimacy is the phenomenon whereby humans use technologies to express, share, or communicate intimate feelings with each other. Typically, technologies supporting mediated intimacy have different characteristics from technologies designed to solve specific work-oriented tasks. This paper reports on the design, implementation and initial evaluation of Virtual Box. Virtual Box attempts to create a physical and engaging context in order to support reciprocal interactions with expressive content. An implemented version of Virtual Box is evaluated in a location-aware environment to assess the design ideas with respect to mediated family intimacy.
Social media, participatory design and cultural engagement, pp. 161-166
  Jerry Watkins
This paper reports on the application of Participatory Design methodology to an experiment in social media production. Staff at the Australian Museum are developing new content genres, creative tools and techniques in order to produce original cultural multimedia based on -- or inspired by -- the Museum's extensive collections. The ultimate aim of the project is for the Museum to act as a social media hub for external communities of interest to co-create their own narrative-based interpretations of the Museum's content, leading to an individualized cultural experience for physical and online visitors alike. A participatory content creation method has been developed for this project, which features iterative design cycles marked by social prototyping, evaluation and strategic formulation. These cycles are repeated until desired performance is achieved.
Brute force as input for networked gaming, pp. 167-170
  Florian 'Floyd' Mueller; Stefan Agamanolis; Frank Vetere; Martin Gibbs
Bodily activities such as sports have many physical and mental health benefits. The associated physical interactions are often exertion-based and involve brute force and intense physical actions. Computer interfaces, on the other hand, have so far focused mainly on interactions that use limited force and have often ignored the intensely forceful interactions encountered in everyday life, in particular in contact sports. We present our initial investigations of the concept of "Brute Force" interfaces in HCI and describe work in progress on a prototype that aims to facilitate brute force interactions. We hope our work can aid designers who want to leverage the physical and mental health benefits of the physically intense behaviors that people exhibit in their lives.

Collaboration + emotion

Understanding awareness in mixed presence collaboration, pp. 171-174
  Gregor McEwan; Markus Rittenbruch; Tim Mansfield
Mixed presence collaboration combines distributed and collocated collaboration -- there are multiple distributed sites, each with a collocated group. While collocated collaboration and purely distributed collaboration are each the subject of rich bodies of research, the combination is less well explored. In this paper we present our initial concepts of awareness support in mixed presence collaboration, framed as a first-version model of awareness. The literature we have used to inform the model is drawn from collocated and distributed collaboration research, as well as from the small body of work addressing mixed presence collaboration directly. We discuss this relevant literature, use it to explain our model, and illustrate the application of the model through a scenario.
Exploratory study on concurrent interaction in co-located collaboration, pp. 175-178
  Christian Müller-Tomfelde; Claudia Schremmer; Anja Wessels
We present an exploratory lab study that provides observations and measures about the usage of interaction devices in co-located cooperative work at a tabletop display. We designed our experiment to provide a context for the collaboration that shares as many characteristics of real life as possible. Twenty-two participants were instructed to perform a shared-goal task. They worked in co-located pairs on solving three sets of two jigsaw puzzles concurrently, and were allowed to use any combination of direct and indirect input devices, i.e., touch and mouse, to achieve the goal. Additionally, a hidden task was imposed on the participants in the second and third puzzle tasks: they had to discover that pieces were mixed up between the two displayed puzzles. The role of the hidden task was to trigger spontaneous transitions from individual to collaborative work. Our observations focused on the participants' selection and usage of input devices during task execution. Among other findings, our study revealed that participants stuck to their preferred input device even when they became more engaged in coordination and communication with their partner. Our findings are based on log data, questionnaire data and video recordings.
Exploring interface with representation of gesture for remote collaboration, pp. 179-182
  Jane Li; Anja Wessels; Leila Alem; Cara Stitzlein
This paper reports on a laboratory study of gesture representation interfaces for remote collaboration on physical tasks. Measuring task performance and users' perception of the interaction, the experiment assessed two gesture representations (hands vs. cursor pointer) in the context of a video-mediated interface that included a view of the remote partner. We did not find any significant difference between the hands condition and the pointer condition in task performance. However, our results showed that participants reported an overall preference for the pointer functionality over the hands. At the same time, participants perceived a significantly higher quality of interaction in the hands condition than in the pointer condition. Additionally, the majority of participants valued being able to see each other's faces during the collaboration. We conclude with a discussion of the importance of accounting for users' perception of the interaction, in addition to traditional task performance measures, when evaluating gesture representation interfaces, and of considering both factors when recommending the most suitable gesture representation design for collaboration on physical tasks.
Investigating emotional interaction with a robotic dog, pp. 183-186
  Christian Martyn Jones; Andrew Deeming
The next generation of consumer-level entertainment robots should offer more natural, engaging interaction. This paper reports on the development and evaluation of a consumer-level robotic dog with acoustic emotion recognition capabilities. The dog can recognise the emotional state of its owner from affective cues in the owner's speech and respond with appropriate actions. The evaluation study shows that users perceive the new robotic dog to be emotionally intelligent and report that this makes the dog appear more 'alive'.
An emotionally intelligent user interface: modelling emotion for user engagement, pp. 187-190
  Matthew Willis
This paper presents a model for simulating emotion and personality in interactive systems. It argues that by introducing simulated emotional responses and state dynamics, future systems will provide a more life-like, engaging and interactive experience for their users, and a more effective and efficient user interaction. Further, by simulating Emotional Intelligence in a system, a developer may be able to provide a more tailored user experience and gain more control over the outcome of a user's interaction with the system. Emotion theory and expression are explored, and a model based upon emotional states is presented. The implementation of this model as an intelligent back-end process that uses dynamic video-stream analysis to feed an interactive display is then described. The proposed system and its hardware implementation are presented, followed by a discussion of future areas of research.
Biometric valence and arousal recognition, pp. 191-194
  Christian Martyn Jones; Tommy Troen
A real-time, user-independent emotion detection system using physiological signals has been developed. The system classifies affective states along two dimensions, valence and arousal. Each dimension ranges from 1 to 5, giving a total of 25 possible affective regions. Physiological signals were measured using three biometric sensors for Blood Volume Pulse (BVP), Skin Conductance (SC) and Respiration (RESP). Two emotion-inducing experiments were conducted to acquire physiological data from 13 subjects. The data from 10 of these subjects were used to train the system, while the remaining 3 datasets were used to test its performance. Recognition rates of 62% for valence and 67% for arousal were achieved within +/-1 unit of the valence and arousal ratings.
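For readers unfamiliar with this kind of pipeline, a rough sketch of window-level feature extraction and classifier training is given below. The chosen statistics, the SVM classifier and the use of scikit-learn are assumptions made for illustration; the abstract does not state the authors' actual features or model.

```python
# Rough sketch of a valence/arousal classification pipeline of the general
# kind the abstract describes. Features, classifier and library choices are
# assumptions, not the authors' reported method.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(bvp, sc, resp):
    """Simple per-window statistics over the three physiological channels."""
    feats = []
    for ch in (bvp, sc, resp):
        ch = np.asarray(ch, dtype=float)
        feats += [ch.mean(), ch.std(), ch.min(), ch.max(), np.diff(ch).mean()]
    return feats

def train_valence_model(train_windows, train_ratings):
    """train_windows: (bvp, sc, resp) tuples; train_ratings: valence 1..5.

    Training on 10 subjects and testing on 3 held-out subjects would mirror
    the split reported in the abstract (arousal is handled analogously)."""
    X = np.array([window_features(*w) for w in train_windows])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X, train_ratings)
    return model
```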

Gesture & gaze

Intelligent mind-mapping, pp. 195-198
  Vincent Chik; Beryl Plimmer; John Hosking
Current computer based mind-mapping tools are much slower to use than pen and paper because users are distracted by tool operations such as finding and arranging widgets. The shift in focus from brainstorming to tool management interrupts the rapid brainstorming process that mind maps are intended to support. Our pen based mind-mapping software that includes intelligent ink recognition, editing and export alleviates these intrusions as the user only has to worry about writing on the canvas, yet usual digital document support is provided. The digital ink recognition and manipulation techniques described here will be of interest to others working with informal documents.
Evaluation of eye-gaze interaction methods for security enhanced PIN-entry, pp. 199-202
  Alexander De Luca; Roman Weiss; Heiko Drewes
Personal identification numbers (PINs) are one of the most common means of electronic authentication today and are used in a wide variety of applications, especially in ATMs (cash machines). Criminals use a considerable number of tricks to spy on these numbers and gain access to the owners' valuables. Simply looking over a victim's shoulder to obtain their PIN is a common one; this effortless but effective trick is known as shoulder surfing. Thus, a less observable PIN entry method is desirable. In this work, we evaluate three different eye-gaze interaction methods for PIN entry, all resistant to these common attacks and thus providing enhanced security. Besides classical eye-input methods, we also investigate a new approach based on gaze gestures and compare it to the well-known classical gaze interactions. The evaluation considers both security and usability aspects. Finally, we discuss possible enhancements for gaze gestures towards pattern-based identification instead of number sequences.
Applying layout algorithms to hand-drawn graphs, pp. 203-206
  Peter Reid; Fred Hallett-Hook; Beryl Plimmer; Helen Purchase
Hand-drawing a node-and-edge graph is a simple visual problem-solving technique; however, as the graph is built it can easily become untidy and confusing, and a confusing graph is more difficult to understand and interpret. By applying edge-morphing techniques and a force-directed algorithm, the hand-drawn graph can retain its informal appearance while its layout is improved. Graphs will be more readily understood, making the problem-solving process easier.
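For context, a minimal force-directed (spring-embedder) iteration of the general kind mentioned in the abstract might look like the sketch below; the node/edge representation and all constants are illustrative assumptions rather than the authors' algorithm.

```python
# Minimal spring-embedder step: repulsion between all node pairs, attraction
# along edges. Representation and constants are illustrative assumptions.
import math

def layout_step(pos, edges, k=80.0, step=0.1):
    """One iteration of a basic force-directed layout.

    pos   -- dict node -> [x, y]
    edges -- list of (node_a, node_b) pairs
    k     -- ideal edge length in pixels
    """
    disp = {n: [0.0, 0.0] for n in pos}
    nodes = list(pos)

    # Repulsive force between every pair of nodes.
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 0.01
            f = k * k / d
            disp[a][0] += f * dx / d; disp[a][1] += f * dy / d
            disp[b][0] -= f * dx / d; disp[b][1] -= f * dy / d

    # Attractive force along each edge.
    for a, b in edges:
        dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
        d = math.hypot(dx, dy) or 0.01
        f = d * d / k
        disp[a][0] -= f * dx / d; disp[a][1] -= f * dy / d
        disp[b][0] += f * dx / d; disp[b][1] += f * dy / d

    # Damped position update; repeated calls gradually tidy the layout.
    for n in pos:
        pos[n][0] += step * disp[n][0]
        pos[n][1] += step * disp[n][1]
    return pos
```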
Mentoring collaborative user centred design, pp. 207-210
  Patrick Kennedy
User-centred design is a key part of best practice website design, but becomes increasingly difficult when undertaken by an inexperienced, multidisciplinary team gathered from various parts of an organisation. Mentoring is one approach that can help alleviate the pressure and assist such a team in delivering solid results.
   This paper describes the experiences of a team and its mentor during the ground-up redesign of a website for an Australian government agency, referred to by the pseudonym 'ESA' in this paper.
CodeAnnotator: digital ink annotation within Eclipse, pp. 211-214
  Xiaofan Chen; Beryl Plimmer
Programming environments do not support ink annotation. Yet, annotation is the most effective way to actively read and review a document. This paper describes a tool, CodeAnnotator, which integrates annotation support inside an Integrated Development Environment (IDE). This tool is designed and developed to support direct annotation of program code with digital ink in the IDE. Programmers will benefit from a more intuitive interaction space to record notes and comments just as they would on paper documents.
An efficient unification-based multimodal language processor in multimodal input fusion, pp. 215-218
  Yong Sun; Yu Shi; Fang Chen; Vera Chung
A Multimodal User Interface (MMUI) allows a user to interact with a computer in a way similar to human-to-human communication, for example through speech and gesture. As an essential component of MMUIs, Multimodal Input Fusion should be able to find the semantic interpretation of a user's intention from recognized multimodal symbols which are semantically complementary. We enhanced our efficient unification-based multimodal parsing processor, which has the potential to achieve low polynomial computational complexity while parsing versatile multimodal inputs within a speech- and gesture-based MMUI, so that it can handle multimodal inputs from more than two modalities. Its ability to disambiguate speech recognition results using gesture recognition results was verified in an experiment. The analysis of the experimental results demonstrates a significant improvement after applying this technique.
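As background, unification-based fusion merges partial semantic structures from different modalities when they do not conflict. The toy sketch below illustrates the idea with an invented feature-structure format; it is not the authors' grammar or parser.

```python
# Toy illustration of unification-based fusion: merge a speech hypothesis
# with a gesture hypothesis when their feature structures do not conflict.
# The feature-structure format is invented for this sketch.
def unify(a, b):
    """Return the unified feature structure, or None if the two conflict."""
    result = dict(a)
    for key, value in b.items():
        if key not in result or result[key] is None:
            result[key] = value          # None marks an unspecified slot
        elif value is None:
            continue
        elif isinstance(result[key], dict) and isinstance(value, dict):
            merged = unify(result[key], value)
            if merged is None:
                return None
            result[key] = merged
        elif result[key] != value:
            return None                  # conflicting atomic values
    return result

speech = {"act": "move", "object": {"type": "chair"}, "target": None}
gesture = {"object": {"type": "chair", "id": 7}, "target": {"x": 120, "y": 45}}

# The gesture fills the speech hypothesis's unspecified target, and the
# shared object descriptions unify into one structure.
print(unify(speech, gesture))
```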

Ambience & 3D interfaces

Virtual fish: visual evidence of connectivity in a master-planned urban community, pp. 219-222
  Greg T. Young; Marcus Foth; Natascha Y. Matthes
The rapid densification of urban areas around the world offers exciting opportunities for new place-based artworks and locative media that aim to engage, inform and entertain members of local communities. In this paper, we introduce a design competition for interaction design concepts that display visual evidence of connectivity in a master-planned community. This competition is based in and focuses on one of Brisbane's newly built inner urban renewal developments. Furthermore, we introduce the conceptual interaction design of one of the competition's winning entries, as well as its potential and challenges in engaging local residents in participation and exploration of place-based information and community media.
Picture navigation using an ambient display and implicit interactions, pp. 223-226
  Han-Sol Ryu; Yeo-Jin Yoon; Myeong-Eun Lim; Chan-Yong Park; Soo-Jun Park; Soo-Mi Choi
There is increasing demand for ubiquitous displays that react to a user's actions. We propose a method of navigating pictures on an ambient display using implicit interactions. The ambient display can identify the user and measure how far away they are using an RFID reader and ultrasonic sensors. When the user is a long way from the display, it acts as a digital picture frame and does not attract attention. When the user comes within an appropriate range for interaction, the display shows pictures that are related to the user and provides quasi-3D navigation using the TIP (tour into the picture) method. In addition, menus can be manipulated directly on a touch-screen or remotely using an air mouse. In an emergency, LEDs around the display flash to alert the user.
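A minimal sketch of the distance-driven mode switching described above is given below; the threshold value and the sensor and display objects are hypothetical stand-ins, not the authors' implementation.

```python
# Sketch of distance-driven mode switching for an ambient picture display.
# The interaction range and the `sensors`/`display` interfaces are assumed,
# hypothetical stand-ins for whatever hardware abstraction is used.
AMBIENT, PERSONAL = "ambient", "personal"

def choose_mode(distance_cm, user_id, interaction_range_cm=150):
    """Pick a display mode from the ultrasonic distance and RFID identity."""
    if user_id is None or distance_cm > interaction_range_cm:
        return AMBIENT, None      # unobtrusive picture-frame behaviour
    return PERSONAL, user_id      # pictures related to the identified user

def update_display(display, sensors):
    mode, user = choose_mode(sensors.distance_cm(), sensors.read_rfid())
    if sensors.emergency():
        display.flash_leds()                  # alert takes priority
    elif mode == AMBIENT:
        display.show_slideshow()
    else:
        display.show_tip_navigation(user)     # quasi-3D tour-into-the-picture
```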
3D input for 3D worlds, pp. 227-230
  Sreeram Sreedharan; Edmund S. Zurita; Beryl Plimmer
Virtual Worlds present a 3D space to the user. However, input devices are typically 2D. This unnatural mapping reduces the engagement of the experience. We are exploring the use of Wii controllers to provide 3D gesture-based input to the 3D virtual world Second Life. In evaluating its usability, we found that gesture-based interfaces are appealing and natural for hand gestures such as waving, but difficult to map to facial expressions.
A Wii remote, a game engine, five sensor bars and a virtual reality theatre, pp. 231-234
  Torben Schou; Henry J. Gardner
The Nintendo Wii Remote is having a huge impact on the computer games industry. This paper describes a project which is integrating this controller into a game environment in a multi-wall virtual reality theatre. Aspects considered include interaction taxonomies of the Wii controller, the extension of driver software to have the Wii controller deal with multiple Sensor Bars at once, and the porting of the game engine into the virtual reality theatre.
Surface manipulation using a paper sculpture metaphor, pp. 235-238
  Glenn McCord; Beryl Plimmer; Burkhard Wuensche
The creation of 3D computer models is essential for many applications in science, engineering and arts and is frequently performed by untrained users. However, creating an intuitive mapping between 2D input and 3D models is a non-trivial task and is reflected in the difficulty novices have in using current 3D modelling software. Using metaphors of paper sculpture and pen sketching, our gesture based modelling tool simplifies this interaction mapping. More intuitive object manipulation means that an otherwise complex model can be rapidly created by an inexperienced, non-artistic user. To demonstrate this, we have chosen to model orchid flowers as they offer considerable challenges to the artist due to their complexity of shape and detail, especially the petal surfaces which vary a great deal in curvature.
PassShape: stroke based shape passwords, pp. 239-240
  Alexander De Luca; Roman Weiss; Heinrich Hussmann
Authentication today mostly means using passwords or personal identification numbers (PINs). The average user has to remember an increasing number of PINs and passwords. Unfortunately, humans have limited capabilities for remembering abstract alphanumeric sequences, so many people either forget them or use very simple ones that carry several security risks. In our previous work on PIN entry at ATMs (cash machines), we found that many people support their memory when recalling PINs by using an imaginary shape overlaid on the number pad. In this paper, we introduce PassShape, a shape-based authentication mechanism. We argue that using shapes will allow more complex and more secure authentication with a lower cognitive load: it enables people to use easy-to-remember but complex authentication patterns.

Potpourri

Working the contract, pp. 241-248
  Dave Martin; Rob Procter; John Mariani; Mark Rouncefield
This paper presents data and analysis from a long term ethnographic study of the design and development of an electronic patient records system in a UK hospital Trust. The project is a public private partnership (PPP) between the Trust and a US based software house (OurComp) contracted to supply, configure and support their customizable-off-the-shelf (COTS) healthcare information system in cooperation with an in-hospital project team. Given this contractual relationship for system delivery and support (increasingly common, and 'standard' in UK healthcare) we focus on the ways in which issues to do with the 'contract' enter into and impinge on everyday design and deployment work as part of the process of delivering dependable systems.
Automatic cognitive load detection from speech features, pp. 249-255
  Bo Yin; Natalie Ruiz; Fang Chen; M. Asif Khawaja
Cognitive load variations have been found to impact multimodal behaviour, in particular features of spoken input. In this paper, we present the design and implementation of a user study aimed at soliciting natural speech at three different levels of cognitive load. Some of the speech data produced was then used to train a number of models to automatically detect cognitive load. We describe a classification approach in which cognitive load levels are detected and output as discrete level ranges. The final system achieved 71.1% accuracy for three-level classification in a speaker-independent setting. The ability to detect and manage a user's cognitive load can help us adapt intelligent interfaces to ensure optimal user performance.
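As an illustration of what a speaker-independent setting implies for evaluation, the sketch below estimates accuracy with leave-one-speaker-out cross-validation; the classifier and the use of scikit-learn are assumptions, not the authors' model.

```python
# Sketch of a speaker-independent evaluation (leave-one-speaker-out), the
# kind of setting the abstract reports. Classifier and library choices are
# assumptions; the authors' actual model is not described in the abstract.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def speaker_independent_accuracy(X, y, speaker_ids):
    """X: speech-feature vectors, y: load level (1..3), speaker_ids: one per row.

    Each fold trains on all speakers but one and tests on the held-out
    speaker, so the reported accuracy reflects unseen-speaker performance."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(model, X, y,
                             groups=speaker_ids, cv=LeaveOneGroupOut())
    return float(np.mean(scores))
```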
Looking for expertise in physical interactions, pp. 257-260
  Ben Kraal; Vesna Popovic
In this paper, we describe the methods we have used to investigate expertise in interaction with physical interfaces. This paper covers the background of the interfaces (compression bandages), describes the methods used and presents findings on the use of tacit and explicit knowledge during interaction. Due to the increase in interest in interfaces that cross between the physical and digital, this method may be of interest to researchers who are involved in similar projects.
Making usability work in industry: an Australian practitioner perspective, pp. 261-264
  Vince Bruno; Martin Dick
The gap in usability knowledge between research and industry practice is an important one to bridge. This paper presents the findings of 12 interviews with usability practitioners. The interviews focus on eliciting stories about successful and unsuccessful usability outcomes. The analysis shows that an iterative usability process, ensuring stakeholder involvement, articulating usability goals and requirements and avoiding technological constraints are critical issues to achieving a successful usability outcome in a project.

Location & mobility

Privacy and community connectedness: designing intelligent environments for our cities, pp. 265-272
  Craig Chatfield; René Hexel
This paper investigates the casual interactions that support and nourish a community and seeks to provide a solution to the increasing detachment of modern society as community spaces become less and less engaging. We suggest the use of a ubiquitous computing (ubicomp) infrastructure to promote and support community connectedness via the hosting of virtual community environments and by providing local information and interaction possibilities. This infrastructure addresses our need as a society to communicate more effectively and to create loose bonds with familiar strangers within our community. We explore this idea with a use scenario and a user study of people interacting with the services in the intelligent environment we developed.
GeoHealth: a location-based service for nomadic home healthcare workers, pp. 273-281
  Claus M. Christensen; Jesper Kjeldskov; Klaus K. Rasmussen
In this paper, we describe GeoHealth -- a geographical information system prototype for home healthcare workers who during a normal workday have to attend clients and patients that are physically distributed over a large geographical area. Informed by field studies of work activities and interviews with the healthcare workers, we have designed an interactive location-based service for supporting distributed mobile collaboration. The prototype explores a representational approach to context-awareness and represents live contextual information about clients/patients, co-workers, current and scheduled work activities, and alarms adapted to the users' location. The prototype application is web-based and uses Google Maps, GPS positioning, and Web 2.0 technology to provide a lightweight dynamic and interactive representation of the work domain supporting distributed collaboration, communication, and peripheral awareness among nomadic workers.
A Gestalt theoretic perspective on the user experience of location-based services, pp. 283-290
  Jeni Paay; Jesper Kjeldskov
Location-based services provide mobile users with information and functionality tailored to their geographical location. In recent years these kinds of context-aware mobile systems have received increasing attention from the software industry as well as from researchers across a wide range of computing disciplines. However, while a lot of research has gone into sensing, adapting to and philosophizing over the complex concept of "context", little theoretically grounded knowledge exists about why, from a user experience perspective, some context-aware system designs work well and others do not. Contributing to this discussion, this paper suggests "Gestalt theory" as a potential theoretical framework for understanding the use of this class of computer systems, and argues that describing the user experience of location-based services through Gestalt theory's principles of proximity, closure, symmetry, continuity, and similarity can help explain how people make sense of small and fragmented pieces of information on mobile devices in context.
Designing usable interface for navigating mobile chat messages, pp. 291-294
  Daniel Kuen Seong Su; Victoria Siew Yen Yee
Little design consideration has been given to easing navigation through a long chat archive on a limited screen display. By incorporating graphical and user-centered design, messages can be presented in logical groupings for ease of navigation and efficient tracking of specific messages in a long chat archive. This paper explores usable interface design for mobile group chat systems, using navigation and visualisation to track messages with minimal key presses and fast message retrieval. Additionally, we incorporate avatars and emoticons for user identification and human embodiment to facilitate understanding of the messages' contents.

Advanced interfaces

Hands-free mouse-pointer manipulation using motion-tracking and speech recognition, pp. 295-302
  Frank Loewenich; Frederic Maire
Technology is advancing at a rapid pace, automating many everyday chores in the process. Information technology (IT) is changing the way we perform work and providing society with a multitude of entertainment options. Unfortunately, in the past designers of many software systems have not considered the disabled as active users of technology, and thus this significant part of the world population has often been neglected. A change in this mindset has been emerging in recent years, however, as private-sector organizations and governments have started to realize that including this user group is not only profitable, but also beneficial to society as a whole. This paper introduces an alternative method to the traditional mouse input device, using a modified Lucas-Kanade optical-flow algorithm for tracking head movements, and speech recognition to activate mouse buttons.
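A minimal stand-in for the head-tracking part is sketched below, using OpenCV's stock pyramidal Lucas-Kanade tracker rather than the authors' modified algorithm; the feature-selection parameters and the simple averaging of feature displacements are assumptions, and the speech-recognition side is only indicated in comments.

```python
# Sketch only: head motion -> pointer displacement via OpenCV's standard
# pyramidal Lucas-Kanade tracker. The paper's modified algorithm and its
# speech-recognition integration are not reproduced here.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Track corner features; a real system would restrict these to a detected
# face region rather than the whole frame.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                 qualityLevel=0.3, minDistance=7)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if points is None or len(points) < 10:
        # Too few features left: re-detect and continue with the next frame.
        points = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                         qualityLevel=0.3, minDistance=7)
        prev_gray = gray
        continue
    new_points, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                        points, None)
    good_new = new_points[status.flatten() == 1]
    good_old = points[status.flatten() == 1]
    if len(good_new):
        # Average feature displacement stands in for head motion; a full
        # system would move the cursor by (dx, dy) and use speech commands
        # such as "click" or "double click" to trigger button events.
        dx, dy = (good_new - good_old).reshape(-1, 2).mean(axis=0)
        print(f"pointer delta: ({dx:+.1f}, {dy:+.1f})")
    prev_gray = gray
    points = good_new.reshape(-1, 1, 2)

cap.release()
```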
Using alternative views for layout, comparison and context switching tasks in wall displays, pp. 303-310
  Anastasia Bezerianos
In this paper we first present a set of tasks that are relevant to wall display interaction. Among these, layout management, context switching and comparison tasks could benefit from the use of interactive shortcut views of remote areas of a wall display, presented close to the user. Such a shortcut view technique, the ScaleView portals, is evaluated against using a simple magnification lens and walking when performing these tasks. We observed that for a layout and comparison task with frequent context switching, users preferred ScaleView portals. But for simpler tasks, such as searching, regular magnification lenses and walking were preferred. General observations on how the display was used as a peripheral reference by different participants highlighted one of the benefits of using wall sized displays: users may visually refer to the large, spread out content on the wall display, even if they prefer to interact with it close to their location.
Levels of formality in diagram presentation, pp. 311-317
  Louise Yeung; Beryl Plimmer; Brenda Lobb; Douglas Elliffe
The incremental beautification of hand-drawn diagrams is a process that is poorly understood. Thus implementation of beautification techniques in computer-based sketch tools is ad hoc, with most only supporting the ends of the spectrum: hand-drawn and fully formalized. Hand-drawn diagrams are more effective for early design and review but users are more satisfied with formal designs. This suggests that there may be applications for intermediate levels of formality. By understanding the attributes of visual formality it is possible to beautify a diagram progressively, thereby achieving visually consistent intermediate levels of formality. Here we present a taxonomy of the attributes of visual formality and the implementation of this taxonomy into a sketch tool.
Zebra striping: does it really help?, pp. 319-322
  Jessica Enders
'Zebra striping' -- also known as half shadow -- is the application of a faint shadow to alternate lines or rows in data tables or forms to aid readability. Zebra striping has been in use on paper and in electronic media for almost half a century; however, there is practically no evidence that it actually assists users.
   We conducted an online experiment to measure the impact of zebra striping on accuracy and speed when answering a series of questions using a table of data. Surprisingly, zebra striping did not consistently deliver gains in either measure.