
Proceedings of AUIC'06, Australasian User Interface Conference

Fullname: Proceedings of the 7th Australasian User Interface Conference -- Volume 50
Editors: Wayne Piekarski
Location: Hobart, Tasmania, Australia
Dates: 2006-Jan-16 to 2006-Jan-19
Publisher: ACS
Standard No: ISBN: 1-920682-32-5; ACM DL: Table of Contents; hcibib: AUIC06
Papers: 26
Pages: 180
Links: Online Proceedings
Evaluation of user satisfaction and learnability for outdoor augmented reality gaming (pp. 17-24)
  Benjamin Avery; Wayne Piekarski; James Warren; Bruce H. Thomas
We have developed an outdoor augmented reality game, Sky Invaders 3D, designed to be played by the game-playing public. We conducted a user study to measure how much users enjoyed playing an outdoor AR game and how intuitive it was. We compared 44 participants, each playing one of two games: the outdoor AR game or a desktop PC equivalent of the same game. Participants rated the AR game as significantly more enjoyable and more intuitive to use.
A markerless registration method for augmented reality based on affine properties (pp. 25-32)
  Y. Pang; M. L. Yuan; A. Y. C. Nee; S. K. Ong; Kamal Youcef-Toumi
This paper presents a markerless registration approach for Augmented Reality (AR) systems based on the Kanade-Lucas-Tomasi (KLT) natural feature tracker and affine reconstruction and reprojection techniques. First, the KLT tracker is used to track corresponding feature points in two control images. Next, four planar points are specified in each control image to set up the world coordinate system. Affine reconstruction and reprojection are used to estimate the image projections of these four specified planar points in the live video sequence, and these image projections are used to estimate the camera pose in real time. A primitive approach illustrating the basic idea is discussed first, and a robust method is then given to improve resistance to noise. Pre-defined markers are not required, and virtual models can still be registered in the proper positions even if the specified region is occluded during registration.
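As an editorial illustration of the affine point-transfer idea this approach relies on (a generic sketch, not the authors' implementation; the function names and the use of NumPy are assumptions), a point can be expressed in affine coordinates relative to four tracked basis points seen in the two control images, and those coordinates then reproject it into any later frame in which the basis is still tracked:

```python
import numpy as np

def affine_coords(basis_uv, point_uv):
    """Recover affine coordinates (a, b, c) of a point with respect to a
    four-point affine basis, from their projections in the control images.

    basis_uv: (n_views, 4, 2) image positions of the four basis points
    point_uv: (n_views, 2)    image positions of the point to transfer
    Under an affine camera, p = p0 + a*(p1-p0) + b*(p2-p0) + c*(p3-p0)
    holds in every view, so stacking the views gives a least-squares system.
    """
    A_rows, b_rows = [], []
    for uv, p in zip(np.asarray(basis_uv, float), np.asarray(point_uv, float)):
        p0, p1, p2, p3 = uv
        A_rows.append(np.column_stack([p1 - p0, p2 - p0, p3 - p0]))  # 2 x 3
        b_rows.append(p - p0)                                        # 2
    coeffs, *_ = np.linalg.lstsq(np.vstack(A_rows),
                                 np.concatenate(b_rows), rcond=None)
    return coeffs  # (a, b, c), view-independent under the affine model

def reproject(basis_uv_new, coeffs):
    """Transfer the point into a new frame where only the basis is tracked."""
    p0, p1, p2, p3 = np.asarray(basis_uv_new, float)
    a, b, c = coeffs
    return p0 + a * (p1 - p0) + b * (p2 - p0) + c * (p3 - p0)
```

In the paper's setting, the same transfer would place the four specified planar points in each live frame so that the camera pose can be estimated from their reprojections; the robust, occlusion-tolerant handling mentioned in the abstract is beyond this sketch.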
Augmented interiors with digital camera images (pp. 33-36)
  Sanni Siltanen; Charles Woodward
In this paper, we present a system for Augmented Reality interior design based on digital images. The system can be used with an ordinary PC and a digital camera: no special equipment is required. Once placed in the image, virtual objects may be scaled, moved and rotated freely. In addition, the layout can be stored in a file for later adjustment. We describe various user interface details and implementation issues, including a useful marker erasure method for general AR applications.
A pen-based paperless environment for annotating and marking student assignments (pp. 37-44)
  Beryl Plimmer; Paul Mason
A paperless environment for annotating student assignments is appealing to teachers and students. However, doing this while retaining the richness and ease of annotating work with a red pen has not been possible until recently. This project addresses an annotation problem that requires not only digital annotation, but also functionality to properly support users in moving efficiently between assignments and in simultaneously annotating and recording marks for an assignment. With Penmarked, our prototype system, the teacher is able to annotate a digital document with a stylus and at the same time write scores that are recognized and saved into both the student's document and a standard file format. Our evaluation suggests that Penmarked is a viable alternative to both paper and existing paperless environments.
Extended radar view and modification director: awareness mechanisms for synchronous collaborative authoring (pp. 45-52)
  Minh Hong Tran; Yun Yang; Gitesh K. Raikundalia
Providing effective support for group awareness is a critical requirement of synchronous collaborative authoring tools. This paper reports our work on developing new awareness mechanisms, Extended Radar View (ERV) and Modification Director (MD), for CoWord, a synchronous collaborative authoring tool built on Microsoft® Word. ERV supports awareness of collaborative authoring tasks by showing both users' telecarets-eye views -- the locations of their working areas -- and over-the-shoulder views -- the locations of their viewing areas in a shared Word document. MD keeps track of changes in the document and notifies users when their text is modified by other people. The paper also reports an ongoing user-based study being conducted to evaluate the usefulness of ERV and MD. Preliminary results show that ERV is useful in supporting group awareness and is preferred to a conventional radar view, while MD was found useful to some extent in supporting awareness for collaborative authoring.
User interface layout with ordinal and linear constraints (pp. 53-60)
  Christof Lutteroth; Gerald Weber
User interfaces as well as documents use tabular layout mechanisms; the HTML table construct and the GridBag layout in Java are typical examples. These mechanisms, however, have shortcomings that become obvious with more advanced content such as semi-structured data or object-oriented models. We present a generalized table construct that addresses these shortcomings and extends tabular layout into a foundation for 2D layout. We describe an algorithm for specifying and rendering user interfaces -- and 2D documents in general -- using simple but expressive mathematical properties. In particular, the new tabular layout is described by ordinal and linear constraints. The ordinal information makes it possible to describe the general structure of a table and to merge multiple cells into rectangular areas. The linear constraints make it possible to map particular points of the table to particular coordinates or to specify the size of areas in an absolute or relative manner. The resulting layout engine is easy to use and can render 2D information in real time.
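As an editorial sketch of what layout from ordinal and linear constraints can look like (this is not the paper's layout engine; the column widths, window size and the use of SciPy's LP solver are assumptions for illustration), the positions of three column boundaries can be computed by solving a small constraint system:

```python
# Place tab stops x0..x3 for a three-column table of total width 600 px.
from scipy.optimize import linprog

n = 4
A_ub, b_ub = [], []
# Ordinal constraints: x0 <= x1 <= x2 <= x3  (written as x_i - x_{i+1} <= 0).
for i in range(n - 1):
    row = [0.0] * n
    row[i], row[i + 1] = 1.0, -1.0
    A_ub.append(row); b_ub.append(0.0)
# Linear constraints: minimum column widths, x_{i+1} - x_i >= min_w.
for i, min_w in enumerate([120.0, 80.0, 200.0]):
    row = [0.0] * n
    row[i], row[i + 1] = 1.0, -1.0
    A_ub.append(row); b_ub.append(-min_w)
# Map particular points to particular coordinates: x0 = 0, x3 = 600.
A_eq = [[1, 0, 0, 0], [0, 0, 0, 1]]
b_eq = [0.0, 600.0]

# Give any spare space to the last column by pushing x2 as far left as allowed.
res = linprog(c=[0, 0, 1, 0], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n)
print(res.x)  # tab stops, e.g. [0, 120, 200, 600]
```

Relative sizes (for example, one column twice as wide as another) are equally expressible as linear constraints, which is the kind of generality the abstract describes.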
From pushing buttons to play and progress: value and interaction in Fable (pp. 61-68)
  Pippin Barr; James Noble; Robert Biddle; Rilla Khaled
A value can be understood as a belief that one mode of conduct is preferable to others. The user interface of a computer game mediates all player conduct in the game and is therefore key to understanding how values are expressed both by and to the player. How the interface affects players' expression and understanding of value in computer games remains largely unknown. We performed a qualitative case study of the game Fable to investigate connections between interface and value in gameplay. The concepts uncovered allow us to better address the computer game interface in both design and analysis.
Usability evaluation of library online catalogues (pp. 69-72)
  Hayley White; Tim Wright; Brenda Chawner
We performed a usability evaluation of four New Zealand university online library catalogues. The evaluation found severe usability problems -- so many that we were forced to use a card-sorting technique to understand and classify them. These problems cover almost all aspects of catalogue use and exist across all the evaluated catalogues. This paper describes the evaluation and summarizes the results. Additionally, we illustrate a redesigned search screen that avoids many of the problems we identified.
Persuasive interaction for collectivist cultures (pp. 73-80)
  Rilla Khaled; Robert Biddle; James Noble; Pippin Barr; Ronald Fischer
Persuasive technology is defined as "any interactive product designed to change attitudes or behaviours by making desired outcomes easier to achieve". It can take the form of interactive web applications, handheld devices, and games. To date there has been limited research into persuasive technology outside of America. Cross-cultural research shows that for persuasion to be most effective, it is often necessary to draw upon important cultural themes of the target audience. Applying this insight to persuasive technology, we claim that the set of persuasive technology strategies described by B. J. Fogg caters to a largely individualist audience. Drawing upon cross-cultural psychology and sociology findings about patterns of behaviour commonly seen in collectivists, we present a principled set of collectivism-focused persuasive technology strategies: group opinion, group surveillance, deviation monitoring, disapproval conditioning, and group customisation. We also demonstrate how applying these strategies can support the design of a collectivist, persuasive game.
Fluid sketching of directed graphs (pp. 81-86)
  James Arvo; Kevin Novins
We describe a prototype sketch-based system that allows users to draw and manipulate directed graphs using gestural input exclusively. The system incorporates the notion of fluid sketching, which attempts to recognize and beautify what the user is drawing while it is being drawn. This concept applies to both the drawing of vertices, which are morphed to circles, and to the drawing of edges, which are approximated on-the-fly by a constrained projection onto low-order polynomials. Consequently, all user-drawn strokes are cleaned up by continuously morphing or projecting them to the nearest geometrically precise shapes. The system has a unique look and feel in that the currently-displayed graph is always at or in transition toward a clean and precise representation of what the user has drawn or is in the process of drawing. When vertices of the graph are dragged to new locations or edges are reshaped, the original graph connectivity is maintained while simultaneously retaining some of its original user-drawn character, such as vertex size and placement, and overall edge shape.
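The "constrained projection onto low-order polynomials" used for edge strokes can be illustrated with a short sketch (an editorial approximation under assumed details, not the authors' formulation; NumPy and the function name are assumptions): the stroke is parametrised by chord length and its deviation from the straight chord is fitted with polynomial basis functions that vanish at both ends, so the beautified edge stays pinned to its endpoint vertices:

```python
import numpy as np

def beautify_edge(stroke, degree=3):
    """Fit a user-drawn edge stroke with a low-order polynomial curve whose
    endpoints are held fixed.  stroke: (n, 2) array of sampled points.
    Returns a smooth (100, 2) polyline approximating the stroke."""
    stroke = np.asarray(stroke, dtype=float)
    # Chord-length parametrisation, t in [0, 1].
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(stroke, axis=0), axis=1))]
    t = d / d[-1]
    p0, p1 = stroke[0], stroke[-1]
    chord = np.outer(1 - t, p0) + np.outer(t, p1)
    # Basis functions t^k (1 - t) vanish at t = 0 and t = 1, so the fit can
    # bend the chord without moving the endpoints.
    B = np.column_stack([t ** k * (1 - t) for k in range(1, degree)])
    coeffs, *_ = np.linalg.lstsq(B, stroke - chord, rcond=None)
    ts = np.linspace(0.0, 1.0, 100)
    Bs = np.column_stack([ts ** k * (1 - ts) for k in range(1, degree)])
    return np.outer(1 - ts, p0) + np.outer(ts, p1) + Bs @ coeffs
```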
Moving animation script creation from textual to visual representation (pp. 87-90)
  Erik Haugvaldstad; Tim Wright
Animation scripts are an integral part of developing computer games: they describe which character animations to play and when to switch between animations. These scripts are often written in text editors, which can be error-prone since there are multiple variables and conditions that need to be understood, and text editors give no indication of how animations are linked together. This paper describes how a graphical user interface, using lines and circles to simulate direction and speed, can simplify the process of creating character animation scripts.
PatternProgrammer: yet another rule-based programming environment for children (pp. 91-96)
  Tim Wright
Graphical rewrite rules are often used in programming environments for children. These rules consist of two parts: a left-hand side, which is visually matched in the environment, and a right-hand side, which replaces the matched area. Programs using graphical rewrite rules typically describe the behaviour of 2D visual simulations, where the program specifies how visual agents move around a 2D space and interact with other agents. These types of programming environments are very simple yet very powerful. Despite the simplicity of graphical rewrite rules, evaluations of these programming environments have found that they suffer from two flaws. The first is that children have difficulty understanding the implications of the rule scheduler. The second is that children create one or two large rules to describe complex behaviour rather than creating many small rules. This paper presents another rule-based programming environment for children, designed to avoid these problems at the cost of an intuitive rule-matching algorithm. Informal usability tests found that people initially make some mistakes comprehending how PatternProgrammer applies their rules, but quickly adjust their cognitive model to one that accurately reflects PatternProgrammer's scheduling and matching behaviour.
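To make the mechanism concrete, here is a toy sketch of a graphical rewrite rule applied to a grid of symbols (an editorial illustration only; PatternProgrammer's actual matching and scheduling behaviour differ, which is precisely what the paper examines):

```python
def apply_rule(grid, lhs, rhs):
    """Find the first occurrence of the left-hand-side pattern in the grid
    and replace that area with the right-hand side; one rewrite per step."""
    H, W = len(grid), len(grid[0])
    h, w = len(lhs), len(lhs[0])
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            if all(grid[r + i][c + j] == lhs[i][j]
                   for i in range(h) for j in range(w)):
                for i in range(h):
                    for j in range(w):
                        grid[r + i][c + j] = rhs[i][j]
                return True
    return False

# Example: a "falling rock" rule -- a rock above empty space moves down.
grid = [["rock"], ["empty"], ["empty"]]
apply_rule(grid, lhs=[["rock"], ["empty"]], rhs=[["empty"], ["rock"]])
print(grid)  # [['empty'], ['rock'], ['empty']]
```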
A case for iconic icons (pp. 97-100)
  Jennifer Ferreira; James Noble; Robert Biddle
User interface designers still have to rely on personal creativity and skill when designing computer icons for program functions that have no existing conventional representation. Further, designers often stumble upon usable icons by trial and error. We designed an Icon Intuitiveness Test to gain better insight into how users interpret icons. Our hypothesis was that users would interpret icons they do not know the functionality of as iconic signs [1] by assuming that the icon looks like the functionality it represents. Our study suggests that participants do indeed base their guesses on the visual clues they can see and interpret the unknown icon as having the functionality they think it resembles.
Generating mobile device user interfaces for diagram-based modelling tools (pp. 101-108)
  Dejin Zhao; John Grundy; John Hosking
Mobile display devices such as phones and PDAs have become very widely available and used. However, most content on these devices is limited to text, static images and motion video. Displaying and interacting with dynamic diagrammatic content on such devices is difficult, as is engineering applications to implement such functionality. We describe a set of plug-in components for a meta-diagramming tool that enable a diagram type to be visualized and interacted with on mobile devices. Key features of our approach include generating diagram content from an existing meta-tool, run-time user configuration of diagram appearance and navigation, and multi-level, zoomable diagrams and diagram content. We describe our experiences prototyping, using and evaluating this new mobile device diagramming technology.
Middle-aged users' experience of short message service (pp. 109-112)
  Christine Soriano; Gitesh K. Raikundalia; Jakub Szajman
Short Message Service (SMS) is a popular form of nonverbal mobile communication. To date, most research has focused upon the use of SMS by teenagers and young adults; our work examines its use by middle-aged users. We conducted a usability study evaluating the ease of use and the difficulties experienced by middle-aged individuals while carrying out text-messaging objectives and SMS tasks in each of two pre-determined scenarios. Participants used two different mobile phone handsets to perform their tasks while the usability of each handset was assessed with respect to SMS tasks. The experiment gave insights into the usability issues experienced by middle-aged users, such as the clarity of on-screen menus and the ability to follow the navigational input provided by the hardware design of the mobile phone handsets.
Implementing a natural user interface for camera phones using visual tags (pp. 113-116)
  Sanni Siltanen; Jouko Hyväkkä
A conventional keypad user interface of mobile phones is unsuitable for some modern situations. Here we present a visual tagging system and show how such a system can be used as a fast and efficient user interface, utilizing a camera phone as a pointing device. We also present two optional mobile phone implementations, one aiming for very fast processing and the other for high detection resolution. These efficient implementations are achieved by taking into account the strict constraints of the mobile environment. We present a flexible system in which visual tags of varying physical and storage sizes can be used in different applications through one interface application, and we report some performance statistics for our implementations. We also identify some of the constraints of the mobile device programming environment, concentrating on Symbian Series 60 mobile phones.
TIDL: mixed presence groupware support for legacy and custom applications (pp. 117-124)
  Peter Hutterer; Benjamin S. Close; Bruce H. Thomas
In this paper, we present a framework for using an arbitrary number of mouse and keyboard input devices to control Swing-based Java applications. These devices can be distributed among any number of host computers on a network. We use this framework to provide independent input devices to a number of users on different host computers, who can then work collaboratively in applications. A major limitation of current real-time groupware is that contemporary graphic environments do not support more than one system cursor and keyboard. The Transparent Input Device Layer (TIDL) is a framework we have developed that provides an easy-to-use API for Java applications to gain support for multiple independent input devices. We have also created a wrapper application that retrofits legacy applications with support for multiple distributed input devices at runtime; this support can be injected without altering or recompiling the application's source code. TIDL allows multiple devices to work across window and application boundaries. Applications supporting multiple input devices can employ features such as simultaneous drag-and-drop and the entry of text in multiple textboxes. In addition, different applications running simultaneously can each use multi-device support independently. We present four applications that use TIDL to enable distributed groups to work collaboratively: one was developed to make active use of TIDL, while the other three are applications we found on the web that gain support for multiple independent devices through the wrapper application.
Virtual planning rooms (ViPR): a 3D visualisation environment for hierarchical information (pp. 125-128)
  Michael Broughton
The Future Operations Centre Analysis Laboratory (FOCAL) at Australia's Defence Science and Technology Organisation (DSTO) is aimed at exploring new paradigms for situation awareness (SA) and command and control (C2) in military command centres, making use of new technologies developed for virtual reality and real-time 3D animation. Recent work includes the conceptual design and prototype development of the Virtual Planning Rooms (ViPR), an innovative visualisation environment that displays multi-media information on the walls of immersive virtual rooms. The operator is able to view and interact with relevant information within the octagonal rooms, and to explore different levels of abstraction or alternative data sets by navigating through doorways to adjoining rooms. Random accessibility throughout the environment is also possible via interaction with a 3D map representing a high level view of the data. Potential uses for ViPR include course-of-action visualisation, planning or display of other hierarchical data sets. This paper provides an overview of FOCAL, followed by a brief description of ViPR that includes its conceptual design and prototype development, together with future directions.
Evaluation of a universal interaction and control device for use within multiple heterogeneous display ubiquitous environments (pp. 129-136)
  Hannah Slay; Bruce H. Thomas
This paper provides insight into the usability of our Universal Interaction Controller (UIC), a user interface device designed to support interactions in ubiquitous computing environments equipped with multiple heterogeneous displays. We present the results of a user study undertaken to test the intuitiveness of Ukey, our UIC input device, comparing it to a wireless gyroscopic mouse, with a traditional mouse as a benchmark. We found that with no training, users were able to perform better with the UIC than with the gyroscopic mouse, but their performance with the traditional mouse exceeded both. With an hour of additional training, participants' performance with the UIC was better than their performance with a traditional mouse when interacting between display devices.
A framework for interactive web-based visualization (pp. 137-144)
  Nathan Holmberg; Burkhard Wünsche; Ewan Tempero
As the power of end-user web browsers increases, the delivery of sophisticated visualizations of information via the web becomes possible. However, no existing technology offers the kind of interactions that a stand-alone application can deliver. Technologies such as Java3D, VRML, X3D and SVG incorporate powerful rendering capabilities but make it difficult to interact with the underlying source data. Other technologies, such as SMIL, offer synchronization but lack these rendering capabilities. We present a framework within which these technologies can be evaluated. We also address the question of how to integrate these technologies with existing and well-understood web technologies such as JavaScript, CSS, Web Services, and PHP to provide interactive web-based visualization applications. We then describe a generalized framework for determining how to choose the right set of technologies for a web-based visualization application.
Visualising phylogenetic trees (pp. 145-152)
  Wan Nazmee Wan Zainon; Paul Calder
This paper describes techniques for visualising pairs of similar trees. Our aim is to develop ways of presenting the information so as to highlight both the common structure of the trees and their points of difference. The impetus for the work comes from the field of bioinformatics, where geneticists construct complex phylogenetic trees to represent the evolution of species or genes. But the techniques can also be used for other tree-structured data such as file systems, parse trees, decision trees, and organisational hierarchies. To investigate our techniques, we have built a prototype application that reads and displays phylogenetic trees in the popular Nexus format. The application incorporates a variety of interactive and automated visualisation techniques, and is implemented in Java. We are working with biologists to see how well the techniques work for real-world data.
Visualisations of execution traces (VET): an interactive plugin-based visualisation tool (pp. 153-160)
  Mike McGavin; Tim Wright; Stuart Marshall
An execution trace contains a description of everything that happened during an execution of a program. Execution traces are useful because they can help software engineers understand code, supporting a variety of applications such as debugging and more effective software reuse. Unfortunately, execution traces are also complex, typically containing hundreds of thousands of events for medium-sized computer programs and even more for large-scale programs. We have developed an execution trace visualisation tool, called VET, that helps programmers manage the complexity of execution traces. VET is also plugin-based: expert users can add new visualisations and new filters without changing VET's main code base.
A wearable fatigue monitoring system: application of human-computer interaction evaluation (pp. 161-164)
  Soichiro Matsushita; Ayumi Shiba; Kan Nagashima
We developed a wearable fatigue monitoring system with a high-sensitivity 2-axis accelerometer and an on-board signal processing microcontroller. The proposed system measures faint motion of the user's head while the user tries to stand still for 30 seconds. The two axes of the accelerometer were set parallel to the ground. As a candidate diagnostic parameter, we adopted the acceleration trace pattern length, defined as the accumulated distance between adjacent points in the acceleration X-Y plot over the measurement period. As artificially introduced physical stress such as running, as well as some physically or mentally exhausting situations, produced consistent changes in the acceleration trace length, the proposed system was shown to be capable of evaluating the degree of tiredness. We then applied the proposed system to the evaluation of human-computer interaction, performing experiments on computer entertainment using immersive display devices such as head-mounted displays and wide-angle plasma displays. The measured values of the acceleration trace length showed some inconsistency with user interviews consisting of subjective questionnaires about the user's fatigue.
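The diagnostic parameter, as described, reduces to the accumulated distance between consecutive samples in the acceleration X-Y plot. A minimal sketch (an editorial illustration, not the authors' on-board code; NumPy and the function name are assumptions):

```python
import numpy as np

def trace_length(ax, ay):
    """Acceleration trace pattern length: the summed distance between
    consecutive (ax, ay) samples taken during the 30-second standing trial."""
    pts = np.column_stack([ax, ay])
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
```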
Garment-based body sensing using foam sensors (pp. 165-171)
  Lucy E. Dunne; Sarah Brady; Richard Tynan; Kim Lau; Barry Smyth; Dermot Diamond; G. M. P. O'Hare
Wearable technology is omnipresent to the user and thus has the potential to be significantly disruptive to the user's daily life. Context awareness and intuitive device interfaces can help to minimize this disruption, but only when the sensing technology itself is not physically intrusive, i.e., when the interface preserves the user's homeostatic comfort. This work evaluates a novel foam-based sensor for use in body monitoring for context-aware and gestural interfaces. The sensor is particularly attractive for wearable interfaces due to its positive wearability characteristics (softness, pliability, washability), but it is less precise than other similar sensors. The sensor is applied in the garment-based monitoring of breathing, shoulder lift (shrug), and directional arm movement, and its accuracy is evaluated in each application. We find the foam technology most successful at detecting the presence of movement events using a single sensor, and less successful at measuring precise, relative movements from the coordinated responses of multiple sensors. The implications of these results are considered from a wearable computing perspective.
An interface test-bed for 'Kansei' filters using the Touch Designer visual programming environment (pp. 173-176)
  Rodney Berry; Masahide Naemura; Yuichi Kobayashi; Masahiro Tada; Naomi Inoue; Yusuf Pisan; Ernest Edmonds
In the context of a larger project dealing with kansei analysis of movement, we present a basic method for applying real-time filters to human motion capture data in order to modify the perceived emotional affect of the movement. By employing a commercial real-time 3D package, we have been able to quickly prototype some interfaces to an as-yet non-existent system. Filters are represented as physical objects whose proximity to an animated dancing human figure determines how much they modify the movement.
Line drawing in virtual reality using a game pad (pp. 177-180)
  Henry Gardner; Duan Lifeng; Qing Wang; Guyin Zhou
We describe a software interface for drawing in three dimensions using a game pad. The software runs in a walk-in virtual-reality theatre and has been designed for walk-up usability in a science museum. Usability aspects of the interface are discussed, including the mapping of the thumb-joystick controllers to movements of the cursor in three dimensions.