
Proceedings of AUIC 2015 Australasian User Interface Conference

Fullname: Proceedings of the 16th Australasian User Interface Conference -- Volume 162
Editors: Stefan Marks; Rachel Blagojevic
Location: Sydney, Australia
Dates: 2015-Jan-27 to 2015-Jan-30
Publisher: ACS
Standard No: hcibib: AUIC15; ISBN: 978-1-921770-44-9; ISSN: 1445-1336
Papers: 9
Pages: 76
Links: Online Proceedings | Conference Website
The Effectiveness of Transient User Interface Components 3-10
  D. Patterson; S. Costain
With small-screen devices, including mobile and tablet-based systems, becoming more common, the effective use of available screen space has become a critical skill in the design of user interfaces. Transient interface components are one technique that allows a more complex interface to be displayed, in the form of components that are only visible 'on-demand', without a significant or permanent on-screen footprint. This paper describes a study of transient user interfaces and users' perception of transient interface systems of different types, as applied in visually rich 3D environments. The primary objective of transient components is to free the screen space of unwanted interface controls, allowing space to be allocated to the main content, thus creating a more immersive experience for the user. This research involved a randomized control study looking at how users interacted with 3D worlds containing transient interfaces and, in particular, whether their experiences were enhanced with transient systems when compared with both permanently displayed and totally invisible interfaces. Results indicated that users did feel an enhanced level of immersion when using transient interfaces, but that the detail of how and when the transient components were displayed presented challenges. Those challenges, particularly in terms of the users' sense of control of the interactive systems, play an important role in how effective such transient interfaces are. Overall, the study found transient interfaces to be an effective way of providing users with more immersion within a rich 3D space, while also offering improved access to interface controls and information.
Handheld Augmented Reality: Does Size Matter? 11-20
  L. Sambrooks; B. Wilkinson
Handheld devices have become extremely popular in recent years and represent attractive options for augmented reality (AR) research. Most modern devices now incorporate many of the necessary input and output capabilities and do so in self-contained packages of varying size, weight, and cost. But while most previous AR work with handhelds has focused on smaller form factors, we have been interested in further exploring the range of larger devices often referred to under the umbrella of 'tablets'. This paper presents the results from a study we conducted on the suitability of different form factors for mobile AR use. Three form factor categories were evaluated: smartphone, mini tablet, and tablet. Although most devices today are marketed as being either the first or last, we propose there needs to be a third, middle category that caters for the subtle differences between sizes. The study asked 15 participants to use a device from each category to complete a series of seven interactive tasks. The tasks were designed to incorporate typical AR interactions. Participants completed pre- and post-test questionnaires and were audio recorded during the testing process. Our results showed that no one form factor was best suited to all tasks but rather the 'right' form factor was influenced by task specifics and personal preferences. In terms of usability ratings, we found a significant difference between smartphone and tablet form factors but no such difference between other combinations. Finally, we noted a negative correlation between participants' fatigue rating and the ease with which they found completing the tasks.
Recognizing Hand-drawn Glyphs from One Example and Four Lines of Code 21-29
  R. Blagojevic; D. Dhir; K. Ranganathan; C. Lutteroth; B. Plimmer
The biggest challenge in the development of gesture-based user interfaces is the creation of a gesture recognizer. Existing approaches to support high-level recognition of glyphs require a lot of effort from developers, are error prone, and suffer from low recognition rates. We propose a tool that generates a recognizer for hand-drawn glyphs from one example. Our tool uses the output of a basic shape recognizer as input to the glyph recognition. The recognizer can be integrated into an app by adding only four lines of code. By reducing the development effort required, the approach makes it possible for many touch-interaction apps to take advantage of hand-drawn content. We demonstrate the tool's effectiveness with two examples. Furthermore, our within-subject evaluation shows that programmers with no knowledge of gesture recognition can generate a recognizer and integrate it into an app more quickly and easily than manually coding recognition rules, and that the generated recognizer is more accurate than a manually coded one.
Tangible-Tango: Designing and Fabricating Tangibles with Tangibles 31-39
  B. Whitely; R. Blagojevic; B. Plimmer
We present Tangible-Tango, a system which enables users to fabricate new tangibles and their equivalent 3D virtual models. Thus the cognitive load required to understand and interact with virtual models is reduced. Users build new models by iteratively creating and assembling physical models. Each physical model has an associated virtual model. The new models, both virtual and tangible, can be iteratively re-used in the system. This iterative fabrication of tangibles and their virtual partners is the key contribution of Tangible-Tango. Our user study found that all participants efficiently produced the desired results, regardless of their background. This indicates the system is easy to learn and takes us one step closer to melding tangible and virtual 3D representations.
Interactive Visualisation for Surface Proximity Monitoring 41-50
  D. F. Marshall; H. J. Gardner; B. H. Thomas
We consider tasks that require users to be aware of the proximity of two 3D surfaces and where one or both of these surfaces is changing over time. We consider situations where users need to quickly and accurately assess when and where the two surfaces approach each other and eventually intersect. Because occlusion in 3D visualisations remains an issue in the perception of such data, a complete, simultaneous perception of the proximity of two such surfaces could be helpful. We propose and implement a new, interactive, visualisation technique, "Proximity Map Projection" (PMP), to provide this assistance to users and describe a user study to investigate the effectiveness of PMP in a static scenario. This study found that PMP enabled faster and more accurate identification of regions of nearest proximity and greatest protrusion. As well as affirming the potential benefits of PMP, this study motivates several areas of further investigation of the technique.
Getting to Grips with Economic Sustainability: A Case Study in Human Computation Through Movement 51-60
  R. McAdam
The field of human computation creates novel user interfaces in order to leverage human capabilities to help solve problems that are difficult to solve using conventional computational techniques alone. One human capability that has received limited attention from the human computation community to date is human motor learning and control. In previous work the authors have developed a technique known as Continuous Interactive Simulation in which our natural ability to explore and master movement in novel physical situations is used to help solve problems concerning the control of nonlinear dynamical systems. The technique allows human motor learning capabilities to be applied to two broad classes of problem: strategy discovery and strategy refinement. This paper draws this work together in a complete case study that illustrates the application of the technique to a nonlinear model of economic growth and environmental sustainability. The results of the case study reveal new policy strategies that extend previous work on this model. Finally, the approach is reviewed in terms of its relation to the broader field of human computation in order to suggest potential paths for wider deployment and future research.
Challenges in Virtual Reality Exergame Design 61-68
  L. A. Shaw; B. C. Wunsche; C. Lutteroth; S. Marks; R. Callies
Exercise video games have become increasingly popular due to their potential as tools to increase user motivation to exercise. In recent years we have seen an emergence of consumer-level interface devices suitable for use in gaming. While past research has indicated that immersion is a factor in exergame effectiveness, there has been little research investigating the use of immersive interface technologies such as head-mounted displays in exergames. In this paper we identify and discuss five major design challenges associated with the use of immersive technologies in exergaming: motion sickness caused by sensory disconnect when using a head-mounted display, reliable bodily motion tracking controls, the health and safety concerns of exercising when using immersive technologies, the selection of an appropriate player perspective, and physical feedback latency. We demonstrate a prototype exergame utilising several affordable immersive gaming devices as a case study in overcoming these challenges. The results of a user study we conducted found that our prototype game was largely successful in overcoming these challenges, although further work would lead to improvement; we were also able to identify further issues associated with the use of a head-mounted display during exercise.
3D Orientation Aids to Assist Re-Orientation and Reduce Disorientation in Mobile Apps 69-72
  D. Patterson
Assigned Responsibility for Remote Robot Operation 73-76
  N. J. Small; G. Mann; K. Lee
The remote control of robots, known as teleoperation, is a non-trivial task, requiring the operator to make decisions based on the information relayed by the robot about its own status as well as its surroundings. This places the operator under significant cognitive load. A solution to this involves sharing this load between the human operator and automated operators. This paper builds on the idea of adjustable autonomy, proposing Assigned Responsibility, a way of clearly delimiting control responsibility over one or more robots between human and automated operators. An architecture for implementing Assigned Responsibility is presented.