
ACM SIGCHI 2011 Symposium on Engineering Interactive Computing Systems

Fullname: Proceedings of the 3rd ACM SIGCHI Symposium on Engineering Interactive Computing Systems
Editors: Fabio Paternò; Kris Luyten; Frank Maurer
Location: Pisa, Italy
Dates: 2011-Jun-13 to 2011-Jun-16
Standard No: ISBN: 1-4503-0670-5, 978-1-4503-0670-6; ACM DL: Table of Contents hcibib: EICS11
Papers: 56
Pages: 344
Links: Conference Home Page
  1. Keynote 1
  2. Model-based design and evaluation
  3. Formal methods and their applications
  4. Adaptation and migration
  5. Tools for graphical user interfaces
  6. Demo session
  7. Testing
  8. Interaction with large screens
  9. Designing graphical interfaces
  10. Keynote 2
  11. Innovative interaction
  12. Posters
  13. Doctoral consortium
  14. Tutorials
  15. Workshops

Keynote 1

The measurability and predictability of user experience BIBAFull-Text 1-10
  Effie Lai-Chong Law
User experience is an emerging research area with a range of issues still to be resolved. Among them, the measurability of UX remains contentious. The key argument hinges on the meaningfulness, validity and usefulness of reducing fuzzy experiential qualities such as fun, challenge and trust to numbers. UX people seem ambivalent towards UX measures. In UX empirical studies, qualitative approaches are predominant, though the popular use of questionnaires in these studies suggests that some form of numeric measure is deemed useful or even necessary. The tension between the two camps (i.e. qualitative design-based and quantitative model-based) stimulates scientific discussions that move the field forward. As measures may enable us to predict, the concomitant issue of UX predictability is also explored. In addition, we look into theoretical frameworks that potentially contribute to a deeper understanding of UX; of particular interest is the theory of memory.

Model-based design and evaluation

A model-based approach for distributed user interfaces BIBAFull-Text 11-20
  Jérémie Melchior; Jean Vanderdonckt; Peter Van Roy
This paper describes a model-based approach for designing Distributed User Interfaces (DUIs), i.e., graphical user interfaces that are distributed along the following dimensions: end user, display device, computing platform, and physical environment. The three pillars of this model-based approach are: (i) a Concrete User Interface model for DUIs incorporating the distribution dimensions and expressing any DUI element in an XML-compliant format down to the granularity of an individual DUI element, (ii) a specification language for DUI distribution primitives that have been defined in a user interface toolkit, and (iii) a stepwise method for modeling a DUI based on the concept of a distribution graph, which expresses a distribution scenario that can be played using the distribution primitives. A distribution graph is a state-transition diagram whose states represent significant DUI distribution states and whose transitions are labeled in an Event-Condition-Action (ECA) format. The actions in this format call any distribution primitive belonging to the DUI toolkit. The model-based approach is exemplified on two simple DUIs: one for the Pictionary game and another for the Minesweeper game. They are then incorporated into a larger composed DUI, a Game of the Goose, in which cells can trigger the other two games.
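A distribution graph of the kind the abstract describes could be sketched as a small state machine whose transitions carry Event-Condition-Action triples and whose actions call distribution primitives. All names below (states, events, the `migrate_widget` primitive) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a distribution graph: states are DUI distribution
# configurations; transitions carry Event-Condition-Action (ECA) triples
# whose actions call distribution primitives of a (hypothetical) toolkit.

class DistributionGraph:
    def __init__(self, initial):
        self.state = initial
        self.transitions = {}  # (state, event) -> (condition, action, next_state)

    def add_transition(self, src, event, condition, action, dst):
        self.transitions[(src, event)] = (condition, action, dst)

    def fire(self, event, context):
        key = (self.state, event)
        if key not in self.transitions:
            return False
        condition, action, dst = self.transitions[key]
        if condition(context):          # C: guard evaluated on the context
            action(context)             # A: call a distribution primitive
            self.state = dst            # move to the next distribution state
            return True
        return False

# Hypothetical distribution primitive from a DUI toolkit.
log = []
def migrate_widget(ctx):
    log.append(f"migrate {ctx['widget']} to {ctx['target']}")

g = DistributionGraph("single-display")
g.add_transition("single-display", "device-joined",
                 lambda ctx: ctx["target"] is not None,
                 migrate_widget, "distributed")

g.fire("device-joined", {"widget": "scoreboard", "target": "tablet"})
print(g.state)  # -> distributed
```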
A model-based approach for supporting engineering usability evaluation of interaction techniques BIBAFull-Text 21-30
  Philippe Palanque; Eric Barboni; Célia Martinie; David Navarre; Marco Winckler
This paper offers a contribution to engineering interaction techniques by proposing a model-based approach for supporting usability evaluation. The approach combines different techniques including formal analysis of models, simulation and, in particular, analysis of log data in a model-based environment. It is integrated in a process and is supported by a model-based CASE tool for modeling, simulation and evaluation of interactive systems. A case study illustrates the approach and the operation of the tool. The results demonstrate that log data at the model level can be used not only to identify usability problems but also to identify where to make changes to the models in order to fix them. Finally, we show how the analysis of log data allows the designer to easily refine the interaction technique, as the results of the log analysis are presented at the same abstraction level as the models. Such an approach offers an alternative to user testing, which is very difficult to configure and to interpret, especially where advanced interaction techniques are concerned.
CAP3: context-sensitive abstract user interface specification BIBAFull-Text 31-40
  Jan Van den Bergh; Kris Luyten; Karin Coninx
Although many proposals have been made for abstract user interface models, little detail has been given about the context in which such models should or could be used in a user-centered design process. This paper presents a clear role for the abstract user interface model in user-centered and model-based development, provides an overview of the stakeholders that may create and/or use abstract user interface models, and presents a modular abstract user interface modeling language, CAP3, that makes relations with other models explicit and builds on the foundation of existing abstract user interface models. The proposed modeling notation is supported by a tool and has been applied to case studies from the literature and in several projects. It is also validated against state-of-the-art knowledge on domain-specific modeling languages and visual notations, and on case studies.
Automated generation of device-specific WIMP UIs: weaving of structural and behavioral models BIBAFull-Text 41-46
  David Raneburger; Roman Popp; Hermann Kaindl; Jürgen Falb; Dominik Ertl
Any graphical user interface needs a defined structure and behavior. In particular, models of Window/Icon/Menu/Pointing Device (WIMP) UIs need to represent structure and behavior at some level of abstraction, possibly in separate models. High-level conceptual models such as Task or Discourse Models do not model the UI per se. Therefore, in the course of automated generation of (WIMP) UIs from such models, the structure and behavior of the UI need to be generated, and they need to fit together. To achieve that, we devised a new approach to weaving structural and behavioral models at different levels of abstraction.
W5: a meta-model for pen-and-paper interaction BIBAFull-Text 47-52
  Felix Heinrichs; Daniel Schreiber; Jochen Huber; Max Mühlhäuser
Pen-and-Paper Interaction (PPI) is used in an increasing number of applications to bridge the digital-physical gap between paper and interactive computer systems. We present W5, a meta-model for describing PPI, and demonstrate its expressiveness by applying it to several interaction techniques from the literature. In doing so, we derive a set of basic interaction primitives, which can be used to inform the design of development toolkits for PPI and guide interaction designers in a structured exploration of the design space. We present a proof-of-concept implementation for a PPI toolkit based on W5 in order to demonstrate the practical relevance of our findings.

Formal methods and their applications

Model-based training: an approach supporting operability of critical interactive systems BIBAFull-Text 53-62
  Célia Martinie; Philippe Palanque; David Navarre; Marco Winckler; Erwann Poupart
Operation of safety-critical systems requires qualified operators with detailed knowledge of the system they are using and of how it should be used. Instructional Design and Technology aims to analyze, design, implement, evaluate, maintain and manage training programs. Among the many methods and processes currently in use, the first to be widely exploited was Instructional Systems Development (ISD), which has since been developed into many ramifications and is part of the Systematic Approach to Training (SAT) family of instructional design. One key feature of these processes (at least when they are refined) is the importance of Instructional Task Analysis, particularly the decomposition of a job into its tasks and sub-tasks in order to decide what knowledge and skills must be acquired by the trainee. This paper proposes to leverage this systematic approach with model-based approaches currently used in interactive systems engineering in order to design such training programs and thus improve human reliability. The paper explains how task and interactive systems modeling can be bound to job analysis to ensure that each trainee meets the required performance goals. Such training ensures proper learning at the three levels of Rasmussen's Skills-Rules-Knowledge (SRK) framework. In the case study we describe the process for building a training program for operators of satellite ground segments, which is based on and compatible with the Ground Systems and Operations ECSS standard. We then propose to enhance this process with a) the application of a Systematic Approach to Training and b) the use of both a System Model and an Operator Task Model. The system model is built using the ICO notation, while the operators' goals and tasks are described using the HAMSTERS notation.
MACS: combination of a formal mixed interaction model with an informal creative session BIBAFull-Text 63-72
  Christophe Bortolaso; Cédric Bach; Emmanuel Dubois
In this paper, we propose MACS (Model Assisted Creativity Session), a collaborative design method combining the informal power of a creative session with the formal generative power of a mixed interaction model. By using a formal notation during creative sessions, interdisciplinary teams systematically explore combinations between the physical and digital spaces and remain focused on the design problem at hand. We introduce the principles of the MACS method and illustrate its application on two case studies.
Buffer automata: a UI architecture prioritising HCI concerns for interactive devices BIBAFull-Text 73-78
  Harold Thimbleby; Andy Gimblett; Abigail Cauchi
We introduce an architectural software formalism, buffer automata, for the specification, implementation and analysis of a particular class of discrete interactive systems and devices. The approach defines a layer between the physical user interface and the application (if any) and provides a clear framework for highlighting a number of interaction design issues, in particular around modes and undo.
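The layer the abstract describes, sitting between the physical user interface and the application, might be sketched as an automaton that buffers raw key events and exposes only committed values to the application. The event names and undo behaviour below are illustrative assumptions, not the paper's specification.

```python
# Sketch of a buffer automaton: a layer between raw key events and the
# application, holding keystrokes in a buffer until an explicit commit.
# "DEL" undoes the last keystroke; "ENTER" commits. Names are illustrative.

class BufferAutomaton:
    def __init__(self):
        self.buffer = []
        self.committed = None

    def press(self, key):
        if key == "DEL":            # undo the last buffered keystroke
            if self.buffer:
                self.buffer.pop()
        elif key == "ENTER":        # hand the buffered value to the app
            self.committed = "".join(self.buffer)
            self.buffer = []
        else:                       # ordinary keystroke: buffer it
            self.buffer.append(key)

a = BufferAutomaton()
for k in ["1", "2", "DEL", "5", "ENTER"]:
    a.press(k)
print(a.committed)  # -> 15
```

Modelling the interface this way makes mode and undo behaviour explicit and analysable independently of the application behind it, which is the kind of HCI concern the paper prioritises.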
Formalizing model consistency based on the abstract syntax BIBAFull-Text 79-84
  Frank Trollmann; Marco Blumendorf; Veit Schwartze; Sahin Albayrak
In this paper we define a notion of consistency within and between models, which has been identified as an important issue when using model-based tools. We introduce the abstract syntax of models as attributed typed graphs and define a formalism for consistency based on this formal description. The application of the formalism is illustrated by an example.

Adaptation and migration

Combining aspect-oriented modeling with property-based reasoning to improve user interface adaptation BIBAFull-Text 85-94
  Arnaud Blouin; Brice Morin; Olivier Beaudoux; Grégory Nain; Patrick Albers; Jean-Marc Jézéquel
User interface adaptations can be performed at runtime to dynamically reflect any change of context. Complex user interfaces and contexts can lead to a combinatorial explosion of the number of possible adaptations. Dynamic adaptation thus runs into the issue of adapting user interfaces within a reasonable time slot and with limited resources. In this paper, we propose to combine aspect-oriented modeling with property-based reasoning to tame complex and dynamic user interfaces. At runtime and within a limited time slot, this combination enables efficient reasoning on the current context and on the available user interface components to provide a well-suited adaptation. The proposed approach has been evaluated with EnTiMid, a middleware for home automation.
Showing user interface adaptivity by animated transitions BIBAFull-Text 95-104
  Charles-Eric Dessart; Vivian Genaro Motti; Jean Vanderdonckt
In order to reduce the inevitable end user disruption and cognitive perturbation induced by adapting a graphical user interface, the results of the adaptation could be conveyed to the end user by animating a transition scenario showing the evolution from the user interface before adaptation to the user interface after adaptation. A transition scenario consists of a sequence of adaptation operations (e.g., set/change a property of a widget, replace a widget by another, resize a widget) belonging to a catalogue of operations defined as an Extended Backus-Naur Form grammar. Each transition operation has a range from a single widget (e.g., this Ok button) to a selection of widgets based on a selector mechanism (e.g., all validation widgets of this family of interfaces). A transition scenario is built either automatically by any adaptation algorithm or interactively by a specific editor for designers. An animator then executes the animation scenario by parsing each adaptation operation one by one or in a grouped mode and by rendering them by an animated transition on a user interface model. The type (e.g., wipe, box in, box out) and parameters (e.g., animation speed, pace, direction) of each animated transition have been selected based on usability guidelines for animation. A user study suggests that a transition scenario reinforces understandability and trust, while still suffering from lag.
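A transition scenario as described above is essentially an ordered list of adaptation operations applied to a UI model one by one. The sketch below illustrates that idea; the operation vocabulary, widget model, and property names are hypothetical, not the paper's grammar.

```python
# Illustrative sketch of a transition scenario: a sequence of adaptation
# operations (set a property, replace a widget) applied to a UI model.
# An animator would render each step as an animated transition.

ui = {"ok_btn": {"type": "button", "width": 80},
      "title":  {"type": "label",  "width": 200}}

def set_property(model, widget, prop, value):
    model[widget][prop] = value

def replace_widget(model, widget, new_type):
    model[widget]["type"] = new_type

scenario = [  # each entry: (operation, arguments)
    (set_property,   ("ok_btn", "width", 120)),
    (replace_widget, ("title", "heading")),
]

for op, args in scenario:   # parse and execute one operation at a time
    op(ui, *args)

print(ui["ok_btn"]["width"], ui["title"]["type"])  # -> 120 heading
```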
Engineering JavaScript state persistence of web applications migrating across multiple devices BIBAFull-Text 105-110
  Federico Bellucci; Giuseppe Ghiani; Fabio Paternò; Carmen Santoro
Ubiquitous environments call for user interfaces able to migrate across various types of devices while preserving task continuity. One fundamental issue in migratory user interfaces is how to preserve the state while moving from one device to another. In this paper we present a solution for the interactive part of Web applications. In particular, we focus on the most problematic part, which is maintaining the JavaScript state. We also describe an example application to illustrate the support provided by our migration platform.
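The core of migration-time state preservation is a capture/restore cycle: serialise the interactive state on the source device and re-apply it on the target. The sketch below uses a plain dictionary and JSON as stand-ins; the real system must snapshot the JavaScript heap and DOM state, which is what makes the problem hard.

```python
# Rough sketch of migration-time state persistence: capture the
# interactive state on the source device, ship it, restore it on the
# target. The state shape and JSON transport are assumptions.
import json

def capture_state(app):
    # In the Web setting this would snapshot form fields, scroll
    # positions and the JavaScript state; here, a plain state dict.
    return json.dumps(app["state"])

def restore_state(app, blob):
    app["state"] = json.loads(blob)

source = {"state": {"query": "pisa hotels", "scroll": 340}}
target = {"state": {}}

restore_state(target, capture_state(source))
print(target["state"]["scroll"])  # -> 340
```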
If their car talks to them, shall a kitchen talk too?: cross-context mediation of interaction preferences BIBAFull-Text 111-116
  Elena Vildjiounaite; Vesa Kyllönen; Jani Mäntyjärvi
So-called "smart products" try to recognise the user's context and to deliver relevant information on their own initiative, e.g., advising the user to buy windscreen washer fluid or to stir an overheated meal. As the variety of usage situations grows, it may become difficult for users to configure interaction manually in every new case, e.g., to specify via which modalities different message types should be delivered. This work proposes several strategies to predict the interaction preferences of individual users and user groups in a new context, based on the preferences of these and other users in other contexts and on the preferences of other users in the target context. In experiments with smart-product configurations set by 21 test subjects for different contexts (new and known tasks in the cooking and car-servicing domains, performed alone and in a group), the best of the proposed preference-mediation strategies predicted on average 75% of the settings chosen by individuals and groups.

Tools for graphical user interfaces

Hayaku: designing and optimizing finely tuned and portable interactive graphics with a graphical compiler BIBAFull-Text 117-126
  Benjamin Tissoires; Stéphane Conversy
Although reactive and graphically rich interfaces are now mainstream, their development is still a notoriously difficult task. This paper presents Hayaku, a toolset that supports designing finely tuned interactive graphics. With Hayaku, a designer can abstract graphics in a class, describe the connections between input and graphics through this class, and compile it into runnable code with a graphical compile chain. The benefits of this approach are multiple. First, the front-end of the compiler is a rich standard graphical language that designers can use with existing drawing tools. Second, manipulating a data flow and abstracting the low-level run-time through a front-end language makes the transformation from data to graphics easier for designers. Third, the graphical interaction code can be ported to other platforms with minimal changes, while benefiting from optimizations provided by the graphical compiler.
Specifying and implementing UI data bindings with active operations BIBAFull-Text 127-136
  Olivier Beaudoux; Arnaud Blouin; Olivier Barais; Jean-Marc Jézéquel
Modern GUI toolkits propose the use of declarative data bindings to link domain data to its presentation. These approaches work fine for defining simple bindings, but require an increasing programming effort as soon as the bindings become more complex. In this paper, we propose the use of active operations for specifying and implementing UI data bindings to tackle this issue. We demonstrate that the proposed approach goes beyond the usual declarative data bindings by combining the simplicity of the declarative approaches with the expressiveness of active operations.
GUIDE2ux: a GUI design environment for enhancing the user experience BIBAFull-Text 137-142
  Jan Meskens; Matthias Loskyll; Marc Seißler; Kris Luyten; Karin Coninx; Gerrit Meixner
For the design and development of graphical user interfaces, designers have to take various guidelines, standards and target platform characteristics into account. This is often a hard and time-consuming activity, because guidelines are spread over multiple documents using different styles, and it requires considerable effort to verify whether a design will work well on the targeted device. We propose GUIDE2ux, a design environment that (1) identifies and shows usability problems automatically and (2) enables designers to verify their designs on the target device easily. With GUIDE2ux, we make design standards more accessible to designers and help them to improve and test the user experience of their designs.
GRIP: get better results from interactive prototypes BIBAFull-Text 143-148
  Jan Van den Bergh; Deepak Sahni; Mieke Haesen; Kris Luyten; Karin Coninx
Prototypes are often used to clarify and evaluate design alternatives for a graphical user interface. They help stakeholders to decide on different aspects by making them visible and concrete. This is a highly iterative process in which the prototypes evolve into a design artifact that is close enough to the envisioned result to be implemented. People with different roles are involved in prototyping. Our claim is that integrated or interoperable tools help design information propagate among these people during prototyping and make its transition into the software development phase more accurate.
   We make a first step towards such a solution by offering a framework, GRIP, in which such a tool should fit. We conducted a preliminary evaluation of the framework by using it to classify existing tools for prototyping and implementing a limited prototyping tool, GRIP-it, which can be integrated into the overall process.

Demo session

PageSpark: an E-magazine reader with enhanced reading experiences on handheld devices BIBAFull-Text 149-152
  Jiajian Chen; Jun Xiao; Jian Fan; Eamonn O'Brien-Strain
In this paper we present PageSpark, a system that automatically converts static magazine content into interactive and engaging reading apps on handheld reading devices. PageSpark enhances the reading experience in three general aspects: page layout reorganization, page element interactions and page transitions. We explored and implemented several design variations in each aspect with a prototype running on the iPad. Participants in our initial user study showed strong interest in using PageSpark over existing magazine reading apps.
A research framework for performing user studies and rapid prototyping of intelligent user interfaces under the OpenOffice.org suite BIBAFull-Text 153-156
  Martin Dostál; Zdenek Eichler
We introduce a research framework that enables the use of OpenOffice.org as a platform for HCI research, particularly for performing user studies or for prototyping and evaluating intelligent user interfaces. We make two contributions: (1) we introduce an innovative hybrid logging technique that provides high-level, rich and accurate information about issued user commands, command parameters and the interaction styles used. Our logging technique also avoids the unwanted requirement for further complex processing of logged user interface events to infer user commands, which most current loggers must perform. (2) Our logging tool acts as a component object in OpenOffice.org with an easy-to-use Application Programming Interface (API) which, together with OpenOffice.org's deep programmability, enables OpenOffice.org to be used as a research framework for developing and evaluating intelligent user interfaces.
Tell me your needs: assistance for public transport users BIBAFull-Text 157-160
  Bernd Ludwig; Martin Hacker; Richard Schaller; Bjoern Zenker; Alexei V. Ivanov; Giuseppe Riccardi
Providing navigation assistance to users is a complex task generally consisting of two phases: planning a tour (phase one) and supporting the user during the tour (phase two). In the first phase, users interact with databases via constrained or natural language interfaces to acquire prior knowledge such as bus schedules. In the second phase, unexpected external events, such as delays or accidents, often happen, user preferences change, or new needs arise. This requires machine intelligence to support users in the real-time navigation task with updated information and trip replanning. To provide assistance in phase two, a navigation system must monitor external events, detect anomalies of the current situation compared to the plan built in the first phase, and provide assistance when the plan has become unfeasible. In this paper we present a prototypical mobile speech-controlled navigation system that provides assistance in both phases. The system was designed based on implications from an analysis of real user assistance needs investigated in a diary study, which underlines the vital importance of assistance in phase two.
Multi-user chorded toolkit for multi-touch screens BIBAFull-Text 161-164
  Ioannis Leftheriotis; Konstantinos Chorianopoulos
In this work, we present the design and implementation of a chorded menu for multiple users on a large multi-touch vertical display. Instead of selecting an item in a fixed menu by reaching for it, users make a selection by touching multiple fingers simultaneously anywhere on the display. Previous research on multi-touch toolkits has provided basic access to touch events, but there is no support for advanced user interface widgets such as chords. For this purpose, we extended the open-source PyMT toolkit with an architecture that supports alternative user interaction strategies with chorded menus. In addition, we built a multi-user extension that supports chords for two or more users, which can be used to build user-aware multi-touch (MT) applications. Our toolkit is open source and has been designed as a widget that can be integrated into broader interaction frameworks for multi-touch screens.
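The essence of a chorded menu is mapping the number of fingers that touch down within a short time window to a command, instead of mapping screen positions to menu items. The chord table, timestamps, and window length below are illustrative assumptions, not PyMT's API.

```python
# Sketch of chord-based selection: the count of near-simultaneous
# finger-down events picks the command, regardless of where on the
# display the touches land. Chord-to-command table is illustrative.

CHORDS = {1: "draw", 2: "erase", 3: "color-picker"}

def detect_chord(touch_times_ms, window_ms=150):
    """Group finger-down timestamps that fall within one chord window."""
    if not touch_times_ms:
        return None
    first = min(touch_times_ms)
    simultaneous = [t for t in touch_times_ms if t - first <= window_ms]
    return CHORDS.get(len(simultaneous))

# Three finger-down events within 150 ms -> a three-finger chord.
print(detect_chord([1000, 1040, 1090]))  # -> color-picker
```

Because selection depends only on finger count and timing, the same mechanism works for several users at once when each user's touches can be attributed to them.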

Testing

UI-driven test-first development of interactive systems BIBAFull-Text 165-174
  Judy Bowen; Steve Reeves
Test-driven development (TDD) is a software development approach, grown out of the Extreme Programming and Agile movements, whereby tests are written prior to the implementation code, which is then developed and refactored so that it passes the tests. Test-first development (TFD) takes a similar approach, but rather than relying on testers to infer the correct tests from the requirements (often expressed via use cases), it uses models of the requirements as the basis for the tests (and as such is more formal). One problem with both TDD and TFD is that they have proven hard to adapt to interactive systems, as it is not always clear how to develop tests that also cover user interfaces (UIs). In this paper we propose a method which uses both formal models of informal UI design artefacts and formal specifications to derive abstract tests, which then form the basis of a test-first development process.
Test case generation from mutated task models BIBAFull-Text 175-184
  Ana Barbosa; Ana C. R. Paiva; José Creissac Campos
This paper describes an approach to the model-based testing of graphical user interfaces from task models. Starting from a task model of the system under test, oracles are generated whose behaviour is compared with the execution of the running system. The use of task models means that the effort of producing the test oracles is reduced. It does also mean, however, that the oracles are confined to the set of expected user behaviours for the system. The paper focuses on solving this problem. It shows how task mutations can be generated automatically, enabling a broader range of user behaviours to be considered. A tool, based on a classification of user errors, generates these mutations. A number of examples illustrate the approach.
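Task mutation of the kind described can be sketched as deriving variants of one expected task sequence that model classic user errors, each variant becoming a test case. The two mutation operators and the ATM-style task below are simplified illustrations, not the paper's error classification.

```python
# Sketch of task-model mutation for test generation: from one expected
# task sequence, derive mutants modelling user errors (omitting a step,
# swapping adjacent steps). Each mutant drives a test whose oracle
# expects the system to reject or flag the behaviour.

def omit_mutants(task):
    """Drop one step at a time."""
    return [task[:i] + task[i+1:] for i in range(len(task))]

def swap_mutants(task):
    """Swap each pair of adjacent steps."""
    return [task[:i] + [task[i+1], task[i]] + task[i+2:]
            for i in range(len(task) - 1)]

expected = ["insert card", "enter PIN", "take cash"]
mutants = omit_mutants(expected) + swap_mutants(expected)

print(len(mutants))  # -> 5
print(mutants[0])    # -> ['enter PIN', 'take cash']
```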

Interaction with large screens

Rapid development of user interfaces on cluster-driven wall displays with jBricks BIBAFull-Text 185-190
  Emmanuel Pietriga; Stéphane Huot; Mathieu Nancel; Romain Primet
Research on cluster-driven wall displays has mostly focused on techniques for parallel rendering of complex 3D models. There has been comparatively little research effort dedicated to other types of graphics and to the software engineering issues that arise when prototyping novel interaction techniques or developing full-featured applications for such displays. We present jBricks, a Java toolkit that integrates a high-quality 2D graphics rendering engine and a versatile input configuration module into a coherent framework, enabling the exploratory prototyping of interaction techniques and rapid development of post-WIMP applications running on cluster-driven interactive visualization platforms.
SCIVA: designing applications for surface computers BIBAFull-Text 191-196
  Tobias Hesselmann; Susanne Boll; Wilko Heuten
The usability of surface computing applications largely depends on a thorough consideration of the specific characteristics and constraints of surface computers during development. Nevertheless, established user interface design processes do not sufficiently consider these aspects. Thus, developers of applications for interactive tabletops and surfaces often need to rely on best practices and intuition rather than a systematic development process based on facts and specifications. We address this problem by presenting SCIVA, an iterative process for designing gesture-based, visual interfaces for interactive surfaces. We identify challenges in the design of applications for surface computers, describe the steps necessary to address them and suggest user-centric methods that can be applied at the respective steps.

Designing graphical interfaces

End-user development of service-based interactive web applications at the presentation layer BIBAFull-Text 197-206
  Tobias Nestler; Abdallah Namoun; Alexander Schill
Lightweight service composition approaches are quickly gaining momentum in the integration landscape, among them integration/composition at the presentation layer, where software components are integrated via their front-ends rather than their application logic or data. This paper presents a new approach for composing web services through their user interfaces (UIs) to form composite web applications in a purely graphical manner, without the need to write any programming code. Unlike existing approaches, our service composition approach has been shaped by a series of iterative user-based evaluations to ensure that no modeling or programming skills are required for web application development; indeed, our approach is tailored towards non-programmers. This paper provides an in-depth description of the general concepts and fundamental principles of our UI-centric design-time approach, a brief description of our prototype, the ServFace Builder, which serves as a proof of concept, and evaluation results.
When the functional composition drives the user interfaces composition: process and formalization BIBAFull-Text 207-216
  Cédric Joffroy; Benjamin Caramel; Anne-Marie Dery-Pinna; Michel Riveill
The emergence of mashups has made the reuse of applications easier by providing a simple way to juxtapose them. However, the resulting composite applications do not allow sharing data or creating complex workflows. The only current way to do so is to compose applications at the functional level to create new services. Furthermore, user interfaces must then be redesigned and regenerated in order to provide interaction between the user and this new service.
   This paper proposes a solution to this problem. The implemented approach enables the reuse of user interfaces while composing services. The composition relies on a process that first abstracts the applications to be composed and the functional composition, then performs a composition at the abstract level and regenerates a concrete user interface in a target language. Finally, thanks to a mixed-initiative composition framework, the identified composition conflicts are solved either automatically or by a developer.
A design pattern mining method for interaction design BIBAFull-Text 217-222
  Claudia Iacob
This paper reports on a design pattern mining method addressing pattern mining in interaction design. The method aims at identifying proven solutions to recurring design problems through design workshops and the analysis of software applications. During a design workshop, a team of 3-5 designers is asked to design the GUI and the interaction process for an application in the domain targeted by the mining process, and the design issues they address are collected. Moreover, a set of software applications in that domain is analyzed in order to identify to what extent the design issues discussed during the workshops are considered in the implementation of existing applications. Candidates for being documented as design patterns are the design issues recurring most often in both the workshops and the software analysis. The paper describes the method together with its application to mining design patterns for the design of synchronous collaborative systems.
Flippable user interfaces for internationalization BIBAFull-Text 223-228
  Iyad Khaddam; Jean Vanderdonckt
The language reading direction is probably one of the most determinant factors influencing the successful internationalization of graphical user interfaces, beyond their mere translation. Western languages are read from left to right and top to bottom, while Arabic languages and Hebrew are read from right to left and top to bottom, and Oriental languages are read from top to bottom. In order to address this challenge, we introduce flippable user interfaces that enable the end user to change the reading direction of a graphical user interface by flipping it into the desired reading direction by direct manipulation. This operation automatically and dynamically changes the user interface layout based on a generalized concept of reading direction and translates it according to the end user's preferences.
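The geometric core of flipping a layout for a right-to-left reading direction is mirroring each widget's horizontal position about the window width. The widget records and field names below are illustrative; real flipping must also reorder labels, icons, and reading order, not just coordinates.

```python
# Sketch of flipping a left-to-right layout into a right-to-left one:
# each widget's x position is mirrored about the window width, so the
# leftmost widget becomes the rightmost. Widget model is illustrative.

def flip_horizontal(widgets, window_width):
    return [{**w, "x": window_width - w["x"] - w["width"]}
            for w in widgets]

ltr = [{"name": "label", "x": 10,  "width": 100},
       {"name": "field", "x": 120, "width": 200}]

rtl = flip_horizontal(ltr, window_width=400)
print([(w["name"], w["x"]) for w in rtl])  # -> [('label', 290), ('field', 80)]
```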

Keynote 2

Engineering interactive ubiquitous computing systems BIBFull-Text 229-230
  Albrecht Schmidt

Innovative interaction

An extensible digital ink segmentation and classification framework for natural notetaking BIBAFull-Text 231-240
  Adriana Ispas; Beat Signer; Moira C. Norrie
With the emergence of digital pen and paper technologies, we have witnessed an increasing number of enhanced paper-digital notetaking solutions. However, the natural notetaking process includes a variety of individual work practices that complicate the automatic processing of paper notes and require user intervention for the classification of digital ink data. We present an extensible digital ink processing framework that simplifies the classification of digital ink data in natural notetaking applications. Our solution supports both manual and automatic ink data segmentation and classification, based on Delaunay triangulation and a strongest-link algorithm. We further highlight how our solution can be extended with new digital ink classifiers and describe a paper-digital reminder application that has been realised on top of the presented digital ink processing framework.
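The paper's segmentation step (Delaunay triangulation combined with a strongest-link grouping) is not reproduced here, but the underlying idea of link-based grouping of ink strokes can be sketched. The stroke representation (lists of sample points), the centroid proxy, and the distance threshold below are all illustrative assumptions, not the authors' algorithm.

```python
import math

def centroid(stroke):
    # A stroke is a list of (x, y) sample points; use its centroid as a proxy.
    xs, ys = zip(*stroke)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def link_clusters(strokes, threshold=20.0):
    """Greedy single-link grouping: merge strokes whose centroids lie closer
    than the threshold (an illustrative stand-in for the paper's
    Delaunay/strongest-link segmentation step)."""
    parent = list(range(len(strokes)))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pts = [centroid(s) for s in strokes]
    for i in range(len(strokes)):
        for j in range(i + 1, len(strokes)):
            if math.dist(pts[i], pts[j]) < threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(strokes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Two nearby strokes end up in one group while a distant stroke forms its own, mirroring how spatially related ink marks are segmented into candidate note units before classification.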
Improving FTIR based multi-touch sensors with IR shadow tracking BIBAFull-Text 241-246
  Samuel A. Iacolina; Alessandro Soro; Riccardo Scateni
Frustrated Total Internal Reflection (FTIR) is a key technology for the design of multi-touch systems. Compared with other solutions, such as Diffused Illumination (DI) and Diffused Surface Illumination (DSI), FTIR-based sensors suffer less from ambient IR noise and are thus more robust to variable lighting conditions. However, FTIR does not provide (or is weak on) some desirable features, such as finger proximity detection and the tracking of quick gestures. This paper presents an improvement for FTIR-based multi-touch sensing that partly addresses these issues by exploiting the shadows projected onto the surface by the hands to improve the quality of the tracking system. The proposed solution exploits natural, uncontrolled light to improve the tracking algorithm: it takes advantage of natural IR noise to aid tracking, thus turning one of the main issues of MT sensors into a useful quality and making it possible to implement pre-contact feedback and enhance tracking precision.
Towards informed metaphor selection for TUIs BIBAFull-Text 247-252
  Stefan Oppl; Chris Stary
In TUI design, the selection of metaphors influences user expectations and ease of use. Traditional TUI design processes have not addressed this issue explicitly so far. A review of existing approaches to metaphor classification in TUI design helps explain why ill-fitting TUI metaphors can mislead users. Building on these explanations, we gained empirical insight into the negative effects caused by selecting metaphors that do not fit the situation of use. The results allow a metaphor-aware TUI specification process to be pursued, as they address metaphor selection explicitly and are grounded in both concept development and empirical findings.
Estimating scale using depth from focus for mobile augmented reality BIBAFull-Text 253-258
  Klen Copic Pucihar; Paul Coulton
Whilst there has been considerable progress in augmented reality (AR) over recent years, it has principally been related to either marker-based or a priori mapped systems, which limits the opportunity for wide-scale deployment. Recent advances in marker-less systems with no a priori information, using techniques borrowed from robotic vision, are now finding their way into mobile augmented reality and are producing exciting results. However, unlike marker-based and a priori tracking systems, these techniques are independent of scale, which is a vital component in ensuring that augmented objects are contextually sensitive to the environment they are projected upon. In this paper we address the problem of scale by adapting a Depth From Focus (DFF) technique, previously limited to high-end cameras, to a commercial mobile phone. The results clearly show that the technique is viable and adds considerably to the enhancement of mobile augmented reality. As the solution only requires an auto-focusing camera, it is also applicable to other AR platforms.
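The depth-from-focus idea rests on the thin-lens relation: once the focus setting that brings an object into sharpest focus is known, the object distance follows from the focal length, and a pinhole back-projection then recovers metric scale. A minimal sketch of that geometry follows; the function names and all parameter values are illustrative assumptions, not figures from the paper.

```python
def object_distance(focal_length_mm, image_distance_mm):
    """Thin-lens equation 1/f = 1/u + 1/v, solved for the object distance u
    given the focal length f and the lens-to-sensor distance v."""
    return (focal_length_mm * image_distance_mm) / (image_distance_mm - focal_length_mm)

def metric_size(pixel_extent, pixel_pitch_mm, focal_length_mm, object_distance_mm):
    """Pinhole back-projection: an image extent measured in pixels, scaled by
    depth over focal length, yields the object's real-world extent."""
    sensor_extent_mm = pixel_extent * pixel_pitch_mm
    return sensor_extent_mm * object_distance_mm / focal_length_mm
```

For example, with a 4 mm lens focused so that the image plane sits at 4.1 mm, the object lies at 164 mm; an object spanning 100 pixels on a sensor with 2 µm pixels then measures 8.2 mm, which is the kind of absolute scale a marker-less tracker cannot otherwise recover.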

Posters

BiLL: an experimental environment for visual analytics BIBAFull-Text 259-264
  Jan Wojdziak; Dietrich Kammer; Ingmar S. Franke; Rainer Groh
The field of Visual Analytics attempts to identify phenomena, guidelines, and algorithms to generate images suitable for communicating information efficiently and effectively. The benefit of using information visualizations is that the represented data can be quickly perceived and comprehended by the viewer. Research on novel visualization and interaction techniques in the context of three-dimensional computer graphics requires interactive computer systems. To this end, a component-oriented software framework is presented in this contribution. Bildsprache LiveLab (BiLL) allows the independent implementation and combination of different components. Each component is responsible for various tasks in the context of investigating images of three-dimensional scenes. Two case studies covering multiperspective and color perspective illustrate the application of BiLL and its potential as an experimental environment for visualizing user-centered projections of three-dimensional scenes.
QUIMERA: a quality metamodel to improve design rationale BIBAFull-Text 265-270
  Alfonso García Frey; Eric Céret; Sophie Dupuy-Chessa; Gaëlle Calvary
With the increasing complexity of User Interfaces (UI), it is more and more necessary to help users understand the UI. We promote a Model-Driven approach to improve perceived quality through an explicit and observable design rationale. The design rationale comprises the logical reasons given to justify a designed artifact: design decisions are not taken arbitrarily, but follow certain criteria. We propose a Quality Metamodel to justify these decisions along a Model-Driven Engineering approach.
A resource-based framework for interactive composition of multimedia documents BIBAFull-Text 271-276
  Paolo Bottoni; Riccardo Genzone
Interactive document composition requires users to launch complex programs, interleaving editing, integration, and formatting activities. Moreover, access to the document fragments may require specialised programs, possibly using proprietary formats. We propose a light-weight interaction framework for document composition, based on a notion of resource, where document construction amounts to selecting fragments, possibly generated via simple in-place editors or extracted from existing rich format documents. Actual document generation uses style sheets to produce different renderings for the same content.
User experience quality in multi-touch tasks BIBAFull-Text 277-282
  Ioannis Leftheriotis; Konstantinos Chorianopoulos
In this paper, we present an updated set of experimental tasks and measures for large multi-touch (MT) input devices. In addition to a multi-user condition, we have employed an updated set of tasks, as well as subjective measures for user enjoyment. In the first experiment (a target acquisition task with two moving targets), the MT was more efficient than the mouse. Surprisingly, we found that the reduced accuracy of MT did not affect the perceived usability or the enjoyment of the users. In the second experiment (a multiple-shapes docking task), the MT was again more efficient and enjoyable than the mouse. In the two-user condition, we found that performance and enjoyment were always higher than in the single-user conditions, regardless of input device and task. Besides the quantitative results, we observed that users employed diverse interaction strategies in the MT condition, such as bi-manual input. The proposed tasks and the results support the use of MT in entertainment applications (multimedia and video games), collaborative work, and scientific visualizations with complex data.
TREC: platform-neutral input for mobile augmented reality applications BIBAFull-Text 283-288
  Jason Kurczak; T. C. Nicholas Graham
Development of Augmented Reality (AR) applications can be time consuming due to the effort required in accessing sensors for location and orientation tracking data. In this paper, we introduce the TREC framework, designed to handle sensor input and make AR development easier. It does this in three ways. First, TREC generates a high-level abstraction of user location and orientation, so that low-level sensor data need not be seen directly. TREC also automatically uses the best available sensors and fusion algorithms so that complex configuration is unnecessary. Finally, TREC enables extensions of the framework to add support for new devices or customized sensor fusion algorithms.
Low-fidelity prototyping of gesture-based applications BIBAFull-Text 289-294
  Ali Hosseini-Khayat; Teddy Seyed; Chris Burns; Frank Maurer
Touch-based devices are becoming increasingly common in the consumer electronics space. Support for prototyping touch-based interfaces is currently limited. In this paper, we present a tool we developed in order to bridge the gap between user interface prototyping and touch-based interfaces.

Doctoral consortium

Specifying concurrent behavior to evaluate ubiquitous computing environments BIBAFull-Text 295-298
  René Zilz
Usability evaluation in ubiquitous computing environments is a rapidly developing research area in human-computer interaction because most traditional evaluation methods are difficult to apply for this purpose. The paper discusses the idea of using virtual environments to conduct evaluation. Until now, we have been able to conduct real-time scenarios in a 2D virtual environment called ViSE. But there are a number of disadvantages to real-time scenario evaluation. For instance, we cannot specify concurrent user behavior in an acceptable way. To specify such behavior, a system is needed in which the start and end times of every user action can be specified in the scenario definition. We introduce an approach that improves on existing approaches by associating timelines with the users, devices and items belonging to the ubiquitous computing environment. As a result, the expert is able to evaluate test cases with concurrent interaction, which are crucial for collaborative systems.
Toward a closer integration of usability in software development: a study of usability inputs in a model-driven engineering process BIBAFull-Text 299-302
  Carine Lallemand
Even though the benefits of usability have been widely proven, development-oriented companies seem to face many difficulties in introducing usability practices into their defined development processes. This paper describes the overall methodology deployed in an attempt to achieve a closer integration of usability practices in the software development process. Model-Driven Engineering (MDE) is used as the basis for this integration. By providing a precise framework composed of models and transformations, it allows usability problems to be tracked and highlights where exactly they occur in the development process. We will thus be able to link every step of the process to specific ergonomic inputs and to study their consequences on the usability of the generated system. Because MDE will only be used as one means among others to investigate hypotheses on usability and User-Centered Design (UCD) in general, our results are expected to provide valuable and generic information on usability and UCD processes.
Sustainable management of usability information BIBAFull-Text 303-306
  Ben Heuwing
Information from usability engineering activities can be useful in different contexts beyond the scope of a specific development project. This dissertation project aims at developing a model of usability information in order to support corresponding information needs of usability engineers in organizations with a usability information system. Results of interviews with usability professionals are presented, indicating that usability related information is already put to use in practice and that access to this information can be improved.
Toward a flexible design method sustaining UIs plasticity BIBAFull-Text 307-310
  Eric Ceret
Plasticity is an increasingly central concern in User Interface (UI) design, and its complexity drives the need for a supporting design method, or for a guide to composing method fragments. But the sheer number of methods makes it hard to choose the one best adapted to a project's context. In this paper, we propose a taxonomy that enables designers to compare methods' process models, and so empowers them to make enlightened choices, and we describe our approach to creating a new design method enabling the flexible design of plastic UIs.
Distributed user interfaces in space and time BIBAFull-Text 311-314
  Jérémie Melchior
Distributed User Interfaces (DUIs) have been conceived to support end users in carrying out interactive tasks that may be distributed in space (e.g., some subtasks are carried out in different locations) and time (e.g., some subtasks are carried out during different time intervals, depending on who is contributing to the task). Classical interactive applications involving a single-user, single-context user interface are rarely developed in a way that makes distributing parts or the whole of the user interface effective and efficient. In order to facilitate the deployment of such distributed user interfaces, this thesis provides the following contributions: a series of models capturing the various aspects of a DUI based on new concepts (i.e., distribution scene and scenario), an engineering method for specifying DUIs based on these concepts, and a supporting toolkit providing developers with distribution primitives.
A computational framework for multi-dimensional context-aware adaptation BIBAFull-Text 315-318
  Vivian Genaro Motti
Interactive applications often assume a pre-defined context of use: an able-bodied user, a desktop platform, and a stable environment. In contrast, users form a heterogeneous group, interacting via different means and devices in varied environments, which thus requires context-aware adaptation. Adaptation has been investigated extensively, but studies are often constrained to one context dimension at a time: user, platform, or environment. To address this issue and to bridge the gap between high-level adaptation goals and the implementation of adaptation, this research aims at developing a computational framework for user interface adaptation based on distinct dimensions and contexts of use. This framework consists of four main contributions: a design space to characterize context-aware adaptation of user interfaces, a reference framework to classify adaptation techniques for distinct scenarios, an ontology of adaptation techniques based on 3-level adaptation rules, and an interpreter of adaptation rules to address the techniques defined in the design space and reference framework.
Modeling animations for dependable interactive applications BIBAFull-Text 319-322
  Thomas Mirlacher
While today most parts of a graphical user interface are static, animations are increasingly used. This increase can be attributed to the fact that CPU resources remain available after the system's main functions are performed. Additionally, animations have been demonstrated to be useful in supporting users' understanding of a user interface's behavior and evolution. However, adding animation significantly increases specification and implementation complexity.
   This paper elaborates on a model-based approach to describe animations in a complete and unambiguous way, while keeping the complexity low. Such models can then be exploited to predict the impact of an animation on system performance, users' performance and experience.
Identifying, relating, and evaluating design patterns for the design of software for synchronous collaboration BIBAFull-Text 323-326
  Claudia Iacob
Many working environments require that geographically distributed or co-located work group members work together -- supported by software -- in developing and refining one commonly shared resource at the same time. Hence, synchronous collaboration is common to various contexts and domains, examples being drawing, searching, text editing, and game solving. However, little work has been done on identifying design patterns for the design of systems for such collaboration. This line of research aims at identifying, relating and evaluating such design patterns in order to provide: (a) a better understanding of the design processes of synchronous collaborative software, and (b) a repository of knowledge comprising best practices in such design processes for practitioners.
A model-based approach for gesture interfaces BIBAFull-Text 327-330
  Lucio Davide Spano
Interaction technologies have seen substantial enhancements in recent years, with the introduction of devices whose capabilities have changed the way people interact with games and mobile devices. However, this change has not really affected desktop systems. Indeed, few applications are able to exploit such new interaction modalities in an effective manner. This work envisions the application of model-based approaches to the engineering of gesture user interfaces, in order to provide the designer with a comprehensive theoretical framework for usage-centred application design. The differences between existing gesture-enabling devices will be tackled by applying more general solutions for multi-device user interfaces.

Tutorials

Designing visual user interfaces for mobile applications BIBAFull-Text 331-332
  Luca Chittaro
People want to do more with their mobile phones, and every day new mobile applications are launched in an increasingly wide number of domains. However, traditional UI knowledge is not sufficient to design effective interfaces for mobile applications, because the mobility context presents developers with several peculiarities and new challenges. This tutorial will introduce participants to the design of visual interfaces for mobile applications. In particular, it will: (i) illustrate the peculiar aspects of mobile interaction that make it more difficult to build effective user interfaces for mobile users, (ii) show how the powerful graphics capabilities of today's mobile devices can be exploited to create interfaces that help users on the move do more with their phones while requiring less time and attention, and (iii) look at recent developments in the engineering of mobile UIs, such as tool support for mobile interface development, including mobile end-user programming.

Workshops

Enhancing interaction with supplementary supportive user interfaces (UIs): meta-UIs, mega-UIs, extra-UIs, supra-UIs . . . BIBAFull-Text 333-334
  Alexandre Demeure; Grzegorz Lehmann; Mathieu Petit; Gaëlle Calvary
In order to improve interaction control and intelligibility, end-user applications are supplemented with Supportive User Interfaces (SUIs), such as meta-UIs, mega-UIs, and help or configuration wizards. These additional UIs support users by providing them with information about the available functionalities, the context of use, or the performed adaptations. Such UIs allow users to supervise and modify an application's interactive behavior according to their needs.
   Given the rising complexity of interactive systems, supportive UIs are highly desirable features. However, there is currently no common understanding of the types and roles of supportive UIs. Enabling concepts and definitions underlying the engineering of such UIs are also missing. In order to fill this gap, the workshop seeks a discussion with a broad audience of researchers who have experience with the design and development of supportive UIs.
Second workshop on engineering patterns for multi-touch interfaces BIBAFull-Text 335-336
  Kris Luyten; Davy Vanacken; Malte Weiss; Jan Borchers; Miguel Nacenta
Multi-touch has gained a lot of interest in the last couple of years, and the increased availability of multi-touch-enabled hardware has boosted its development. However, the current diversity of hardware, toolkits, and tools for creating multi-touch interfaces has its downsides: there is little reusable material and no generally accepted body of knowledge when it comes to the development of multi-touch interfaces. This is the second workshop on this topic, and the workshop goal remains unchanged: to seek a consensus on methods, approaches, toolkits, and tools that aid in the engineering of multi-touch interfaces and transcend the differences between available platforms. The patterns mentioned in the title indicate that we are aiming to create a reusable body of knowledge.
Model-based interactive ubiquitous systems BIBAFull-Text 337-338
  Thomas Schlegel; Stefan Pietschmann
Ubiquitous systems are introducing a new quality of interaction both into our lives and into software engineering. Software is becoming increasingly dynamic, making frequent changes to system structures, distribution, and behavior necessary. Moreover, adaptation to user needs and contexts, as well as different modalities and communication channels, makes these systems differ strongly from what has been standard over the last decades.
   Model-driven engineering forms a promising approach for coping with the dynamics and uncertainties inherent to interactive ubiquitous systems (IUS). This workshop discusses models and model-driven architectures addressing the challenges of interacting with and engineering IUS, with regard to both design and runtime.
Pattern-driven engineering of interactive computing systems (PEICS) BIBAFull-Text 339-340
  Marc Seissler; Kai Breiner; Gerrit Meixner; Peter Forbrig; Ahmed Seffah; Kerstin Kloeckner
For almost a decade, HCI pattern languages have been a popular form of design knowledge representation. They facilitate the exchange of best practices, knowledge and design experience between interdisciplinary team members and allow the formalization of different user interface aspects. Since patterns usually describe the rationale for the context in which they should be applied (when), why a certain pattern should be used in a specific context of use (why), and how to implement the solution part (how), they are suitable for describing different user interface aspects in a constructive way.
   But despite intense research activities over the last years, HCI pattern languages still lack a lingua franca: a common language for the standardized description and organization of patterns. This makes it difficult to design suitable tools that support developers in applying HCI patterns in model-based user interface development (MBUID) processes. To enable the constructive use of HCI patterns in the model-based development process, the informal textual or graphical notation of HCI patterns has to be overcome.
   Besides that, evaluating the effectiveness of a pattern, i.e., determining when a pattern is a 'good' pattern, is an important issue that has to be tackled to fully benefit from HCI patterns and to improve their applicability in future design processes.
Engineering interactive computer systems for medicine and healthcare (EICS4Med) BIBAFull-Text 341-342
  Ann Blandford; Giuseppe De Pietro; Luigi Gallo; Andy Gimblett; Patrick Oladimeji; Harold Thimbleby
This workshop brings together and develops the community of researchers and practitioners concerned with the design and evaluation of interactive medical devices (infusion pumps, etc.) and systems (electronic patient records, etc.), in order to deliver a roadmap for future research in this area. The workshop involves researchers and practitioners designing and evaluating dependable systems in a variety of contexts, as well as those developing innovative interactive computer systems for healthcare. These pose particular challenges because of their inherent variability -- of patients, system configurations, and so on. Participants will represent a range of perspectives, including safety engineering and innovative design. The focus is on engineering safe and acceptable interactive healthcare systems; the aim is to develop a roadmap for future research on interactive healthcare systems.