
ACM SIGCHI 2014 Symposium on Engineering Interactive Computing Systems

Fullname: EICS'14: ACM SIGCHI Symposium on Engineering Interactive Computing Systems
Editors: Fabio Paternò; Carmen Santoro; Jürgen Ziegler
Location: Rome, Italy
Dates: 2014-Jun-17 to 2014-Jun-20
Standard No: ISBN: 978-1-4503-2725-1; ACM DL: Table of Contents; hcibib: EICS14
Links: Conference Website
  1. Keynote I
  2. Adaptation session
  3. Frameworks for cross-device interaction session
  4. Collaborative environments session
  5. Doctoral consortium report
  6. Multimodal and web applications session
  7. Model-based UIs session
  8. Touch and gesture-based UIs session
  9. Demo session
  10. Late breaking results session
  11. Keynote II
  12. Analytic techniques session
  13. Prototyping and development frameworks session
  14. Workshop summaries

Keynote I

Making the web more inclusive with adaptive user interfaces BIBAFull-Text 1
  Krzysztof Z. Gajos
I build user interfaces that adapt their structure, appearance and behavior to the goals, abilities, preferences and cultural norms of their users. Prior work in the adaptive user interface community has demonstrated that adaptive and adaptable interfaces can improve users' performance and satisfaction. These findings alone should make adaptation a core component of user interface design practice. But I argue that adaptive interactive systems are even more fundamentally important: they help overcome implicit biases built into most interfaces and they are a scalable approach for democratizing access to digital resources. To convince you of this, I will first present several examples of situations in which typical one-size-fits-all user interfaces can be a source of unintended but systematic discrimination, causing some groups to be less likely than others to take advantage of a digital resource in the first place, or causing them to have a less efficient or substantially different experience compared to their peers. I will then present examples of several adaptive user interfaces that successfully provided more equitable experiences to broader populations compared to traditional non-adaptive designs. I will conclude by reflecting on the major challenges that stand in the way of broad adoption of adaptive techniques in practice. In particular, I will highlight the mismatch between the abstractions needed to develop effective adaptive user interfaces and current software engineering practice.

Adaptation session

Dynamically adapting an AI game engine based on players' eye movements and strategies BIBAFull-Text 3-12
  Stefanie Wetzel; Katharina Spiel; Sven Bertel
Artificial intelligence (AI) game engines have frequently been used to drive computational antagonists when playing games against humans. Limited work exists, however, on using human players' psychophysical measures to directly parametrise AI game engines. Instead, parameters to optimise AI performance are usually derived from general play-related data or user models. This paper presents novel research on using eye movement data in addition to data on users' strategies to adapt the live play of a computational antagonist in the visuo-spatial strategy game, Hex. It offers a set of suitable parameters for both types of data. A systematic evaluation of the approach showed, among other things, that using eye movement data led to significantly better gameplay experience for human players, as they experienced less frustration with sufficient challenge. Findings are discussed not only with regard to designing gameplay experience, but also their more general ramifications on using live psychophysical data for intelligent interactive systems.
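The kind of live parametrisation described above can be sketched in a few lines. This is purely illustrative: the metric names, thresholds, and the idea of tuning search depth are assumptions for the sketch, not the paper's actual parameters.

```python
# Illustrative sketch: adapting an AI opponent's search depth from gaze metrics.
# Metric names and thresholds are invented, not taken from the paper.

def adapt_search_depth(base_depth, fixation_count, mean_fixation_ms):
    """Lower the AI's lookahead when gaze data suggests the player is struggling.

    Many short fixations scattered over the board are a common proxy for
    visual search difficulty; few long fixations suggest confident planning.
    """
    depth = base_depth
    if fixation_count > 30 and mean_fixation_ms < 200:
        depth -= 1          # player seems to be searching: ease off
    elif fixation_count < 10 and mean_fixation_ms > 400:
        depth += 1          # player seems confident: push back harder
    return max(1, depth)    # never drop below a minimal lookahead
```

Feeding psychophysical measures into such a parameter each turn, rather than only play-related statistics, is the adaptation loop the abstract argues for.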
What should adaptivity mean to interactive software programmers? BIBAFull-Text 13-22
  Mathieu Magnaudet; Stéphane Chatty
Work on adaptability and adaptivity in interactive systems covers very different issues (user adaptation, context-aware systems, ambient intelligence, ubiquitous computing), not always with the explicit goal of supporting programmers. Based on examples that highlight how weakly discriminative the present terminology is, we propose to separate two concerns: adaptivity as a purely analytical concept, relative to a given viewpoint on the software rather than to its very structure, and its programming as a non-specific case of reactive behavior. We describe how simple adaptive behaviors can be programmed with simple interactive behavior patterns, and how more complex patterns can be introduced for intelligent adaptation. Finally, we describe an application where, relying on the principles presented in this paper, interaction and adaptation are combined in a simple and innovative manner.
Updating database schemas without breaking the UI: modeling using cognitive semantic categories BIBAFull-Text 23-31
  Evangelos Kapros; Simon McGinnes
Data management user interfaces are ubiquitous in information systems and web-based applications. From the oldest spreadsheet to the most modern database, end users and administrators alike have interacted with tabular data. Usually, each concept is represented by a table and columns. Change to the structure of each concept requires structural change to the tables and columns, which is costly. Tailor-made database and web applications may overcome this obstacle by designing UIs on top of the data layer, providing some degree of data independence. However, changes in their schemas do not automatically propagate into the user interface, and so their maintenance is expensive.
   In this paper we present a user interface that lets the end user alter the schema without programming skills, eliminating the need for expensive software maintenance. To this end we propose an automatically generated user interface that includes schema and data management functions. We built and evaluated an Adaptive Information System user interface (AIS UI) incorporating schema evolution functionality. In usability testing, first-time users were able to perform various data management tasks as fast as or faster than users of Microsoft Access, and on average 43% faster than users of Microsoft Excel. Task completion rates using the AIS significantly exceeded those using Microsoft Access and were comparable (>95%) with those using Microsoft Excel.
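The core idea, a UI generated from the schema at run time so that schema changes never break it, can be sketched as follows. All names and the widget mapping are invented for illustration, not the paper's design.

```python
# Illustrative sketch of schema-driven UI generation: the form is derived from
# the schema each time it is shown, so an end-user schema edit needs no code
# change.  Type and widget names here are invented, not from the paper.

def generate_form(schema):
    """Return (label, widget) pairs derived from a {name: type} schema dict."""
    widget_for = {"str": "textbox", "int": "spinner", "date": "datepicker"}
    return [(name, widget_for[t]) for name, t in schema.items()]

schema = {"Name": "str", "Age": "int"}
form = generate_form(schema)      # two fields

schema["Birthday"] = "date"       # the end user edits the schema...
form = generate_form(schema)      # ...and the UI follows automatically
```

The contrast is with hand-built forms, where the same schema change would require structural changes to tables, columns, and UI code.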
Towards a multi-stakeholder engineering approach with adaptive modelling environments BIBAFull-Text 33-38
  Alfonso García Frey; Jean-Sébastien Sottet; Alain Vagner
Human-Computer Interaction (HCI) addresses the study, planning and design of the interaction between people and computers through User Interfaces (UIs). The co-design and co-development of these UIs involve different stakeholders, such as Developers, Functional Analysts, Usability Experts and Interaction Designers, each responsible for different UI elements (respectively implementation, functional requirements, usability and interaction workflow). Collaboration between stakeholders has been identified as a key factor for UI development. This article investigates how concepts and methods from model-driven engineering (MDE) can contribute to UI development through a collaborative approach. We discuss how UI views (extra-UI, mega-UI) can be useful for multi-stakeholder engineering, and how MDE acts as the backbone that supports them. The global approach is implemented through a first prototype of an Adaptive Modelling Environment (AME) illustrated through a case study. A screencast of the tool is also provided.
Design space for focus+context navigation in web forms BIBAFull-Text 39-44
  Johannes Harms; Christoph Wimmer; Karin Kappel; Thomas Grechenig
Navigation in long forms commonly employs user interface design patterns such as scrolling, tabs, and wizard steps. Since these patterns hide contextual form fields outside the viewport or behind other tabs or pages, we propose to apply the focus+context principle from information visualization to form design. This work presents a design space analysis to support usability engineering of focus+context form navigation. We evaluated the design space's usefulness and applicability in a case study and found that it fostered creativity and helped to clearly document design decisions, indicating it can be a valuable support for engineering intelligent, form-based user interfaces.

Frameworks for cross-device interaction session

A framework for the development of multi-display environment applications supporting interactive real-time portals BIBAFull-Text 45-54
  Chi Tai Dang; Elisabeth André
Advances in multi-touch enabled interactive tabletops have led to many commercially available products, which are increasingly deployed at places beyond research labs, for example at exhibitions, retail stores, or showrooms. At the same time, small multi-touch devices, such as tablets or smartphones, have become prevalent in our daily life. Considering both trends, occasions and scenarios where tabletop systems and mobile devices form a coupled interaction space can be expected to become increasingly widespread.
   However, developing applications or research prototypes for such environments will foreseeably require considerable resources, given today's heterogeneity of device platforms and the functionality needed to establish a connected interaction space. To address these concerns, this paper discusses challenges and answers questions that arose during the design and implementation of the Environs framework, a multi-display environment software framework that eases development of interactive distributed applications. In particular, Environs enables applications utilizing video portals that place high requirements on responsiveness and latency.
User interface distribution in multi-device and multi-user environments with dynamically migrating engines BIBAFull-Text 55-64
  Luca Frosini; Fabio Paternò
In this paper we present a framework and associated run-time support for flexible user interface distribution in multi-device and multi-user environments. It supports distribution across dynamic sets of devices, and does not require the use of a fixed server. The distribution updates are processed taking into account device types and user roles. We also report on three example applications and a validation of the presented framework.
XDKinect: development framework for cross-device interaction using Kinect BIBAFull-Text 65-74
  Michael Nebeling; Elena Teunissen; Maria Husmann; Moira C. Norrie
Interactive systems set in multi-device environments continue to attract increasing attention, prompting researchers to experiment with emerging technologies. This paper presents XDKinect -- a lightweight framework that facilitates development of cross-device applications using Kinect to mediate user interactions. The main benefits of XDKinect include its simplicity, adaptability and extensibility based on a flexible client-server architecture. Our framework features a time-based API to handle full-body interactions, a multi-modal API to capture gesture and speech commands, an API to utilise proxemic awareness information, a cross-device communication API, and a settings API to optimise for particular application requirements. A study with developers was conducted to investigate the potential of these features in terms of ease of use, effectiveness and possible use in the future. We show several example applications of XDKinect, and discuss advantages and limitations of our framework as revealed by our user study and experiments.

Collaborative environments session

Interaction design patterns for coherent and re-usable shape specifications of human-robot collaboration BIBAFull-Text 75-83
  Tina Mioch; Wietse Ledegang; Rosie Paulissen; Mark A. Neerincx; Jurriaan van Diggelen
Sharing and re-using design knowledge is a challenge for the diverse multi-disciplinary research and development teams that work on complex and highly automated systems. For this purpose, a situated Cognitive Engineering (sCE) methodology was proposed that specifies and assesses the functional user requirements with their design rationale in a coherent and concise way. This paper presents this approach for the development of human-robot collaboration, focusing on a recently added component: the application of interaction design patterns to capture and share design knowledge on the shape of the human-robot interaction (i.e., the communication level). The sCE case study in the urban search and rescue domain provided the specification and assessment of functions and shape of a team-awareness display. Twenty fire fighters participated as operators of a ground or aerial robot in several realistic earthquake scenarios to assess the functions and shapes of this display in different settings. It showed that the functions (i.e., the task-level requirements and rationale) were valid, while the shape (communication level) was as yet sub-optimal. Based on this evaluation result, a design improvement on the communication level has been proposed without the need to adjust the task-level design solution.
Multi-models-based engineering of collaborative systems: application to collision avoidance operations for spacecraft BIBAFull-Text 85-94
  Célia Martinie; Eric Barboni; David Navarre; Philippe Palanque; Racim Fahssi; Erwann Poupart; Eliane Cubero-Castan
The work presented in this paper is based on a synergistic approach [1] integrating models of operators' tasks (described using the HAMSTERS notation) with models of the interactive system they are using (described using the ICO notation). This synergistic approach makes it possible to bring together two usually independent (but complementary) representations of the same world. Even though supported by modeling and simulation tools, previous work in this area was rather theoretical, focusing on concepts and principles in order to articulate this synergistic use of the models. The current article extends this line of research to address groupware applications. Extensions are made to the HAMSTERS notation in order to describe activities involving multiple users, dealing with information flow, the knowledge they are required to master, and the communication protocol (synchronous or asynchronous). Other extensions are made to the PetShop tool (supporting the ICO notation) in order to model and execute local and distant groupware applications. These extensions have been brought together by a more complex synergistic module combining the two views. Lastly, these extensions have been used for the modelling, design, and construction of a groupware system dedicated to collision avoidance of spacecraft with space debris. This case study is used to assess the applicability of the contributions and to identify paths for future work.

Doctoral consortium report

The EICS 2014 doctoral consortium BIBAFull-Text 95-96
  Laurence Nigay; Kris Luyten
In this short extended abstract, we present the doctoral consortium of the Engineering Interactive Computing Systems (EICS) 2014 Symposium. Our goal is to make the doctoral consortium a useful event with maximum benefit for the participants, by holding a dedicated event the day before the conference and by giving participants the opportunity to present their ongoing doctoral work to a wider audience during the conference.

Multimodal and web applications session

A domain-specific textual language for rapid prototyping of multimodal interactive systems BIBAFull-Text 97-106
  Fredy Cuenca; Jan Van den Bergh; Kris Luyten; Karin Coninx
There are currently toolkits that allow the specification of executable multimodal human-machine interaction models. Some provide domain-specific visual languages with which a broad range of interactions can be modeled, but at the expense of bulky diagrams. Others, instead, interpret concise specifications written in existing textual languages, even though their non-specialized notations prevent the productivity improvement achievable through domain-specific ones. We propose a domain-specific textual language and its supporting toolkit; together they overcome the shortcomings of the existing approaches while retaining their strengths. The language provides notations and constructs specially tailored to compactly declare the event patterns raised during the execution of multimodal commands. The toolkit detects the occurrence of these patterns and invokes the functionality of a back-end system in response.
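To make the idea of declaring and detecting event patterns concrete, here is a minimal sketch in Python. The pattern notation, event shapes, and command name are all invented for illustration; the paper's actual language is a dedicated textual DSL, not shown here.

```python
# Illustrative sketch: detecting a multimodal event pattern such as
# "a pointing gesture followed by the spoken word 'delete'".
# Event types and the subsequence semantics are assumptions for the sketch.

def matches(pattern, events):
    """True if the pattern's event types occur in order within `events`."""
    it = iter(events)   # shared iterator: each step consumes events in order
    return all(any(e["type"] == step for e in it) for step in pattern)

events = [
    {"type": "point", "target": "file3"},
    {"type": "speech", "word": "delete"},
]
if matches(["point", "speech"], events):
    pass  # a toolkit would invoke the back-end 'delete object' command here
```

A real toolkit would additionally constrain timing windows and bind event attributes (e.g. the pointed-at target) to the invoked command.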
Metadata type system: integrate presentation, data models and extraction to enable exploratory browsing interfaces BIBAFull-Text 107-116
  Yin Qu; Andruid Kerne; Nic Lupfer; Rhema Linder; Ajit Jain
Exploratory browsing involves encountering new information during open-ended tasks. Disorientation and digression are problems that arise, as the user repeatedly loses context while clicking hyperlinks. To maintain context, exploratory browsing interfaces must present multiple web pages at once.
   Design of exploratory browsing interfaces must address the limits of display and human working memory. Our approach is based on expandable metadata summaries. Prior semantic web exploration tools represent documents as metadata, but often depend on semantic web formats and datasets assembled in advance. They do not support dynamically encountered information from popular web sites. Optimizing presentation of metadata summaries for particular types of documents is important as a further means for reducing the cognitive load of rapidly browsing across many documents.
   To address these issues, we develop a metadata type system as the basis for building exploratory browsing interfaces that maintain context. The type system leverages constructs from object-oriented programming languages. We integrate data models, extraction rules, and presentation semantics in types to operationalize type specific dynamic metadata extraction and rich presentation. Using the type system, we built the Metadata In-Context Expander (MICE) interface as a proof of concept. A study, in which students engaged in exploring prior work, showed that MICE's metadata summaries help users maintain context during exploratory browsing.
Towards a behavior-oriented specification and testing language for multimodal applications BIBAFull-Text 117-122
  Marc Hesenius; Tobias Griebe; Volker Gruhn
Driven by the ubiquity of mobile devices, human-computer interaction has evolved beyond the classic PC's mouse-and-keyboard setup. Smartphones and tablets introduced new interaction modalities to the mass market and created the need for specialized software engineering methods. While more and more powerful SDKs are released to develop interactive applications, specifying user interaction is still ambiguous and error-prone, causing software defects as well as misunderstandings and frustration among project team members and stakeholders. We present an approach addressing these problems by demonstrating how to incorporate multimodal interaction into user acceptance tests written in near-natural language using Gherkin and formal gesture descriptions.
WiFi proximity detection in mobile web applications BIBAFull-Text 123-128
  Clemens Nylandsted Klokmose; Matthias Korn; Henrik Blunck
We present a technique for enabling WiFi proximity detection in mobile web applications based on proximity-adaptive HTTP responses (PAHR). The technique requires zero installation on the client and is client platform independent. Our reference implementation ProxiMagic is low-cost and provides robust and responsive interactivity based on proximity detection. We demonstrate the technique's applicability through a real-world example application deployed during a month-long participatory art exhibition. We document the reliability and suitability of the simple proximity detection employed in ProxiMagic through a controlled experiment.
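The proximity-adaptive response idea can be sketched very simply: the server selects content based on a proximity estimate for the requesting client. The RSSI threshold, page names, and the way the signal reading reaches the server are assumptions for this sketch, not ProxiMagic's API.

```python
# Illustrative sketch of a proximity-adaptive HTTP response (PAHR): the server
# picks which page to serve from a WiFi signal-strength reading for the client.
# Threshold and page names are invented, not from the paper.

def respond(rssi_dbm, near_page="exhibit.html", far_page="map.html",
            near_threshold=-60):
    """Serve the detailed page only to clients close to the access point.

    RSSI is negative and closer to zero when the client is nearer, so a
    reading at or above the threshold counts as 'in proximity'.
    """
    return near_page if rssi_dbm >= near_threshold else far_page
```

Because the adaptation happens entirely in the HTTP response, nothing needs to be installed on the client, which matches the zero-installation property the abstract emphasises.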

Model-based UIs session

Model-driven tools for medical device selection BIBAFull-Text 129-138
  Judy Bowen; Annika Hinze; Selina Reid
Safety-critical medical devices are used in hospitals and medical facilities throughout the world, and are relied upon to function correctly and be usable so as not to endanger patients. While such devices are often designed for specific use-cases in specific locations, in reality they may be used in a much wider range of contexts. In addition, the proliferation of these devices within a single environment means that selecting the most appropriate device for a specific task is not always straightforward. In this paper, we consider ways of modelling the context of use of medical devices and how such models may be used to support tools which provide medical personnel with assistance in making decisions about which devices to use in which circumstances.
Predicting task execution times by deriving enhanced cognitive models from user interface development models BIBAFull-Text 139-148
  Michael Quade; Marc Halbrügge; Klaus-Peter Engelbrecht; Sahin Albayrak; Sebastian Möller
Adaptive user interfaces (UIs) offer the opportunity to adapt to changes in the context, but this also poses the challenge of evaluating the usability of many different versions of the resulting UI. Consequently, usability evaluations tend to become very complex and time-consuming. We describe an approach that combines model-based usability evaluation with development models of adaptive UIs. In particular, we present how a cognitive user behavior model can be created automatically from UI development models, saving time and costs when predicting task execution times. With the help of two usability studies, we show that the resulting predictions can be further improved by using information encoded in the UI development models.
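As background for readers unfamiliar with model-based time prediction, here is a sketch of the simplest member of this family, a Keystroke-Level-Model-style predictor. The operator times are the classic KLM estimates; deriving the operator sequence automatically from UI development models, which is the paper's contribution, is not shown.

```python
# Illustrative sketch of KLM-style task-time prediction.  The per-operator
# times are the widely cited Keystroke-Level Model estimates; everything else
# (the string encoding of operator sequences) is an assumption for the sketch.

KLM_SECONDS = {
    "K": 0.28,  # keystroke (average user)
    "P": 1.10,  # point with mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(operators):
    """Sum per-operator time estimates for a sequence such as 'MPKK'."""
    return round(sum(KLM_SECONDS[op] for op in operators), 2)

# e.g. think, point at a field, type two characters:
predict_time("MPKK")
```

A cognitive user behavior model of the kind the paper generates plays the same role as this operator sequence, but is produced automatically from the UI development models rather than written by hand.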
Considering task pre-conditions in model-based user interface design and generation BIBAFull-Text 149-154
  Marco Manca; Fabio Paternò; Carmen Santoro; Lucio Davide Spano
Deriving meaningful and consistent user interface implementations from task models is not trivial because of the large gap in abstraction. This paper focuses on how to handle task pre-conditions in the design and generation process, an issue which has not been adequately addressed in previous work. We present a solution that is able to manage the information related to task pre-conditions at the various possible abstraction levels. The paper also reports on some example applications that show the generality of the solution and how it can be exploited in various cases.
Generating code skeletons for individual media elements in model-driven development of interactive systems BIBAFull-Text 155-160
  Andreas Pleuss
Model-driven approaches for interactive systems development usually generate User Interfaces (UIs) composed of standard widgets. In practice, however, high-quality UIs can require individual media elements such as interactive graphics or animations. While a model-driven approach can provide various benefits -- e.g., reduced complexity or multi-platform development -- individual media elements are usually designed by specialized experts using visual authoring tools. One solution to resolve this conflict is generating code skeletons which can be directly processed and filled out in visual authoring tools. This paper discusses how such skeletons need to be structured to best support, on the one hand, a model-driven process and, on the other hand, the media design in authoring tools.
A domain-specific model-based design approach for end-user developers BIBAFull-Text 161-166
  Anke Dittmar; Mathias Kühn; Peter Forbrig
The paper investigates model-based design (MBD) ideas for supporting end-user developers in creating mobile data collection tools. End-user developers cannot be assumed to be able (or willing) to specify formal task models as is common in MBD approaches. Instead, they use their knowledge about domain objects and general task characteristics to specify constraints on the execution of tasks. The paper shows that the restriction to specific task domains makes it possible to tailor the underlying meta-models and transformation rules accordingly and to provide end-users with convenient tool support. In particular, dialog models and their stepwise enrichment and refinement are considered. General implications of the suggested ideas for MBD are discussed. The proposed approach is implemented using the Eclipse Modeling Framework, and a case study demonstrates its applicability.
Consolidating diverse user profiles based on the profile models of adaptive systems BIBAFull-Text 167-172
  Effie Karuzaki; Anthony Savidis
Profile-based adaptivity is an important ingredient of interactive systems. Today, although users keep many profiles in different applications, adaptive systems still request profile information explicitly. While lingua franca methods for profiles have been suggested, unless standardized they are hardly deployed by different vendors. We present an approach to consolidate diverse user profiles based on a profile model that is supplied as input. The latter is instantiated in our Gandalf system, where user profiles from various sources are aggregated, merged and mapped to any given model, while also preserving private user attributes. No common models for profiles are assumed, nor are any shared models across adaptive systems prescribed. Our method uses a thesaurus service and proposes lightweight rules for structure matching and conflict resolution to accompany the input profile model. Gandalf is under implementation as a web service, and allows adaptive systems to hook custom pre- and post-processing logic on profiles using JavaScript.
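The aggregate-merge-map pipeline described above can be sketched as follows. The synonym table stands in for the thesaurus service, and the "last source wins" rule is one invented conflict-resolution choice; none of these names come from Gandalf.

```python
# Illustrative sketch of profile consolidation: attributes from several source
# profiles are mapped onto a target model via a synonym table, with a simple
# "last source wins" conflict rule.  All names and rules are invented.

SYNONYMS = {"given_name": "first_name", "forename": "first_name",
            "lang": "language"}

def consolidate(model_keys, *profiles):
    """Merge profiles into a dict restricted to the target model's keys."""
    merged = {}
    for profile in profiles:
        for key, value in profile.items():
            canonical = SYNONYMS.get(key, key)   # thesaurus-style matching
            if canonical in model_keys:          # drop attributes the model
                merged[canonical] = value        # does not ask for
    return merged
```

Restricting the output to the supplied model's keys is what lets the consolidation work against any given profile model without a shared standard.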

Touch and gesture-based UIs session

Formal modelling of dynamic instantiation of input devices and interaction techniques: application to multi-touch interactions BIBAFull-Text 173-178
  Arnaud Hamon; Philippe Palanque; Martin Cronel; Raphaël André; Eric Barboni; David Navarre
Representing the behavior of multi-touch interactive systems in a complete, concise and non-ambiguous way is still a challenge for formal description techniques. Indeed, multi-touch interactive systems embed specific constraints that are either cumbersome or impossible to capture with classical formal description techniques. This is due both to the idiosyncratic nature of multi-touch technology (e.g. the fact that each finger represents an input device and that gestures are performed directly on the surface without an additional instrument) and to the high dynamicity of interactions usually encountered in this kind of system. This paper presents a formal description technique able to model multi-touch interactive systems. We focus the presentation on how to represent the dynamic instantiation of input devices (i.e. fingers) and how they can then be exploited dynamically to offer a multiplicity of interaction techniques which are also dynamically instantiated.
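The modelling problem, input devices that come and go at run time, can be illustrated informally: each touch-down instantiates a per-finger tracker and each touch-up disposes of it. The event shape here is invented; the paper's formal (ICO/Petri-net-based) treatment is not reproduced.

```python
# Illustrative sketch of dynamic input-device instantiation on a touch surface.
# Each finger id gets its own state on touch-down and loses it on touch-up.
# The (kind, id, position) event tuple is an assumption for the sketch.

class TouchSurface:
    def __init__(self):
        self.fingers = {}                  # finger id -> current position

    def handle(self, event):
        kind, fid, pos = event
        if kind == "down":
            self.fingers[fid] = pos        # instantiate an 'input device'
        elif kind == "move" and fid in self.fingers:
            self.fingers[fid] = pos
        elif kind == "up":
            self.fingers.pop(fid, None)    # dispose of it
```

With two or more entries live in `fingers`, multi-finger interaction techniques become available; capturing exactly this appearing-and-disappearing structure formally is what the paper's technique addresses.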
A gestural concrete user interface in MARIA BIBAFull-Text 179-184
  Lucio Davide Spano; Fabio Paternò; Gianni Fenu
In this paper, we describe a solution for engineering and modelling user interfaces that support input collected through gesture recognition hardware. We describe how we applied this approach by extending the MARIA UIDL, and how the modelling solution can be applied to other UI toolkits. In addition, we detail the model-to-code transformation for obtaining a running application through an example case study.

Demo session

Location based experience design for mobile augmented reality BIBAFull-Text 185-188
  Anton Fedosov; Stefan Misslinger
The main strength of Augmented Reality (AR) technology is the ability to immediately provide context in unfamiliar environments and experiences, representing digital virtual information that is associated with real-world objects. AR has a proven record of commercial applications from the tourism and entertainment industries to manufacturing and support services. In this work we present a set of design decisions based on empirical observations for outdoor Augmented Reality. This work led us to develop a consumer AR browser for mobile and wearable devices to be used in natural, uncontrolled settings.
AME: an adaptive modelling environment as a collaborative modelling tool BIBAFull-Text 189-192
  Alfonso García Frey; Jean-Sébastien Sottet; Alain Vagner
The development of User Interfaces (UIs) is a complex task. Research has shown that one of the reasons is the lack of integrated views, which often forces developers to implement suboptimal solutions. These integrated views refer to (1) the artifacts that are manipulated by the stakeholders during the UI development process and (2) how these artifacts relate to each other. To overcome the lack of integrated views in the context of model-based UI development, this paper introduces AMEs, Adaptive Modelling Environments that support UI development by providing explicit representations of both the artifacts and their relations. A first prototype is depicted in a case study and illustrated with a video. Details of the architecture are provided.
Metadata enriched visualization of keywords in context BIBAFull-Text 193-196
  Daniel Fischl; Arno Scharl
This paper presents an interactive, synchronized and metadata-enriched implementation of the Word Tree metaphor, an interactive visualization technique for showing Keywords-in-Context (KWIC). Embedded into a Web intelligence platform focusing on climate change coverage, it provides users with a tool to better understand the usage of terms in large document collections. One of the novelties is the implementation of filters for the Word Tree, which shifts the focus of attention directly onto significant phrases, instead of punctuation or filler words inherent in natural language usage.
IceTT: a responsive visualization for task models BIBAFull-Text 197-200
  Lucio Davide Spano; Gianni Fenu
Task models are useful for designers and domain experts in order to describe the sequences of actions that need to be completed to reach a user's goal. Their hierarchical structure is usually visualized through a tree representation that, for large models, tends to grow horizontally, reducing readability. In this paper we introduce a visualization based on icicle graphs, which is able to adapt the task visualization to the screen width, making it suitable for displaying large models even on small screens.
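The key property of an icicle layout, that the tree always fills the available width, comes from sizing each node proportionally to its leaf count. A minimal sketch (the task-tree data structure is invented for illustration):

```python
# Illustrative sketch of icicle-layout sizing: each task's horizontal extent is
# proportional to its number of leaf subtasks, so any tree fits the screen
# width exactly.  The dict-based tree shape is an assumption for the sketch.

def leaf_count(node):
    children = node.get("children", [])
    return 1 if not children else sum(leaf_count(c) for c in children)

def icicle_widths(node, width):
    """Map each task name to its width at its level of the icicle."""
    out = {node["name"]: width}
    total = leaf_count(node)
    for child in node.get("children", []):
        out.update(icicle_widths(child, width * leaf_count(child) / total))
    return out

tree = {"name": "Book trip", "children": [
    {"name": "Choose flight", "children": [{"name": "Compare"}, {"name": "Pay"}]},
    {"name": "Book hotel"},
]}
widths = icicle_widths(tree, 300.0)
```

Unlike a node-link tree, which widens with every added leaf, this layout only subdivides the fixed width further, which is why it remains readable on small screens.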
ESSAVis++: an interactive 2Dplus3D visual environment to help engineers in understanding the safety aspects of embedded systems BIBAFull-Text 201-204
  Ragaad AlTarawneh; Jens Bauer; Shah Rukh Humayoun; Achim Ebert; Peter Liggesmeyer
In this paper, we present a demonstration of a 2Dplus3D visual interactive environment called ESSAVis++. It is an enhanced version of the ESSAVis platform and was designed to overcome the limitations of the previous version. Its goal is to facilitate collaboration between different engineers and to lead to a better understanding of the analysis of safety aspects in embedded systems. In this work, we provide an overview of the ESSAVis++ platform and focus on the new modifications and the set of improvements that we added to provide enhanced and intuitive visualization features for extracting important safety aspects of the underlying embedded system.

Late breaking results session

Implementing widgets using sifteo cubes for visual modelling on tangible user interfaces BIBAFull-Text 205-210
  Yves Rangoni; Valérie Maquil; Eric Tobias; Eric Ras
Tangible user interfaces (TUI) have shown advantages for social and contextual interactions (e.g. collaboration). In this paper, we introduce active and reconfigurable tangibles that enhance the use of a TUI. We propose to design and implement different generic widgets, using Sifteo Cubes, based on a formal widget model. As a scenario, we used a BPMN2 collaborative business modelling task, focusing on some widgets dedicated to this specific exercise. The use of Sifteo Cubes has been evaluated in this scenario by several participants in three case studies. The paper reports the results of these studies using a working prototype of the concepts presented here.
LoMAK: a framework for generating locative media apps from KML files BIBAFull-Text 211-216
  Trien V. Do; Keith Cheverst; Ian Gregory
In this paper, we present the LoMAK framework, which enables non-programmers (e.g., people working in the Digital Humanities, History, Geography, Geology and Archaeology areas) to generate locative media mobile apps from KML files, a format these non-programmers are already familiar with. The framework has two primary components: a KML processor web application and an Android mobile 'player' app called the LoMAK player. The KML processor parses KML files to: (1) extract points of interest (POIs) and their associated media, (2) produce geo-fences for the POIs, and (3) render the POIs and their geo-fences on a map. The framework also supports editing geo-fences, i.e., a new geo-fence can be drawn as a polygon. The POIs and their associated media and geo-fences are then saved as a sharc file on a server. The LoMAK player loads this sharc file to operate as a locative media application.
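As a rough sketch of the kind of processing such a KML processor performs (the sample document and function below are illustrative, not part of LoMAK), points of interest can be pulled out of a KML file with standard XML tooling:

```python
# Illustrative sketch: extracting points of interest from a KML document
# using Python's standard library. The sample placemark is hypothetical.
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

SAMPLE_KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Castle Hill</name>
      <description>Ruins of the medieval keep.</description>
      <Point><coordinates>-2.801,54.046,0</coordinates></Point>
    </Placemark>
  </Document>
</kml>"""

def extract_pois(kml_text):
    """Return (name, description, (lon, lat)) for each Placemark."""
    root = ET.fromstring(kml_text)
    pois = []
    for pm in root.iter(KML_NS + "Placemark"):
        name = pm.findtext(KML_NS + "name", default="")
        desc = pm.findtext(KML_NS + "description", default="")
        coords = pm.findtext(KML_NS + "Point/" + KML_NS + "coordinates",
                             default="")
        # KML coordinates are "lon,lat[,alt]"
        lon, lat = map(float, coords.split(",")[:2])
        pois.append((name, desc, (lon, lat)))
    return pois
```

Each extracted coordinate pair could then seed a circular or polygonal geo-fence around the POI.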
The frameSoC software architecture for multiple-view trace data analysis BIBAFull-Text 217-222
  Generoso Pagano; Vania Marangozova-Martin
Trace analysis graphical user environments must provide different views on trace data in order to effectively support comprehension of the traced application's behavior. In this article we propose an open and modular software architecture, the FrameSoC workbench, which defines clear principles for view engineering and view consistency management. The FrameSoC workbench has been successfully applied in real trace analysis use cases.
PLACID: a planner for dynamically composing user interfaces services BIBAFull-Text 223-228
  Yoann Gabillon; Gaelle Calvary; Humbert Fiorino
Dynamic Services Composition (DSC) aims at composing interactive systems from a set of available services corresponding to the available components. A component consists of a Functional Core and/or a User Interface (UI), respectively providing computation and/or representation functions. In software engineering, a part of the literature focuses on the dynamic composition of computation services. The hypothesis that UI services can also be composed opens a new research area in Human-Computer Interaction: the dynamic composition of UI services. This paper presents two main contributions: a formalization of the problem and its solution through planning.
Phone proxies: effortless content sharing between smartphones and interactive surfaces BIBAFull-Text 229-234
  Alexander Bazo; Florian Echtler
We present Phone Proxies, a technique for effortless content sharing between mobile devices and interactive surfaces. In such a scenario, users often have to perform a lengthy setup process before the actual exchange of content can take place. Phone Proxies uses a combination of custom NFC (near-field communication) tags and optical markers on the interactive surface to reduce the user interaction required for this setup process to an absolute minimum. We discuss two use cases: "pickup", in which the user wants to transfer content from the surface onto their device, and "share", in which the user transfers device content to the surface for shared viewing. We introduce three possible implementations of Phone Proxies for each of these use cases and discuss their respective advantages.
Formal verification of UI using the power of a recent tool suite BIBAFull-Text 235-240
  Raquel Oliveira; Sophie Dupuy-Chessa; Gaelle Calvary
This paper presents an approach to verifying the quality of user interfaces in the context of a critical system for nuclear power plants. The technique uses formal methods to perform the verification. The user interfaces are described by means of a formal language called LNT, and ergonomic properties are formally defined using temporal logics written in the MCL language. Our approach advances the power of formal verification of user interfaces, thanks to recent tool support for the process.

Keynote II

Mindless or mindful technology? BIBAFull-Text 241
  Yvonne Rogers
We are increasingly living in our digital bubbles. Even when physically together -- as families and friends in our living rooms, outdoors and public places -- we have our eyes glued to our own phones, tablets and laptops. The new generation of "all about me" health and fitness gadgets, that is becoming more mainstream, is making it worse. Do we really need smart shoes that tell us when we are being lazy and glasses that tell us what we can and cannot eat? Is this what we want from technology -- ever more forms of digital narcissism, virtual nagging and data addiction? In contrast, I argue for a radical rethink of our relationship with future digital technologies. One that inspires us, through shared devices, tools and data, to be more creative, playful and thoughtful of each other and our surrounding environments.

Analytic techniques session

Triangulating empirical and analytic techniques for improving number entry user interfaces BIBAFull-Text 243-252
  Abigail Cauchi; Patrick Oladimeji; Gerrit Niezen; Harry Thimbleby
Empirical methods and analytic methods have been used independently to analyse and improve number entry system designs. This paper identifies key differences that emerge when number entry errors are explored by combining laboratory studies and analytic methods, and discusses the implications of triangulating methods to analyse safety-critical designs more thoroughly. Additionally, a previously presented analytic method for analysing number entry interfaces is generalised to cover more types of number entry systems.
   This paper takes number entry to mean interactively entering a numeric value, as opposed to entering a numeric identifier such as a phone number or ISBN. Many applications of number entry are safety critical, and this paper is particularly motivated by user interfaces in healthcare, for instance for specifying drug dosage.
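The analytic flavour of this line of work can be illustrated with a small sketch (our own illustration, not the authors' method): enumerating the values produced when a single keystroke is dropped during dosage entry shows why such errors are safety critical.

```python
# Illustrative sketch: for each keystroke of an intended numeric entry,
# compute the value that results if that single key press is missed.
def dropped_key_values(keys):
    """Return the value entered for each possible single dropped key,
    or None where the remaining keystrokes are not a valid number."""
    values = []
    for i in range(len(keys)):
        mutated = keys[:i] + keys[i + 1:]
        try:
            values.append(float(mutated))
        except ValueError:
            values.append(None)
    return values

# Intended dose "2.5": dropping the decimal point yields a tenfold overdose.
print(dropped_key_values("2.5"))   # → [0.5, 25.0, 2.0]
```

A designer can sweep such single-fault mutations over a whole keypad layout to rank interfaces by their worst-case undetected error magnitude.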

Prototyping and development frameworks session

Extracting behavioral information from electronic storyboards BIBAFull-Text 253-262
  Jason B. Forsyth; Tom L. Martin
In this paper we outline methods for extracting behavioral descriptions of interactive prototypes from electronic storyboards. This information is used to help interdisciplinary design teams evaluate potential ideas early in the design process. Using electronic storyboards provides a common descriptive medium where team members from different disciplinary backgrounds can collectively express the intended behavior of their prototype. The behavioral information is extracted by a combination of visual tags applied to elements of the storyboard, analysis of storyboard layout, and natural language processing of text written in the frames. We describe this process, provide a proof of concept example, and discuss design choices in developing this tool.
Supporting design, prototyping, and evaluation of public display systems BIBAFull-Text 263-272
  Morin Ostkamp; Christian Kray
Public displays have become ubiquitous in urban areas. They can efficiently deliver information to many people and increasingly also provide means for interaction. Designing, developing, and testing such systems can be challenging, particularly if a system consists of many displays in multiple locations. Deployment is costly and contextual factors such as placement within and interaction with the environment can have a major impact on the success of such systems. In this paper we propose a new prototyping and evaluation method for public display systems (PDS) that integrates augmented panoramic imagery and a light-weight, graph-based model to simulate PDS. Our approach facilitates low-effort, rapid design of interactive PDS and their evaluation. We describe a prototypical implementation and present an initial assessment based on a comparison with existing methods, our own experiences, and an example case study.
SecSpace: prototyping usable privacy and security for mixed reality collaborative environments BIBAFull-Text 273-282
  Derek Reilly; Mohamad Salimian; Bonnie MacKay; Niels Mathiasen; W. Keith Edwards; Juliano Franz
Privacy mechanisms are important in mixed-presence (collocated and remote) collaborative systems. These systems try to achieve a sense of co-presence in order to promote fluid collaboration, yet it can be unclear how actions made in one location are manifested in the other. This ambiguity makes it difficult to share sensitive information with confidence, impacting the fluidity of the shared experience. In this paper, we focus on mixed reality approaches (blending physical and virtual spaces) for mixed presence collaboration. We present SecSpace, our software toolkit for usable privacy and security research in mixed reality collaborative environments. SecSpace permits privacy-related actions in either physical or virtual space to generate effects simultaneously in both spaces. These effects will be the same in terms of their impact on privacy but they may be functionally tailored to suit the requirements of each space. We detail the architecture of SecSpace and present three prototypes that illustrate the flexibility and capabilities of our approach.
Towards a measurement framework for tools' ceiling and threshold BIBAFull-Text 283-288
  Rui Alves; Claudio Teixeira; Monica Nascimento; Amanda Marinho; Nuno Jardim Nunes
Software development tools are not keeping up with the requirements of increasingly complex interactive software products and services. Successful tools are claimed to be either low-threshold/low-ceiling or high-threshold/high-ceiling; however, no research to date has addressed how to define and measure these concepts. This is increasingly important as these tools undergo an evaluation and adoption process by end-users. Here we hypothesize that the evaluation and adoption of tools is associated with their threshold (learnability). To assess this we conducted a learnability and usability study using three commercial Platform-as-a-Service tools. In this study we used a think-aloud protocol augmented with question asking, in which ten subjects were asked to create a simple web application. Our data show that most learnability issues fall into two categories: understanding or locating. No evidence was found that usability defects correlate with a tool's learnability score, though we did find an inverse correlation between the number of issues and the learnability score.
Presenting EveWorks, a framework for daily life event detection BIBAFull-Text 289-294
  Bruno Cardoso; Teresa Romão
In this paper we present EveWorks, a new framework for the development of context-aware mobile applications, focused on detecting events in people's daily lives. In our framework, events of interest are expressed through statements written in a simple domain-specific language that is interpreted at runtime, allowing an application's reactive behavior to change on the fly. Instead of requiring developers to program against framework-specific components, our approach lets them express events in terms of more natural constructs -- intervals of time during which some data invariants hold, articulated through the operators of James Allen's Interval Algebra.
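Allen's Interval Algebra, which the abstract refers to, defines thirteen possible relations between two time intervals. A minimal sketch of several of them (the function names and the example are ours, not the EveWorks API):

```python
# Sketch of some of Allen's interval relations. An interval is a
# (start, end) pair with start < end.
def before(a, b):    return a[1] < b[0]                      # a ends before b starts
def meets(a, b):     return a[1] == b[0]                     # a ends exactly where b starts
def overlaps(a, b):  return a[0] < b[0] < a[1] < b[1]        # a and b partially overlap
def during(a, b):    return b[0] < a[0] and a[1] < b[1]      # a lies strictly inside b
def starts(a, b):    return a[0] == b[0] and a[1] < b[1]     # same start, a ends first
def finishes(a, b):  return a[1] == b[1] and b[0] < a[0]     # same end, a starts later
def equal(a, b):     return a == b

# Hypothetical daily-life event: the phone was stationary during office hours.
stationary = (9.5, 10.0)   # hours of the day
at_office = (9.0, 17.0)
print(during(stationary, at_office))   # → True
```

An event condition such as "stationary *during* at-office" can then be evaluated continuously against incoming sensor data.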

Workshop summaries

Engineering interactive systems with SCXML BIBAFull-Text 295-296
  Dirk Schnelle-Walka; Stefan Radomski; Torbjörn Lager; Jim Barnett; Deborah Dahl; Max Mühlhäuser
The W3C is about to finalize the SCXML standard for expressing Harel state machines as XML documents. In unison with the W3C MMI architecture specification and related work from the W3C MMI working group, this recommendation might be a promising candidate to become "the HTML of multi-modal applications".
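As a minimal illustration (the dialogue states are a made-up example, not from the workshop), an SCXML document expresses a state machine as nested `<state>` elements with event-driven `<transition>`s:

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <!-- a "start" event moves the dialogue into the listening state -->
    <transition event="start" target="listening"/>
  </state>
  <state id="listening">
    <transition event="stop" target="idle"/>
  </state>
</scxml>
```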
Engineering gestures for multimodal user interfaces BIBAFull-Text 297-298
  Florian Echtler; Dietrich Kammer; Davy Vanacken; Lode Hoste; Beat Signer
Despite the increased presence of gestural and multimodal user interfaces in research as well as daily life, the development of such systems still mostly relies on programming concepts that emerged from classic WIMP user interfaces. This workshop proposes to explore the gap between attempts to formalize and structure development for multimodal interfaces in the research community on the one hand, and the lack of adoption of these formal languages and frameworks by practitioners and other researchers on the other.
HCI engineering: charting the way towards methods and tools for advanced interactive systems BIBAFull-Text 299-300
  Juergen Ziegler; Jose Creissac Campos; Laurence Nigay
This workshop intends to establish the basis of a roadmap addressing engineering challenges and emerging themes in HCI. Novel forms of interaction and new application domains involve aspects that are currently not sufficiently covered by existing methods and tools. The workshop will serve as a venue to bring together researchers and practitioners interested in the Engineering of Human-Computer Interaction and in contributing to the definition of a roadmap for the field. The intention is to continue work on the roadmap in follow-up workshops as well as in the context of the IFIP Working Group on User Interface Engineering.