
Proceedings of the 2014 ACM International Conference on Interactive Tabletops and Surfaces

Fullname: Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces
Editors: Raimund Dachselt; Nicholas Graham; Kasper Hornbæk; Miguel Nacenta
Location: Dresden, Germany
Dates: 2014-Nov-16 to 2014-Nov-19
Publisher: ACM
Standard No: ISBN: 978-1-4503-2587-5; ACM DL: Table of Contents; hcibib: ITS14
Papers: 76
Pages: 505
Links: Conference Website
  1. Opening Keynote
  2. Closing Keynote
  3. Session 1: Gestures
  4. Session 2: Hardware, Sensing and Frameworks
  5. Session 3: Surfaces for Geo-Applications
  6. Session 4: Multi-Surface
  7. Session 5: Children and Learning
  8. Session 6: Space, Activities and Workplace
  9. Session 7: Touch, Pressure and Reality
  10. Session 8: In the World
  11. Session 9: Tangibles
  12. Posters
  13. Demonstrations
  14. Doctoral Symposium
  15. Tutorials, Workshops and Studios

Opening Keynote

A New You: From Augmented Reality to Augmented Human BIBAFull-Text 1
  Jun Rekimoto
Traditionally, the field of Human-Computer Interaction (HCI) has been primarily concerned with designing and investigating interfaces between humans and machines. The primary concern of Surface Computing is likewise still designing better interfaces to information. However, with recent technological advances, the concept of "enhancing", "augmenting" or even "re-designing" humans themselves is becoming a feasible and serious topic of scientific research as well as engineering development. "Augmented Human" is the term I use to refer to this overall research direction. Augmented Human introduces a fundamental paradigm shift in HCI: from human-computer interaction to human-computer integration. In this talk, I will discuss the rich possibilities and distinct challenges in enhancing human abilities. I will introduce recent projects conducted by our group, including the design and applications of wearable eye sensing for augmenting our perception and memory, the design of flying cameras as our external eyes, a home appliance that can increase your happiness, an organic physical wall/window that dynamically mediates the environment, and an immersive form of human-human communication called "JackIn".

Closing Keynote

Sensitive Skins in Media Art and Design BIBAFull-Text 3
  Joachim Sauter
After the spread of personal computers in the mid-1980s, media art and design focused primarily on the screen as an interface for nearly two decades. In order to overcome the limitations of the screen as a small, rectangular, flat, single-user device, ART+COM developed touch-sensitive surfaces applicable to objects of almost any size and form in the early years of the new century. When encountering these new interfaces in exhibitions and museums, the audience embraced them enthusiastically. Instead of facing the usual wall projection with a classic single-user input interface, visitors could explore content together and interact with each other. The first touch-sensitive tables, Behind The Lines (2003) and floating.numbers (2004) for the Jewish Museum in Berlin, turned users into interactive participants who could jointly explore the data surface and discuss what they had discovered. The installations' implicit form of a table encouraged users to interact with each other -- after all, tables have served as places of communication and exchange throughout human history. Driven by the success of this multi-user, face-to-face experience, ART+COM developed dozens of content- and context-specific installations ranging from large, rectangular tables to amorphous touch-sensitive sculptures.
   The talk will discuss the history of installations and spaces using dynamic sensitive surfaces, including sensitive tables, sculptures, floors, costumes and architecture, spanning from ART+COM's pre- to post-sensitive-surface era.

Session 1: Gestures

Exploring Narrative Gestures on Digital Surfaces BIBAFull-Text 5-14
  Mehrnaz Mostafapour; Mark Hancock
A significant amount of research on digital tables has traditionally investigated the use of hands and fingers to control 2D and 3D artifacts, and has even investigated people's expectations when interacting with these devices. However, people often use their hands and body to communicate and express ideas to others. In this work, we explore narrative gestures on a digital table for the purpose of telling stories. We present the results of an observational study of people illustrating stories on a digital table with virtual figurines, and in both a physical sandbox and water with physical figurines. Our results show that the narrative gestures people use to tell stories with objects are highly varied and, in some cases, fundamentally different from the gestures designers and researchers have suggested for controlling digital content. In contrast to smooth, predetermined drags for movement and rotation, people use jiggling, repeated lifting, and bimanual actions to express rich, simultaneous, and independent actions by multiple characters in a story. Based on these results, we suggest that future storytelling designs consider the importance of touch actions for narration, in-place manipulations, the (possibly non-linear) path of a drag, allowing expression through manipulations, and two-handed simultaneous manipulation of multiple objects.
Web on the Wall Reloaded: Implementation, Replication and Refinement of User-Defined Interaction Sets BIBAFull-Text 15-24
  Michael Nebeling; Alexander Huber; David Ott; Moira C. Norrie
System design using novel forms of interaction is commonly argued to be best driven by user-driven elicitation studies. This paper describes the challenges faced, and the lessons learned, in replicating Morris's Web on the Wall guessability study, which used Wizard of Oz to elicit multimodal interactions around Kinect. Our replication involved three steps. First, based on Morris's study, we developed a system, Kinect Browser, that supports 10 common browser functions using popular gestures and speech commands. Second, we developed custom experiment software for recording and analysing multimodal interactions using Kinect. Third, we conducted a study based on Morris's design. However, after first using Wizard of Oz, Kinect Browser was used in a second elicitation task, allowing us to analyse and compare the differences between the two methods. Our study demonstrates the effects of using mixed-initiative elicitation, with significant differences from user-driven elicitation without system dialogue. Given the recent proliferation of guessability studies, our work extends the methodology to obtain reproducible and implementable user-defined interaction sets.
User-defined Interface Gestures: Dataset and Analysis BIBAFull-Text 25-34
  Daniela Grijincu; Miguel A. Nacenta; Per Ola Kristensson
We present a video-based gesture dataset and a methodology for annotating video-based gesture datasets. Our dataset consists of user-defined gestures generated by 18 participants from a previous investigation of gesture memorability. We design and use a crowd-sourced classification task to annotate the videos. The results are made available through a web-based visualization that allows researchers and designers to explore the dataset. Finally, we perform an additional descriptive analysis and quantitative modeling exercise that provide additional insights into the results of the original study. To facilitate the use of the presented methodology by other researchers we share the data, the source of the human intelligence tasks for crowdsourcing, a new taxonomy that integrates previous work, and the source code of the visualization tool.

Session 2: Hardware, Sensing and Frameworks

A Survey on Multi-touch Gesture Recognition and Multi-touch Frameworks BIBAFull-Text 35-44
  Mauricio Cirelli; Ricardo Nakamura
The multi-touch gesture recognition problem has drawn great attention from the human-computer interaction (HCI) community, mainly since multi-touch surfaces and other touch-capable devices reached the mainstream market. In the past decade, several multi-touch gesture recognition techniques and multi-touch frameworks were proposed. When we started our research on touch-based gesture recognition, we identified some surveys focused on computer vision or accelerometers. However, in several multi-touch surface devices, the multi-touch sensor is the only input method available. Here we present a survey of touch-based gesture recognition techniques and frameworks, and propose an extended set of requirements such techniques and frameworks should meet in order to better support multi-touch surface applications.
HuddleLamp: Spatially-Aware Mobile Displays for Ad-hoc Around-the-Table Collaboration BIBAFull-Text 45-54
  Roman Rädle; Hans-Christian Jetter; Nicolai Marquardt; Harald Reiterer; Yvonne Rogers
We present HuddleLamp, a desk lamp with an integrated RGB-D camera that precisely tracks the movements and positions of mobile displays and hands on a table. This enables a new breed of spatially-aware multi-user and multi-device applications for around-the-table collaboration without an interactive tabletop. At any time, users can add or remove displays and reconfigure them in space in an ad-hoc manner without the need to install any software or attach markers. Additionally, hands are tracked to detect interactions above and between displays, enabling fluent cross-device interactions. We contribute a novel hybrid sensing approach that uses RGB and depth data to increase tracking quality and a technical evaluation of its capabilities and limitations. For enabling installation-free ad-hoc collaboration, we also introduce a web-based architecture and JavaScript API for future HuddleLamp applications. Finally, we demonstrate the resulting design space using five examples of cross-device interaction techniques.
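The web-based architecture suggests a client pattern worth sketching. The following TypeScript fragment is a purely hypothetical illustration, not HuddleLamp's published JavaScript API: it assumes a tracking server at ws://localhost:8080 that pushes device poses as JSON, and shows how a client could keep a pose map and re-layout content as devices move.

```typescript
// Hypothetical client sketch; none of these names come from the paper.
// An assumed tracking server pushes the pose of each detected mobile
// display; the client lays out content across devices accordingly.

interface DevicePose {
  id: string;    // stable identifier assigned by the tracker
  x: number;     // table coordinates in millimetres
  y: number;
  angle: number; // orientation in degrees
}

const socket = new WebSocket("ws://localhost:8080/huddle"); // assumed endpoint
const devices = new Map<string, DevicePose>();

socket.onmessage = (event: MessageEvent) => {
  const pose = JSON.parse(event.data) as DevicePose;
  devices.set(pose.id, pose);
  layoutAcrossDevices();
};

// Distribute one large image across all tracked displays: each device
// renders the crop of the image that lies under its current position.
function layoutAcrossDevices(): void {
  for (const pose of devices.values()) {
    console.log(`device ${pose.id}: crop at (${pose.x}, ${pose.y}), rotate ${pose.angle}`);
  }
}
```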
Uminari: Freeform Interactive Loudspeakers BIBAFull-Text 55-64
  Yoshio Ishiguro; Ali Israr; Alex Rothera; Eric Brockmeyer
We present freeform interactive loudspeakers for creating spatial sound experiences from a variety of surfaces. Surround sound systems are widely used and consist of multiple electromagnetic speakers that create point sound sources within a space. Our proposed system creates directional sound and can be easily embedded into architecture, furniture and many everyday objects. We use electrostatic loudspeaker technology made from thin, flexible, lightweight and low-cost materials that can be of different sizes and shapes. In this paper we propose various configurations, such as single-speaker, speaker-array, tangible-speaker and microphone configurations, for creating playful and exciting interactions with spatial sounds. Our research on freeform speakers can create new possibilities for the design of various interactive surfaces.
Multi-push Display using 6-axis Motion Platform BIBAFull-Text 65-68
  Takashi Nagamatsu; Masahiro Nakane; Haruka Tashiro; Teruhiko Akazawa
This study designed and developed a novel tactile display that provides a multi-push sensation by using a 6-axis motion platform. The display mechanically controls the position and orientation of a surface panel. A user touching an on-screen button with one finger can push the button vertically. When the user subsequently touches the display with a second finger, the surface panel tilts while the position of the first finger remains unchanged. Thus, the user can push at two positions on the surface panel. We also developed a prototype system with six servomotors and a touch panel based on the principle of frustrated total internal reflection.
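The tilt-to-decouple idea is simple geometry. A minimal sketch under assumed conventions (millimetre units, panel tilting about the axis perpendicular to the finger-to-finger line; our illustration, not the authors' control code):

```typescript
// Given two finger contacts and the depth each should sink, compute a pose
// (heave + tilt) such that the panel plane passes through both depressed
// contact points. Units are millimetres; the fingers are assumed apart.

interface Touch { x: number; y: number; depth: number; }

function panelPose(a: Touch, b: Touch) {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const dist = Math.hypot(dx, dy);
  // Slope of the panel along the finger-to-finger line.
  const tiltRad = Math.atan2(a.depth - b.depth, dist);
  // Vertical offset of the midpoint between the two contacts.
  const heave = -(a.depth + b.depth) / 2;
  // Tilt axis: unit vector perpendicular to the finger-to-finger line.
  const axis = { x: -dy / dist, y: dx / dist };
  return { heave, tiltRad, axis };
}

// First finger pushed 3 mm down, second finger resting on the surface:
console.log(panelPose({ x: 0, y: 0, depth: 3 }, { x: 120, y: 0, depth: 0 }));
```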

Session 3: Surfaces for Geo-Applications

Multi Surface Interactions with Geospatial Data: A Systematic Review BIBAFull-Text 69-78
  Zahra Shakeri Hossein Abad; Craig Anslow; Frank Maurer
Even though Multi-Surface Environments (MSEs) and how to perform interactions in these environments have received much attention in recent years, interaction with geospatial data in these environments is still limited, and there are many design and interaction issues that need to be addressed. Alongside the rapid rise in the use of Geographic Information Systems (GIS) in group-based decision making, interaction with geospatial data has become highly important. In order to summarize the earlier research in this area, this paper presents a systematic review of MSE interactions with geospatial data, analyzing the existing studies on MSE interaction techniques, discussing issues related to interaction with geospatial data in MSEs, and providing a comparison between common GIS tasks and existing interaction techniques in MSEs. Our results indicate that a substantial number of GIS tasks have not been investigated in MSEs.
The Effect of View Techniques on Collaboration and Awareness in Tabletop Map-Based Tasks BIBAFull-Text 79-88
  Christophe Bortolaso; Matthew Oskamp; Greg Phillips; Carl Gutwin; T. C. Nicholas Graham
Digital tabletops have become a natural medium for collaborative planning activities involving maps. Such activities are typically mixed-focus, where users switch between high-level and detailed views of the map and between individual and collaborative work. A wide range of view-sharing techniques such as lenses, zooming and radar views provide both shared and individual access to the same workspace. However, it is not yet sufficiently clear how the choice of view techniques affects collaboration in mixed-focus scenarios. In this paper, we explore the effect of different view techniques on collaborative map-based tasks around tables. We report on two studies in the context of military planning, one in a controlled environment and one in an open-ended scenario carried out by domain experts. Our findings show how the success of different techniques is sensitive to the form of collaboration and to the proximity of work on the table.
Spatial Querying of Geographical Data with Pen-Input Scopes BIBAFull-Text 89-98
  Fabrice Matulic; David Caspar; Moira C. Norrie
Querying geographical data in map applications running on touch devices is mainly performed by typing queries using virtual keyboards. Some of those devices are additionally equipped with styli to facilitate freehand sketching and annotating. As shown by prior work, such hand-drawn sketches can also be used for intuitive and effective spatial querying of geographical data. Building on that groundwork, we present a set of pen-based techniques to selectively convert map annotations into spatial queries with implicitly or explicitly specified scopes. We show how those techniques can be used for trip-planning tasks involving route-finding and searching for points of interest. In a controlled user study comparing the usability and efficiency of the techniques for different querying patterns, we establish participants' general preference for explicit input scopes and obtain indications that, provided handwriting is correctly recognised, input times are comparable to those of a standard (soft) keyboard-based interface. Based on those results and participant feedback, we propose a number of enhancements and extensions to inform the design of future pen-based map applications.

Session 4: Multi-Surface

Surface Ghosts: Promoting Awareness of Transferred Objects during Pick-and-Drop Transfer in Multi-Surface Environments BIBAFull-Text 99-108
  Stacey D. Scott; Guillaume Besacier; Julie Tournet; Nippun Goyal; Michael Haller
Rekimoto's Pick-and-Drop (P&D) transfer technique is commonly used to support multi-surface object transfer (e.g., between a shared tabletop and a tablet) due to its easily understood metaphor of emulating object movement in the physical world. Current multi-surface implementations of P&D provide little to no feedback during transfer, causing confusion for the person performing the action as well as for others in the environment. To address this issue, we investigated the use of virtual embodiments to improve awareness of transferred objects, in the context of a real-world group task that relied heavily on cross-device transfer. An iterative design process led to the design of Surface Ghosts virtual embodiments, which take the form of semi-transparent 'ghosts' of the transferred objects displayed under the "owner's" hand on the tabletop during transfer. A user study that compared two Surface Ghosts designs, varied by how explicitly the "owner" was indicated, showed that both designs improved awareness of transferred objects when compared to a no-feedback control condition, especially for tabletop-to-tablet transfers.
PolyChrome: A Cross-Device Framework for Collaborative Web Visualization BIBAFull-Text 109-118
  Sriram Karthik Badam; Niklas Elmqvist
We present PolyChrome, an application framework for creating web-based collaborative visualizations that can span multiple devices. The framework supports (1) co-browsing new web applications as well as legacy websites with no migration costs (i.e., a distributed web browser); (2) an API to develop new web applications that can synchronize the UI state on multiple devices to support synchronous and asynchronous collaboration; and (3) maintenance of state and input events on a server to handle common issues with distributed applications such as consistency management, conflict resolution, and undo operations. We describe PolyChrome's general design, architecture, and implementation followed by application examples showcasing collaborative web visualizations created using the framework. Finally, we present performance results that suggest that PolyChrome adds minimal overhead compared to single-device applications.
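The third contribution listed (server-side maintenance of state and input events) can be made concrete with a small relay sketch. This is a hypothetical illustration, not PolyChrome's actual API; it assumes the Node.js 'ws' package and invents the port and message handling.

```typescript
// Minimal relay in the spirit of server-side state maintenance: the server
// keeps an ordered event log so late-joining devices can replay state, and
// broadcasts every UI event to all other connected devices.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 9000 }); // assumed port
const eventLog: string[] = []; // authoritative history enabling replay/undo

wss.on("connection", (ws: WebSocket) => {
  // Bring a newly joined device up to date by replaying the log.
  for (const past of eventLog) ws.send(past);

  ws.on("message", (data) => {
    const msg = data.toString();
    eventLog.push(msg); // server-side history supports undo and conflict checks
    for (const client of wss.clients) {
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(msg); // broadcast the UI event to the other devices
      }
    }
  });
});
```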
ActivitySpace: Managing Device Ecologies in an Activity-Centric Configuration Space BIBAFull-Text 119-128
  Steven Houben; Paolo Tell; Jakob E. Bardram
Mobile devices have become an intrinsic part of people's everyday life. They are multifunctional devices providing ubiquitous access to many different sources of information. Together with traditional personal computers, these devices form a device ecology that provides access to an overlapping information space. Previous studies have shown that users encounter a number of fundamental problems when interacting with these device ecologies, such as lack of transparency, control, intelligibility and context. To mitigate these problems, we introduce ActivitySpace: an activity-centric configuration space that enables the user to integrate and work across several devices by utilizing the space between the devices. This paper presents the conceptual background and design of ActivitySpace and reports on a study with nine participants. Our study shows that ActivitySpace helps users to easily manage devices and their allocated resources while also exposing a number of usage patterns.
SleeD: Using a Sleeve Display to Interact with Touch-sensitive Display Walls BIBAFull-Text 129-138
  Ulrich von Zadow; Wolfgang Büschel; Ricardo Langner; Raimund Dachselt
We present SleeD, a touch-sensitive Sleeve Display that facilitates interaction with multi-touch display walls. Large vertical displays allow multiple users to interact effectively with complex data but are inherently public. Also, they generally cannot present an interface adapted to the individual user. The combination with an arm-mounted, interactive display allows complex personalized interactions. In contrast to hand-held devices, both hands remain free for interacting with the wall. We discuss different levels of coupling between wearable and wall and propose novel user interface techniques that support user-specific interfaces, data transfer, and arbitrary personal views. In an iterative development process, we built a mock-up using a bendable e-Ink display and a fully functional prototype based on an arm-mounted smartphone. In addition, we developed several applications that showcase the techniques presented. An observational study we conducted demonstrates the high potential of our concepts.

Session 5: Children and Learning

Structure Editing of Handwritten Mathematics: Improving the Computer Support for the Calculational Method BIBAFull-Text 139-148
  Alexandra Mendes; Roland Backhouse; Joao F. Ferreira
We present a structure editor that aims to facilitate the presentation and manipulation of handwritten mathematical expressions. The editor is oriented to the calculational mathematics involved in algorithmic problem solving, and it provides features that allow reliable structural manipulation of mathematical formulae, as well as flexible and interactive presentations. We describe some of its most important features, including the use of gestures to manipulate algebraic formulae, the structured selection of expressions, the definition and redefinition of operators at runtime, a gesture editor, and handwritten templates. The editor is made available in the form of a C# class library which can easily be used to extend existing tools. For example, we have extended Classroom Presenter, a tool for ink-based teaching presentations and classroom interaction. We have tested and evaluated the editor with target users. The results obtained seem to indicate that the software is usable, suitable for its purpose, and a valuable contribution to teaching and learning algorithmic problem solving.
P.I.A.N.O.: Faster Piano Learning with Interactive Projection BIBAFull-Text 149-158
  Katja Rogers; Amrei Röhlig; Matthias Weing; Jan Gugenheimer; Bastian Könings; Melina Klepsch; Florian Schaub; Enrico Rukzio; Tina Seufert; Michael Weber
Learning to play the piano is a prolonged challenge for novices. It requires them to learn sheet music notation and its mapping to the respective piano keys, together with articulation details. Smooth playing further requires correct finger postures. The result is slow learning progress, often causing frustration and strain. To overcome these issues, we propose P.I.A.N.O., a piano learning system with interactive projection that facilitates a fast learning process. Note information in the form of an enhanced piano roll notation is projected directly onto the instrument and allows notes to be mapped to piano keys without prior sight-reading skills. Three learning modes support the natural learning process with live feedback and performance evaluation. We report the results of two user studies, which show that P.I.A.N.O. supports faster learning, requires significantly less cognitive load, provides a better user experience, and increases perceived musical quality compared to sheet music notation and non-projected piano roll notation.
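The projected piano-roll implies a note-to-key coordinate function. A simplified sketch of such a mapping (our illustration with assumed, idealized key geometry, not the P.I.A.N.O. implementation):

```typescript
// Convert a MIDI note number into the horizontal position of the matching
// key on an 88-key piano, so a projected piano-roll bar falls onto that key.

const WHITE_KEY_WIDTH = 23.5; // mm; a common 165 mm octave span / 7 keys
// White-key index within an octave for each of the 12 semitones (C = 0);
// black keys reuse the index of the white key to their left.
const WHITE_INDEX = [0, 0, 1, 1, 2, 3, 3, 4, 4, 5, 5, 6];
const IS_BLACK = [false, true, false, true, false, false, true, false, true, false, true, false];

function keyX(midi: number): { x: number; black: boolean } {
  const semitone = midi % 12; // C = 0 ... B = 11
  // Count white keys from C(-1), then shift so A0 (MIDI 21) lands at x = 0.
  const whiteIdx = 7 * Math.floor(midi / 12) + WHITE_INDEX[semitone] - 12;
  const black = IS_BLACK[semitone];
  // White keys are uniform; a black key sits near the boundary to its right.
  const x = black ? (whiteIdx + 0.7) * WHITE_KEY_WIDTH : whiteIdx * WHITE_KEY_WIDTH;
  return { x, black };
}

console.log(keyX(60)); // middle C -> { x: 540.5, black: false }
```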
Exploring Visual Cues for Intuitive Communicability of Touch Gestures to Pre-kindergarten Children BIBAFull-Text 159-162
  Vicente Nacher; Javier Jaen; Alejandro Catala
Pre-kindergarten children are becoming frequent users of multi-touch technology and, according to previous studies, they are able to perform several multi-touch gestures successfully. However, they are not supervised at all times when using these devices. Consequently, interactive applications for pre-kindergarteners need to convey their underlying design intent and interaction principles with respect to touch interaction. In this paper, we present and evaluate two approaches to communicating three different touch gestures (tap, drag and scale up) to pre-kindergarten users. Our results show, firstly, that it is possible to communicate them effectively using visual cues and, secondly, that an animated semiotic approach is better than an iconic one.
Improving Pre-Kindergarten Touch Performance BIBAFull-Text 163-166
  Vicente Nacher; Javier Jaen; Alejandro Catala; Elena Navarro; Pascual Gonzalez
Multi-touch technology provides users with a more intuitive way of interacting. However, pre-kindergarten children, a growing group of potential users, have problems with some basic gestures according to previous studies. This is particularly the case for the double-tap and long-press gestures, which have issues related to spurious entry events and time-constrained interactions, respectively. In this paper, we empirically test specific strategies to deal with these issues by evaluating off-the-shelf implementations of these gestures against alternative implementations that follow our guidelines. The study shows that the implementation of these design guidelines has a positive effect on the success rates of these two gestures, making their inclusion in future multi-touch applications targeted at pre-kindergarten children feasible.

Session 6: Space, Activities and Workplace

DT-DT: Top-down Human Activity Analysis for Interactive Surface Applications BIBAFull-Text 167-176
  Gang Hu; Derek Reilly; Mohammed Alnusayri; Ben Swinden; Qigang Gao
As environmental and multi-display configurations become more common, HCI research is becoming increasingly concerned with actions around these displays. Tracking human activity is challenging, and there is currently no single solution that reliably handles all scenarios without excessive instrumentation. In this paper we present a novel tracking and analysis approach using a top-down 3D camera. Our hierarchical tracking approach models local and global affinities, scene constraints and motion patterns to find and track people in space, and a novel salience occupancy pattern (SOP) is used for action recognition. We present experiences applying our approach to build a proxemics-aware tabletop display prototype, and to create an exhibit combining a large vertical display with an interactive floor-projection.
NetBoards: Investigating a Collection of Personal Noticeboard Displays in the Workplace BIBAFull-Text 177-183
  Erroll Wood; Peter Robinson
NetBoards are situated displays designed to fulfil and augment the role of non-digital personal noticeboards in the workplace. Traditionally, these are small corkboards or whiteboards situated outside offices belonging to individuals or small groups of people. By replacing these with large, networked, high-fidelity touch-enabled displays, we attempt to replicate the existing physical systems' flexibility and ease of use, while enabling more expressive content creation techniques and remote connectivity. We developed an understanding of the deployment environment and everyday noticeboard practices through an ethnographic study, which guided the system design. Users can write messages or sketch drawings on their NetBoard, as well as post images and other web-based media. NetBoards can be accessed over the internet, allowing remote viewing and modification. Initial observations of 9 deployed units demonstrate the system's flexibility, showing it being used for maintaining group awareness, workplace personalization, playful communication, and showcasing research.
Supporting Situation Awareness in Collaborative Tabletop Systems with Automation BIBAFull-Text 185-194
  Y.-L. Betty Chang; Stacey D. Scott; Mark Hancock
Human operators collaborating to complete complex tasks, such as a team of emergency response operators, need to maintain a high level of situation awareness to respond appropriately and quickly to critical changes. Even though automation can help manage complex tasks and rapidly update information, it may create confusion that negatively impacts operators' situation awareness and results in sub-optimal decisions. To improve situation awareness in co-located environments on digital tabletop computers, we developed an interactive event timeline that enables exploration of historical system events, using a collaborative digital board game as a case study. We conducted a user study to examine two factors, the placement of timelines for multiple users and the location of awareness feedback, to understand their impact on situation awareness. The study revealed that interaction with the timeline was correlated with improved situation awareness, and that displaying feedback on both the game board and the timeline was the most preferred.

Session 7: Touch, Pressure and Reality

An Empirical Characterization of Touch-Gesture Input-Force on Mobile Devices BIBAFull-Text 195-204
  Faisal Taher; Jason Alexander; John Hardy; Eduardo Velloso
Designers of force-sensitive user interfaces lack a ground-truth characterization of input force while performing common touch gestures (zooming, panning, tapping, and rotating). This paper provides such a characterization firstly by deriving baseline force profiles in a tightly-controlled user study; then by examining how these profiles vary in different conditions such as form factor (mobile phone and tablet), interaction position (walking and sitting) and urgency (timed tasks and untimed tasks). We conducted two user studies with 14 and 24 participants respectively and report: (1) force profile graphs that depict the force variations of common touch gestures, (2) the effect of the different conditions on force exerted and gesture completion time, (3) the most common forces that users apply, and the time taken to complete the gestures. This characterization is intended to aid the design of interactive devices that integrate force-input with common touch gestures in different conditions.
Characterising the Physicality of Everyday Buttons BIBAFull-Text 205-208
  Jason Alexander; John Hardy; Stephen Wattam
A significant milestone in the development of physically-dynamic surfaces is the ability for buttons to protrude outwards from any location on a touch-screen. As a first step toward developing interaction requirements for this technology we conducted a survey of 1515 electronic push buttons in everyday home environments. We report a characterisation that describes the features of the data set and discusses important button properties that we expect will inform the design of future physically-dynamic devices and surfaces.
Towards Habitable Bits: Digitizing the Built Environment BIBAFull-Text 209-218
  Yuichiro Takeuchi
Recently, there has been a growing number of research efforts aimed at the digitization of architectural space. Whereas conventional attempts at integrating digital technology into architectural space have typically viewed architecture as an inflexible backdrop onto which layers of digital devices/services/information can be overlaid, the newly emerging efforts instead strive to inject architecture itself with the distinct plasticity of digital bits. In this paper, we will first provide a review of this nascent body of work, weaving together the disparate streaks of technical development into a consilient research trajectory that can be interpreted as a modern-day extension to Weiser's original vision of Ubiquitous Computing. Next we turn to the two decades' worth of criticisms raised against the UbiComp ideal, to expose what perspectives are missing from the fledgling efforts, and to identify the roles the HCI community can play in shaping the future of this promising line of work.

Session 8: In the World

ePlan Multi-Surface: A Multi-Surface Environment for Emergency Response Planning Exercises BIBAFull-Text 219-228
  Apoorve Chokshi; Teddy Seyed; Francisco Marinho Rodrigues; Frank Maurer
Emergency response planning is a process that involves many different stakeholders who may communicate concurrently through different channels and exchange different information artefacts. The planning typically occurs in an emergency operations centre (EOC) and involves personnel both in the room and in the field. The EOC provides an interesting context for examining the use of tablets, tabletops and high-resolution wall displays, and their role in facilitating information and communication exchange in an emergency response planning scenario. In collaboration with a military and emergency response simulation software company in Calgary, Alberta, Canada, we developed ePlan Multi-Surface, a multi-surface environment for communication and collaboration in emergency response planning exercises. In this paper, we describe the domain, how it informed our prototype, and insights on collaboration, interactions and information dissemination in multi-surface environments for EOCs.
Designing a Remote Video Collaboration System for Industrial Settings BIBAFull-Text 229-238
  Veronika Domova; Elina Vartiainen; Marcus Englund
In industrial settings, it is essential to keep production up and running at all times. In case of a new machine installation or a process failure, technical support from the equipment manufacturer often needs to be contacted. In these cases, local workers and remote experts need to collaborate to solve the problem at hand. Ideally, in order to cut costs and speed up the process, the situation should be handled remotely, without bringing the expert to the site. This paper describes the design and implementation of a remote video collaboration system that enables effective communication between a field worker and a remote expert. The system includes a smartphone/tablet application used by the field worker to capture and stream video to a desktop application of the remote expert. Furthermore, the system enables instantly synchronized snapshots and annotations between both parties. A field study with target users, testing the design choices, potential and limitations, showed that the system holds good potential, provided that some key issues are addressed.
The Usability of a Tabletop Application for Neuro-Rehabilitation from Therapists' Point of View BIBAFull-Text 239-248
  Mirjam Augstein; Thomas Neumayr; Irene Schacherl-Hofer
The success of virtual environments in neuro-rehabilitation crucially relies on the acceptance of their users. Thus, the development of virtual rehabilitation environments usually follows a user-centered design process. Most approaches concentrate on the patient as the end user; however, the target group may be twofold. The fun.tast.tisch. project aims at the development of a tabletop-based software system for therapists and patients as a supporting means in conventional neuro-rehabilitation. Both target groups are involved in the development process; different studies are conducted to ensure therapeutic adequateness and the system's usability. The latter is especially important for therapists, as they are responsible for navigating through the modules and configuring them individually for patients. This paper describes two fun.tast.tisch. modules that were evaluated regarding their usability for therapists in a longer-term study in summer 2013 in a rehabilitation hospital, and presents the study's results.
HyPR Device: Mobile Support for Hybrid Patient Records BIBAFull-Text 249-258
  Steven Houben; Mads Frost; Jakob E. Bardram
The patient record is one of the central artifacts in medical work, used to organize, communicate and coordinate important information related to patient care. In many hospitals a double record consisting of an electronic and a paper part is maintained. This practice introduces a number of configuration problems related to finding, using and aligning the paper and electronic patient records. In this paper, we describe our exploration of the Hybrid Patient Record (HyPR) concept. Based on design requirements derived from a field study, followed by a design study using a technology probe, we introduce the HyPR Device, a device that merges the paper and electronic patient record into one system. We provide results from a clinical simulation with eight clinicians and discuss the functional, design and infrastructural requirements of such hybrid patient records. Our study suggests that the HyPR Device decreases configuration work, supports mobility in clinical work and increases awareness of patient data.

Session 9: Tangibles

ACTO: A Modular Actuated Tangible User Interface Object BIBAFull-Text 259-268
  Emanuel Vonach; Georg Gerstweiler; Hannes Kaufmann
We introduce a customizable, reusable actuated tangible user interface object: ACTO. Its modular design allows quick adaptations for different scenarios and setups on tabletops, making otherwise integral parts like the actuation mechanism or the physical configuration interchangeable. Drawing on the resources of well-established maker communities makes prototyping especially quick and easy. This allows the exploration of new concepts without the need to redesign the whole system, which qualifies it as an ideal research and education platform for tangible user interfaces. We present a detailed description of the hardware and software architecture of our system. Several implemented example configurations and application scenarios demonstrate the capabilities of the platform.
BullsEye: High-Precision Fiducial Tracking for Table-based Tangible Interaction BIBAFull-Text 269-278
  Clemens Nylandsted Klokmose; Janus Bager Kristensen; Rolf Bagge; Kim Halskov
This paper proposes a series of techniques for improving the precision of optical fiducial tracking on tangible tabletops. The motivation is to enable convincing interactive projection mapping on tangibles on the table, which requires a high precision tracking of the location of tangibles. We propose a new fiducial design optimized for GPU based tracking, a technique for calibrating light that allows for computation on a greyscale image rather than a binarized black and white image, an automated technique for compensating for optical distortions in the camera lenses, and a tracking algorithm implemented primarily in shaders on the GPU. The techniques are realized in the BullsEye computer vision software. We demonstrate experimentally that BullsEye provides sub-pixel accuracy down to a tenth of a pixel, which is a significant improvement compared to the commonly used reacTIVision software.
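The greyscale-versus-binary point can be illustrated with the classic intensity-weighted centroid, which recovers fractional blob positions where a binarized centroid is locked to the pixel grid (our illustration only; BullsEye's actual pipeline runs in GPU shaders over camera frames):

```typescript
// Intensity-weighted centroid over a greyscale patch: brighter pixels pull
// the estimate, yielding a fractional (sub-pixel) position.

function weightedCentroid(
  img: Float32Array, // greyscale intensities in [0, 1], row-major
  width: number,
  height: number
): { cx: number; cy: number } {
  let sum = 0, sx = 0, sy = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const w = img[y * width + x];
      sum += w;
      sx += w * x;
      sy += w * y;
    }
  }
  if (sum === 0) throw new Error("empty patch");
  return { cx: sx / sum, cy: sy / sum };
}

// A 1x3 patch whose brightness peaks off-centre: centroid lands at x ~ 1.17,
// between the pixel centres a binary centroid would be forced to choose from.
console.log(weightedCentroid(new Float32Array([0.2, 0.6, 0.4]), 3, 1));
```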
An Interaction Model for Grasp-Aware Tangibles on Interactive Surfaces BIBAFull-Text 279-282
  Simon Voelker; Christian Corsten; Nur Al-huda Hamdan; Kjell Ivar Øvergård; Jan Borchers
Tangibles on interactive surfaces enable users to physically manipulate digital content by placing, manipulating, or removing a tangible object. However, information about whether and how a user grasps these objects has so far not been mapped out for tangibles on interactive surfaces. Based on Buxton's Three-State Model for graphical input, we present an interaction model that describes input on tangibles that are aware of the user's grasp. We present two examples showing how the user benefits from this extended interaction model. Furthermore, we show how interaction with other existing tangibles for interactive tabletops can be modeled.
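To make "grasp-aware states" concrete, here is one possible reading as a small state machine, extending Buxton's out-of-range/tracking/dragging distinction with a grasp dimension. State and event names are ours, not the paper's:

```typescript
// One possible formalization of grasp-aware tangible input (illustrative).
type State = "OffSurface" | "OnSurfaceReleased" | "OnSurfaceGrasped";
type Event = "place" | "lift" | "grasp" | "release";

function next(state: State, event: Event): State {
  switch (state) {
    case "OffSurface":
      // A tangible arrives on the surface in the user's hand.
      return event === "place" ? "OnSurfaceGrasped" : state;
    case "OnSurfaceGrasped":
      if (event === "release") return "OnSurfaceReleased"; // hand lets go
      if (event === "lift") return "OffSurface";           // picked up again
      return state;
    case "OnSurfaceReleased":
      return event === "grasp" ? "OnSurfaceGrasped" : state;
  }
}

// place -> release -> grasp -> lift walks a tangible through all states.
```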

Posters

3D Tabletop User Interface Using Virtual Elastic Objects BIBAFull-Text 283-288
  Hiroaki Tateyama; Takumi Kusano; Takashi Komuro
In this paper, we propose a method to reduce the inconsistency between virtual and real spaces when manipulating a 3D virtual object with the fingers. When a user tries to hold a virtual object, the fingers do not stop on the surface of the object but thrust into it, since virtual objects cannot exert reaction forces. We therefore try to prevent fingers from thrusting into a virtual object by letting the object deform or glide through the fingers. A virtual object is deformed by using a spring-based model and solving the equation of equilibrium. Whether the object glides through the fingers or not is determined by comparing the resultant force applied to the object with the resultant static friction force when the fingers touch the object. Based on these methods, we constructed a 3D tabletop interface that enables interaction with virtual objects with a greater sense of reality.
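The two mechanisms named in the abstract (spring-based deformation and a static-friction gliding test) reduce to a few lines in their simplest form. A toy sketch with assumed constants, not the authors' solver:

```typescript
// Toy 1D version: a linear spring deforms the object under finger
// penetration, and a static-friction test decides whether the object
// deforms further or instead glides through/along the fingers.

const K_SPRING = 0.8;  // N per mm of penetration (assumed constant)
const MU_STATIC = 0.5; // assumed static friction coefficient

// Spring reaction for one finger penetrating `depthMm` into the surface.
function springForce(depthMm: number): number {
  return Math.max(0, K_SPRING * depthMm);
}

// The object starts to glide when the resultant applied force exceeds the
// maximum static friction the grip can supply.
function glides(appliedForces: number[], normalForce: number): boolean {
  const resultant = appliedForces.reduce((a, b) => a + b, 0);
  return Math.abs(resultant) > MU_STATIC * normalForce;
}

console.log(springForce(4));           // reaction of a 4 mm squeeze
console.log(glides([2.0, -1.2], 1.0)); // true: the grip cannot hold it
```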
Cyber Chamber: Multi-user Collaborative Assistance System for Online Shopping BIBAFull-Text 289-294
  Masafumi Muta; Kenji Mukai; Ryoutarou Toumoto; Motoi Okuzono; Junichi Hoshino; Hiromi Hirano; Soh Masuko
Online shopping continues to thrive in modern societies, especially via tablets and smartphones; however, current online shopping environments offer insufficient support for shopping with several people, which is often what shoppers do together in the non-virtual world. In this paper, we propose a novel system that provides a new online shopping experience for multiple users. We developed Cyber Chamber, a system combining tablets and wall projection that enables each user to search for items privately on a tablet and then share them publicly on the wall. To evaluate the effectiveness of our system, we performed experiments. Our results confirmed that the proposed system successfully supports ease of discussion, information sharing, and the ease with which multiple users can jointly consider combinations of goods.
Corona: Haptic Sensation Using Body-Carried Electrostatic Charge for Body Area Network Feedback Companion BIBAFull-Text 295-298
  Adiyan Mujibiya
Improvements in the technical feasibility of Body Area Networks (BANs) have encouraged research proposing applications that explore data-transferring touch interaction. In this work, we aim to provide tactile feedback for these applications to improve user experience. We propose Corona, a wearable tactile feedback device that uses electrostatic force to provide physical stimuli when a user touches objects that are electrically grounded or of opposite polarity. Unlike previous approaches, we neither physically actuate objects nor use any hand-worn tactile feedback devices. We introduce a mechanism to artificially build up electrostatic charge within the human body, and further leverage the effects of electrostatic discharge to create novel physical stimuli. In this paper, we report the underlying theory of operation as well as details of our prototype implementation. We illustrate our results with two applications: a) SandStorm, a visualization of the actuated force field using sand, and b) CheckMate, a future vision of a touch-based transaction checkout system equipped with Corona to provide haptic sensation for more reassuring feedback.
Evaluation of Visuo-haptic Feedback in a 3D Touch Panel Interface BIBAFull-Text 299-304
  Xu Zhao; Takehiro Niikura; Takashi Komuro
In this paper we evaluate the relation between visual and haptic feedback in a 3D touch panel interface and show the optimal latency for natural interaction. We developed a system that consists of an autostereoscopic display and a high-speed stereo camera. With this system, virtual objects are presented stereoscopically, and the objects respond to finger movement captured by the stereo camera. We conducted an experiment to evaluate visual and haptic synchronization; the result showed that visual and haptic feedback was most synchronized at latencies around 150 ms, while finger and button movements were more synchronized at smaller latencies. We also conducted a comparison experiment to explore which synchronization is more important; the result showed that the visual synchronization of finger and button movements is more important than visual-haptic synchronization.
UbiBeam: An Interactive Projector-Camera System for Domestic Deployment BIBAFull-Text 305-310
  Jan Gugenheimer; Pascal Knierim; Julian Seifert; Enrico Rukzio
Previous research on projector-camera systems has long focused on interaction inside lab environments. Currently there is little insight into how people would interact with and use such a device in their everyday lives. We conducted an in-situ user study, visiting 22 households and exploring specific use cases and ideas for portable projector-camera systems in domestic environments. Using a grounded theory approach, we identified several categories such as interaction techniques, presentation space, placement and use cases. Based on our observations, we designed and implemented UbiBeam, a domestically deployable projector-camera system. The system comprises a projector, a depth camera and two servomotors to transform any ordinary surface into a touch-sensitive information display.
Task Assignment and Visualization on Tabletops and Smartphones BIBAFull-Text 311-316
  Benedikt Haas; Florian Echtler
This paper introduces a system for assigning and visualizing project tasks on tabletop computers and smartphones. In general, task assignment is a highly collaborative process and requires intensive discussions among project participants. Therefore, face-to-face communication is recommended for this process because it promises better results than network-based communication: it is faster in reaching a consensus, richer in terms of quality of communication, and tends to be more satisfying for the group (compared to computer-mediated communication). The use of a tabletop system promises to deliver the benefits of both worlds: the productivity and availability of computer-mediated systems and the social benefits of face-to-face communication. To this end, we introduce MultiTask, a digital tool for planning projects and assigning related tasks. MultiTask consists of two components: a tabletop application for table discussions and an Android companion application with limited interaction possibilities that provides better availability while on the move. An explorative evaluation involving qualitative methodologies was conducted in order to improve the system and identify participants' preferences regarding this approach.
X-O Arch Menu: Combining Precise Positioning with Efficient Menu Selection on Touch Devices BIBAFull-Text 317-322
  Felix Thalmann; Ulrich von Zadow; Marcel Heckel; Raimund Dachselt
Support for precise positioning is crucial for many touch applications, and an efficient way to select the action at that point is very desirable in many cases as well. We draw upon existing work in the areas of touch accuracy and touch menus to contribute the X-O Arch Menu. Our menu seamlessly combines precise positioning and fast, hierarchical menu selection. Furthermore, we introduce a novel optimization to pie menus that allows usage in limited screen space. The menu is fully implemented; we have created a touch-enabled version of a commercially available application using it.
Second Look: Combining Interactive Surfaces with Wearable Computing to support Creative Writing BIBAFull-Text 323-326
  Pedro Campos; Frederica Gonçalves; Michael Martins; Miguel Campos; Paulo Freitas
We present "Second Look", a platform of interactive surfaces and wearable computing for helping people, in particular creative writers, to overcome writer's block. The novelty of our systems stems from the addition of wearable devices (Google Glass) and crowdsourcing to improve creative writing on tablets and phones. A primary challenge in developing and evaluating creativity support tools is that we are not able to detect when a person is being creative. Our approach improves current ones by exploring the "in-the-moment" creativity and supporting it with adaptive ubiquitous technologies that try to keep people in a creative experience peak for a longer period of time.
Making Tabletop Interaction Accessible for Blind Users BIBAFull-Text 327-332
  Andreas Kunz; Dirk Schnelle-Walka; Ali Alavi; Stephan Pölzer; Max Mühlhäuser; Klaus Miesenberger
Tabletop systems and their interaction capabilities are typically a domain for sighted people only. While the content on the tabletop can already be made accessible to blind people, the interaction above the tabletop is still inaccessible. This paper describes our approach towards making the above tabletop interaction accessible to blind people by using LEAP sensors and speech recognition.
Intelligent Ink Annotation Framework that uses User's Intention in Electronic Document Annotation BIBAFull-Text 333-338
  Hiroki Asai; Hayato Yamana
Annotating documents is one of the indispensable interactions between humans and documents. Annotation systems for electronic documents make it possible to implement effective functions, such as information retrieval and annotation-based navigation, by using the annotation information; however, traditional systems require users to perform gestures beyond those common for paper-based documents. This can reduce the learnability of the system. We propose an intelligent ink annotation framework that increases the learnability of annotation systems by detecting recognizable intentions from natural annotation behavior on paper-based documents. Our framework recognizes "Targeting Content" and "Commenting," which are related to the extraction of annotation information. We have developed a prototype annotation system using our proposed framework and conducted a user study to identify future directions.
Studying Teacher Cognitive Load in Multi-tabletop Classrooms Using Mobile Eye-tracking BIBAFull-Text 339-344
  Luis P. Prieto; Yun Wen; Daniela Caballero; Kshitij Sharma; Pierre Dillenbourg
Increasing affordability is making multi-tabletop spaces (e.g., in school classrooms) a real possibility, and the first design guidelines for such environments are starting to appear. However, there are still very few studies of the usability of such multi-tabletop classrooms as a whole that consider well-established constructs such as cognitive load. In this poster we present an exploratory study using mobile eye-tracking techniques to follow the cognitive load of a teacher during a lesson in such a multi-tabletop space. By analyzing several eye-tracking measures over three sessions of a collaborative learning lesson on fractions, we obtained insights into the user experience of the facilitator in these concrete sessions. We also show the potential of eye-tracking to identify critical episodes in the usage of a multi-tabletop space under realistic usage conditions, in a less intrusive and more objective manner.
Combining Timeline and Graph Visualization BIBAFull-Text 345-350
  Robert Morawa; Tom Horak; Ulrike Kister; Annett Mitschick; Raimund Dachselt
Timelines are as important for presenting temporal data as node-link diagrams are for displaying graphs and relations in general. Yet, the two are rarely combined. We present Time Shadows to precisely indicate a node's place in time, revealing associated temporal data and relations. We also introduce Time Beads. Created as a focus-and-context interaction technique for time-based graphs, Time Beads allow users to continuously or discretely change the level of detail and to set multiple arbitrary focus regions if needed. We implemented both techniques in a prototype and conducted an initial user study.
Single Locus of Control in a Tangible Paper-based Tabletop Application: An Exploratory Study BIBAFull-Text 351-356
  Daniela Caballero; Yun Wen; Luis P. Prieto; Pierre Dillenbourg
Multiple loci of control are one of the main affordances of tangible tabletop UIs, owing to their capability for simultaneous manipulation. However, there is a tension between the efficiency afforded by simultaneous manipulation and the need to coordinate and reflect in group activity. We implemented a central point of control to synchronize group work and afford opportunities for equal participation in a tabletop application. In this study, we analyzed log and video data of seven groups of primary school students using the application. The results show that log data about this central control's position and rotation can be a predictor of equal participation, also helping to interpret group performance. Finally, we discuss the implications of such findings, e.g., to provide teachers with useful information about group collaborative processes.
Interact! An Evidence-Based Framework For Digitally Supported School Field Trips BIBAFull-Text 357-362
  Alexandra Tanner; Beat Vollenwyder; Ulrike Schock; Michael Kalt; Magdalena Mateescu; Doris Agotai; Peter Gros; Manfred Vogel; Carmen G. Zahn
Interact! is an evidence-based framework for the support of school class visits to science exhibitions featuring a large vertical multi-touch surface and a set of tablet computers. The overall framework is designed to foster interest in computer sciences and collaborative learning of selected topics by drawing upon subject areas of personal importance and encouraging small group knowledge processes. Interact! was implemented as a prototype and tested in a field experiment with 53 pupils.
Using Scalable, Interactive Floor Projection for Production Planning Scenario BIBAFull-Text 363-368
  Michael Otto; Michael Prieur; Enrico Rukzio
This paper introduces a novel system for the interactive evaluation and verification of manual assembly processes. The approach utilizes a scalable, interactive augmented floor surface in combination with tangible tabletop hardware and material-zone planning software. The floor projection hardware is used for true-to-scale visualizations of assembly station layouts. The advantages and drawbacks of a low-cost, true-to-scale visualization are discussed. A preliminary evaluation of the proposed system during a production planning workshop showed that the low-cost implementation is suitable for reaching production planning goals. Additionally, true-to-scale visualizations help users estimate distances, speeds and spatial relationships within the digital layout. In future work, motion capture and tracking systems will be integrated and registered to the augmented floor surface, the projection area will be extended, and efficiency improvements will be demonstrated.
Interactive Surface Composition Based on Arduino in Multi-Display Environments BIBAFull-Text 369-374
  Lili Tong; Audrey Serna; Sébastien George; Aurélien Tabard; Gilles Brochet
Multi-display environments (MDEs) are becoming increasingly common. They combine numerous displays in a variety of physical arrangements to support a wide range of tasks and interactions. In this context, we are investigating how to support the dynamic reconfiguration of devices by users, for instance how users can seamlessly add new displays to the environment and how the overall interaction space can be adapted accordingly. We present an architecture based on a web server and WebSockets that enables rapid pairing of devices (local or remote), and simple magnetic switches to provide location awareness of devices relative to each other. We apply this concept to a simple game, whose world can be extended by holding devices next to each other.
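The described pairing flow can be sketched from the client side. Everything here (endpoint, identifiers, message format) is a hypothetical illustration of that architecture, not the authors' code:

```typescript
// When a magnetic switch on one edge closes (a neighbouring device was
// placed against it), the device reports the event so the server can
// extend the shared interaction space on that side.

type Edge = "left" | "right" | "top" | "bottom";

const socket = new WebSocket("ws://server.local/mde"); // assumed endpoint

// The Arduino forwards switch transitions; this handler just relays them
// (assuming the connection is already open).
function onMagneticSwitch(edge: Edge, closed: boolean): void {
  socket.send(JSON.stringify({
    device: "tablet-07", // illustrative identifier
    edge,
    event: closed ? "neighbor-attached" : "neighbor-detached",
  }));
}

socket.onmessage = (msg: MessageEvent) => {
  // Server answers with the updated layout, e.g. which part of the game
  // world this display should now render.
  const layout = JSON.parse(msg.data);
  console.log("render viewport:", layout.viewport);
};
```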
Overcoming Interaction Barriers in Large Public Displays Using Personal Devices BIBAFull-Text 375-380
  Victor Cheung; Diane Watson; Jo Vermeulen; Mark Hancock; Stacey Scott
This work presents a design space in which personal devices are used as a means to facilitate "socially safe", ad-hoc interaction with large public displays. Unlike most existing work that focuses on facilitating content placement and transfer, this approach aims at minimizing the effort required to initiate, sustain, and withdraw from interaction with a large public display, and to communicate these capabilities to passersby. We identify barriers hindering this process, and offer advice on overcoming them based on existing work and our own experiences with these displays. We illustrate how this design concept can be applied, and motivate applications in other domains.
Interactive Tactile Maps for Blind People using Smartphones' Integrated Cameras BIBAFull-Text 381-385
  Timo Götzelmann
Tactile maps may support blind people in orientation and in understanding geographical relations, but their availability is still very limited. Recent technologies such as 3D printers, however, allow individual tactile maps to be printed autonomously and linked with interactive applications. Besides geographical depictions, the textual annotation of maps is crucial, yet it often adds considerable complexity to tactile maps. To limit tactile complexity, interactive approaches may help to complement maps with the auditory modality. The presented approach integrates barcodes into tactile maps so they can be detected by standard smartphone cameras. More detailed map data is then obtained automatically to support the exploration of the tactile map with audio. Our experimental implementation shows the principal feasibility of the approach and provides the basis for ongoing comprehensive user studies.
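The barcode-to-audio loop maps naturally onto standard web APIs. A hypothetical sketch (the endpoint and data shape are invented; the speech call is the standard Web Speech API):

```typescript
// Once a barcode printed into the tactile map has been decoded to a
// map-region ID, fetch the matching annotations and read them aloud.

async function onBarcodeDecoded(regionId: string): Promise<void> {
  // Assumed REST endpoint serving per-region map annotations.
  const res = await fetch(`https://example.org/maps/regions/${regionId}`);
  const region = await res.json() as { name: string; description: string };

  // Web Speech API: speak the annotation so the map stays tactilely simple.
  const utterance = new SpeechSynthesisUtterance(
    `${region.name}. ${region.description}`
  );
  speechSynthesis.speak(utterance);
}
```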
A Cooperative Multitouch Scrum Task Board for Synchronous Face-to-Face Collaboration BIBAFull-Text 387-392
  Jessica Rubart
The Scrum planning approach to software development is a widespread agile methodology. In an ideal Scrum setting, the team is co-located. Daily Scrum meetings support the team in organizing itself: team members meet in front of a task board and update the status of their work, originally using paper-based notes. In this poster, we present a Multitouch Scrum Task Board designed to be used by a team synchronously during face-to-face collaboration. Tasks can be created, explored, sorted, resized, and visually arranged simultaneously by multiple users on a multitouch display. We present our first experiences using the cooperative Scrum Task Board on a multitouch table.
Exploring Multi-Surface Interactions in Retail Environments BIBAFull-Text 393-398
  Sydney Pratte; Teddy Seyed; Frank Maurer
Over the past several years, physical retail outlets have seen a noticeable decline in shoppers, as digital shopping provides a newer and less costly alternative for consumers. Online shopping offers immediate product information but lacks the experience of a physical product. Newer immersive digital shopping environments bring digital product information together with physical products. In this paper we present our early work exploring the applicability and interaction space of spatially aware multi-surface environments in a retail space. We give an overview of our prototype, designed with an industry partner, followed by early feedback on the role of multi-surface environments and interactions in retail.
A Multi-Display System for Deploying and Controlling Home Automation BIBAFull-Text 399-402
  Yucheng Jin; Chi Tai Dang; Christian Prehofer; Elisabeth André
In this paper, we present a concept that combines a mashup tool for wiring home devices together on a tabletop display with web-based UIs on mobile devices for controlling them. The concept is realized as a multi-display system built on the open-source framework Environs. The mashup tool on the tabletop enables multiple co-located people to collaboratively deploy networked home devices through a natural and intuitive interface, while the web-based UIs on mobile devices let individuals control home devices universally, with high accessibility and mobility.
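A hypothetical sketch of the "wiring" idea follows: the tabletop mashup connects one device's output event to another device's input action. The event names and the publish/subscribe helper are our own illustration; the real Environs API is not shown here.

```python
from typing import Callable, Dict, List

# Event name -> actions to run; filled by "wires" drawn on the tabletop.
wires: Dict[str, List[Callable[[dict], None]]] = {}

def wire(source_event: str, action: Callable[[dict], None]) -> None:
    """Connect a device's output event to another device's input action."""
    wires.setdefault(source_event, []).append(action)

def emit(source_event: str, payload: dict) -> None:
    """Called when a device reports an event; runs all wired actions."""
    for action in wires.get(source_event, []):
        action(payload)

# E.g., a user drags a wire from the motion sensor to the hallway lamp:
wire("hall.motion.detected", lambda p: print("lamp.on", p))
emit("hall.motion.detected", {"time": "21:04"})
```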
ThumbCam: Returning to single touch interactions to explore 3D virtual environments BIBAFull-Text 403-408
  Daniel Mendes; Maurício Sousa; Alfredo Ferreira; Joaquim Jorge
Three-dimensional virtual environments appear in many different applications and are used even on small handheld devices. To navigate these environments on such devices, most current solutions rely on multi-touch interactions. However, prior work has shown that multi-touch gestures on smartphones are not always feasible. In this work we present ThumbCam, a novel single-touch technique for camera manipulation in 3D virtual environments. With our solution, the user can move, look around, and circle points of interest while interacting with only the thumb. We compare ThumbCam with other state-of-the-art techniques, showing that it offers more operations with a single touch. A qualitative user evaluation revealed that users found our solution appealing.
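Although the paper's algorithm is not reproduced here, single-touch orbiting of a point of interest can be sketched with a camera constrained to a sphere, where thumb drag deltas change azimuth and elevation. The sensitivity constant and clamping values below are illustrative, not ThumbCam's actual parameters.

```python
import math

def orbit(poi, radius, azimuth, elevation, dx, dy, sensitivity=0.005):
    """Return (camera position, azimuth, elevation) after a thumb drag."""
    azimuth += dx * sensitivity
    # Clamp elevation short of the poles to avoid flipping the camera.
    elevation = max(-1.4, min(1.4, elevation + dy * sensitivity))
    x = poi[0] + radius * math.cos(elevation) * math.sin(azimuth)
    y = poi[1] + radius * math.sin(elevation)
    z = poi[2] + radius * math.cos(elevation) * math.cos(azimuth)
    return (x, y, z), azimuth, elevation

# A 12 px right, 4 px up drag while orbiting 10 units from the origin:
cam, az, el = orbit((0.0, 0.0, 0.0), 10.0, 0.0, 0.3, dx=12, dy=-4)
```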
Bancada: Using Mobile Zoomable Lenses for Geospatial Exploration BIBAFull-Text 409-414
  Francisco Marinho Rodrigues; Teddy Seyed; Frank Maurer; Sheelagh Carpendale
Looking up the path between two points on a city map has become a simple task on any modern tablet, smartphone, or laptop. However, when exploring maps whose information is spread across multiple layers and scales, users experience information discontinuity. Bancada is a multi-display system developed to investigate the exploration of geospatial information using multiple mobile devices in a multi-display environment. In Bancada, tablets act as zoomable magic lenses that augment, through specific geospatial layers, an overview map displayed on a tabletop or wall display. Users interact with lenses using touch gestures to pan and zoom, and multi-layer maps can be built by overlapping different lenses. Bancada is currently being used to research user interfaces that are distributed across multiple devices, and interactions with high-resolution mobile devices. Future work includes (i) evaluating user performance when using one tablet or multiple tablets to control all lenses; (ii) exploring which interactions can be performed on an overview map, and how; and (iii) exploring how lenses can be changed.
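A minimal sketch of the lens-to-map mapping behind such zoomable magic lenses (our own illustration, not Bancada's code): a lens is defined by the map coordinate under its centre and a zoom factor, and overlapping lenses show the union of their layers. The coordinates and layer names are invented.

```python
from dataclasses import dataclass, field
from typing import Set, Tuple

@dataclass
class Lens:
    center: Tuple[float, float]  # map coordinate under the lens centre
    zoom: float                  # magnification relative to the overview map
    layers: Set[str] = field(default_factory=set)

def lens_to_map(lens: Lens, px: float, py: float) -> Tuple[float, float]:
    """Map a pixel offset from the lens centre to an overview-map coordinate."""
    return (lens.center[0] + px / lens.zoom, lens.center[1] + py / lens.zoom)

def combined_layers(a: Lens, b: Lens) -> Set[str]:
    """Overlapping lenses build a multi-layer map from the union of layers."""
    return a.layers | b.layers

traffic = Lens((13.74, 51.05), zoom=4.0, layers={"traffic"})
transit = Lens((13.74, 51.05), zoom=4.0, layers={"public-transport"})
print(lens_to_map(traffic, 120, -80), combined_layers(traffic, transit))
```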

Demonstrations

ClothLens Demo: Simultaneous Multi-User Interaction with Shared Content on a Tabletop BIBAFull-Text 415-418
  Christian Lander; Sven Gehring
We present ClothLens, a prototype that allows multiple users to interact simultaneously with a digital map shown on a tabletop display. With our prototype, panning or zooming the content causes fewer conflicts due to interference between users. ClothLens applies the focus+context pattern, enabling users to create personal lenses on top of the map. The content itself is modeled as a physical cloth object that is bent or stretched according to the interactions.
The Interactive Dining Table, or Pass the Weather Widget, Please BIBAFull-Text 419-422
  Florian Echtler; Raphael Wimmer
Large-scale interactive surfaces are nearly ubiquitous in research labs and showrooms around the world today. However, unlike other recent interactive technologies such as smartphones, they have not yet found their way into people's everyday lives. Possible reasons include high cost as well as a lack of suitable applications. In this paper, we present our prototypical implementation of a low-cost, unobtrusive interactive surface, integrated with the dining table in a real-world living room. To motivate our approach, we explore three scenarios highlighting potential applications for our system and present their prototypical implementations. In the first scenario, we extend regular board games with interactive components without sacrificing their unique haptic experience. In the second scenario, we investigate ambient notifications which provide a method for delivering information to the users subconsciously. Finally, we explore the concept of augmented dining in which the appearance of food placed on the table is augmented or modified by the system.
CubeQuery: Tangible Interface for Creating and Manipulating Database Queries BIBAFull-Text 423-426
  Ricardo Langner; Anton Augsburg; Raimund Dachselt
We demonstrate CubeQuery, a tangible user interface that provides a physical way to create and manipulate basic database queries. This interactive installation is designed for individual faceted browsing and allows users to explore the contents of a database, in this case a music library. Each tangible represents an individual search parameter of a search request, and the physical arrangement of multiple tangibles combines search parameters using basic logical operators. The goal of this research is to explore the practicality of using the spatial arrangement of tangibles to ease the process of faceted browsing.
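A hypothetical sketch of the query-building idea: each tangible carries a facet filter, and the spatial arrangement selects the logical operator. We assume here that side-by-side placement means AND and stacking means OR; the demo's actual mapping may differ, and the schema is invented.

```python
from typing import List, Tuple

Clause = Tuple[str, List[str]]  # (SQL fragment, bound parameters)

def facet(field: str, value: str) -> Clause:
    """One tangible: a single facet filter such as genre or decade."""
    return f"{field} = ?", [value]

def group(op: str, *clauses: Clause) -> Clause:
    """One spatial grouping of tangibles, combined with AND or OR."""
    sql = f" {op} ".join(c for c, _ in clauses)
    params = [p for _, ps in clauses for p in ps]
    return f"({sql})", params

# Three tangibles: a genre tangible next to a stacked pair of decade tangibles.
where, params = group("AND", facet("genre", "jazz"),
                      group("OR", facet("decade", "1960s"),
                                  facet("decade", "1970s")))
print(f"SELECT * FROM tracks WHERE {where}", params)
```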
Fusion of Mixed-Reality Tabletop and Location-Based Applications for Pervasive Games BIBAFull-Text 427-430
  Chris Zimmerer; Martin Fischbach; Marc Latoschik
Quest UbiquX fuses a multimodal mixed-reality implementation of a traditional tabletop role-playing game with a location-based mobile component to provide a novel Ubiquitous gaming eXperience (UbiquX). Mobile devices advance the game in a single-player adventure phase based on the player's location in the real world. In addition, they support interaction with an interactive tabletop surface in a collaborative skirmish phase.
Multi-Touch Manipulation of Magic Lenses for Information Visualization BIBAFull-Text 431-434
  Ulrike Kister; Patrick Reipschläger; Raimund Dachselt
We introduce touch-enabled magic lenses that can be manipulated and parametrized through fluent interactions. Interaction with lenses for information visualization and data exploration has mostly been limited to single-user, single-function lenses. In this work, we present a prototype in which the lens function, its parameters, and combinations of functions can all be manipulated through fluent touch interaction. To achieve this, our tool offers a widget-based approach for novice users as well as a set of continuous gestures for expert users. Additionally, we support the combination of lenses, thereby creating a multi-purpose lens tool.
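One natural reading of lens combination is function composition, sketched below as our own illustration (the prototype's internals are not published in this abstract): each lens transforms the visualized items, and stacking lenses chains the transformations into a multi-purpose lens.

```python
from functools import reduce
from typing import Callable, Iterable

LensFn = Callable[[Iterable[dict]], Iterable[dict]]

def filter_lens(key: str, value) -> LensFn:
    """A lens that keeps only the items matching a property."""
    return lambda items: (i for i in items if i.get(key) == value)

def magnify_lens(factor: float) -> LensFn:
    """A lens that scales the visual size of the items it covers."""
    return lambda items: ({**i, "size": i.get("size", 1) * factor} for i in items)

def combine(*lenses: LensFn) -> LensFn:
    """Stacked lenses apply in order, forming one multi-purpose lens."""
    return lambda items: reduce(lambda acc, lens: lens(acc), lenses, items)

multi = combine(filter_lens("type", "node"), magnify_lens(2.0))
print(list(multi([{"type": "node", "size": 1}, {"type": "edge"}])))
```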
Demonstrating HuddleLamp: Spatially-Aware Mobile Displays for Ad-hoc Around-the-Table Collaboration BIBAFull-Text 435-438
  Roman Rädle; Hans-Christian Jetter; Nicolai Marquardt; Harald Reiterer; Yvonne Rogers
We present HuddleLamp, a desk lamp with an integrated RGB-D camera that precisely tracks the movements and positions of mobile displays and hands on a table. This enables a new breed of spatially aware multi-user and multi-device applications for around-the-table collaboration without an interactive tabletop. At any time, users can add or remove displays and reconfigure them in space in an ad-hoc manner, without needing to install any software or attach markers. Additionally, hands are tracked to detect interactions above and between displays, enabling fluent cross-device interactions. The demo consists of the technical implementation of HuddleLamp's hybrid sensing and a Web-based architecture for installation-free ad-hoc collaboration. We demonstrate our implementation by showing a variety of possible interaction techniques.
FlexiWall: Exploring Layered Data with Elastic Displays BIBAFull-Text 439-442
  Mathias Müller; Anja Knöfel; Thomas Gründer; Ingmar Franke; Rainer Groh
Through their deformable screen materials, elastic displays and projection screens provide physical, three-dimensional interaction modalities such as push, pull, and bend. Compared with conventional multi-touch displays, they offer an additional interaction dimension that can be used to explore data. In this article we describe the FlexiWall, a large elastic display, and several example applications using layered data sets. Exploring several layers and the correlations between them is uncommon in traditional user interfaces, where interaction is often constrained to two dimensions; therefore, new forms of interaction are introduced. We furthermore propose additional techniques and tools to explore layered data sets, e.g., using transparent objects when interacting with elastic displays.
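A minimal sketch (our own illustration, with made-up spacing values) of how deformation depth on an elastic display could select between stacked data layers, cross-fading neighbouring layers as the user pushes deeper into the fabric.

```python
def layer_blend(depth_mm: float, layer_spacing_mm: float = 30.0,
                num_layers: int = 5):
    """Map push depth to (lower layer, upper layer, blend weight of upper)."""
    t = max(0.0, min(depth_mm / layer_spacing_mm, num_layers - 1))
    lower = int(t)
    upper = min(lower + 1, num_layers - 1)
    return lower, upper, t - lower

# Pushing 45 mm into the fabric: halfway between layers 1 and 2.
print(layer_blend(45.0))  # -> (1, 2, 0.5)
```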
Demonstration and Applications of Fiberio: A Touchscreen That Senses Fingerprints BIBAFull-Text 443-446
  Sven Köhler; Christian Holz; Patrick Baudisch
We present a demonstration and applications of Fiberio, a rear-projected multitouch table that identifies users biometrically from their fingerprints during each touch interaction. Fiberio accomplishes this using a new type of screen material: a large fiber-optic plate. The plate diffuses light on transmission, allowing it to act as a projection surface. At the same time, the plate reflects light specularly, which produces the contrast required for fingerprint sensing. Fiberio additionally offers all the functionality known from diffused-illumination systems and is the first interactive tabletop that authenticates users during touch interaction without the need for any identification tokens.
ComforTable: A Tabletop for Relaxed and Playful Interactions in Museums BIBAFull-Text 447-450
  Michael Storz; Kalja Kanellopoulos; Claudia Fraas; Maximilian Eibl
We present ComforTable, an all-in-one interactive tabletop system with integrated seats and a camera-based user-tracking system. The seats allow groups of users to interact with the interface in a relaxed posture. The system and its applications are built for, and tested in, museums and exhibitions. A card game and a pong game support up to six players playing together competitively. The tracking system recognizes persons approaching the table; this information is used to blend in helpful instructions and interaction options next to each user's position.
NEMOSHELL Demo: Windowing System for Concurrent Applications on Multi-user Interactive Surfaces BIBAFull-Text 451-454
  Junghan Kim; Inhyeok Kim; Taehyoung Kim; Young Ik Eom
Recently, the prevalence of large interactive surfaces has renewed interest in windowing systems because of the advantages of enabling concurrent applications. We present the NEMOSHELL windowing system for multi-user interactive surfaces. We developed the system on top of Wayland, a replacement for the X Window System on Linux. NEMOSHELL is designed to support multiple simultaneous applications, legacy input devices, legacy applications, and dynamic user interfaces. Finally, our demonstrations illustrate the potential of our system design.

Doctoral Symposium

Explicit & Implicit Interaction Design for Multi-Focus Visualizations BIBAFull-Text 455-460
  Simon Butscher
Many tasks involved in analyzing data in large visual information spaces require the user to maintain several foci, for example when comparing or organizing digital artefacts. In my research, I explore alternative interaction concepts for multi-focus visualizations in single- and multi-user scenarios. Alongside explicit interaction for navigating within multi-focus visualizations, I investigate implicit interaction that makes the visualization react to the social and spatial context. To evaluate different designs, measures such as task completion time, spatial memory, and subjective preference are examined.
Towards an Interaction Model for Multi-Surface Personal Computing Spaces BIBAFull-Text 461-466
  Henri Palleis
The prototype Curve was developed at our lab, and its basic effects on touch interaction as well as elementary applications were explored in the recent dissertation of Hennecke [3]. My work is concerned with concretizing his initial findings. In particular, I am interested in (1) contextualizing the device by exploring specific application scenarios and (2) finding adequate interaction models that allow people to use the display's input and output capabilities effectively and conveniently. In my thesis I want to provide guidelines that help user-interface designers develop interaction techniques for multi-surface personal computing spaces that comprise both horizontal and vertical touchscreens, and to show potential benefits of a seamless connection between them (e.g., a curved display segment).
Improving Interaction Discoverability in Large Public Interactive Displays BIBAFull-Text 467-472
  Victor Cheung
There is increasing interest in deploying large interactive displays in public spaces such as museums, retail stores, and information centres to provide a more engaging user experience. Yet prior studies have consistently reported that these systems are underutilized and thus do not deliver the desired experience. My thesis aims to model the underlying process of interacting with public interactive displays. A descriptive model of this process will allow researchers and practitioners to better understand the unique design issues of these systems. It will also help situate existing and potential design advice, clarifying which stages of the process the advice addresses and which stages need more attention. My thesis also aims to develop a laboratory-based experimental methodology that enables more rapid and controlled evaluation of potential interaction-design strategies for public interactive displays. My research is expected to provide insights that help readers design and build better, more usable large interactive systems for public settings.
Touching the Third Dimension BIBAFull-Text 473-478
  Paul Lubos
Natural interaction offers the user an immersive experience, and research has shown that multi-touch in particular is easier to learn and use than classic human-computer interfaces such as the keyboard and mouse. However, creating 3D user interfaces (3D UIs) is a difficult endeavor, especially in fully immersive virtual environments (IVEs) such as head-mounted displays, because problems like distance underestimation and the lack of haptic feedback complicate the design and require innovative solutions. Our research aims to identify the problems that arise when designing 3D UIs and interacting with virtual objects in 3D space, and to find ways to solve or avoid those problems, both for fully immersive environments and for stereoscopic multi-touch displays. Furthermore, we aim to investigate new ways of using 3D UIs and to improve on the available input devices.
Supporting Everyday Thinking Practices in Information Visualization Interfaces BIBAFull-Text 479-484
  Jagoda Walny
People commonly sketch externalizations on paper and whiteboards as part of their everyday thinking processes. While common, this practice is little understood, particularly as it may relate to digital visual representation (such as information visualization) and sketch-based interaction. My research aims to better understand these thinking sketches from the perspective of information visualization and pen-and-touch interaction and to apply this understanding to the design of interfaces that can better support complex and freeform everyday thinking practices.
Exploiting Spatial Memory and Navigation Performance in Dynamic Peephole Environments BIBAFull-Text 485-490
  Jens Müller
One way to handle the representation of (and navigation in) datasets that exceed the available display space is to provide movable viewports that show a subset of the entire space. Unlike static viewports, where the information space itself is moved (e.g., by panning), dynamic viewports -- so-called dynamic peepholes -- let the user move the viewport in physical space, enabling egocentric navigation in digital information spaces. In my research I investigate how information spaces and peephole navigation need to be designed in order to exploit spatial memory and improve navigation performance. In particular, I focus on the interplay between the physical and the digital aspects and how they affect user performance. For study purposes I use a tablet as a dynamic peephole, and I will conduct controlled studies in our research lab, which is equipped with 24 infrared cameras that enable precise tracking of the lab environment.
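The peephole mapping itself can be sketched in a few lines (our own illustration, with assumed tracking units and screen resolution): the tracked physical position of the tablet is mapped onto a viewport rectangle in the larger virtual information space.

```python
def peephole_viewport(tablet_xy_m, origin_xy_m=(0.0, 0.0),
                      pixels_per_meter=2000, view_w=1280, view_h=800):
    """Return the viewport rectangle (x, y, w, h) in information-space pixels
    for a tablet tracked at tablet_xy_m (metres) in the lab."""
    cx = (tablet_xy_m[0] - origin_xy_m[0]) * pixels_per_meter
    cy = (tablet_xy_m[1] - origin_xy_m[1]) * pixels_per_meter
    return (cx - view_w / 2, cy - view_h / 2, view_w, view_h)

# Tablet held 1.2 m right and 0.8 m up from the tracking origin:
print(peephole_viewport((1.2, 0.8)))
```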

Tutorials, Workshops and Studios

Tactile/Haptic User Interfaces for Tabletops and Tablets BIBAFull-Text 491-493
  Limin Zeng; Gerhard Weber
Tactile/haptic user interfaces have become an important approach to improving the user experience on various tabletops and tablets. This workshop aims to bring together researchers who work in or are interested in the field to share their research experiences. In addition to tactile displays and tablets, the workshop focuses on haptic tangible user interfaces, the fusion of visual and tactile/haptic feedback, and accessibility studies based on tactile/haptic user interfaces. Through this workshop we plan to establish a working group for possible collaboration.
Collaboration Meets Interactive Surfaces: Walls, Tables, Tablets, and Phones (CMIS) BIBAFull-Text 495-498
  Craig Anslow; Pedro Campos; Alfredo Ferreira
This workshop brings together researchers interested in improving collaborative experiences through the combination of multiple interaction surfaces of diverse sizes and formats, ranging from large-scale walls to tables, tablets, and phones. Opportunities for innovation exist, but the ITS, CSCW, and HCI communities have not yet thoroughly addressed the problem of supporting effective collaboration activities across multiple interactive surfaces, especially in complex work domains. Of particular interest is the potential synergy obtained by effectively combining surfaces of different sizes.
Tutorial: Hot Topics in Personal Fabrication Research BIBAFull-Text 499-502
  Stefanie Mueller; Alexandra Ion; Patrick Baudisch
In this tutorial, we survey novel ways for interacting with personal fabrication machines, such as laser cutters, milling machines, and 3D printers. The goal is to provide attendees with an overview of recent HCI research in personal fabrication and together with attendees build a roadmap for future research directions. Towards this goal, the tutorial will provide background knowledge in how personal fabrication machines work, which types of objects they can fabricate, and how they are currently being operated.
TUIO Hackathon BIBAFull-Text 503-505
  Martin Kaltenbrunner; Florian Echtler
TUIO is an open framework that defines a common protocol and API for tangible and multitouch surfaces. The protocol is based on Open Sound Control (OSC) and allows the platform-independent encoding and transmission of an abstract description of interactive surfaces, including touch events and tangible object states. While the original TUIO specification has been implemented for many hardware and software environments, there are few feature-complete reference implementations of the next TUIO generation, although its specification [2] has been finalized and partially implemented by community members. The TUIO Hackathon at the International Conference on Interactive Tabletops and Surfaces addresses expert users and developers of hardware and software environments for surface-based tangible user interfaces who are interested in experimenting with this new framework, with the goal of initiating the development and integration of new TUIO implementations.
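For orientation, this is roughly what a TUIO 1.1 cursor update looks like on the wire when sent as OSC messages with the third-party python-osc package; the coordinate values are illustrative, and real trackers bundle the alive/set/fseq messages together every frame.

```python
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 3333)  # 3333 is the customary TUIO port

session_id, frame = 12, 1001
# alive: which touch sessions currently exist
client.send_message("/tuio/2Dcur", ["alive", session_id])
# set: session id, normalized x/y position, velocity X/Y, motion acceleration
client.send_message("/tuio/2Dcur", ["set", session_id, 0.42, 0.58, 0.0, 0.0, 0.0])
# fseq: frame sequence number, closing the update
client.send_message("/tuio/2Dcur", ["fseq", frame])
```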