
Proceedings of the 2010 ACM International Conference on Interactive Tabletops and Surfaces

Fullname: Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces
Editors: Antonio Krüger; Johannes Schöning; Daniel Wigdor; Michael Haller
Location: Saarbrücken, Germany
Dates: 2010-Nov-07 to 2010-Nov-10
Publisher: ACM
Standard No: ISBN 1-4503-0399-4, 978-1-4503-0399-6
Papers: 89
Pages: 327
Links: Conference Home Page
  1. Displays
  2. Context 1
  3. Meta gestures
  4. Devices & algorithms
  5. Applications
  6. Context 2
  7. Interactions
  8. Physical, tangible, virtual
  9. Teaching & learning
  10. Information visualization
  11. Posters
  12. Demos
  13. Doctoral Symposium

Displays

BendDesk: dragging across the curve (pp. 1-10)
  Malte Weiss; Simon Voelker; Christine Sutter; Jan Borchers
We present BendDesk, a hybrid interactive desk system that combines a horizontal and a vertical interactive surface via a curve. The system provides seamless touch input across its entire area. We explain scalable algorithms that provide graphical output and multi-touch input on a curved surface. In three tasks we investigate the performance of dragging gestures across the curve, as well as virtual aiming at targets. Our main findings are: 1) Dragging across a curve is significantly slower than on flat surfaces. 2) The smaller the entrance angle when dragging across the curve, the longer the average trajectory and the higher the variance of trajectories across users. 3) The curved shape of the system impairs virtual aiming at targets.
MudPad: tactile feedback and haptic texture overlay for touch surfaces (pp. 11-14)
  Yvonne Jansen; Thorsten Karrer; Jan Borchers
We introduce MudPad, a system capable of localized active haptic feedback on multitouch screens. We use an array of electromagnets combined with an overlay containing magnetorheological (MR) fluid to actuate a tablet-sized area. As MudPad has a very low reaction time, it is able to produce instant multi-point feedback for multitouch input, ranging from static levels of surface softness to a broad set of dynamically changeable textures. Our system not only conveys global confirmative feedback on user input but also allows the UI designer to enrich the entire interface with a tactile layer conveying local semantic information. This also allows users to explore the interface haptically.
Cool interaction with calm technologies: experimenting with ice as a multitouch surface (pp. 15-18)
  Antti Virolainen; Arto Puikkonen; Tuula Kärkkäinen; Jonna Häkkilä
In this paper we describe our interactive ice-wall installation, a multi-touch surface built from ice. Our demo seeks to stretch the boundaries of current ubiquitous computing systems by trying out a new material that embeds itself into the environment -- here, outdoors in a snowy winter. In addition to the function of the interactive installation, where we show that ice as a material can be used for such purposes, we seek to offer an inspirational aspect to the design of ubiquitous computing systems. We also present the feedback collected from 33 surveyed and 10 interviewed users who interacted with the system.
Multi-point interactions with immersive omnidirectional visualizations in a dome (pp. 19-28)
  Hrvoje Benko; Andrew D. Wilson
This paper describes an interactive immersive experience using mid-air gestures to interact with a large curved display: a projected dome. Our Pinch-the-Sky Dome is an immersive installation where several users can interact simultaneously with omnidirectional data using freehand gestures. The system consists of a single centrally-located omnidirectional projector-camera unit where the projector is able to project an image spanning the entire 360 degrees and a camera is used to track gestures for navigation of the content. We combine speech commands with freehand pinch and clasping gestures and infrared laser pointers to provide a highly immersive and interactive experience to several users inside the dome, with a very wide field of view for each user. The interactive applications include: 1) astronomical data exploration, 2) social networking 3D graph visualizations, 3) immersive panoramic images, 4) 360 degree video conferencing, 5) a drawing canvas, and 6) a multi-user interactive game. Finally, we discuss the user reactions and feedback from two demo events where more than 1000 people had the chance to experience our work.

Context 1

VisTACO: visualizing tabletop collaboration (pp. 29-38)
  Anthony Tang; Michel Pahud; Sheelagh Carpendale; Bill Buxton
As we design tabletop technologies, it is important to also understand how they are being used. Many prior researchers have developed visualizations of interaction data from their studies to illustrate ideas and concepts. In this work, we develop an interactional model of tabletop collaboration, which informs the design of VisTACO, an interactive visualization tool for tabletop collaboration. Using VisTACO, we can explore the interactions of collaborators with the tabletop to identify patterns or unusual spatial behaviours, supporting the analysis process. VisTACO helps bridge the gap between observing the use of a tabletop system, and understanding users' interactions with the system.

Meta gestures

Gesture play: motivating online gesture learning with fun, positive reinforcement and physical metaphors (pp. 39-48)
  Andrew Bragdon; Arman Uguray; Daniel Wigdor; Stylianos Anagnostopoulos; Robert Zeleznik; Rutledge Feman
Learning a set of gestures requires a non-trivial investment of time from novice users. We propose a novel approach based on positive reinforcement for motivating the online learning of multi-touch gestures: introducing simple, game-like elements to make gesture learning fun and enjoyable. We develop 3 metaphors, button widgets, animated spring widgets, and physical props, as primitives for simple, physically-based puzzles which afford the disclosure of static and dynamic hand gestures. Using these metaphors, we implemented a gesture set representing 14 of 16 gesture types in an established hand gesture taxonomy. We present the results of a quantitative and qualitative evaluation which indicate that this approach motivates gesture rehearsal more than video demonstrations do, while memory recall was equivalent overall but improved in the short term for controlled tasks.
Towards a formalization of multi-touch gestures (pp. 49-58)
  Dietrich Kammer; Jan Wojdziak; Mandy Keck; Rainer Groh; Severin Taranko
Multi-touch is a technology which offers new styles of interaction compared to traditional input devices like keyboard and mouse. Users can quickly manipulate objects or execute commands by means of their fingers and hands. Current multi-touch frameworks offer a set of standard gestures that are easy to use when developing an application. In contrast, defining new gestures requires a lot of work involving low-level recognition of touch data. To address this problem, we contribute a discussion of strategies towards a formalization of gestural interaction on multi-touch surfaces. A test environment is presented, showing the applicability and benefit within multi-touch frameworks.
Tool support for testing complex multi-touch gestures (pp. 59-68)
  Shahedul Huq Khandkar; S. M. Sohan; Jonathan Sillito; Frank Maurer
Though many tabletop applications allow users to interact with the application using complex multi-touch gestures, automated tool support for testing such gestures is limited. As a result, gesture-based interactions with an application are often tested manually, which is an expensive and error prone process. In this paper, we present TouchToolkit, a tool designed to help developers automate their testing of gestures by incorporating recorded gestures into unit tests. The design of TouchToolkit was informed by a small interview study conducted to explore the challenges software developers face when debugging and testing tabletop applications. We have also conducted a preliminary evaluation of the tool with encouraging results.
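The core idea TouchToolkit's abstract describes -- recording a gesture's touch event stream once and replaying it inside a unit test -- can be sketched as follows (the event format and toy recognizer below are placeholders for illustration, not TouchToolkit's actual API):

```python
import json

# A recorded gesture: timestamped touch points, stored as a replayable fixture.
RECORDED_PINCH = json.dumps([
    {"t": 0.00, "id": 1, "x": 100, "y": 200}, {"t": 0.00, "id": 2, "x": 300, "y": 200},
    {"t": 0.10, "id": 1, "x": 150, "y": 200}, {"t": 0.10, "id": 2, "x": 250, "y": 200},
    {"t": 0.20, "id": 1, "x": 190, "y": 200}, {"t": 0.20, "id": 2, "x": 210, "y": 200},
])

def recognize(events):
    """Toy recognizer: report 'pinch' if the two tracked touches end
    much closer together than they started."""
    first = {e["id"]: e for e in events if e["t"] == events[0]["t"]}
    last = {e["id"]: e for e in events if e["t"] == events[-1]["t"]}
    def gap(pts):
        a, b = list(pts.values())
        return abs(a["x"] - b["x"]) + abs(a["y"] - b["y"])
    return "pinch" if gap(last) < 0.5 * gap(first) else "unknown"

def test_pinch_is_recognized():
    events = json.loads(RECORDED_PINCH)   # replay the recording
    assert recognize(events) == "pinch"

test_pinch_is_recognized()
print("pinch test passed")
```

The point of the pattern is that the fixture, not a human tester, drives the recognizer, so regressions in gesture handling show up in an ordinary automated test run.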

Devices & algorithms

Using a depth camera as a touch sensor (pp. 69-72)
  Andrew D. Wilson
We explore the application of depth-sensing cameras to detect touch on a tabletop. Limits of depth estimate resolution and line of sight requirements dictate that the determination of the moment of touch will not be as precise as that of more direct sensing techniques such as capacitive touch screens. However, using a depth-sensing camera to detect touch has significant advantages: first, the interactive surface need not be instrumented. Secondly, this approach allows touch sensing on non-flat surfaces. Finally, information about the shape of the users and their arms and hands above the surface may be exploited in useful ways, such as determining hover state, or that multiple touches are from the same hand or from the same user. We present techniques and findings using Microsoft Kinect.
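The detection idea this abstract outlines -- treating pixels whose depth falls in a thin band above the known surface as touch candidates -- can be illustrated as follows (a minimal sketch on synthetic data; the band thresholds, blob grouping, and minimum blob size are our assumptions, not values from the paper):

```python
import numpy as np

def detect_touches(depth_mm, surface_mm, near=4, far=20, min_px=10):
    """Mark pixels whose depth sits in a thin band above the known
    surface as touch candidates, group them into blobs, and return
    each blob's centroid."""
    band = (depth_mm < surface_mm - near) & (depth_mm > surface_mm - far)
    labels = np.zeros(depth_mm.shape, dtype=int)
    blobs = []
    for y, x in zip(*np.nonzero(band)):
        if labels[y, x]:
            continue
        # naive 4-connected flood fill over candidate pixels
        stack, blob = [(y, x)], []
        labels[y, x] = len(blobs) + 1
        while stack:
            cy, cx = stack.pop()
            blob.append((cy, cx))
            for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                if (0 <= ny < band.shape[0] and 0 <= nx < band.shape[1]
                        and band[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = labels[y, x]
                    stack.append((ny, nx))
        if len(blob) >= min_px:
            blobs.append(tuple(np.mean(blob, axis=0)))  # centroid (row, col)
    return blobs

# Synthetic frame: flat surface at 1000 mm, one fingertip ~10 mm above it.
surface = np.full((48, 64), 1000.0)
frame = surface.copy()
frame[20:24, 30:34] = 990.0
print(detect_touches(frame, surface))  # one centroid near (21.5, 31.5)
```

The per-pixel surface map is what lets the same test work on non-flat surfaces: the band is measured relative to the calibrated surface depth at each pixel, not to a single plane.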
Seeing through the fog: an algorithm for fast and accurate touch detection in optical tabletop surfaces (pp. 73-82)
  Christopher Wolfe; T. C. Nicholas Graham; Joseph A. Pape
Fast and accurate touch detection is critical to the usability of multi-touch tabletops. In optical tabletops, such as those using the popular FTIR and DI technologies, this requires efficient and effective noise reduction to enhance touches in the camera's input. Common approaches to noise reduction do not scale to larger tables, leaving designers with a choice between accuracy problems and expensive hardware. In this paper, we present a novel noise reduction algorithm that provides better touch recognition than current alternatives, particularly in noisy environments, without imposing higher computational cost. We empirically compare our algorithm to other noise reduction approaches using data collected from tabletops at research labs in Canada and Europe.
Construction and evaluation of multi-touch screens using multiple cameras located on the side of the display (pp. 83-90)
  Otto Korkalo; Petri Honkamaa
We applied a computer-vision-based method to develop multi-touch technology that can be adopted in various display types. In the selected design, multiple cameras are placed on the side of the display with their optical axes parallel to the screen. The display edges opposite the cameras are illuminated, and fingers are detected since they block the light in the camera images. The approach is scalable and can be used in a wide variety of displays. Due to self-occlusion of the touchpoints, it is challenging to relate camera measurements to tracked points. In this paper, we present our approach for tracking and managing multiple touchpoints in such camera set-ups. We describe the mathematical background for modeling and calibrating the cameras, the design of the extended Kalman filter for point tracking, and the logic for adding, updating and removing the touchpoints. We analyze the potential accuracy and robustness of the system using several simulations and present two different real-life implementations of the approach.
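The point-tracking stage described here -- a Kalman filter per touchpoint, with logic for adding, updating, and removing tracks -- can be illustrated with a stripped-down linear variant (a constant-velocity filter for a single touchpoint; the frame rate and noise settings are our assumptions, and the paper's camera model and extended Kalman filter are considerably more involved):

```python
import numpy as np

DT = 1 / 60.0  # assumed camera frame interval (not from the paper)

# Constant-velocity motion model, state = [x, y, vx, vy]
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],          # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                 # process noise (tuning assumption)
R = np.eye(2) * 2.0                  # measurement noise, pixels^2

def kf_step(x, P, z):
    """One predict/update cycle for a single tracked touchpoint."""
    x = F @ x                            # predict state
    P = F @ P @ F.T + Q                  # predict covariance
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y                        # corrected state
    P = (np.eye(4) - K @ H) @ P          # corrected covariance
    return x, P

# Track a touch moving right at 60 px/s, starting with uncertain velocity.
x = np.array([100.0, 50.0, 0.0, 0.0])
P = np.diag([10.0, 10.0, 500.0, 500.0])
for k in range(1, 31):
    z = np.array([100.0 + 60.0 * k * DT, 50.0])
    x, P = kf_step(x, P, z)
print(x)  # position approaches (130, 50) and vx approaches 60
```

In a full tracker, the innovation covariance S also drives data association (deciding which measurement belongs to which track) and the add/remove logic the abstract mentions.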
Z-touch: an infrastructure for 3d gesture interaction in the proximity of tabletop surfaces (pp. 91-94)
  Yoshiki Takeoka; Takashi Miyaki; Jun Rekimoto
Sensing the depth (distance from the surface) of fingers/hands near a tabletop is very important. It allows us to use three-dimensional (3D) gesture interaction in multi-touch applications as we do in the real world. We introduce Z-touch, a multi-touch table that can sense the approximate postures of fingers or hands in the proximity of the tabletop's surface. Z-touch uses a vision-based posture sensing system. Multilayered infrared (IR) laser planes are synchronized with shutter signals from a high-speed camera, which captures each layer of the laser images. A depth map is obtained from the captured images. Our prototype works at 30 fps. Z-touch uses not only the finger/hand contact points but also the angle of the hovering fingers. The interaction with finger angles is unique and allows users to control multiple parameters by using a single finger. In this study, we introduce the principle of the finger detection method and its applications (e.g., drawing, map zooming viewer, Bezier curve control).

Applications

Switch: exploring the design of application and configuration switching at tabletops (pp. 95-104)
  Christopher James Ackad; Anthony Collins; Judy Kay
In all but the purest appliance interfaces, users need some of the fundamental core facilities for general computing interface elements: to change applications; change the files the application uses; and control which interface elements are present on the table. While these facilities have been refined for desktops, the particular affordances and limitations of tabletops call for a rethink of the interfaces for these actions. We describe the design process for Switch, which supports the core functions of application and configuration switching at an interactive tabletop. We began with several low-fidelity prototypes, evaluating and narrowing these to a refined set of four. We then evaluated each of these using a Heuristic Evaluation with 4 experts and a Cognitive Walkthrough with 5 experts. From this, we created the final Switch design, which we evaluated for usability with a think-aloud study with 8 users. We conclude that Switch is easy to learn and use for the core facilities for general computing. We reflect on lessons learnt and directions for the future. Our key contributions are the exploration of user interface support for a set of the most fundamental core facilities for general computing at tabletops, our use of these to design the Switch tool, and our usability evaluation of Switch, providing a foundation for the design of the core user interface elements that will enable people to make flexible use of tabletops.
Touching the depths: introducing tabletop interaction to reservoir engineering (pp. 105-108)
  Nicole Sultanum; Ehud Sharlin; Mario Costa Sousa; Daniel N. Miranda-Filho; Rob Eastick
Modern reservoir engineering is dependent on 3D visualization tools. However, as we argue in this paper, the current tools used in this domain are not completely aligned with the reservoir engineer's interactive needs, and do not address fundamental user issues, such as collaboration. We base our work on a set of observations of reservoir engineers, and their unique interactive tasks and needs. We present insightful knowledge of the domain, and follow with a prototype for an interactive reservoir visualization system, on the Microsoft Surface. We conclude by presenting a design critique we performed using our prototype, and reflecting on the impact we believe tabletop interaction will have on the domain of reservoir engineering.
WaveWindow: public, performative gestural interaction (pp. 109-112)
  Mark Perry; Steve Beckett; Kenton O'Hara; Sriram Subramanian
Retail products are often experienced through transparent barriers such as shop windows, vending machines or display cabinets. Such surfaces offer opportunities for digital augmentation to enhance the experience at this point of contact. To explore this domain and its challenges, we have developed and evaluated the WaveWindow, an interactive see-through display that allows users to interact with digital content overlaying physical items behind a semi-transparent screen. Navigating and selecting content is achieved by waving and knocking on the display. We performed a user study in which the resulting user interactions were recorded and analysed, and we make a number of design recommendations for gestural interaction in public settings and its application in a retail setting.
A set of multi-touch graph interaction techniques (pp. 113-116)
  Sebastian Schmidt; Miguel A. Nacenta; Raimund Dachselt; Sheelagh Carpendale
Interactive node-link diagrams are useful for describing and exploring data relationships in many domains such as network analysis and transportation planning. We describe a multi-touch interaction technique set (IT set) that focuses on edge interactions for node-link diagrams. The set includes five techniques (TouchPlucking, TouchPinning, TouchStrumming, TouchBundling and PushLens) and provides the flexibility to combine them in either sequential or simultaneous actions in order to address edge congestion.
A multi-touch tabletop for robust multimedia interaction in museums (pp. 117-120)
  Nuno Correia; Tarquínio Mota; Rui Nóbrega; Luís Silva; Andreia Almeida
The introduction of interaction technology in museum settings requires special care in different aspects including the relations with the different participants (public, artists, and curators), and the ability of the technology to provide rewarding experiences over extended periods of time for a demanding audience. This paper describes a hardware and software multi-touch table deployed in a contemporary art exhibition with many visitors. The design requirements are discussed and presented along with the development process. The paper also focuses on the different interaction and navigation mechanisms available and how they are used by the visitors and participants.

Context 2

Proxemic interaction: designing for a proximity and orientation-aware environment (pp. 121-130)
  Till Ballendat; Nicolai Marquardt; Saul Greenberg
In the everyday world, much of what we do is dictated by how we interpret spatial relationships, or proxemics. What is surprising is how little proxemics are used to mediate people's interactions with surrounding digital devices. We imagine proxemic interaction as devices with fine-grained knowledge of nearby people and other devices -- their position, identity, movement, and orientation -- and how such knowledge can be exploited to design interaction techniques. In particular, we show how proxemics can: regulate implicit and explicit interaction; trigger such interactions by continuous movement or by movement of people and devices in and out of discrete proxemic regions; mediate simultaneous interaction of multiple people; and interpret and exploit people's directed attention to other people and objects. We illustrate these concepts through an interactive media player running on a vertical surface that reacts to the approach, identity, movement and orientation of people and their personal devices.

Interactions

IdLenses: dynamic personal areas on shared surfaces (pp. 131-134)
  Dominik Schmidt; Ming Ki Chong; Hans Gellersen
IdLenses is a novel interaction concept to realize user-aware interfaces on shared surfaces. Users summon virtual lenses which allow for personalized input and output. The ability to create a lens instantaneously anywhere on the surface, and to move it around freely, enables users to fluidly control which part of their input is identifiable, and which shall remain anonymous. In this paper, we introduce the IdLenses concept and its interaction characteristics. Further, we discuss how it enables the personalization of input and output on shared surfaces.
FlashLight: optical communication between mobile phones and interactive tabletops (pp. 135-138)
  Tobias Hesselmann; Niels Henze; Susanne Boll
Mobile phones can be used as mediators between users and interactive tabletops in several scenarios, including authentication and the sharing of information. Existing radio-based methods such as WiFi or Bluetooth offer a high-speed communication channel, but have serious limitations regarding the tabletop-phone-human interaction. They are not able to locate mobile phones placed on the surface, often require fairly complex coupling procedures for establishing connections, and are potentially vulnerable to eavesdropping attacks. In this paper, we present a method for establishing a bidirectional communication channel between mobile phones and vision-based interactive surfaces utilizing the built-in flash-light and camera of mobile phones and the screen and camera of vision-based tabletops. We establish an entirely visual, secure and bidirectional communication channel at a speed superior to previous vision-based approaches, enabling users to establish connections and transfer data to and from interactive surfaces using ordinary out-of-the-box hardware.
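The kind of optical channel this abstract describes -- bits carried by flash on/off states and sampled by a camera -- can be sketched with simple on/off keying (the framing scheme, rates, and start marker below are our illustration; the paper's actual protocol is not specified here):

```python
def encode(data: bytes, frame_rate=30, bit_rate=10):
    """Expand each bit into repeated on/off flash states (one entry
    per camera frame), preceded by a start marker so the receiver
    can synchronise."""
    reps = frame_rate // bit_rate
    bits = [1, 1, 1, 0]  # start marker: 'on' burst, then a gap
    for byte in data:
        for i in range(8):
            bits.append((byte >> (7 - i)) & 1)
    return [b for b in bits for _ in range(reps)]

def decode(frames, frame_rate=30, bit_rate=10):
    """Sample the middle frame of each bit period, check the start
    marker, and reassemble the payload bytes."""
    reps = frame_rate // bit_rate
    bits = [frames[i] for i in range(reps // 2, len(frames), reps)]
    assert bits[:4] == [1, 1, 1, 0], "missing start marker"
    payload = bits[4:]
    out = bytearray()
    for i in range(0, len(payload) - 7, 8):
        byte = 0
        for b in payload[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

frames = encode(b"hi")
print(decode(frames))  # b'hi'
```

Even this toy version shows why such channels are slow: at 30 camera frames per second and 3 frames per bit, the payload rate is about 10 bit/s, which is why the paper's contribution of a faster visual channel matters.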
What caused that touch?: expressive interaction with a surface through fiduciary-tagged gloves (pp. 139-142)
  Nicolai Marquardt; Johannes Kiemer; Saul Greenberg
The hand has incredible potential as an expressive input device. Yet most touch technologies imprecisely recognize limited hand parts (if at all), usually by inferring the hand part from the touch shapes. We introduce the fiduciary-tagged glove as a reliable, inexpensive, and very expressive way to gather input about many parts of a hand (fingertips, knuckles, palms, sides, backs of the hand) and to discriminate between one person's and multiple people's hands. Examples illustrate the interaction power gained by being able to identify and exploit these various hand parts.
Sketched menus and iconic gestures, techniques designed in the context of shareable interfaces (pp. 143-146)
  Mohammed Belatar; François Coldefy
Suppose a user is interacting with other persons around a digital tabletop or in front of a digital wall, and wants to launch a new graphical component or an application in the part of the screen next to him. Traditional methods such as popup menus let him first open the application and afterwards move, resize and orient the component appropriately. Meanwhile, the component may cover some objects the other users are looking at or interacting with. How can this disruption of the other users' activity be avoided?
   This paper describes menu techniques for adding a new user's interface object on a shared device while preserving mutual awareness of the participants without disturbing them in their interaction. We present Sketched Menu, Abbreviated Sketched Menu and Iconic Gestures. These techniques let a user specify the shape, the size, the location and the orientation of the desired object before its creation.
   Sketched menus and iconic gestures preserve mutual awareness. These techniques allow both adaptation of a user to the current context (the actions of the other users and the spatial arrangement of the objects on the tabletop) and awareness by the others of the foreseen action and the related space claim.
Multitouch puppetry: creating coordinated 3D motion for an articulated arm (pp. 147-156)
  Michael Kipp; Quan Nguyen
Controlling a high-dimensional structure like a 3D humanoid skeleton is a challenging task. Intuitive interfaces that allow non-experts to perform character animation with standard input devices would open up many possibilities. Therefore, we propose a novel multitouch interface for simultaneously controlling the many degrees of freedom of a human arm. We combine standard multitouch techniques and a morph map into a bimanual interface, and evaluate this interface in a three-layered user study with repeated interactions. The multitouch interface was found to be as easy to learn as the mouse interface while outperforming it in terms of coordination. For the analysis, we propose a novel quantity-based coordination measure. For the systematic exploration of the design space, we suggest using dataflow diagrams. Our results show that even complex multitouch interfaces can be easy to learn and that our interface allows non-experts to produce highly coordinated arm-hand animations with subtle timing.

Physical, tangible, virtual

Tangible views for information visualization (pp. 157-166)
  Martin Spindler; Christian Tominski; Heidrun Schumann; Raimund Dachselt
In information visualization, interaction is commonly carried out by using traditional input devices, and visual feedback is usually given on desktop displays. By contrast, recent advances in interactive surface technology suggest combining interaction and display functionality in a single device for a more direct interaction. With our work, we contribute to the seamless integration of interaction and display devices and introduce new ways of visualizing and directly interacting with information. Rather than restricting the interaction to the display surface alone, we explicitly use the physical three-dimensional space above it for natural interaction with multiple displays. For this purpose, we introduce tangible views as spatially aware lightweight displays that can be interacted with by moving them through the physical space on or above a tabletop display's surface. Tracking the 3D movement of tangible views allows us to control various parameters of a visualization with more degrees of freedom. Tangible views also facilitate making multiple -- previously virtual -- views physically "graspable". In this paper, we introduce a number of interaction and visualization patterns for tangible views that constitute the vocabulary for performing a variety of common visualization tasks. Several implemented case studies demonstrate the usefulness of tangible views for widely used information visualization approaches and suggest the high potential of this novel approach to support interaction with complex visualizations.
Physical and digital media usage patterns on interactive tabletop surfaces (pp. 167-176)
  Jürgen Steimle; Mohammadreza Khalilbeigi; Max Mühlhäuser; James D. Hollan
Concurrent interaction with physical and digital media is ubiquitous in knowledge work. Although tabletop systems increasingly support activities involving both physical and digital media, patterns of use have not been systematically assessed. This paper contributes the results of a study of spatial usage patterns when physical and digital items are grouped and sorted on a tabletop work surface. In addition, analysis reveals a dual character of occlusion, involving both inconvenient and desirable aspects. We conclude with design implications for hybrid tabletop systems.
Hybrid documents ease text corpus analysis for literary scholars (pp. 177-186)
  Stephan Deininghaus; Max Möllers; Moritz Wittenhagen; Jan Borchers
We present a study that explores how literary scholars interact with physical and digital documents in their daily work. Motivated by findings from this study, we propose refactoring the working environment of our target audience to improve the integration of digital material into established paper-centric processes. This is largely facilitated through the use of hybrid documents, i.e., cross-modal compound documents that employ a printed book for rich, tangible interaction in tandem with a digital component for matching interactive augmentation on a digital workbench. The results from two user studies in which we evaluated increasingly detailed prototypes demonstrate that this design offers better support for central workflows in literary studies than currently prevalent approaches.

Teaching & learning

Towards a teacher-centric approach for multi-touch surfaces in classrooms (pp. 187-196)
  Iyad AlAgha; Andrew Hatch; Linxiao Ma; Liz Burd
The potential of tabletops to enable simultaneous interaction and face-to-face collaboration can provide novel learning opportunities. Despite significant research in the area of collaborative learning around tabletops, little attention has been paid to the integration of multi-touch surfaces into classroom layouts and how to employ this technology to facilitate teacher-learner dialogue and teacher-led activities across multi-touch surfaces. While most existing techniques focus on the collaboration between learners, this work aims to gain a better understanding of practical challenges that need to be considered when integrating multi-touch surfaces into classrooms. It presents a multi-touch interaction technique, called TablePortal, which enables teachers to manage and monitor collaborative learning on students' tables. Early observations of using the proposed technique within a novel classroom consisting of networked multi-touch surfaces are discussed. The aim was to explore the extent to which our design choices facilitate teacher-learner dialogue and assist the management of classroom activity.
Digital mysteries: designing for learning at the tabletop (pp. 197-206)
  Ahmed Kharrufa; David Leat; Patrick Olivier
We present the iterative design, implementation, and validation of a collaborative learning application for school children designed for a digital tabletop. Digital Mysteries is based on the paper-based "mysteries" learning technique. Our work is distinctive in that the design process, the design choices, and the implementation framework are all grounded in theories of both collaborative interaction and learning. Our hypothesis was that, if well utilized, the digital tabletop's unique affordances would allow for the creation of collaborative learning tools that were better than traditional paper- or computer-based tools. The two main design goals for the digital version are supporting externalization of thinking and higher-level thinking skills. The evaluation of the final version provided evidence that use of the application increases the probability that effective learning mechanisms will occur and encourages higher-level thinking through reflection. We conclude the paper with design guidelines for tabletop collaborative learning applications.
Collaborative concept mapping at the tabletop (pp. 207-210)
  Roberto Martínez Maldonado; Judy Kay; Kalina Yacef
Concept mapping is a technique where users externalise their conceptual and propositional knowledge of a domain in a way that can be readily understood by others. It is widely used in education, so that a learner's understanding is made available to their peers and to teachers. There is considerable potential educational benefit in collaborative concept mapping, and the tabletop is an ideal tool for this. This paper describes Cmate, a tabletop collaborative concept mapping system. We describe its design process and how this draws upon both the principles of concept mapping and on those for creating educational applications on tabletops.

Information visualization

OA-graphs: orientation agnostic graphs for improving the legibility of charts on horizontal displays (pp. 211-220)
  Fouad Alallah; Dean Jin; Pourang Irani
Horizontal displays are emerging as a standard platform for engaging participants in collaborative tasks. Little is known about how groups of people view visualizations in these collaborative settings. Several techniques have been proposed to assist, such as duplicating or reorienting the visual displays. However, when visualizations compete for pixels on the display, prior solutions do not work effectively. We first ran an experiment to identify whether orientation on horizontal displays impacts the legibility of simple visualizations such as charts. The results reveal that users are best at reading a chart when it is right side up, taking them 20% less time to read than when it is upside down. This insight led us to develop the Orientation Agnostic Graph (OA-Graph), which uses a radial layout designed to be legible regardless of orientation. In a second experiment we found that users can read OA-Graphs better than upside-down graphs, but less well than traditional graphs presented right side up. The design of our novel visualization, informed by radial visualization methods, will assist designers in developing charts that are not easily affected by user orientation, an issue that is prevalent in collaborative tabletop systems. Certain tasks such as observing relative differences can benefit from OA-Graphs.
Integrating 2D mouse emulation with 3D manipulation for visualizations on a multi-touch table BIBAFull-Text 221-230
  Luc Vlaming; Christopher Collins; Mark Hancock; Miguel Nacenta; Tobias Isenberg; Sheelagh Carpendale
We present the Rizzo, a multi-touch virtual mouse that has been designed to provide the fine grained interaction for information visualization on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch environment was difficult because the mouse emulation of touch surfaces is often insufficient to provide full information visualization functionality. We present a unified design, combining many Rizzos that have been designed not only to provide mouse capabilities but also to act as zoomable lenses that make precise information access feasible. The Rizzos and the information visualizations all exist within a touch-enabled 3D window management system. Our approach permits touch interaction with both the 3D windowing environment as well as with the contents of the individual windows contained therein. We describe an implementation of our technique that augments the VisLink 3D visualization environment to demonstrate how to enable multi-touch capabilities on all visualizations written with the popular prefuse visualization toolkit.
Hugin: a framework for awareness and coordination in mixed-presence collaborative information visualization BIBAFull-Text 231-240
  KyungTae Kim; Waqas Javed; Cary Williams; Niklas Elmqvist; Pourang Irani
Analysts are increasingly encountering datasets that are larger and more complex than ever before. Effectively exploring such datasets requires collaboration between multiple analysts, who more often than not are distributed in time or in space. Mixed-presence groupware provides a shared workspace medium that supports this combination of co-located and distributed collaboration. However, collaborative visualization systems for such distributed settings have their own costs and are still uncommon in the visualization community. We present Hugin, a novel layer-based graphical framework for this kind of mixed-presence synchronous collaborative visualization over digital tabletop displays. The design of the framework focuses on issues such as awareness and access control, while using information visualization for collaborative data exploration on network-connected tabletops. To validate the usefulness of the framework, we also present examples of how Hugin can be used to implement new visualizations supporting these collaborative mechanisms.

Posters

A slim tabletop interface based on high resolution LCD screens with multiple cameras BIBAFull-Text 241-242
  YoungSeok Ahn; HyungSeok Kim; Mingue Lim; Jun Lee; Jee-In Kim
We propose a slim tabletop interface based on high resolution LCD screens with multiple cameras. In order to perceive multi-touch inputs, we adopt the FTIR (Frustrated Total Internal Reflection) method. Dual LCD panels are used to construct a slim and wide output screen of the tabletop interface. The height of the tabletop interface is 20 cm, and dual cameras with dual mirrors are used. Our proposed interface is more flexible and more extensible compared to previous tabletop interfaces.
A collaborative touch-based newspaper editor concept BIBAFull-Text 243-244
  Simon Bergweiler; Matthieu Deru; Alassane Ndiaye
The digital workflow of producing a newspaper or a magazine involves several tasks. While professional solutions like QuarkXpress or Adobe InDesign combined with InCopy allow only one person to modify a document at a time, we have developed a concept, in the form of a prototype, that helps journalists and the editorial staff work collaboratively on a touch-based table with a new approach to pagination and text editing through multitouch gestures.
Integrating a multitouch kiosk system with mobile devices and multimodal interaction BIBAFull-Text 245-246
  Simon Bergweiler; Matthieu Deru; Daniel Porta
We present Calisto, a service-oriented information kiosk system for public places, like museums or hotel lobbies. Calisto supports collaboration between multiple users. They can connect their mobile devices to the large public terminal and share interesting facts and media contents via intuitive multimodal interaction. The novel contribution of our work is a seamless combination of a touch-based kiosk system and mobile devices for accessing heterogeneous information services.
Usage of multimodal maps for blind people: why and how BIBAFull-Text 247-248
  Anke Brock; Philippe Truillet; Bernard Oriola; Christophe Jouffrais
Multimodal interactive maps are a solution for providing the blind with access to geographic information. Current projects use a tactile map set down on a monotouch display with additional sound output. In our current project we investigated the usage of multitouch displays for this purpose. In this paper, we outline our requirements concerning the appropriate multitouch tactile device and we present a first prototype. We conclude with propositions for future work.
From digital to physical: learning physical computing on interactive surfaces BIBAFull-Text 249-250
  Bettina Conradi; Martin Hommer; Robert Kowalski
We want to investigate the benefits of interactive surfaces for guiding novices in the learning process of physical computing. As interactive surfaces can augment physical objects with digital information, we utilize them to foster the learning experience in three ways: (1) by supporting multiple users for collaborative learning, (2) by providing in-place knowledge mediation about physical prototyping and electronics, and (3) by digitally guiding users from exploring physical components to creating a physical prototype.
Surface-poker: multimodality in tabletop games BIBAFull-Text 251-252
  Chi Tai Dang; Elisabeth André
Multimodal interaction and face-to-face communication between players are aspects of traditional board games that contribute to their popularity. Such aspects are also typical of digital tabletop games, but in addition, digital systems allow for a higher level of multimodality by utilizing novel interaction devices or physiological input data. In this paper, we describe a multimodal tabletop Poker game that makes use of both additional interaction devices and physiological input data. We outline the tabletop game, the interaction modalities involved, and observations of players.
Applying bimanual interaction principles to text input on multi-touch surfaces and tabletops BIBAFull-Text 253-254
  Liam Don; Shamus P. Smith
Multi-touch surfaces and tabletops present new challenges and possibilities for text input. By basing designs on established theoretical models of bimanual interaction, it is possible to evaluate the best choice of bimanual technique for a novel form of text input. As a first step, we propose an asymmetric bimanual text entry method for the purpose of evaluation. Early results indicate that text entry performance improves more quickly using the novel method, while overall speed is very similar.
A multi-touch alignment guide for interactive displays BIBAFull-Text 255-256
  Mathias Frisch; Ricardo Langner; Sebastian Kleinau; Raimund Dachselt
Precise alignment of graphical objects and creation of proper layouts are crucial in many domains, such as graphic design or graph editing. In this paper we present a multi-touch alignment guide for interactive displays. It allows adjusting the alignment and spacing of graphical objects through multi-touch input and bimanual interaction.
Can "touch" get annoying? BIBAFull-Text 257-258
  Jens Gerken; Hans-Christian Jetter; Toni Schmidt; Harald Reiterer
While touch interaction with tabletops is now widely accepted as a very natural and intuitive form of input, little research has been carried out to understand whether and how it might interfere with our natural ways of gestural communication. This poster presents a study that aims at understanding the importance of touching physical and virtual artifacts during discussion or collaboration around a table. Furthermore, it focuses on how users compensate for conflicts between non-interactivity and interactivity created by unintended touch interaction when using a multi-touch enabled tabletop. In our study, we asked participants to explain illustrations of technical or physical mechanisms, such as the workings of an airplane wing. We observed whether and how they used gestures to do so on a touch-sensitive Microsoft Surface tabletop and on a sheet of paper. Our results suggest that touching is an essential part of such an activity and that the compensation strategies people adopt to avoid conflicts may reduce the precision of communication and increase the physical strain on the user.
Collaborative sketching with distributed displays and multimodal interfaces BIBAFull-Text 259-260
  Florian Geyer; Hans-Christian Jetter; Ulrike Pfeil; Harald Reiterer
In this paper we describe a system design for supporting creative group activities using distributed displays and multimodal interaction. We describe the rationale behind our approach and proposed interaction techniques for supporting collaborative sketching. Our goal is to understand how the interface metaphor, display space and interaction modalities may influence exploration and communication in collaborative settings.
Supporting creativity workshops with interactive tabletops and digital pen and paper BIBAFull-Text 261-262
  Florian Geyer; Daniel Klinkhammer; Harald Reiterer
In this paper we report our findings from an exploratory design study using a combination of an interactive tabletop and digital pen & paper technology during a full-day creativity workshop with creative professionals. We describe the applied creativity technique, the system design and the employed interaction techniques. The preliminary results of our study show that this combination of interaction modalities introduces a rich design space for creativity support systems and informal design tools.
ScatterTouch: a multi touch rubber sheet scatter plot visualization for co-located data exploration BIBAFull-Text 263-264
  Mathias Heilig; Stephan Huber; Mischa Demarmels; Harald Reiterer
This paper introduces a touch-sensitive two-dimensional scatter plot visualization to explore and analyze movie data. The design focuses on enabling several co-located users to work together. Users can create multiple focus regions through distortion techniques triggered by multi-touch gestures. Furthermore, the introduced visualization is an example of how promising concepts from InfoVis research can be transferred onto multi-touch tables in order to offer more natural interaction.
SCiVA: a design process for applications on interactive surfaces BIBAFull-Text 265-266
  Tobias Hesselmann; Susanne Boll
When creating user interfaces for interactive surfaces, developers and researchers are confronted with the question of what design processes can be applied to build an efficiently usable system, as existing techniques often do not sufficiently consider the special constraints and requirements of surface computers. In this poster, we present SCiVA, a 5-step iterative process for designing gesture-based, visual interfaces for interactive surfaces. We identify crucial steps in the development and suggest user-centric methods that can be applied to the respective steps.
Medical education on an interactive surface BIBAFull-Text 267-268
  Maria Kaschny; Sandra Buron; Ulrich von Zadow; Kai Sostmann
We present initial results of SimMed, an ongoing interdisciplinary project on the use of interactive tables in medical education. The project is motivated by the need to combine theoretical knowledge with practice in medical education and by the time-consuming task of finding appropriate patients for teaching. Medical students can interact realistically with a virtual patient displayed on the interactive table to diagnose and cure illnesses. The project is still under development.
A language to define multi-touch interactions BIBAFull-Text 269-270
  Shahedul Huq Khandkar; Frank Maurer
Touch has become a common interface for human-computer interaction. Devices from portable hand-held smart phones to tabletops, large displays, and even systems that project onto arbitrary surfaces support touch input. Ultimately, however, it is applications that make these technologies meaningful to people. Incorporating a touch interface into an application requires translating meaningful touches into system-recognizable events. This process often involves complex implementations that are sometimes hard to fine-tune. Due to the lack of higher-level frameworks, developers often end up writing code from scratch to implement touch interactions in their applications. To address this, we present a domain-specific language for defining multi-touch interactions that hides the low-level implementation complexities from application developers. This allows them to focus on designing touch interactions that are natural and meaningful to the application context without worrying about implementation complexities.
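The paper's language itself is not reproduced in the abstract; as a minimal sketch of the general idea only — declaring a gesture by high-level conditions rather than hand-written event-stream code — something like the following could stand in (all function and field names are hypothetical, not from the paper):

```python
# Hypothetical gesture-definition layer: a gesture is declared by
# declarative conditions (touch count, minimum travel distance),
# and a matcher maps observed touch paths to gesture names.
GESTURES = {}

def define_gesture(name, touches, min_travel=0.0):
    """Register a gesture by its declarative conditions."""
    GESTURES[name] = {"touches": touches, "min_travel": min_travel}

def recognize(touch_paths):
    """Match a list of touch paths (each a list of (x, y) points)
    against the registered definitions; return matching names."""
    def travel(path):
        return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(path, path[1:]))
    return [name for name, g in GESTURES.items()
            if len(touch_paths) == g["touches"]
            and all(travel(p) >= g["min_travel"] for p in touch_paths)]
```

An application would declare, say, a two-finger drag once and receive it as a single named event, rather than tracking raw touch points itself.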
TouchBridge: augmenting active tangibles for camera-based multi-touch surfaces BIBAFull-Text 271-272
  Cassim Ladha; Karim Ladha; Jonathan Hook; Daniel Jackson; Gavin Wood; Patrick Olivier
The augmentation of tabletop multi-touch surfaces with physical objects is a well-visited approach to the realisation of tangible user interfaces. Interaction with objects on a multi-touch surface is often limited to spatial manipulation, and common marker-based techniques for object tracking typically provide little more than position and orientation information for the objects. We present a novel, low-cost approach for tracking objects on a camera-based multi-touch surface. Our approach utilises modulated infrared light to provide a bi-directional communication channel between objects and the surface, and thereby presents the opportunity for much richer forms of interaction with physical objects.
PhysicsBox: playful educational tabletop games BIBAFull-Text 273-274
  Ricardo Langner; John Brosz; Raimund Dachselt; Sheelagh Carpendale
We present PhysicsBox, a collection of three multi-touch, physics-based, educational games. These games, based on concepts from elementary science have been designed to provide teachers with tools to enrich lessons and support experimentation. PhysicsBox combines two current trends, the introduction of multi-touch tabletops into classrooms and research on the use of simulated physics in tabletop applications. We also provide a Java library that supports hardware independent multi-touch event handling for several tabletops.
Multitouch navigation in zoomable user interfaces for large diagrams BIBAFull-Text 275-276
  Dionysios Marinos; Chris Geiger; Tobias Schwirten; Sebastian Göbel
In this work we present a zoomable user interface (ZUI) to navigate in a large hierarchical graph using two multitouch systems. The first system is implemented using the radar-TOUCH device [6], a sensor device based on laser technology that detects hand interaction on or in front of large planar surfaces. The second system uses a multitouch table [2] that allows both for finger and pen interaction. The software running on the table serves as a navigation and analysis tool. Both systems use the same renderer to visualize a very large graph based diagram on a large vertical surface.
IdWristbands: IR-based user identification on multi-touch surfaces BIBAFull-Text 277-278
  Tobias Meyer; Dominik Schmidt
Multi-touch surfaces are predestined to be used by many users at the same time. However, most systems are not able to distinguish the touches of different users. This paper presents ongoing work on using wristbands equipped with infrared LEDs to provide touches with identities. The LEDs transmit coded light pulses as an identifier. We use a specific blinking pattern to determine the orientation of the wristband in order to reliably associate corresponding touches.
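The abstract does not specify the coding scheme; as a simplified stand-in for the idea of coded light pulses, a decoder might sample the LED's on/off state once per camera frame, look for a start marker, and read a fixed number of ID bits (marker shape, bit length, and framing here are assumptions, not the paper's scheme):

```python
def decode_id(samples, bits=4):
    """Decode a wristband ID from per-frame LED observations.

    `samples` is a list of booleans (LED seen on/off in successive
    camera frames), assuming one bit per frame. A start marker of
    two consecutive 'on' frames is followed by `bits` ID bits,
    most significant bit first. Returns None if no complete ID
    is found in the sample window.
    """
    for i in range(len(samples) - 1):
        if samples[i] and samples[i + 1]:          # start marker found
            payload = samples[i + 2:i + 2 + bits]  # the ID bits
            if len(payload) == bits:
                return sum(b << (bits - 1 - k) for k, b in enumerate(payload))
    return None
```

In a real system the tracker would run such a decoder per tracked blob and attach the recovered ID to nearby touch points.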
Experiences in conceiving and prototyping a commercial business application using multi-touch technology BIBAFull-Text 279-280
  Claudia Nass; Kerstin Klöckner; Rudolf Klein; Hartmut Schmitt; Sarah Diefenbach
In this paper, we describe methods and challenges regarding the conception and prototypical implementation of a business application that uses multi-touch technology in its input device. We illustrate the use of a new conception and specification approach called DESIGNi and our experiences using the new Microsoft Expression Studio, which is specifically intended to support the development of commercial multi-touch software. We focused on the redesign of an application called "Graphical Knowledge Editor" (GKE), which is used to model processes and workflows of a call center. This system was evaluated in a study that showed how the users perceived the quality of the new software.
Bounsight table: a view-dependent display with a single front projection BIBAFull-Text 281-282
  Ryo Oshima; Yasuaki Kakehi
When we use a tabletop display, we need a display system that can show both shared and personalized information to each user around it. For this purpose, we think view-dependent displays, which can show different images according to the viewing direction, are suitable. So far, we have developed several types of view-dependent tabletop displays. However, the rear-projection approach we adopted imposed limitations on system size and shape. Here, we propose a novel view-dependent display that uses just a single overhead projector. In addition, it can show images on the surface of a physical object on the tabletop. In this paper, we describe the system design and an application of this system.
pPen: enabling authenticated pen and touch interaction on tabletop surfaces BIBAFull-Text 283-284
  Yongqiang Qin; Chun Yu; Hao Jiang; Chenjun Wu; Yuanchun Shi
This paper introduces pPen, a pressure-sensitive digital pen that enables precise pressure and touch input on vision-based interactive tabletops. With the help of pPen input and feature-matching technology, we implemented a novel method supporting multi-user authenticated interaction in the bimanual pen-and-touch scenario: login is performed simply by stroking one's signature with the pPen on the table; a binding between user and pPen is created at the same time, so that each subsequent pPen interaction command is differentiated by user. We also conducted laboratory user studies, which demonstrated the method's safety and its high resistance to shoulder surfing: in the evaluation procedure, no attacker was able to log into another user's workspace.
Selecting targets on large display with mobile pointer and touchscreen BIBAFull-Text 285-286
  Umar Rashid; Aaron Quigley; Jarmo Kauko
We present an empirical study that compares Zoom&Pick (ZP) and Semantic Snarfing (SS), two techniques for selecting targets on a large display using a mobile device. ZP uses a mobile pointer to zoom into the region of interest and select the targets on the large display. SS involves pointing at the large display to transfer a zoomed-in view of the pointed region onto the mobile touchscreen and make selections there. The experimental results indicate that SS outperforms ZP in terms of speed.
Player-defined configurable soft dialogues: an extensible input system for tabletop games BIBAFull-Text 287-288
  Anthony Savidis; Yannis Lilis
We present a reusable input system for tabletop games that relies on player-defined soft dialogues with touch-based input, minimizing the input-device management needs of tabletop games. Dialogues are defined in XML files, including graphical appearance, hot zones, and commands posted to the game. Soft dialogues can be interactively moved on the game terrain and support animation-based show/hide. Their contact surface with the game system is limited to: (i) registration of the necessary handlers for commands posted by dialogues; (ii) invocation of the dialogues' display function in the game rendering loop; and (iii) requests to open dialogues using logical ids.
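The XML schema used by the system is not given in the abstract; as an illustrative sketch only, a dialogue definition with hot zones and commands, and a loader for it, could look like this (element and attribute names are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical soft-dialogue definition: a dialogue with a background
# image and hot zones, each posting a named command to the game.
DIALOG_XML = """
<dialogue id="weapon_select" image="weapons.png">
  <hotzone x="10" y="10" w="64" h="64" command="select_sword"/>
  <hotzone x="84" y="10" w="64" h="64" command="select_bow"/>
</dialogue>
"""

def load_dialogue(xml_text):
    """Parse a dialogue definition into (id, image, hot zones)."""
    root = ET.fromstring(xml_text)
    zones = [{"rect": tuple(int(z.get(a)) for a in ("x", "y", "w", "h")),
              "command": z.get("command")}
             for z in root.findall("hotzone")]
    return root.get("id"), root.get("image"), zones
```

The game would then register a handler per command name and ask the input system to open dialogues by their logical id, matching the three contact points listed above.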
eGrid: supporting the control room operation of a utility company with multi-touch tables BIBAFull-Text 289-290
  Elaf Selim; Frank Maurer
This work presents eGrid, a software environment designed for utility companies to enable control center team members to collaborate on their daily tasks of analyzing and managing the electrical grid of a city. Despite recent advances in geospatial data analysis applications, there is still a need for innovative applications that facilitate collaboration among control center team members in discussing problems and finding solutions, eliminating the headache of frequently synchronizing paper maps to reflect the daily changes made to electricity circuits. eGrid utilizes multi-touch digital tabletop hardware to allow multiple users to interact concurrently with a domain-specific geographic information system (GIS) using finger touches and natural hand gestures. It is an advanced prototype of an actual application of tabletop technology in a non-trivial domain. The development of eGrid shows strong buy-in from the intended end-user community.
Towards making graphical user interface palettes tangible BIBAFull-Text 291-292
  Martin Spindler; Michel Hauschild; Raimund Dachselt
In this work, we present techniques that aim at physically detaching the graphical user interface (GUI) from digital content that is displayed on a tabletop. For this purpose, we extended a pen-enabled tabletop with several spatially aware (6DOF) paper-like mobile displays. In contrast to previous work, we provide a fully dynamic solution. This means the GUI is not printed onto paper displays, but instead is dynamically projected by a ceiling-mounted projector. Based on this technical foundation, we developed a prototypic graphics editor that we use as a basis for several novel interaction techniques that particularly utilize the height above the tabletop for accomplishing common tasks typical for the domain.
Some thoughts on a model of touch-sensitive surfaces BIBAFull-Text 293-294
  Raphael Wimmer
Interactive surfaces employ a variety of touch sensing technologies and implementations, each offering unique strengths and limitations. An important metric for touch-sensitive surfaces is their tracking performance. This poster abstract presents some thoughts on a generic model of touch sensors and touch-sensitive surfaces, in which a sensor is characterized by its range/precision dependency. This may make it possible to compare different touch-sensitive surfaces and to predict the performance of new sensor systems.
Development of eye-tracking tabletop interface for media art works BIBAFull-Text 295-296
  Michiya Yamamoto; Munehiro Komeda; Takashi Nagamatsu; Tomio Watanabe
A tabletop interface can enable interactions with images and real objects by using various sensors; therefore, such an interface can be applied for the creation of many artworks in the field of media arts. In this study, by focusing on gaze-and-touch interaction, we have proposed the concept of an eye-tracking tabletop interface (ETTI) as a new type of interaction interface for the creation of media artworks. Further, we have developed a prototype ETTI and an interactive art application, "Hyakunin-Eyesshu". Thus, we have paved the way for new surface interactions and a new variety of media artworks with precise gaze estimation.

Demos

Alternative multitouch gestures for map interaction BIBAFull-Text 297
  Eva Artinger; Martin Schanzenbach; Florian Echtler
Interaction with virtual maps is a common task on tabletop interfaces, particularly in the context of command-and-control applications. In nearly all cases, widely known gestures such as pinch-to-zoom are employed. To explore alternatives and variations of this mode of interaction, we have defined five alternative gesture sets for the tasks of modifying the map view and selecting map objects in an emergency management scenario. In our demo we will present an interactive map on a multitouch table, which provides an overview in an emergency situation. To interact with the map a variety of gestures can be explored by the conference attendees.
e-science on the surface BIBAFull-Text 298
  Tom Bartindale; Jared Jackson; Patrick Olivier
Large amounts of computing power and storage are often needed to facilitate e-science experiments, and much research has gone into providing software and hardware for these high-end needs. Once datasets are produced by these systems, few tools exist to comprehensively help researchers analyze, share and publish their findings and conclusions. We demonstrate a tool developed to allow the annotation, visualization and sharing of large related data sets. Based around tangible objects, this system demonstrates a number of novel interactions for scientific research.
lumino: tangible building blocks based on glass fiber bundles: invited demo BIBAFull-Text 299
  Patrick Baudisch; Torsten Becker; Frederik Rudeck
We present luminos, tangible building blocks that allow users to assemble physical 3D structures on a tabletop computer. All luminos are tracked using the table's built-in camera, including those luminos located on top of other luminos. To enable this, each lumino contains a glass fiber bundle that allows the camera to "see through it". Luminos thereby extend the concept of fiducial markers commonly used with tabletop computers to the third dimension. Yet they preserve many of the benefits of regular tabletop markers: luminos are unpowered, self-contained objects that require no calibration, making it easy to maintain a large number of them. The Construction Kit demo allows attendees to slip into the role of a (very simple) architect, while the table takes on the role of a "civil engineer". Attendees can try out different 3D constructions; at the same time, the table tracks what is being constructed. This allows the table to critique constructions and display piece lists and running totals of construction cost. To round things out, we will bring prototyping materials, such as plastic fibers, aluminum blocks, and tools, so that ITS attendees can get a sense of how to make their own luminos.
Multi-touch surface as input device BIBAFull-Text 300
  Andreas Dippon
In light of the rapid development of displays and multi-touch technologies, many workspaces could feature integrated multi-touch displays in the near future, so the possibility of using them as input devices for other computers needs to be explored. The idea is to replace many different input devices (e.g. keyboard, mouse, multi-touch pad) with a single multi-touch display. Furthermore, the display can be used as an additional monitor to show, for example, toolbars that can be directly manipulated through multi-touch gestures. In this demo, an adaptive keyboard (different layouts, shortcut icons) and a multi-touch pad, shown on a large-scale multi-touch device, can be used to control a standard Windows laptop. The device also works as a second monitor on which direct touch input can be used.
Interactive learning experience on nanotechnology BIBAFull-Text 301
  Sílvia Alcaraz Domínguez; Narcís Parés Burguès; Joan Mora Guiard
We present an Interactive Learning Experience on Nanotechnology in which users learn that gold nanoparticles can help cure cancer. This informal learning experience is based on a vertical, rear-projected screen of 2x1.5m (6.56x4.92 feet) and an IR reflection multi-touch system. The application shows simplified visuals of a cancer tumor at one end of a blood vessel. At the opposite end of the blood vessel there are ingredients to cure the tumor. Through gestures, users create a dose with one of the possible combinations of those ingredients and drag it through the blood vessel to the tumor. While dragging each dose, users can observe how it affects the body and the tumor. These observations provide clues on which ingredients should be part of the dose in order to eliminate the tumor completely. In this context, the main learning goal of the experience is achieved when users discover that such dose includes a gold nanoparticle. This experience is suited for a museum setting because it activates users while allowing them to learn at their own pace. In addition, it provides a comfortable, non-encumbered learning experience for up to four users, because they can interact with their hands while standing up.
TouchLab: the all-in-one multi-touch marketing tool BIBAFull-Text 302
  Peter Eschler; Wolfram Kresse; Sebastian Demmerle
The TouchLab is an all-in-one, ready-to-use marketing touch-table for presenting products, concepts or ideas in an interactive 3D-environment using sound, light, multi-touch technology and innovative gestures.
lzrdm: collaborative multi-touch sequencer BIBAFull-Text 303
  Marc René Frieß; Niklas Klügel; Georg Groh
In this demonstration video, we show how IT-based composition of electronic music can be supported by a collaborative application on a tabletop interface, mediating between single-user music composition tools and co-located collaborative music improvisation. We explain how to create the compositional structures, as well as how to add note events and synthesizers. We focus on both the general and the collaborative aspects of the application.
Editing and exploring node-link diagrams on pen- and multi-touch-operated tabletops BIBAFull-Text 304
  Mathias Frisch; Sebastian Schmidt; Jens Heydekorn; Miguel A. Nacenta; Raimund Dachselt; Sheelagh Carpendale
This project addresses the design of interaction techniques for the creation and manipulation of node-link diagrams on multi-touch and pen enabled displays. Analysis and creation of node-link diagrams is an important activity, and one that can benefit greatly from the enhanced interaction bandwidth and collaborative affordances of interactive tabletops. The applications that we will demonstrate implement a broad set of novel interaction techniques for editing and manipulating node-link diagrams. Some techniques have been implemented with hybrid input (pens + touch). They allow flexible creation and manipulation of diagram elements by sketching and structural editing. This includes connecting and copying nodes by bimanual input and changing types of edges by gestures. Other techniques are meant to support users in analyzing diagrams. For example, by strumming or bundling edges, it is easy to see what nodes are connected by the edges.
Waves: multi-touch VJ interface BIBAFull-Text 305
  Jonathan Hook; Patrick Olivier
Waves is a multi-touch interface for VJing (the live performance of visual media). The system allows performers to interact with a range of visual media by manipulating spline curves on an interactive surface. Interaction is highly visible to the audience and as such the contribution of the performer may be understood. Furthermore, the spline curves provide metonymical interaction, and as such give the performer the ability to perceive and directly manipulate the underlying properties of visual media.
MudPad: a tactile memory game BIBAFull-Text 306
  Yvonne Jansen; Thorsten Karrer; Jan Borchers
MudPad is a system capable of localized active haptic feedback on a touch screen. We use an array of electromagnets combined with an overlay containing magnetorheological (MR) fluid to actuate a tablet-sized area. As MudPad has a very low reaction time, it is able to produce instant multi-point feedback for touch input, ranging from static levels of surface softness to a broad set of dynamically changeable textures. Our system does not only convey global confirmative feedback on user input but allows the UI designer to enrich the whole interface with a tactile layer conveying local semantic information. This also allows users to explore the interface haptically.
Xpaaand: interacting with rollable displays BIBAFull-Text 307
  Mohammadreza Khalilbeigi; Roman Lissermann; Jan Riemann; Dima Burlak; Jürgen Steimle
We envision that future mobile devices will feature rollable displays, enabling dynamic form factors and new interaction techniques for interactive surfaces. We propose physical resizing of the display as a novel input technique in addition to established techniques such as touch or pen input. For this purpose we simulate rollable displays using a passive display approach. In this video, we introduce novel tangible interaction techniques for 1) viewport resizing; 2) navigation in hierarchies; 3) visual clipboards; 4) semantic 1D zoom within documents; 5) geometric 2D zoom for map navigation; and 6) switching between applications.
Demo for digital mysteries: designing for learning at the tabletop BIBAFull-Text 308
  Ahmed Kharrufa; Patrick Olivier; David Leat
We present Digital Mysteries, a collaborative learning application for school children designed for tabletops. It is based on the mysteries paper-based learning technique. Our work is distinctive in that the design process, the design choices, and the implementation framework are all grounded in theories of both collaborative interaction and learning. Our hypothesis was that, if well utilized, the digital tabletop's unique affordances would allow for the creation of collaborative learning tools that were better than traditional paper- or computer-based tools. The main design goals are supporting externalization of thinking and higher-level thinking skills, in addition to encouraging effective collaboration.
TouchBridge: augmenting active tangibles for camera-based multi-touch surfaces BIBAFull-Text 309
  Cassim Ladha; Karim Ladha; Jonathan Hook; Daniel Jackson; Patrick Olivier
The augmentation of tabletop multi-touch surfaces with physical objects is a well-visited approach for the realisation of tangible user interfaces. Interaction with objects on a multi-touch surface is often limited to spatial manipulation, and common marker-based techniques for object tracking typically provide little more than position and orientation information for the objects. We present a novel, low-cost approach for tracking objects on a camera-based multi-touch surface. Our approach utilises modulated infrared light to provide a bi-directional communication channel between objects and the surface, and thereby presents the opportunity for much richer forms of interaction with physical objects.
Comparing most recent 3D manipulation techniques for multi-touch displays BIBAFull-Text 310
  Anthony Martinet; Laurent Grisoni; Géry Casiez
Our demo's objective is to let the audience compare the efficiency of 3D manipulation techniques. The demo consists of building a 3D house made of simple pieces (roof, walls and floors) by translating and rotating each piece into the right location. We implemented techniques from the literature as well as a new technique. A physics engine handles collisions between elements to increase the realism of the scene.
Holocubtile: 3D multitouch brings the virtual world into the user's hands BIBAFull-Text 311
  Jean Baptiste De la Rivière; Nicolas Dittlo; Emmanuel Orvain; Cédric Kervégant; Mathieu Courtois
Multitouch tactile technology is mostly restricted to 2D interaction, while interaction with 3D virtual environments has been studied for many years; yet no single interface offers all the necessary degrees of freedom (DOF) in a sufficiently intuitive way. The Cubtile is a novel interaction device that brings the strengths of tactile input to 3D worlds by means of a multitouch tactile cube. Our demo combines this innovative 3D multitouch interface with an augmented-reality-like setup, based on a mirror, that brings the 3D object between the user's hands.
iliGHT 3D touch: a multiview multitouch surface for 3D content visualization and viewpoint sharing BIBAFull-Text 312
  Jean Baptiste De la Rivière; Nicolas Dittlo; Emmanuel Orvain; Cédric Kervégant; Mathieu Courtois; Toni Da Luz
Multitouch tables are known to be well suited to small-group collaboration. However, as soon as several people stand on different sides of the table, their viewpoints on the content differ, which is especially disturbing when 3D perspective projection is used. To offer a collaborative platform combining the strengths of collaborative tactile interaction and 3D immersive visualization, we developed a multitouch table that provides a stereo, head-tracked, immersive multiview 3D visualization to two users. This helps give access to 3D visualization on tabletops, but it also raises many issues, such as parallax management or depth conflicts between 3D stereo content and real hands, for which we propose initial solutions, demonstrated here at ITS 2010.
PhoneTouch: a technique for direct phone interaction on surfaces BIBAFull-Text 313
  Dominik Schmidt
PhoneTouch is a novel technique for the integration of mobile phones and interactive surfaces. The technique enables the use of phones to select targets on the surface by direct touch, facilitating, for instance, pick&drop-style transfer of objects between phone and surface. The technique is based on separate detection of phone touch events by the surface, which determines the location of the touch, and by the phone, which contributes device identity. Our current implementation uses the phones' internal microphones and the table's camera system for touch detection.
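The fusion step the abstract describes (surface knows *where*, phone knows *who*) amounts to pairing the two event streams by timestamp proximity. The following sketch illustrates that idea under stated assumptions: the pairing window, function names, and event shapes are hypothetical, not taken from the paper.

```python
# Hedged sketch of PhoneTouch-style event fusion: surface events carry a
# location, phone events carry a device identity, and the two streams are
# paired by closest timestamp within a tolerance window (all assumed).
MATCH_WINDOW = 0.05  # seconds; assumed pairing tolerance

def pair_events(surface_events, phone_events, window=MATCH_WINDOW):
    """Pair each surface touch (t, x, y) with the unused phone event
    (t, device_id) closest in time, yielding identified touches
    (device_id, x, y)."""
    pairs = []
    used = set()
    for (ts, x, y) in surface_events:
        best = None
        for i, (tp, dev) in enumerate(phone_events):
            if i in used:
                continue
            dt = abs(ts - tp)
            if dt <= window and (best is None or dt < best[0]):
                best = (dt, i, dev)
        if best is not None:
            used.add(best[1])
            pairs.append((best[2], x, y))
    return pairs

# Two phones touch the surface ~10 ms apart; each surface contact is
# matched to the phone event nearest in time.
surface = [(1.000, 120, 340), (1.012, 600, 200)]
phones = [(1.002, "phoneA"), (1.013, "phoneB")]
print(pair_events(surface, phones))  # [('phoneA', 120, 340), ('phoneB', 600, 200)]
```

Marking phone events as used once matched keeps two near-simultaneous phone touches from both claiming the same surface contact.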
radarTOUCH: multi-touch interaction on large planar surfaces BIBAFull-Text 314
  Tobias Schwirten; Chris Geiger; Dionysios Marinos
The radarTOUCH demo presents a sensor device designed to detect multitouch interaction on or in front of large planar surfaces. The device uses a rotating IR laser range finder to detect obstacles. The laser technology used in this project allows the device to scan large interaction spaces (up to 25 m) and even to provide touchless interaction in 2D space. A custom-built driver connects the device to any TUIO-based application or simulates mouse events (move, click, scroll) by means of gestures. We will present a set of selected application scenarios taken from the area of fair exhibitions and from design review presentations of mechatronic systems to illustrate the applicability of our sensor device.
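A rotating range finder reports obstacles as (angle, distance) pairs, so turning a scan into 2D touch points is a polar-to-Cartesian conversion followed by normalization to the interaction plane. This minimal sketch assumes a sensor at the top-left corner and an invented surface size; none of the names or dimensions come from the demo itself.

```python
import math

# Illustrative sketch of converting a radarTOUCH-style laser echo into a
# normalized 2D touch point. Surface dimensions and sensor placement are
# assumptions for the example, not specifications of the actual device.
SURFACE_W, SURFACE_H = 4.0, 2.5   # metres; assumed interaction area

def polar_to_touch(angle_deg, distance_m):
    """Convert one laser echo (beam angle, measured distance) into
    normalized (0..1) surface coordinates, with the sensor at the
    top-left corner sweeping across the plane."""
    a = math.radians(angle_deg)
    x = distance_m * math.cos(a)   # along the surface
    y = distance_m * math.sin(a)   # away from the sensor edge
    return x / SURFACE_W, y / SURFACE_H

# An obstacle 2 m away at a 30-degree beam angle:
x, y = polar_to_touch(30.0, 2.0)
print(round(x, 3), round(y, 3))
```

A real driver would additionally cluster adjacent echoes into blobs and track them across scans before emitting TUIO cursor events; the conversion above is only the geometric core.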
Novel fields of application for tangible displays above the tabletop BIBAFull-Text 315
  Martin Spindler; Christian Tominski; Michel Hauschild; Heidrun Schumann; Raimund Dachselt
In these demonstrations, we present novel uses for tangible magic lenses, i.e., spatially aware lightweight displays that can be moved through the physical 3D space on or above a tabletop. In the first demo, we demonstrate the usefulness of tangible views to support interaction with information visualizations, such as graph, matrix, and space-time-cube visualizations as well as scatter and parallel coordinate plots. Tangible views make multiple -- previously virtual -- views physically "graspable". In addition, by tracking the 3D movement of tangible views, we can control various visualization parameters with more degrees of freedom. In the second demo, we showcase tangible user interface palettes (TUIP) by means of a simple graphics editor application written for a pen-supported tabletop environment. TUIPs provide a novel way of making traditional graphical user interfaces (GUI) tangible and thus more flexible. We demonstrate how users can arrange GUI palettes more easily by physically moving them on or above the table surface; image content is dynamically projected onto the paper-like mobile displays. Users can adjust palette sizes by physically unfolding them. We also show how the height above the tabletop can be used for interaction, e.g., for browsing file content and navigating through image details.
humanaquarium BIBAFull-Text 316
  Robyn Taylor; Guy Schofield; John Shearer; Pierre Boulanger; Patrick Olivier
humanaquarium is a movable performance space designed to explore the dialogical relationship between artist and audience. Two musicians perform inside the cube-shaped box, collaborating with participants to co-create an aesthetic audio-visual experience. The front wall of the humanaquarium is a touch-sensitive FTIR window. Max/MSP is used to translate the locations of touches on the window into control data, manipulating the tracking of software synthesizers and audio effects generated in Ableton Live, and influencing a Jitter visualization projected upon the rear wall of the cube.
The BendDesk demo: multi-touch on a curved display BIBAFull-Text 317
  Malte Weiss; Simon Voelker; Jan Borchers
BendDesk is a curved interactive display that merges a vertical and a horizontal multi-touch surface with a curve. Users sitting at the table can perform multi-touch input on the entire surface. This demo shows the capabilities and potential applications of such a setup. We also present Bend Invaders, one of the first arcade games on a curved interactive surface. Our demo intends to encourage the discussion about the future of multi-touch in desk environments.
GlobalData: multi-user interaction with geographic information systems on interactive surfaces BIBAFull-Text 318
  Ulrich von Zadow; Florian Daiber; Johannes Schöning; Antonio Krüger
The geographical domain has often been used to showcase the possibilities of multi-touch interaction. Nonetheless, researchers have rarely investigated multi-user interaction with GIS -- in fact, most geographical tabletop applications are not suited to multi-user interaction. Our multitouch application, GlobalData, allows multiple people to interact and collaborate in examining global, geolocated data. In idle mode, the device simply shows a stylized map of the earth. Users can open circular GeoLenses; these circles show the same map segment as the underlying base map and superimpose different data layers on it.

Doctoral Symposium

Multi-touch table user interfaces for co-located collaborative software visualization BIBFull-Text 319
  Craig Anslow
Applying proxemics to mediate people's interaction with devices in ubiquitous computing ecologies BIBFull-Text 320
  Nicolai Marquardt
Using a mobile phone to improve the usability of tabletop computers BIBFull-Text 321
  Christopher McAdam
Tailoring tabletop interfaces for musical control BIBFull-Text 322
  Liam O'Sullivan
Analyzing visual attention for designing distributed interaction spaces across mobile phones and large displays BIBFull-Text 323
  Umar Rashid
Interscopic multi-touch environments BIBFull-Text 324
  Dimitar Valkov
Performance animation of 3D content on multi-touch interactive surfaces BIBFull-Text 325
  Benjamin Walther-Franks