
Proceedings of the 2015 ACM International Conference on Interactive Tabletops and Surfaces

Fullname: Proceedings of the Tenth ACM International Conference on Interactive Tabletops and Surfaces
Editors: Nuno Nunes; Enrico Costanza; Patrick Olivier; Johannes Schöning
Location: Madeira, Portugal
Dates: 2015-Nov-15 to 2015-Nov-18
Publisher: ACM
Standard No: ISBN 978-1-4503-3899-8; ACM DL: Table of Contents; hcibib: ITS15
Papers: 83
Pages: 504
Links: Conference Website
  1. Opening Keynote
  2. Closing Keynote
  3. Session 1: Cognition
  4. Session 2: High-Speed and Haptic Feedback
  5. Session 3: Fingers, Handprints and Dynamic Mirrors
  6. Session 4: Let's Get Practical
  7. Session 5: Large Displays
  8. Session 6: Artistic Sand & Biking
  9. Session 7: Development and Toolkits
  10. Session 8: Back to the Future
  11. Session 9: Latency and Shape Change
  12. Posters
  13. Demos
  14. Doctoral Symposium
  15. Workshops

Opening Keynote

In Touch with the Future (p. 1)
  Charles Spence
It was recently announced that there are now more than three billion touch screens in circulation [1]. What such a figure emphasizes, I think, is that touch is becoming an increasingly pervasive feature of our interaction with many technologies, from touch screens to vibrating cars all the way through to vibrating cutlery -- really [2,3,6]! But what, exactly, is the advantage, or advantages, of engaging this, our largest, sense? And what are the trade-offs? After all, a quick look at the history of the tactile stimulation of the skin surface resounds with promises that have gone unfulfilled: Anyone seen the tactile television, or bankers walking around with a vibrating belt communicating the latest stock market figures recently? I thought not! While I believe that effectively stimulating the skin surface holds many promises in the years to come, success in the real world will only come from a proper understanding of the psychological limitations in tactile information processing, involving everything from tactile sensory suppression whenever we move (see [4]) through to limits on multisensory perception and attention [2]. In this talk, I will highlight what I see as the biggest limitations of tactile stimulation/communication [5], before looking at some of the latest innovations that have really gotten me excited over the last couple of years in terms of showcasing what the world of digital tactile stimulation can potentially offer.

Closing Keynote

From Surface to Space (p. 3)
  Jinha Lee
Humans have evolved to develop sophisticated skills to sense and interact with their physical surroundings; however, the majority of our interaction with digital information nowadays is confined to a single, small, flat screen. In this talk, I will discuss how we can leverage our kinesthetic skills and collaborative nature to enhance interaction with digital data, by representing the data in physical space or on multiple distributed devices. I will highlight some of my recent projects that demonstrate this vision, including a 3D spatial desktop, levitating interfaces, and a haptic wrist-watch for the visually impaired.

Session 1: Cognition

The Influence of Multi-Touch Interaction on Procedural Training (pp. 5-14)
  Sarah Buchanan; Jared Bott; Joseph J. LaViola Jr.
This paper explores the use of multi-touch interaction in a 3D training environment as a way to enhance learning of sensorimotor skills as well as procedural knowledge. We present a between subjects experiment with 36 participants distributed into 3 groups that use multi-touch interaction, interaction with the physical apparatus, and a control group using basic mouse-based interaction. A post-training test carried out 3 days later evaluated performance in conducting the real world task from memory. Results show that the multi-touch interaction and the real world groups had significantly better performance scores than the mouse interaction group, with no significant difference between multi-touch and real world groups. Our results demonstrate that multi-touch interaction trained participants on the task as well as training on the actual equipment, indicating multi-touch interaction is a potential replacement for the physical apparatus when doing procedural training.
Training of Cognitive Performance in Complex Tasks with a Tabletop-Based Rehabilitation System (pp. 15-24)
  Mirjam Augstein; Thomas Neumayr; Isabel Karlhuber; Sabine Dielacher; Sylvia Öhlinger; Josef Altmann
As previously shown, tabletops can be used in neuro-rehabilitation to train different abilities that have been (temporarily) lost due to acquired brain injury, such as motor skills, attention, or visuo-constructive skills. However, complex skills like problem-solving abilities have been considered only marginally in computer-based rehabilitation systems until now, partly because "complex skills" consist of a high number of distinct capabilities, which makes it difficult to cover even a large part of them. Nevertheless, the fun.tast.tisch. system has recently been extended by a module that aims at training complex skills. This paper describes therapeutic considerations underlying the design and implementation, discusses the interaction modalities used (e.g., the concept of pen-based interaction on tabletops in addition to touch and tangibles) and provides an overview of the module's interaction and graphic design. The module has already been tested with a small number of selected patients in the therapy setting. The findings of these initial tests are summarized and discussed.
Blind People Interacting with Large Touch Surfaces: Strategies for One-handed and Two-handed Exploration (pp. 25-34)
  Tiago Guerreiro; Kyle Montague; João Guerreiro; Rafael Nunes; Hugo Nicolau; Daniel J. V. Gonçalves
Interaction with large touch surfaces is still a relatively young domain, particularly with regard to the accessibility solutions offered to blind users. Their smaller mobile counterparts are shipped with built-in accessibility features, enabling non-visual exploration of linearized screen content. However, it is unknown how well these solutions perform on large interactive surfaces that use more complex spatial content layouts. We report on a user study with 14 blind participants performing common touchscreen interactions using one-handed and two-handed exploration. We investigate the exploration strategies applied by blind users when interacting with a tabletop. We identified six basic strategies that were commonly adopted and should be considered in future designs. We finish with implications for the design of accessible large touch interfaces.
Knobology Revisited: A Comparison of User Performance between Tangible and Virtual Rotary Knobs (pp. 35-38)
  Simon Voelker; Kjell Ivar Øvergård; Chat Wacharamanotham; Jan Borchers
We present an experimental comparison of tangible rotary knobs and touch-based virtual knobs in three output conditions: eyes-on, eyes-free, and peripheral. Twenty participants completed a simple rotation task on an interactive surface with four different input techniques (two tangibles and two virtual touch widgets) in the three output conditions, representing the distance from the locus of attention. We found that users were on average 20% faster using tangible knobs than using the virtual knobs. We also found that tangible knobs retain their performance even when they are not in the users' locus of attention. We provide four recommendations for choosing suitable knobs based on task and design constraints.

Session 2: High-Speed and Haptic Feedback

3D Tabletop User Interface with High Synchronization Accuracy using a High-speed Stereo Camera (pp. 39-42)
  Takumi Kusano; Takashi Komuro
In this paper, we try to compare standard and low latency mid-air 3D interactions in a tabletop user interface. For this purpose, we developed a 3D tabletop user interface that allows direct manipulation of 3D virtual objects presented on an autostereoscopic display. The system uses a high-speed stereo camera to synchronize real hands and virtual objects, both temporally and spatially, with a high degree of accuracy. In the developed system, we realized a processing speed resulting in a frame rate of 200 fps and a latency of 28.33 ms. We conducted a user study for evaluating the usability and sense of reality and the results showed that there was a significant difference in the time required to complete an object moving operation between the 200 fps and 30 fps cases.
Quantifying the Targeting Performance Benefit of Electrostatic Haptic Feedback on Touchscreens (pp. 43-46)
  Yang Zhang; Chris Harrison
Touchscreens with dynamic electrostatic friction are a compelling, low-latency and solid-state haptic feedback technology. Work to date has focused on minimum perceptual difference, texture rendering, and fingertip-surface models. However, no work to date has quantified how electrostatic feedback can be used to improve user performance, in particular targeting, where virtual objects rendered on touchscreens can offer tactile feedback. Our results show that electrostatic haptic feedback can improve targeting speed by 7.5% compared to conventional flat touchscreens.

Session 3: Fingers, Handprints and Dynamic Mirrors

Estimating 3D Finger Angle on Commodity Touchscreens (pp. 47-50)
  Robert Xiao; Julia Schwarz; Chris Harrison
We describe a novel approach for estimating the pitch and yaw of fingers relative to a touchscreen's surface, offering two additional, analog degrees of freedom for interactive functions. Further, we show that our approach can be achieved on off-the-shelf consumer touchscreen devices: a smartphone and smartwatch. We validate our technique through a user study on both devices and conclude with several demo applications that illustrate the value and immediate feasibility of our approach.
One-Touch Pose Detection on Touchscreen Smartphones (pp. 51-54)
  Karsten Seipp; Kate Devlin
We present a technique that allows distinguishing between index finger and thumb input on touchscreen phones, achieving an average accuracy of 82.6% in a real-life application with only a single touch. We divide the screen into a virtual grid of 9 mm x 9 mm units and use a dedicated set of training data and algorithms for classifying new touches in each screen location. Further, we present correlations between physical and digital touch properties to extend previous work.
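The per-location classification described above lends itself to a simple dispatch structure. The sketch below assumes hypothetical pre-trained, scikit-learn-style classifiers keyed by grid cell; the feature set, function names and cell bookkeeping are illustrative placeholders, not the authors' implementation.

    # Sketch: dispatch a touch to a per-cell classifier on a 9 mm x 9 mm virtual grid.
    # Assumes touch coordinates in millimetres and a dictionary of pre-trained
    # classifiers, one per grid cell, built beforehand from training data.
    CELL_MM = 9.0  # grid cell size

    def cell_index(x_mm, y_mm):
        """Map a touch position (in mm) to its virtual grid cell."""
        return (int(x_mm // CELL_MM), int(y_mm // CELL_MM))

    def classify_touch(x_mm, y_mm, features, classifiers):
        """Return 'index' or 'thumb' using the classifier trained for this cell.
        `features` are per-touch properties (e.g. contact size, orientation);
        `classifiers` maps cell indices to objects with a predict() method."""
        clf = classifiers.get(cell_index(x_mm, y_mm))
        if clf is None:
            return "unknown"  # no training data for this cell
        return clf.predict([features])[0]
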
Dynamir: Optical Manipulations Using Dynamic Mirror Brushes (pp. 55-58)
  Florent Berthaut; Deepak Ranjan Sahoo; Jess McIntosh; Diptesh Das; Sriram Subramanian
Mirror surfaces are part of our everyday life. Among them, curved mirrors are used to enhance our perception of the physical space, e.g., convex mirrors are used to increase our field of view in the street, and concave mirrors are used to zoom in on parts of our face in the bathroom. In this paper, we investigate the opportunities opened when these mirrors are made dynamic, so that their effects can be modulated to adapt to the environment or to a user's actions. We introduce the concept of dynamic mirror brushes that can be moved around a mirror surface. We describe how these brushes can be used for various optical manipulations of the physical space. We also present an implementation using a flexible mirror sheet and three scenarios that demonstrate some of the interaction opportunities.
CapAuth: Identifying and Differentiating User Handprints on Commodity Capacitive Touchscreens (pp. 59-62)
  Anhong Guo; Robert Xiao; Chris Harrison
User identification and differentiation have implications in many application domains, including security, personalization, and co-located multiuser systems. In response, dozens of approaches have been developed, from fingerprint and retinal scans, to hand gestures and RFID tags. In this work, we propose CapAuth, a technique that uses existing, low-level touchscreen data, combined with machine learning classifiers, to provide real-time authentication and even identification of users. As a proof-of-concept, we ran our software on an off-the-shelf Nexus 5 smartphone. Our user study demonstrates twenty-participant authentication accuracies of 99.6%. For twenty-user identification, our software achieved 94.0% accuracy and 98.2% on groups of four, simulating family use.
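The abstract does not disclose which classifier is used; purely as an illustration of the general pipeline, a hand placement could be represented as a flattened low-resolution capacitive frame and fed to an off-the-shelf classifier. The model choice, threshold and function names below are assumptions, not CapAuth's actual implementation.

    # Illustrative sketch: treat each low-level capacitive frame captured during a
    # hand placement as a flat feature vector and train a generic classifier to map
    # it to a user identity.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_user_model(frames, labels):
        """frames: list of 2D capacitance arrays; labels: user ids."""
        X = np.array([f.ravel() for f in frames])
        clf = RandomForestClassifier(n_estimators=100)
        clf.fit(X, labels)
        return clf

    def identify(clf, frame, threshold=0.6):
        """Return the predicted user id, or None if confidence is too low
        (the threshold is an arbitrary placeholder for an authentication policy)."""
        probs = clf.predict_proba(frame.ravel()[None, :])[0]
        best = int(np.argmax(probs))
        return clf.classes_[best] if probs[best] >= threshold else None
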

Session 4: Let's Get Practical

TalkingTiles: Supporting Personalization and Customization in an AAC App for Individuals with Aphasia (pp. 63-72)
  Thomas Huijbregts; James R. Wallace
The development of 'Post-PC' interactive surfaces, such as smartphones and tablets, and specialized support software informed by HCI research has created new opportunities for Augmentative and Alternative Communication (AAC) technologies. However, it is unclear to what degree these opportunities have been realized in practice. We conducted a field study to explore the use of one such application, TalkingTiles, by individuals with aphasia. Following a training session and one week of use, we conducted interviews with participants, their partners, and their caregivers at a local support facility. We found that TalkingTiles can be effective in supporting communication when used in concert with other communication methods, and when time can be invested in customizing the app. We discuss our findings, and implications for design with respect to customizability, simplicity, and the limitations of interactive surfaces in supporting communication.
"Callout Bubble Saved My Life": Workspace Awareness Support in BYOD Classrooms BIBAFull-Text 73-82
  Y.-L. Betty Chang; Cresencia Fong; Edward Tse; Mark Hancock; Stacey D. Scott
Co-located students working in a bring-your-own-device (BYOD) classroom have limited awareness of their peers' work. We investigated the design of an awareness cue for students aged 6 to 17, in a large web-based canvas shared among tablets and laptops. By incorporating teacher and student feedback in an iterative design process, the project's goal was to support workspace awareness needs on touch devices, as well as to ensure age-appropriateness and technical feasibility. Specifically, we aimed to balance awareness, distraction, and clutter. We designed an awareness cue for students, a Callout Bubble, which is displayed near the object being manipulated by a peer, and fades away over time. A study of 71 students and 4 teachers revealed that, with our awareness cue design, students' awareness of their peers' actions in the shared canvas was significantly correlated with increased task focus and decreased frustration levels when peer conflicts arose. We also found that students understood the awareness information conveyed and were able to self-monitor and coordinate within the group.
Smart Makerspace: An Immersive Instructional Space for Physical Tasks (pp. 83-92)
  Jarrod Knibbe; Tovi Grossman; George Fitzmaurice
We present the Smart Makerspace: a context-rich, immersive instructional workspace for novice and intermediate makers. The Smart Makerspace guides makers through the completion of a DIY task, while providing detailed contextually-relevant assistance, domain knowledge, tool location, usage cues, and safety advice. Through an initial exploratory study, we investigate the challenges faced in completing maker tasks. Our observations allow us to define design goals and a design space for a connected workshop. We describe our implementation, including a digital workbench, augmented toolbox, instrumented power-tools and environmentally aware audio. We present a qualitative user study that produced encouraging results, with features that users unanimously found useful.

Session 5: Large Displays

Studying Attraction Power in Proxemics-Based Visual Concepts for Large Public Interactive Displays (pp. 93-102)
  Victor Cheung; Stacey D. Scott
A key challenge in designing interfaces for large interactive displays deployed in public settings is to draw (and keep) a passerby's attention. Proxemic interactions -- a design approach that applies human spatial behavior to guide system behavior in response to a user's proximity to a display -- have been proposed for attracting and engaging potential users. Yet, the effectiveness of this approach has not been evaluated. Moreover, little research exists in the broader literature on the relative efficacy of possible visual design strategies to attract and engage large display users. We conducted a study to evaluate the effectiveness of two promising visual concepts applied in a proxemic interactions framework: content motion and user shadows. While both visual concepts were more effective than a control condition at capturing attention, the inclusion of the user's shadow was found to have stronger attraction power than content motion alone. In contrast, both were found to be ineffective for communicating possible user interactions with the display, limiting their potential to facilitate further system use.
What People Really Remember: Understanding Cognitive Effects When Interacting with Large Displays (pp. 103-106)
  Philipp Panhey; Tanja Döring; Stefan Schneegass; Dirk Wenig; Florian Alt
This paper investigates how common interaction techniques for large displays affect recall in learning tasks. Our work is motivated by results of prior research in different areas that attribute a positive effect on cognition to interactivity. We present findings from a controlled lab experiment with 32 participants comparing mobile phone-based interaction, touch interaction and full-body interaction to a non-interactive baseline. In contrast to prior findings, our results reveal that more movement can negatively influence recall. In particular, we show that designers face an immanent trade-off between designing engaging interaction through extensive movement and creating memorable content.
Understanding Researchers' Use of a Large, High-Resolution Display Across Disciplines (pp. 107-116)
  Fateme Rajabiyazdi; Jagoda Walny; Carrie Mah; John Brosz; Sheelagh Carpendale
A driving force behind the design of increasingly large and high resolution displays (LHRDs) has been the need to support the explosion of data in the natural sciences such as physics, chemistry, and biology. However, our experience with an LHRD accessible to researchers across multiple disciplines has shown that such displays are useful for a wide range of research activities involving large images and data. We conducted in-context, semi-structured interviews with researchers from a variety of disciplines about their experiences using the LHRD with their own data. Notably, it became apparent that the size and resolution of the LHRD supported a multitude of activities related to observation, for which zooming or other enlargement methods on standard resolution screens were not sufficient. The interview findings lead to implications for further research into supporting a broader range of disciplines in using large, high-resolution displays.
BodyLenses: Embodied Magic Lenses and Personal Territories for Wall Displays (pp. 117-126)
  Ulrike Kister; Patrick Reipschläger; Fabrice Matulic; Raimund Dachselt
Magic lenses are popular tools to provide locally altered views of visual data. In this paper, we introduce the concept of BodyLenses, special kinds of magic lenses for wall displays that are mainly controlled by body interactions. After motivating the rationale for body-centric lenses, we present a comprehensive design space of BodyLenses, where we analyse fundamental aspects such as appearance, function, interaction and use in multi-user contexts. Within that space, we investigated and implemented a number of design alternatives and propose solutions for lens positioning, dynamic shape modification, distance-based parameter mappings and the use of BodyLenses as portable tool belts. We demonstrate the practicality of our novel concepts with four realised application scenarios. With this work, we hope to lay the foundation for future research and systems based on body-driven lenses.

Session 6: Artistic Sand & Biking

BricoSketch: Mixing Paper and Computer Drawing Tools in Professional Illustration (pp. 127-136)
  Theophanis Tsandilas; Magdalini Grammatikou; Stéphane Huot
Illustrators are advanced users of both traditional and computer-assisted drawing tools, and therefore, observing their strategies is very valuable for research on drawing interfaces. We interviewed four professional illustrators in their work environment. We also followed the work of an artist for a two-year period. We observed that artists mix a variety of techniques that involve specialized computer software and hardware such as Adobe Photoshop, a graphics tablet and a scanner, and traditional physical tools such as pencils, paper, and customized light tables. Our findings inspired BricoSketch, an augmented paper interface that enables illustrators to zoom into parts of their drawings and work at different levels of detail on paper. Our early results demonstrate that BricoSketch supports real tasks, improving productivity on paper while enhancing illustrators' creative ways of working.
Ghost Touch: Turning Surfaces into Interactive Tangible Canvases with Focused Ultrasound (pp. 137-140)
  Asier Marzo; Richard McGeehan; Jess McIntosh; Sue Ann Seah; Sriram Subramanian
Digital art technologies take advantage of the input, output and processing capabilities of modern computers. However, fully digital systems lack the tangibility and expressiveness of their traditional counterparts. We present Ghost Touch, a system that remotely actuates the artistic medium with an ultrasound phased array. Ghost Touch transforms a normal surface into an interactive tangible canvas in which the users and the system collaborate in real-time to produce an artistic piece. Ghost Touch is able to detect traces and reproduce them, thereby enabling common digital operations such as copy, paste, save or load whilst maintaining the tangibility of the traditional medium. Ghost Touch has enhanced expressivity since it uses a novel algorithm to generate multiple ultrasound focal points with specific intensity levels. Different artistic effects can be performed on sand, milk & ink, or liquid soap.
Eyes-Free Touch Command Support for Pen-Based Digital Whiteboards via Handheld Devices (pp. 141-150)
  Fabrice Matulic; Maria Husmann; Seraiah Walter; Moira C. Norrie
Digital whiteboards that only sense pen input are limited in their interactive capabilities. One way to artificially add touch support is through personal mobile devices, which people carry with them. This work investigates how smartphones can be used as portable quick-access toolboxes held by the non-dominant hand to provide assistive touch commands for pen-driven whiteboard tasks. We developed two interface designs, one based on a classic remote with standard GUI controls and another optimised for eyes-free operation to eliminate gaze shifts between the two devices. In a controlled evaluation based on an established mode-switching study protocol, we compare the two phone interfaces and a baseline technique consisting of a pen-triggered popup menu on the whiteboard. Our results show a superior efficiency of the phone UIs over the popup. The eyes-free UI only partially performed better than the classic interface at the subtask level after subtracting the costs of errors.
Gesture Bike: Examining Projection Surfaces and Turn Signal Systems for Urban Cycling (pp. 151-159)
  Alexandru Dancu; Velko Vechev; Adviye Ayça Ünlüer; Simon Nilson; Oscar Nygren; Simon Eliasson; Jean-Elie Barjonet; Joe Marshall; Morten Fjeld
Interactive surfaces could be employed in urban environments to make people more aware of moving vehicles, showing drivers' intentions and the subsequent position of vehicles. To explore the usage of projections while cycling, we created a system that displays a map for navigation and signals cyclist intention. The first experiment compared the task of map navigation on a display projected on a road surface in front of the bicycle with a head-up display (HUD) consisting of a projection on a windshield. The HUD system was considered safer and easier to use. In our second experiment, we used projected surfaces to implement concepts inspired by Gibson's perception theory of driving that were combined with detection of conventional cycling gestures to signal and visualize turning intention. The comparison of our system with an off-the-shelf turn signal system showed that gesture input was easier to use. A web-based follow-up study based on the recording of the two signalling systems from the perspective of participants in traffic showed that with the gesture-projector system it was easier to understand and predict the cyclist intention.

Session 7: Development and Toolkits

The dBoard: A Digital Scrum Board for Distributed Software Development (pp. 161-170)
  Morten Esbensen; Paolo Tell; Jacob B. Cholewa; Mathias K. Pedersen; Jakob Bardram
In this paper we present the dBoard -- a digital Scrum Board for distributed Agile software development teams. The dBoard is designed as a 'virtual window' between two Scrum team spaces. It connects two locations with live video and audio, which is overlaid with a synchronized and interactive digital Scrum board, and it adapts the fidelity of the video/audio to the presence of people in front of it. The dBoard is designed to work (i) as a passive information radiator from which it is easy to get an overview of the status of work, (ii) as a media space providing awareness about the presence of remote co-workers, and (iii) as an active meeting support tool. The paper presents a case study of distributed Scrum in a large software company that motivates the design of the dBoard, and details the design and technical implementation of the dBoard. The paper also reports on an initial user study, which shows that users found the dBoard both useful and easy to use. Based on this work, we suggest that superimposing collaborative applications onto live video is a useful way of designing collaborative meeting and awareness systems.
SoD-Toolkit: A Toolkit for Interactively Prototyping and Developing Multi-Sensor, Multi-Device Environments (pp. 171-180)
  Teddy Seyed; Alaa Azazi; Edwin Chan; Yuxi Wang; Frank Maurer
As ubiquitous environments become increasingly commonplace with newer sensors and forms of computing devices (e.g. wearables, digital tabletops), researchers have continued to design and implement novel interaction possibilities. However, as the number of sensors and devices continues to rise, researchers still face numerous instrumentation, implementation and cost barriers before being able to take advantage of the additional capabilities. In this paper, we present the SoD-Toolkit -- a toolkit that facilitates the exploration and development of multi-device interactions, applications and ubiquitous environments by using combinations of low-cost sensors to provide spatial-awareness. The toolkit offers three main features. (1) A "plug and play" architecture for seamless multi-sensor integration, allowing for novel explorations and ad-hoc setups of ubiquitous environments. (2) Client libraries that integrate natively with several major device and UI platforms. (3) Unique tools that allow designers to prototype interactions and ubiquitous environments without a need for people, sensors, rooms or devices. We demonstrate and reflect on real-world case-studies from industry-based collaborations that influenced the design of our toolkit, as well as discuss advantages and limitations of our toolkit.
Coordinating Collaborative Interactions in Web-based Mobile Applications (pp. 181-190)
  Kennedy Kambona; Lode Hoste; Elisa Gonzalez Boix; Wolfgang De Meuter
Mobile applications for interactive surfaces that utilize the web as a platform now have the ability to provide richer interactions than were hitherto realizable on isolated devices. These modern applications can now support proximal and remote collaborative interactions for multiple clients simultaneously connected to each other. Most technologies, however, currently lack programming-language abstractions for coordinating complex interactions, such as defining, detecting and combining complex events coming from multiple clients or other software entities. Furthermore, they lack the expressiveness required to support non-trivial levels of collaborative interactions for connected clients.
   In this paper we identify two software mechanisms that web-based mobile applications should provide to support the development of collaborative interactions: distributed event composition and group coordination. We present the Mingo framework, which provides dedicated programmer constructs for coordinating these two mechanisms by blending techniques common in complex event processing and group communication. We validate our framework by implementing a mobile drawing application with support for collaborative interactions and evaluate it by comparing it with a related implementation.

Session 8: Back to the Future

Pins 'n' Touches: An Interface for Tagging and Editing Complex Groups (pp. 191-200)
  Sven Strothoff; Wolfgang Stuerzlinger; Klaus Hinrichs
Influenced by mouse/pen-based user interfaces, most touch-based object tagging techniques rely primarily on a single interaction point. Once objects are tagged, typically only individual object inclusions/exclusions are possible. Yet, for tagging larger groups with complex spatial layouts, more refinement may be necessary to achieve the desired result. We apply visual tag markers to objects to visualize their group association. Through a new multi-touch pin gesture that "pins" one or more objects "down" while tagging, our new interface is capable of efficiently grouping objects, as identified in our user studies.
Personal Device as a Controller for Interactive Surfaces: Usability and Utility of Different Connection Methods (pp. 201-204)
  Jouni Vepsäläinen; Antonella Di Rienzo; Matti Nelimarkka; Jouni A. Ojala; Petri Savolainen; Kai Kuikkaniemi; Sasu Tarkoma; Giulio Jacucci
The popularity of touch-screen-equipped smart phones has made them an attractive choice for interacting with large display surfaces, especially in public spaces. The challenge in using a personal mobile device for interaction in such a setting lies in the usability of methods to initiate the interaction, as the users may give up if the interaction is not immediately successful. For this reason, a few commercial systems have already opted to use web-based interaction instead of dedicated mobile applications. However, the usability of different methods of initiating the web-based interaction has not been extensively studied. In this paper we present the results of a laboratory usability study with 20 participants, in which we studied how the users experienced four different methods of initiating web-based interaction between a smart phone and a large display surface. The compared initiation methods were NFC, QR code, typing a URL, and connecting to a WiFi access point. Additionally, in order to study how the users experienced the quality of the connection, the first three methods were used over 3G. Our results indicate typing a URL to be the most usable method for initiating the connection between the smart phone and the large display surface. The difference in quality between the 3G and WiFi connections was deemed hardly noticeable by the subjects. We acknowledge that our results are only preliminary, and the subject needs to be studied in a more realistic setting to get a more comprehensive picture.

Session 9: Latency and Shape Change

Reducing Latency with a Continuous Prediction: Effects on Users' Performance in Direct-Touch Target Acquisitions (pp. 205-214)
  Elie Cattan; Amélie Rochet-Capellan; Pascal Perrier; François Bérard
Latency in direct-touch systems creates a spatial gap between the finger and the digital object when dragging. This breaks the illusion of presence, and has a negative effect on users' performances in common tasks such as target acquisitions. Latency can be reduced with faster hardware, but reaching imperceptible levels of latency with a hardware-only approach is a difficult challenge and an energy inefficient solution. We studied the use of a continuous prediction of the touch location as an alternative to the hardware only approach to reduce the latency gap. We implemented a low latency touch surface and experimented with a constant speed linear prediction with various system latencies in the range [25ms-75ms]. We ran a user experiment to objectively assess the benefits of the prediction on users' performances in target acquisition tasks. Our study reveals that the prediction length is strongly constrained by the nature of target acquisition tasks, but that the approach can be successfully applied to counteract a large part of the negative effect of latency on users' performances.
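A minimal sketch of constant-speed linear prediction as described above; the filtering, units and parameter values used by the authors are not reproduced here, only the assumed basic form of the extrapolation.

    # Minimal sketch of constant-velocity extrapolation of the touch position.
    # prev and curr are (x, y, t) samples in pixels/seconds; horizon_s is the
    # amount of latency to compensate (e.g. 0.05 for 50 ms).
    def predict_touch(prev, curr, horizon_s):
        px, py, pt = prev
        cx, cy, ct = curr
        dt = ct - pt
        if dt <= 0:
            return cx, cy                      # cannot estimate velocity
        vx, vy = (cx - px) / dt, (cy - py) / dt
        return cx + vx * horizon_s, cy + vy * horizon_s

    # Example: a finger moving right at 500 px/s with a 50 ms horizon is drawn
    # 25 px ahead of the last reported touch position.
    print(predict_touch((100.0, 200.0, 0.00), (105.0, 200.0, 0.01), 0.05))

Longer horizons compensate more latency but amplify prediction error when the finger changes direction, which is consistent with the constraint on prediction length reported in the abstract.
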
A Predictive Approach for an End-to-End Touch-Latency Measurement (pp. 215-218)
  Elie Cattan; Amélie Rochet-Capellan; François Bérard
With direct-touch interaction, users are sensitive to very low levels of latency, on the order of a few milliseconds. Assessing the end-to-end latency of a system is thus becoming an important part of touch-device evaluation, and it must be precise and accurate. However, current latency estimation techniques are either imprecise, or they require complex setups involving external devices such as high-speed cameras. In this paper, we introduce and evaluate a novel method that does not require any external equipment and can be implemented with minimal effort. The method is based on short-term prediction of the finger movement. The latency estimation is obtained by having the user calibrate the prediction until it fully compensates for the lag. In a user study, we show that the technique is more precise than a similar "low overhead" approach that was recently presented.
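A sketch of the calibration idea under stated assumptions: if the user adjusts the prediction horizon while dragging until the predicted cursor visually coincides with the finger, the calibrated horizon itself is the end-to-end latency estimate. The stepping procedure and callback below are illustrative, not the paper's protocol.

    # Sketch of the calibration loop: step the prediction horizon up until the
    # user reports that the predicted cursor appears glued to the finger; the
    # final horizon is reported as the end-to-end latency estimate.
    def calibrate_latency(user_says_aligned, step_ms=5, max_ms=200):
        """user_says_aligned(h): render the prediction with horizon h (ms) while
        the user drags, and return True once finger and cursor coincide."""
        horizon = 0
        while horizon <= max_ms:
            if user_says_aligned(horizon):
                return horizon  # the calibrated horizon is the latency estimate
            horizon += step_ms
        return None             # no alignment found within the tested range
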
A Public Ideation of Shape-Changing Applications (pp. 219-228)
  Miriam Sturdee; John Hardy; Nick Dunn; Jason Alexander
The shape-changing concept where objects reconfigure their physical geometry has the potential to transform our interactions with computing devices, displays and everyday artifacts. Their dynamic physicality capitalizes on our inherent tactile sense and facilitates object re-appropriation. Research both within and outside HCI continues to develop a diverse range of technological solutions and materials to enable shape-change. However, as an early-stage enabling technology, the community has yet to identify important applications and use-cases to fully exploit its value. To expose and document a range of applications for shape-change, we employed unstructured brainstorming within a public engagement study. A 74-participant brainstorming exercise with members of the public produced 336 individual ideas that were coded into 11 major themes: entertainment, augmented living, medical, tools & utensils, research, architecture, infrastructure, industry, wearables, and education & training. This work documents the methodology and resultant application ideas along with reflections on the approach for gathering application ideas to enable shape-changing interactive surfaces and objects.

Posters

ART-Chess: A Tangible Augmented Reality Chess on Tabletop (pp. 229-233)
  Frédéric Rayar; David Boas; Rémi Patrizio
Chess is a traditional board game that is still popular. Several works have sought to enrich its gaming experience using augmented reality technology, but the proposed user interactions for moving the chess pieces remain unfamiliar and unappealing to chess players. To address this issue, we propose ART-Chess, a tangible augmented reality chess game on a tabletop. A camera-aware tabletop makes marker recognition easier, which allows a human player to use real chess pieces. The game is augmented, thanks to an AR headset, with 3D virtual chess pieces.
Augmenting Remote Presence For Interactive Dashboard Collaborations (pp. 235-240)
  Rodrigo Pizarro; Mark Hall; Pablo Bermell-Garcia; Mar Gonzalez-Franco
We implement a silhouette representation for collaboration on interactive dashboards. In order to test its effectiveness against other modalities of interaction, we ran a guided data exploration task in a Visual Analytics tool using a tactile dashboard in three modes: face-to-face, teleconference, and enhanced teleconference with silhouette representation. Even though no performance differences were found across the conditions, results show increased coordination abilities of the participants when the remote person is represented by a silhouette; furthermore, important behavioral changes related to the presence illusion are found only in the silhouette condition.
Walls Have Ears: Using Conductive Surfaces of Furniture and Everyday Objects for Room-Wide Power Usage and Crowd Activity Sensing (pp. 241-246)
  Adiyan Mujibiya; Junji Torii
We propose a design for self-powered, room-wide power usage and crowd activity sensing that uses a single point of contact on a conductive surface of furniture or everyday objects, such as the metal frame of a table or window, a desktop PC case, a rack, or a cabinet. We utilize these surfaces for energy scavenging from the capacitive reactance and electromagnetic inductance that typically occur in houses and offices with many electrical appliances. We incorporate an energy store-and-release strategy, and treat the charge accumulation time as a function of electrical appliance usage and human activity within a certain location. In this paper, we highlight the principle of operation and present prototype implementation results showing the sensing log of an office room over a seven-day timespan.
MoBat: Sound-Based Localization of Multiple Mobile Devices on Everyday Surfaces (pp. 247-252)
  Adrian Kreskowski; Jakob Wagner; Jannis Bossert; Florian Echtler
We present MoBat, a combined hard- and software system designed to locate and track multiple unmodified mobile devices on any regular table using passive acoustic sensing. Barely audible sound pulses are emitted from mobile devices, picked up by four microphones located in the corners of the surface and processed in a low-latency pipeline to extract position data. We demonstrate an average positional accuracy and precision of about 3 cm on a table of 1 m x 2 m size, and discuss possible usage scenarios regarding proxemics and tangible interaction.
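A simplified sketch of the time-difference-of-arrival idea behind such corner-microphone localization, assuming the pulse arrival times have already been extracted from the four microphone signals; the actual signal-processing pipeline, sampling and calibration of the system are not reproduced here.

    # Simplified TDOA sketch: four microphones at the corners of a 2 m x 1 m table,
    # pulse arrival times t_i in seconds; find the position whose predicted time
    # differences (relative to mic 0) best match the measured ones.
    import math

    MICS = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]  # corner positions (m)
    SPEED_OF_SOUND = 343.0                                   # m/s at room temperature

    def locate(arrival_times, step=0.01):
        measured = [t - arrival_times[0] for t in arrival_times]
        best, best_err = None, float("inf")
        y = 0.0
        while y <= 1.0:
            x = 0.0
            while x <= 2.0:
                dists = [math.hypot(x - mx, y - my) for mx, my in MICS]
                predicted = [(d - dists[0]) / SPEED_OF_SOUND for d in dists]
                err = sum((m - p) ** 2 for m, p in zip(measured, predicted))
                if err < best_err:
                    best, best_err = (x, y), err
                x += step
            y += step
        return best  # estimated (x, y) on the table, in metres
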
Connected Paper, EKKO and Analytic Futures: News and Paper Data (pp. 253-258)
  John Mills; Mark Lochrie; Andy Dickinson; Tom Metcalfe; Paul Egglestone
Advances in conductive inks and increasingly accessible and flexible platforms, such as Arduino and Raspberry Pi, are allowing researchers to transform a range of surfaces, including paper and additive layer objects, into capacitive surfaces. When imbued with Internet connectivity, and placed within the "Internet of things", opportunities to create interactive surfaces that respond to touch and offer audio playback or other data transfer via additional connected peripherals emerge. This poster explores the potential for web-connected paper interfaces with the media and publishing sector and an accompanying content management and system-analytics package to present a range of content, design, interaction and revenue-based opportunities for related industries. It also hints at how paper could be a viable interactive surface and posits potential related work on a wider and cross-industry spectrum.
Collaborative Position Patterns for Pairs Working with Shared Tiled-Wall Display using Mobile Devices (pp. 259-264)
  Ragaad AlTarawneh; Razan N. Jaber; Shah Rukh Humayoun; Achim Ebert
Recent advances in smart mobile devices (e.g., smartphones and tablets) encourage researchers to utilize them in collaborative environments as a medium of interaction with large shared interactive wall displays. Such collaborative setups have additional advantages over the previous ones, e.g., users' freedom to move around the environment, full view of the whole screen, etc. In this work, we focus on finding different possible collaborative position patterns for pairs of users working in a collaborative setup, equipped with a shared interactive tiled-wall display and multiple mobile devices. For this, we observed users in a controlled study with 36 participants, who performed the collaborative test in 18 pairs.
pART bench: A Hybrid Search Tool for Floor Plans in Architecture (pp. 265-270)
  Veronika Krauß; Ekaterina Fuchkina; Gabriela Molina León; Oana-Iuliana Popescu; Florian Echtler; Sven Bertel
Architectural databases often contain thousands of different floor plans which have either been collected from historical designs or, more recently, auto-generated by suitable algorithms. Searching for a floor plan that fits specific requirements in such a database involves setting a large number of parameters, such as lines of sight, lighting levels, room types and many more. We present pART bench, a hybrid tabletop/tablet tool which allows the use of intuitive touch commands and tangible objects to quickly adjust search parameters, view resulting floor plans and iteratively refine the search. We report on a comprehensive requirements analysis with practising architects, on the design process, and describe our prototypical implementation of the system, both on a tablet and on a PixelSense tabletop device.
Hi-AppV: Viewing Tabletop Application Windows on Mobile Devices (pp. 271-276)
  Shah Rukh Humayoun; Jahanzeb Khan; Syed Moiz Hasan; Ragaad AlTarawneh; Achim Ebert
In collaborative environments supported by interactive surfaces, team members may need to look at important information in a hidden or minimized application in order to perform the collaborative task in the currently active application, which may occupy the whole screen to give all team members a better view. Targeting this issue, we provide the Hi-AppV (Hidden Applications Viewer) framework, which enables team members to see a view-image of any application currently open on the tabletop (whether hidden behind the active application or minimized) on their mobile devices. Further, team members can also make any open application the active one through their mobile devices. Overall, the framework provides intuitive interaction in the collaborative environment for performing collaborative tasks.
Capture The Flag: Engaging In A Multi-Device Augmented Reality Game (pp. 277-282)
  Suzanne Mueller; Andreas Dippon; Gudrun Klinker
We present a Capture the Flag based game that investigates the possible engagements in a multi-device game. The distinction between a publicly used space and a player's private space is made and utilized to display different information to players. The tablet and the Augmented Reality component are used to see how players can be drawn to a certain physical space, to create a social and engaging game.
Tabletop-Supported Rehabilitation with fun.tast.tisch.: Lessons Learned (pp. 283-288)
  Mirjam Augstein; Thomas Neumayr; Isabel Karlhuber; Sabine Dielacher; Josef Altmann; Sylvia Öhlinger
Within the fun.tast.tisch. project, new therapeutic approaches involving an interactive tabletop have been explored. The project did not aim at replacing conventional therapy but at providing optimal support for both therapists and patients. This paper summarizes findings of two years of experience with the system with a focus on patients' and therapists' attitude and system perception. It further outlines implementation challenges and findings on therapeutic efficacy.
PhySig: Incorporating Physical Products with Digital Signage in Shopping Environments (pp. 289-294)
  Masafumi Muta; Soh Masuko; Adiyan Mujibiya
In this paper, we propose the concepts, design guidelines, and implementation details of PhySig, a novel interactive in-store shopping experience that annotates information about products, which are physically located in the store, by a projector and enables shared content between the products and mobile devices. Multiple users access the system from their own mobile devices to move a cursor projected on the wall or floor of the store to select an item and obtain additional information associated with the item in the projected space. They can also download the information to their mobile devices or upload their related content, including comments, reviews, or photos, of the item. In short, PhySig is an example of how to extend and combine digital signage with the surrounding physical environment.
Studying User-Defined Gestures Toward Off the Screen Interactions (pp. 295-300)
  Kohei Matsumura
This study describes the concept of off-the-screen interactions. People often point or gesture not only on the touch-sensitive screen of a device but also around the screen. In this study on off-the-screen interactions, we investigated user-defined gestures on the surrounding area of the screen through a user study using paper prototypes. Through our results, we found that there are different styles of off-the-screen interactions and requirements that are to be considered for realizing our concept.
A See-Through Display for Interactive Museum Showcases (pp. 301-306)
  Andrea Bellucci; Paloma Diaz; Ignacio Aedo
We present a prototype of an interactive showcase for museum exhibits that exploits a multi-touch see-through display screen. The transparent display makes it possible to superimpose an overlay of digital information while observing the physical object behind the screen. We developed an application that allows visitors to share their experience with the object as well as to comment on other visitors' contributions. Our goal is to engage users in the creation of digital narratives with cultural heritage artifacts, thus constructing new layers of meaning that are parallel to the official ones provided by curators. Results from a formative study give promising feedback on the usefulness, novelty and attractiveness of the prototype.
Police Analyst Workstation: Towards a Multi-Surface User Interface (pp. 307-311)
  Craig Anslow; Chris Rooney; Neesha Kodagoda; William Wong
Developing applications for multi-surface user interfaces is challenging. Sharing and transferring information between these surfaces requires multi-modal interaction methods. In this paper we describe the Police Analyst workstation, which supports multi-surface interaction for criminal intelligence analysis and sense-making, using multi-touch and mid-air hand gestures for input. We outline our requirements, design, and an initial implementation.
STRIPE: Fluid and Zoomable Lean-back Interface for Navigating Content Landscape on a Large Screen (pp. 313-318)
  Jinha Lee; Ben Cerveny; Josh Nimoy; Gabriel Dunne; Seran Jeon; Varun Nigam
We present Stripe, a fluid and zoomable interface for navigating content datasets with a remote control on large screens. By bringing multi-level semantic zooming, fixed focus, and fluid user interaction to linear layouts of content, Stripe provides easy, learnable "television-like" interactions to complex online content datasets.
Tangible Holographic 3D Objects with Virtual Touch (pp. 319-324)
  Péter Tamás Kovács; Tibor Balogh; Zsolt Nagy; Attila Barsi; László Bordács; Gáspár Balogh
This poster describes the design of a display and interaction device that allows natural mid-air manipulation and feedback of floating, true 3D objects. This so-called virtual touch creates a natural feeling of manipulating and touching the virtual 3D object for the user. The 3D display system allows users to reach into the viewing volume to touch virtual objects, and even feel gradients of haptic feedback as the hands penetrate into virtual objects. True 3D objects are dynamically relit according to the real-world environment captured by cameras, which is reflected on shiny virtual surfaces to achieve a more realistic user experience. In turn, 3D virtual objects can cast real shadows on the real workspace. If the virtual objects are defined as light-emitting, glows can also be projected onto the desk and the real objects. These techniques together form the basis of an ultra-realistic visualization and interaction system that will find its way into many technical and educational uses.
A Tabletop System to Promote Argumentation in Computer Science Students (pp. 325-330)
  Marisol Wong-Villacres; Margarita Ortiz; Vanessa Echeverría; Katherine Chiluiza
This study explores the design of a tabletop system that seeks to bolster the argumentative skills of Computer Science students. A set of four design guidelines -- positive interdependence, stages, interference, and awareness -- were derived from user research and used for designing and prototyping a multi-display tabletop application. Four students evaluated a video prototype; the overall results showed that the application's features have great potential to support the design guidelines. Moreover, students' impressions about the prototype's enforcement of positive interdependence indicate possibilities for augmenting argumentation opportunities. Steps for future work are presented.
ViZCom: Viewing, Zooming and Commenting through Mobile Devices (pp. 331-336)
  Shah Rukh Humayoun; Mahmoud Sharf; Ragaad AlTarawneh; Achim Ebert; Tiziana Catarci
In tabletop-supported collaborative environments, direct viewing of, or access to, all parts of the screen is often not possible. This can affect the performance of team members in executing common collaborative tasks. Targeting this concern, we developed the ViZCom (Viewing, Zooming and Commenting) framework, which enables team members to view the tabletop screen on their mobile devices. It also allows them to zoom the screen view on their mobile devices through different zooming options. Further, it lets them write comments or highlight screen parts on their mobile devices, which are then scaled proportionally and shown on the actual tabletop. Overall, the framework extends the collaborative environment with these mobile devices in order to provide more spatial flexibility.
Investigating the Potential of a Two-finger Chord Button in Multi-touch Applications (pp. 337-342)
  Ioannis Leftheriotis; Michail N. Giannakos; Konstantinos Chorianopoulos; Letizia Jaccheri
With the increasing use of multi-touch (MT) capable devices, MT interaction has become a commodity in recent years. From personal devices to larger multi-user screens, MT functionality is nowadays considered a standard way of performing rich interactions. However, moving from single-touch interaction to dual-touch and consequently to MT is not always without challenges for the average user. Although the use of single touch is very common, interaction design has yet to be examined thoroughly by taking into account potential differences between single- and multi-touch functionality. In this work, we investigate the potential of a two-finger chord button in comparison to the traditional single-touch buttons that we find on touchscreens. Based on the fact that users are familiar with single-touch buttons (even before MT screens), our hypotheses are that the use of a two-finger chord button (a) decreases users' efficiency and (b) delays users' responses. In order to investigate our hypotheses, we conducted a controlled experiment with 12 users working on an appropriately designed MT application. The empirical results indicate that the use of the two-finger button significantly delays users' response time while it does not affect users' efficiency on the performed task.
Multi-user Multi-device Interaction with Large Displays at the Point of Sale: An Application Case (pp. 343-348)
  Ulrich von Zadow; Andreas Siegel; Raimund Dachselt
The internet raises challenges for retailers, since it provides a competing sales channel that allows effectively unlimited access to arbitrary products. In particular, this enables a phenomenon known as showrooming, where customers inspect products in local stores and subsequently order from arbitrary online shops. By combining large public displays in stores with personal mobile devices, our prototype multi-user shopping system has the potential to alleviate this: Store-specific shopping carts on the mobile bind customers even after they have exited the store. Significantly, the system is optimized to minimize attention switches between the devices by treating the mobile device as eyes-free remote control wherever possible, using the mobile display only for personal data. We report on interaction concepts, our prototypical implementation, and user feedback, contributing an initial iteration for best practices in this domain.
Towards a Fluid Interaction Concept Bringing Star Plots to Interactive Displays (pp. 349-354)
  Ricardo Langner; Ulrike Kister; Christin Engel; Tom Horak; Raimund Dachselt
In this paper, we present several concepts towards fluid multi-touch interactions for an interactive star plot visualization. The goal of our research is to improve the usability of visualizations by developing new and natural interaction techniques as well as designing and applying new visualization approaches. To achieve this, we systematically create consistent touch interactions for various tasks typical to information visualizations. Furthermore, we propose a new approach to integrate additional levels of information into a star plot by splitting up axes. Finally, we have successfully implemented many of our concepts in a first prototype, allowing the validation of the usefulness and usability.
Interaction Space of Chords on a Vertical Multi-touch Screen (pp. 355-360)
  Ioannis Leftheriotis; Michail N. Giannakos; Konstantinos Chorianopoulos; Letizia Jaccheri
Despite the increasing use of Multi-Touch (MT) capable devices, novel interaction techniques need to be examined in order to shift from a single-touch WIMP interaction paradigm to an MT one. In this work, we focus on chord interaction on vertical MT screens. A chord is the simultaneous touch of more than one finger on the MT screen. Based on a user experiment with 12 users, we explore the positioning/interaction space of the chord technique by investigating the relation between the type of chord (number of fingers) and the position on the screen where the chord was applied. The empirical results indicate an interaction pattern that demonstrates a significant relation between the type of chord applied (number of fingers) and its position on the screen. Our results show that as the number of fingers needed for a chord increases, the chord tends to be applied nearer to the bottom left of the screen. Notably, our results give evidence that there is a threshold (the five-finger chord) beyond which the above relation is no longer strong.
aWall: A Socio-Cognitive Tool for Agile Team Collaboration using Large Multi-Touch Wall Systems (pp. 361-366)
  Magdalena Mateescu; Martin Kropp; Roger Burkhard; Carmen Zahn; Dario Vischi
Agile methods emphasize highly interactive and close collaboration within teams and among stakeholders. Due to the lack of adequate digital tools, agile teams mostly use physical artefacts like wallboards and story cards. In this paper, we present aWall, an agile team collaboration tool for large multi-touch wall systems. aWall was designed based on empirical user research, using new interaction and visualization concepts to support and foster the highly collaborative and communicative agile work style. The application is based on web technology and can be used in both co-located and distributed settings. The implemented prototypes were validated with end-users in a user workshop.
Hi Robot: Evaluating RoboTale (pp. 367-372)
  Aleksander Krzywinski; Weiqin Chen
This paper presents the evaluation of RoboTale, an interactive storytelling game for children. The evaluation assesses the usability of tangibles for robot control, their role in the storytelling, and the collaboration among the participating children. In the evaluation, 21 children told stories with RoboTale in groups of three. The subsequent evaluation revealed surprisingly creative uses of the tangible markers available to the children, and the collaboration in the telling of stories showed great potential for learning.
Marker-free Object Recognition on Tabletops: An Exploratory Study BIBAFull-Text 373-378
  Frédéric Rayar; Armand Renaudeau
Over the last decade, recognition of objects placed on the tabletop has been advertised by several major information technology companies as a near-future capability of next-generation tabletops. Marker-based recognition seems the straightforward solution, but one can easily see that it may not be a lasting solution for real-world scenarios. In this paper, we study the marker-free object recognition ability of a commonly used tabletop. This tabletop does not rely on external cameras and thus fits a future connected-home scenario well. A proof-of-concept system has been developed and tested on various objects. Based on this study, we draw some conclusions and perspectives at both the software and hardware levels.
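   The abstract does not spell out the recognition pipeline, but one simple marker-free approach on such hardware is to match the contour of an object's footprint against stored templates. The Python/OpenCV sketch below illustrates that idea under assumed parameters; it is not the authors' system.

   import cv2

   def footprint_contour(binary_image):
       """Largest external contour of an object's footprint on the surface."""
       contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                      cv2.CHAIN_APPROX_SIMPLE)
       return max(contours, key=cv2.contourArea) if contours else None

   def recognize(binary_image, templates, max_distance=0.15):
       """templates: {label: contour}. Returns the best-matching label or None.
       The distance threshold is an assumed value."""
       contour = footprint_contour(binary_image)
       if contour is None:
           return None
       best_label, best_dist = None, float("inf")
       for label, template in templates.items():
           d = cv2.matchShapes(contour, template, cv2.CONTOURS_MATCH_I1, 0.0)
           if d < best_dist:
               best_label, best_dist = label, d
       return best_label if best_dist <= max_distance else None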
Towards Collaborative Modelling of Business Processes on Large Interactive Touch Display Walls BIBAFull-Text 379-384
  Alexander Nolte; Ross Brown; Erik Poppe; Craig Anslow
Analyzing and redesigning business processes is a complex task which requires the collaboration of multiple actors. Current approaches focus on workshops where process stakeholders, together with modeling experts, create a graphical visualization of a process in a model. Within these workshops, stakeholders are mostly limited to verbal contributions, which are integrated into a process model by a modeling expert using traditional input devices. This limitation negatively affects the collaboration outcome as well as the perception of the collaboration itself. In order to overcome this problem, we created CubeBPM -- a system that allows groups of actors to interact with process models through a touch-based interface on a large interactive touch display wall. Using this system for collaborative modeling, we expect to provide a more effective collaboration environment, thus improving modeling performance and collaboration.

Demos

A TUI Platform for your Desktop BIBAFull-Text 385-387
  Felix Raymond; Loic Semelle; Mathieu Courtois; Cedric Kervegant; Delphine Graeff; Julien Castet; Jean-Baptiste de la Rivière
This paper describes the prototype of a TUI platform designed for non-experts that uses tangible stamps as the user interface. The concept includes a process that lets users print their own stamps according to their needs.
PERCs Demo: Persistently Trackable Tangibles on Capacitive Multi-Touch Displays BIBAFull-Text 389-392
  Christian Cherek; Simon Voelker; Jan Thar; Rene Linden; Florian Busch; Jan Borchers
Tangible objects on capacitive multi-touch surfaces are usually only detected while the user is touching them. When the user lets go of such a tangible, the system cannot distinguish whether the user just released the tangible or picked it up and removed it from the surface. In this demo we present PERCs, persistent capacitive tangibles that "know" whether they are currently on a capacitive touch surface or not. This is achieved by adding a small field sensor to the tangible that detects the touch screen's own weak electromagnetic touch detection probing signal. We show two applications that make use of PERC tangibles -- an air-hockey-like game for two players and a single-person arcade game.
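   The on/off-surface decision described above can be pictured with the following Python sketch: the tangible repeatedly samples its field sensor and reports a state change when the probing signal appears or stays absent for too long. The sensor access, threshold, and timing values are placeholders; this is not the published PERCs firmware.

   import time

   PROBE_THRESHOLD = 0.2  # assumed level indicating the screen's probing signal
   TIMEOUT_S = 0.5        # assumed silence before reporting "off the surface"

   def read_field_sensor():
       # Placeholder: on real hardware this would be an ADC read of the
       # tangible's field sensor.
       raise NotImplementedError

   def track_surface_state(report):
       on_surface, last_seen = False, None
       while True:
           level = read_field_sensor()
           now = time.monotonic()
           if level > PROBE_THRESHOLD:
               last_seen = now
               if not on_surface:
                   on_surface = True
                   report("on_surface")
           elif on_surface and last_seen is not None and now - last_seen > TIMEOUT_S:
               on_surface = False
               report("off_surface")
           time.sleep(0.01)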
Capture The Flag Demo: Engaging In A Multi-Device Augmented Reality Game BIBAFull-Text 393-396
  Suzanne Mueller; Andreas Dippon; Gudrun Klinker
We present a Capture the Flag based game that investigates possible forms of engagement in a multi-device game. A distinction between a publicly used space and a player's private space is made and used to display different information to players. The tablet and the Augmented Reality component are used to explore how players can be drawn to a certain physical space, creating a social and engaging game. The demo allows users to experience a different setup for a multi-device game that attempts to engage the users with the space and with each other.
EnchanTable: Displaying a Vertically Standing Mid-air Image on a Table Surface using Reflection BIBAFull-Text 397-400
  Hiroki Yamamoto; Hajime Kajita; Naoya Koizumi; Takeshi Naemura
We propose a novel display method, EnchanTable, that can augment a table surface with mid-air images. Users can interact with visual images displayed on the table by using real objects. In our optical design, we place an optical imaging device behind a table so that the light from the device forms a vertically standing mid-air image reflected at the table surface. This design displays the image right on the table. The merit of our method is that the only requirement is for the table to have a reflective surface. Utilizing this, we can place any devices, such as touch sensors, around the table, or display mid-air images on a tablet whose surface is sufficiently reflective. Owing to its compactness, this method can be applied to other tabletop-interaction systems.
Fall of Humans: Interactive Tabletop Games and Transmedia Storytelling BIBAFull-Text 401-404
  Mara Dionisio; Aditya Gujaran; Miguel Pinto; Augusto Esteves
This paper illustrates how transmedia storytelling can help introduce players to interactive tabletop games. To do so, we developed Fall of Humans (FoH), an experience that takes place over two games: Meat factory, a physical card game where players compete to create different zombies; and Uprising, an interactive tabletop game where players see the zombies they have created come to life.
aWall: A Socio-Cognitive Tool for Agile Team Collaboration using Large Multi-Touch Wall Systems BIBAFull-Text 405-408
  Magdalena Mateescu; Martin Kropp; Roger Burkhard; Carmen Zahn; Dario Vischi
Agile methods emphasize highly interactive and close collaboration within teams and among stakeholders. Due to the lack of adequate digital tools, agile teams mostly use physical artefacts such as wallboards and story cards. In this paper, we present aWall, an agile team collaboration tool for large multi-touch wall systems. aWall was designed based on empirical user research and uses new interaction and visualization concepts to support and foster the highly collaborative and communicative agile work style. The application is based on web technology and can be used in both co-located and distributed settings. The implemented prototypes were validated with end users in a user workshop. In the demo, users can experience the interaction and visualization concepts hands-on.
MAPS: A Multi-Dimensional Password Scheme for Mobile Authentication BIBAFull-Text 409-412
  Jonathan Gurary; Ye Zhu; George Corser; Jared Oluoch; Nahed Alnahash; Huirong Fu
It has long been recognized that no silver bullet exists to achieve both security and memorability. As requirements accumulate, the task of designing authentication schemes for mobile devices becomes even more challenging. We propose a Multi-dimensionAl Password Scheme (MAPS) for mobile authentication. MAPS fuses information from multiple dimensions to form a password. This fusion enlarges the password space, improves memorability, and enhances usability by reducing the number of gestures needed for authentication. Based on the idea of MAPS, we implement a Chess-based MAPS (CMAPS) for Android systems. Our user studies show that CMAPS can achieve high recall rates while exceeding both the security strength of current mobile authentication schemes and the security requirements of banking applications.
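   The claim that fusing dimensions enlarges the password space can be made concrete with a back-of-the-envelope calculation. The Python sketch below compares a 4-digit PIN with a multi-dimensional password in which every action picks one of several piece types and one of 64 board squares; the parameters are illustrative assumptions, not the actual CMAPS design.

   import math

   def password_bits(choices_per_action, num_actions):
       """Entropy in bits of a password drawn uniformly from the full space."""
       return num_actions * math.log2(choices_per_action)

   print(f"4-digit PIN: {password_bits(10, 4):.1f} bits")          # ~13.3 bits
   # Assumed chess-like action: 6 piece types x 64 target squares.
   print(f"4 fused actions: {password_bits(6 * 64, 4):.1f} bits")  # ~34.3 bits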
Airsteroids: Re-designing the Arcade Game Using MarkAirs BIBAFull-Text 413-416
  Fernando Garcia-Sanjuan; Javier Jaen; Alejandro Catala; Geraldine Fitzpatrick
This paper presents Airsteroids, a multi-player redesign of the classic arcade game Asteroids. The redesign makes use of handheld devices such as tablets and smartphones and of MarkAirs, an around-device interaction (ADI) technique based on fiducial markers that reduces occlusion on the screens and interference between users' interactions.
DreamScope Catcher: A Touch Sensitive Interface to Catch Dreams BIBAFull-Text 417-420
  Mara Dionisio; Paulo Bala; Rui Trindade; Valentina Nisi; Nuno Nunes
DreamScope is the interactive, stand-alone, self-contained portion of a larger art installation named Lucid Peninsula. The goal of the installation is to offer a way for people to experience the future through a physical interactive installation. To achieve this aim, we designed and developed the interactive DreamScope device, while the Time's Up collective designed and built the physical installation. On one side of the installation, the DreamViewer binoculars enable participants to see the fictional world of Lucid Peninsula and absorb data relating to factors such as air quality and the presence of plants and other life forms. On the other side, the audience can borrow mobile devices (DreamCatchers) and "catch" the dreams of the inhabitants of the peninsula, which are mixed with memories of the world before it was transformed.
BlackBlocks: Tangible Interactive System for Children to Learn 3-Letter Words and Basic Math BIBAFull-Text 421-424
  Wafa Almukadi; A. Lucas Stephane
We demonstrate BlackBlocks, a low-cost, portable prototype that works as a tangible user interface for children aged 4 to 8, supporting language and math learning. The prototype is used as a research instrument to compare learning with toy blocks against learning with a tangible system. A comparative experiment was performed with 18 children to evaluate the effects of the learning method.
The MiTable Multi-Touch Gestures Engine BIBAFull-Text 425-428
  Mauricio Cirelli; Ricardo Nakamura; Lucia Filgueiras
From smartphones to large multi-touch surfaces, different devices and applications need a technique to match each user's input to a known action. The multi-touch gesture recognition problem has been studied for a long time and has obtained its most significant results in the past decade. However, the multi-user scenario on large multi-touch tables imposes several challenges on existing gesture recognizers. In this paper, we propose a new multi-touch gestures engine which is capable of adding multi-user capabilities to existing state-of-the-art gesture recognizers.
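   One simple way to picture adding multi-user capability on top of a single-user recognizer is to cluster concurrent touches by spatial proximity and feed each cluster to its own recognizer instance. The Python sketch below illustrates such a greedy clustering step; the radius is an assumed value and this is not the MiTable algorithm itself.

   USER_RADIUS = 300.0  # assumed pixel radius grouping touches of one user

   def assign_touches_to_users(touches):
       """touches: list of (touch_id, x, y). A touch joins the first cluster
       whose centroid lies within USER_RADIUS, otherwise it starts a new one.
       Returns {cluster_index: [touch_id, ...]}."""
       clusters = []  # each: {"ids": [...], "cx": float, "cy": float}
       for tid, x, y in touches:
           for c in clusters:
               if (x - c["cx"]) ** 2 + (y - c["cy"]) ** 2 <= USER_RADIUS ** 2:
                   c["ids"].append(tid)
                   n = len(c["ids"])
                   c["cx"] += (x - c["cx"]) / n  # update running centroid
                   c["cy"] += (y - c["cy"]) / n
                   break
           else:
               clusters.append({"ids": [tid], "cx": x, "cy": y})
       return {i: c["ids"] for i, c in enumerate(clusters)}

   # Each cluster's touch stream can then be routed to a separate instance of
   # an existing single-user gesture recognizer.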
Making Fashion More Trendy through Touchless Interactive Displays Integrated with Mobile Devices BIBAFull-Text 429-432
  Franca Garzotto; Antonella Di Rienzo; Ayse Naciye Celebi Yilmaz; Luigi Oliveto; Paolo Cremonesi; Cristina Frà; Massimo Valla
Our research focuses on innovative interactive technologies for the retail domain, in particular fashion. We integrate "touchless" large interactive displays, which enable full-body interaction at a distance, with personal mobile devices to create engaging user experiences that have strong potential for advertisement and branding purposes in the fashion sector.

Doctoral Symposium

Usability Guidelines for Co-Located Multi-User Interaction on Wall Displays BIBAFull-Text 433-438
  Andrea Nutsi
Today, interactive wall displays are widely seen in industry and research. Due to their size, these displays offer the opportunity for several users to interact co-located and simultaneously. An application supporting multi-user interaction has to be designed differently from traditional single-user interfaces, for example by supporting several parallel workspaces or considering bystanders. The overall goal of this thesis is the development of usability guidelines for multi-user applications running on interactive wall displays. These guidelines should aid developers of future applications in ensuring high multi-user usability. The research approach combines literature analysis with usability studies and controlled laboratory studies. The literature analysis and the usability study will identify aspects specific to multi-user usability. The laboratory studies will deepen the understanding of two selected usability aspects, with readability in a multi-user scenario being one of them.
Designing Ad-Hoc Cross Device Collaborations For Small Groups BIBAFull-Text 439-444
  Frederik Brudy
The curation of historic documents is a difficult task as it requires combining information and raw material from many different sources. Digital tools can support such a sensemaking task, and group collaboration can help the discovery of knowledge. While most people's personal devices (such as phones, tablets and laptops) are connected to the internet, they are not aware of each other's presence or relationship when in close proximity. Leveraging people's personal devices and other devices in their surroundings provides an opportunity to support the curation of historic documents in ad-hoc small-group scenarios. I describe my motivation and a selection of related work, leading to requirements for such a system. I then state how I plan to address these challenges and my current state of research, following two parallel tracks: building and testing technology, as well as conducting observational studies and interviews to inform my designs.
Toward Motor-Intuitive Interaction Primitives for Touchless Interfaces BIBAFull-Text 445-450
  Debaleena Chattopadhyay
To design intuitive interactive systems in various domains, such as health, entertainment, or smart cities, researchers are exploring touchless interaction. Touchless systems allow individuals to interact without any input device, using freehand gestures in midair. Gesture-elicitation studies focus on generating user-defined interface controls to design touchless systems. Interface controls, however, are composed of primary units called interaction primitives, which remain little explored. For example, which touchless primitives are motor-intuitive, i.e., able to unconsciously draw on our pre-existing sensorimotor knowledge (such as visual perception or motor skills)? Drawing on the disciplines of cognitive science and motor behavior, my research aims to understand the perceptual and motor factors in touchless interaction with 2D user interfaces (2D UIs). I then aim to apply this knowledge to design a set of touchless interface controls for large displays.
Interactive Spaces: Models and Algorithms for Reality-based Music Applications BIBAFull-Text 451-456
  Marcella Mandanici
Reality-based interfaces have the property of linking the user's physical space with the computer's digital content, bringing in intuition, plasticity and expressiveness. Moreover, applications designed upon motion- and gesture-tracking technologies involve many psychological features, such as spatial cognition and implicit knowledge. All these elements form the background of the three music applications presented here, which employ the characteristics of three different interactive spaces: a user-centered three-dimensional space, a two-dimensional camera-tracked floor space, and a small sensor-centered three-dimensional space. The basic idea is to exploit each application's spatial properties in order to convey musical knowledge, allowing users to act inside the designed space and to learn through it in an enactive way.
Collaborative Learning In An Artifact Ecology: A Distributed Cognition Perspective BIBAFull-Text 457-462
  Christina Vasiliou
This work aims to extend our understanding of how groups of learners collaborate in a learning environment rich in technologies, namely an artifact ecology. For the purpose of this investigation we enriched a postgraduate HCI course with four identical technology-rich settings that aimed to support student collaborative activities around a design problem. Following an ethnographic approach, both qualitative and quantitative data were collected in HCI courses over three years, resulting in a rich dataset for analysis. Initial studies helped us understand the domain knowledge, context, and learners' needs and experiences. Then, using Distributed Cognition (DC) as a conceptual framework to guide analysis and interpretation of findings, we worked toward understanding the interdependencies of learners, tasks, and technologies in the environment and highlighting aspects for redesign. The findings of these individual studies were then combined in order to provide a holistic understanding of the collaborative activities in an artifact ecology.
Supporting the Meeting Journey: Understanding and Designing Collaborative Device Ecologies BIBAFull-Text 463-468
  Aurélien Ammeloot
The combination of personal and mobile technologies (Bring Your Own Device) with technology-augmented spaces designed for collaboration offers new design challenges for the HCI community. This paper summarizes a doctoral work-in-progress aiming to further understand the design of multi-screen device ecologies for collaboration: understanding the activities involved, i.e. the Meeting Journey, and the design principles best suited to support this journey. To achieve this, a number of methodologies have been used: ethnographic observations, semi-structured interviews, and focus groups. The journey and design principles are established with an analysis method informed by Grounded Theory. Future work will include the development and evaluation of a software assistant supporting meeting journey activities using a hybrid approach.
Supporting Interactive Graph Analysis Using Adjustable Magic Lenses BIBAFull-Text 469-474
  Ulrike Kister
Magic lenses are important tools for visualization and can be used to explore and edit graphs. Large displays are beneficial here, both for their visualization space and for the possibility of multi-user analysis. With this work, we contribute enhancements to graph lens functionality, allowing the user to switch lens functions, adjust function parameters, and combine lenses. Further, we investigate the application of various interaction modalities for magic lenses, such as multi-touch, tangibles, and body-centric interaction, to support interactive graph exploration and manipulation.
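   The idea of switching, parameterizing, and combining lens functions can be sketched as composable node transformations that are applied only within a lens radius. The Python sketch below is a minimal illustration under assumed lens types; it is not the author's implementation.

   def fisheye_lens(magnification=2.0):
       """Lens that pushes node positions away from the lens centre."""
       def apply(node, cx, cy):
           node = dict(node)
           node["x"] = cx + (node["x"] - cx) * magnification
           node["y"] = cy + (node["y"] - cy) * magnification
           return node
       return apply

   def filter_lens(min_degree=3):
       """Lens that hides low-degree nodes inside the lens region."""
       def apply(node, cx, cy):
           node = dict(node)
           node["visible"] = node.get("degree", 0) >= min_degree
           return node
       return apply

   def combine(*lenses):
       """Combine lenses by applying them in sequence to each node."""
       def apply(node, cx, cy):
           for lens in lenses:
               node = lens(node, cx, cy)
           return node
       return apply

   def apply_lens(lens, nodes, cx, cy, radius):
       """Apply a (possibly combined) lens only to nodes within its radius."""
       return [lens(n, cx, cy)
               if (n["x"] - cx) ** 2 + (n["y"] - cy) ** 2 <= radius ** 2 else n
               for n in nodes]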

Workshops

Architecting Collaborative Learning Places BIBAFull-Text 475-478
  Edward Tse; Ahmed Kharrufa; Emma Mercier; Cresencia Fong
Over the years we have seen the proliferation of mobile, internet-connected devices. This growth is challenging the traditional notion of education being bound to a single time and place. At the same time, teachers are asked to develop 21st-century competencies such as creativity, collaboration, and technological literacy. The variety of technologies in today's Collaborative Learning Places presents new opportunities for learning and new challenges in device orchestration. This is driving the need for design that considers the pedagogical, practical, and technical aspects of Architecting Collaborative Learning Places. This workshop aims to connect the expertise of the ITS community with the domains of educational pedagogy and practice. The goal is to identify and explore challenges in Architecting Collaborative Learning Places. We aim to explore topics such as skills for a knowledge society, equity of participation, orchestration, collaboration, and practical challenges.
Collaboration Meets Interactive Surfaces (CMIS): Walls, Tables, Mobiles, and Wearables BIBAFull-Text 479-483
  Craig Anslow; Pedro Campos; Laurent Grisoni; Andres Lucero
This workshop proposes to bring together researchers who are interested in improving collaborative experiences through the combination of multiple interaction surfaces of diverse sizes and formats, ranging from large-scale walls to tables, mobiles, and wearables. The opportunities for innovation exist, but the ITS, CHI, CSCW, and other HCI communities have not yet thoroughly addressed the problem of bringing effective collaboration activities together using multiple interactive surfaces, especially in complex work domains. Of particular interest is the potential synergy that can be obtained by effectively combining different-sized surfaces and sharing information between devices.
Cross-Surface: Workshop on Interacting with Multi-Device Ecologies in the Wild BIBAFull-Text 485-489
  Steven Houben; Jo Vermeulen; Clemens Klokmose; Nicolai Marquardt; Johannes Schöning; Harald Reiterer
In this workshop, we will review and discuss opportunities, technical challenges and problems with cross-device interactions in interactive multi-surface and multi-device ecologies. We aim to bring together researchers and practitioners currently working on novel techniques for cross-surface interactions, focusing both on technical as well as interaction challenges for introducing these technologies into the wild, and highlighting opportunities for further research. The workshop will help to facilitate knowledge exchange on the inherent challenges of building robust and intuitive cross-surface interactions, identify application domains and enabling technologies for cross-surface interactions in the wild, and establish a research community to develop effective strategies for successful design of cross-device interactions.
ITS Workshop DEXIS 2015: Visual Data Exploration on Interactive Surfaces BIBAFull-Text 491-494
  Petra Isenberg; Bongshin Lee; Alark Joshi; Tobias Isenberg
We focus on the use of interactive surfaces for visual data exploration. The workshop topics are situated at the intersection of Interaction and Visualization research, and we ask for contributions from members of one or both communities. Our main goal is to call for the development of more dedicated research on visualization systems for interactive surfaces ranging from small screen smartphones to medium-size tables to large wall-size displays. The workshop is meant to provide a space for visualization and interaction researchers to meet, discuss, advance the state-of-the-art, and refine research agendas.
MULTIsensory Interaction and Assistive Technology: Feasibility and Effectiveness (MULTI-AT) BIBAFull-Text 495-498
  Luca Brayda; Andrea Serino; Elisabeth Wilhelm; Marc Macé
The wide availability of human-computer interfaces that stimulate our senses (vision, hearing and touch) does not equally serve all categories of end users: people with sensory and/or motor disabilities cannot access information as the majority does, because solutions for the wider public are implicitly designed for persons with average sensory abilities. Assistive technologies exploit alternative stimulation methodologies that complement the residual sensory channels. Successful implementations would give people with disabilities the possibility of using the same human-computer interfaces as everyone else, or of developing new interfaces for therapeutic and rehabilitation purposes.
   However, many assistive devices that look promising at a superficial level do not go beyond initial curiosity. At the design stage, the most common source of failure is the lack of rigorous clinical tests proving the effectiveness of the stimulation methods. At the end-user level, failure instead comes from the interference of the new signal with the residual sensory channels and/or its negative impact on the person's social life. For these reasons, the joint effort of researchers and experts in the rehabilitation field is a mandatory step towards designing well-targeted, useful, and socially acceptable novel devices.
   The goal of this workshop is to attract contributions from scientists working on multisensory perception and related methodologies, from technologists and engineers interested in investigating alternative solutions for multisensory stimulation, and from clinicians interested in developing new therapeutic strategies based on novel interfaces.
   The ultimate purpose of this workshop is to integrate, in an interdisciplinary fashion, the common interests of experts from different fields -- interests that stem from the needs of end users and extend to technological solutions.
Shared Infrastructures for Tangible Tabletops & Interactive Surfaces BIBAFull-Text 499-500
  Martin Kaltenbrunner; Florian Echtler
Although interactive surfaces have been a constant topic of research during the last decades, there is still little agreement on shared infrastructures for their conception and development. While mainstream industry has partially integrated basic multi-touch functionality into most mobile and some desktop platforms, further attempts at introducing commercial tabletops have since been largely abandoned. Therefore our research community, which seeks to conceive and implement novel interactive tabletop concepts, often relies on prototyping individual solutions or on integrating and extending openly shared community solutions.
   One example of such a shared ecosystem is the TUIO protocol, which was designed to encode and transport a low-level interface abstraction within a wide variety of settings, from tangible user interfaces to multi-touch tabletops. However, our community also needs further implementations of common middleware solutions such as gesture recognition or object classification, as well as their integration into high-level development frameworks. Furthermore, it would also be desirable to establish a shared collection of building blocks at the hardware level. Such a common ecosystem would not only accelerate the development efforts of our community, but should also support scientific quality control through reproducible and comparable results.
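   As a concrete example of the low-level abstraction TUIO provides, the short Python sketch below listens for 2D cursor messages on the conventional TUIO port 3333, assuming a tracker is broadcasting and the third-party python-osc package is installed.

   from pythonosc.dispatcher import Dispatcher
   from pythonosc.osc_server import BlockingOSCUDPServer

   def on_2dcur(address, command, *args):
       # TUIO sends "alive", "set", and "fseq" commands on /tuio/2Dcur;
       # "set" carries session id, x, y, velocities, and acceleration.
       if command == "set":
           session_id, x, y = args[0], args[1], args[2]
           print(f"cursor {session_id}: ({x:.3f}, {y:.3f})")

   dispatcher = Dispatcher()
   dispatcher.map("/tuio/2Dcur", on_2dcur)
   BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()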
Interaction Techniques for Wall-Sized Screens BIBAFull-Text 501-504
  Lars Lischke; Jürgen Grüninger; Khalil Klouche; Albrecht Schmidt; Philipp Slusallek; Giulio Jacucci
Large screen displays are part of many future visions, such as i-LAND, which describes a possible workspace of the future. Research has shown that wall-sized screens provide clear benefits for data exploration, collaboration and organizing work in office environments. With increasing computational power and falling display prices, wall-sized screens are currently making the step out of research labs and specialized settings into office environments and private life. Today, there is no standard set of techniques for interacting with wall-sized displays, and it is even unclear whether a single mode of input is suitable for all potential applications. In this workshop, we will bring together researchers from academia and industry who work on large screens. Together, we will survey current research directions, review promising interaction techniques, and identify the underlying fundamental research challenges.