
Proceedings of the 2014 International Conference on Advanced Visual Interfaces

Fullname: AVI'14: International Working Conference on Advanced Visual Interfaces
Editors: Paolo Paolini; Franca Garzotto
Location: Como, Italy
Dates: 2014-May-27 to 2014-May-29
Standard No: ISBN: 978-1-4503-2775-6; ACM DL: Table of Contents; hcibib: AVI14
Links: Conference Website
  1. AVI'14 best papers
  2. Towards "natural interaction"
  3. Interacting from the distance
  4. Tangibles
  5. Connection and collaboration
  6. Designing for special needs
  7. Adaptive and context aware interfaces
  8. Designing for specific domains
  9. Evaluation
  10. Touch and multitouch
  11. Interface metaphors
  12. Social interaction
  13. Visual analytics
  14. Software engineering approaches
  15. Information visualization
  16. Search and information management
  17. Gestural interaction
  18. Posters
  19. Workshop papers

AVI'14 best papers

User-defined gestures for elastic, deformable displays BIBAFull-Text 1-8
  Giovanni Maria Troiano; Esben Warming Pedersen; Kasper Hornbæk
Elastic, deformable displays allow users to give input by pinching, pushing, folding, and twisting the display. However, little is known about what gestures users prefer or how they will use elasticity and deformability as input. We report a guessability study where 17 participants performed gestures to solve 29 tasks, including selection, navigation, and 3D modeling. Based on the resulting 493 gestures, we describe a user-defined gesture set for elastic, deformable displays. We show how participants used depth and elasticity of the display to simulate deformation, rotation, and displacement of objects. In addition, we show how the use of desktop computers as well as multi-touch interaction affected users' choice of gestures. Finally, we discuss some unique uses of elasticity and deformability in gestures.
AwToolkit: attention-aware user interface widgets BIBAFull-Text 9-16
  Juan E. Garrido; Victor M. R. Penichet; Maria D. Lozano; Aaron Quigley; Per Ola Kristensson
Increasing screen real-estate allows for the development of applications where a single user can manage a large amount of data and related tasks through a distributed user interface. However, such users can easily become overloaded and unaware of display changes as they alternate their attention towards different displays. We propose AwToolkit, a novel widget set for developers that supports users in maintaining awareness in multi-display systems. The AwToolkit widgets automatically determine which display a user is looking at and provide notifications at different levels of subtlety to make the user aware of any unattended display changes. The toolkit uses four notification levels (unnoticeable, subtle, intrusive and disruptive), ranging from an almost imperceptible visual change to a clear and visually salient one. We describe AwToolkit's six widgets, which have been designed for C# developers, and the design of a user study with an application oriented towards healthcare environments. The evaluation results reveal a marked increase in user awareness in comparison to the same application implemented without AwToolkit.
Reflections on how designers design with data BIBAFull-Text 17-24
  Alex Bigelow; Steven Drucker; Danyel Fisher; Miriah Meyer
In recent years many popular data visualizations have emerged that are created largely by designers whose main area of expertise is not computer science. Designers generate these visualizations using a handful of design tools and environments. To better inform the development of tools intended for designers working with data, we set out to understand designers' challenges and perspectives. We interviewed professional designers, conducted observations of designers working with data in the lab, and observed designers working with data in team settings in the wild. A set of patterns emerged from these observations from which we extract a number of themes that provide a new perspective on design considerations for visualization tool creators, as well as on known engineering problems.

Towards "natural interaction"

A natural interface for multi-focal plane head mounted displays using 3D gaze BIBAFull-Text 25-32
  Takumi Toyama; Daniel Sonntag; Jason Orlosky; Kiyoshi Kiyokawa
In mobile augmented reality (AR), it is important to develop interfaces for wearable displays that not only reduce distraction, but that can be used quickly and in a natural manner. In this paper, we propose a focal-plane based interaction approach with several advantages over traditional methods designed for head mounted displays (HMDs) with only one focal plane. Using a novel prototype that combines a monoscopic multi-focal plane HMD and eye tracker, we facilitate interaction with virtual elements such as text or buttons by measuring eye convergence on objects at different depths. This can prevent virtual information from being unnecessarily overlaid onto real world objects that are at a different range, but in the same line of sight. We then use our prototype in a series of experiments testing the feasibility of interaction. Despite only being presented with monocular depth cues, users have the ability to correctly select virtual icons in near, mid, and far planes in 98.6% of cases.
Touchless circular menus: toward an intuitive UI for touchless interactions with large displays BIBAFull-Text 33-40
  Debaleena Chattopadhyay; Davide Bolchini
Researchers are exploring touchless interactions in diverse usage contexts. These include interacting with public displays, where mouse and keyboards are inconvenient, activating kitchen devices without touching them with dirty hands, or supporting surgeons in browsing medical images in a sterile operating room. Unlike traditional visual interfaces, however, touchless systems still lack a standardized user interface language for basic command selection (e.g., menus). Prior research proposed touchless menus that require users to comply strictly with system-defined postures (e.g., grab, finger-count, pinch). These approaches are problematic because they are analogous to command-line interfaces: users need to remember an interaction vocabulary and input a pre-defined symbol (via gesture or command). To overcome this problem, we introduce and evaluate Touchless Circular Menus (TCM) -- a touchless menu system optimized for large displays, which enables users to make simple directional movements for selecting commands. TCM utilize our abilities to make mid-air directional strokes, relieve users from learning posture-based commands, and shift the interaction complexity from users' input to the visual interface. In a controlled study (N=15), when compared with contextual linear menus using grab gestures, participants using TCM were more than two times faster in selecting commands and perceived lower workload. However, users made more command-selection errors with TCM than with linear menus. The menu's triggering location on the visual interface significantly affected the effectiveness and efficiency of TCM. Our contribution informs the design of intuitive UIs for touchless interactions with large displays.
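The directional-stroke selection that TCM relies on reduces to binning a stroke's angle into one of n equal menu sectors. A minimal sketch of that idea (my own illustration, not the authors' implementation; the command names and sector layout are invented):

```python
import math

def select_command(start, end, commands):
    """Map a mid-air directional stroke onto one of len(commands) equally
    sized circular-menu sectors; sector 0 is centred on the positive x-axis."""
    angle = math.atan2(end[1] - start[1], end[0] - start[0]) % (2 * math.pi)
    sector = 2 * math.pi / len(commands)
    return commands[int(((angle + sector / 2) % (2 * math.pi)) // sector)]

menu = ["copy", "paste", "delete", "undo"]
print(select_command((0, 0), (10, 1), menu))  # shallow rightward stroke → copy
```

Because only the stroke's direction matters, no posture vocabulary has to be memorized, which is exactly the complexity shift from user input to visual interface that the abstract describes.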

Interacting from the distance

Static Voronoi-based target expansion technique for distant pointing BIBAFull-Text 41-48
  Maxime Guillon; François Leitner; Laurence Nigay
Addressing the challenges of distant pointing, we present VTE (Voronoi-based Target Expansion), a feedforward static targeting assistance technique. VTE statically displays all activation areas by dividing the total screen space into regions such that each region contains exactly one target (a Voronoi tessellation). The key benefit of VTE is that it gives the user an immediate understanding of the targets' activation boundaries before the pointing task even begins: VTE thus provides static targeting assistance for both phases of a pointing task, the ballistic motion and the corrective phase. With the goal of keeping the environment visually uncluttered, we present a first user study exploring the visual parameters of VTE that affect the technique's performance. In a second user study focusing on static versus dynamic assistance, we compare VTE with Bubble Ray, a dynamic Voronoi-based targeting assistance technique for distant pointing. Results show that VTE significantly outperforms the dynamic assistance technique and is preferred by users both for ray-casting pointing and for relative pointing with a hand-controlled cursor.
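The core of the Voronoi scheme is easy to illustrate: each screen position belongs to the cell of its nearest target, so selection reduces to a nearest-centre test. A minimal sketch (mine, not the authors' code; the target layout is invented):

```python
import math

def nearest_target(cursor, targets):
    """Return the index of the target whose centre is closest to the cursor.
    Every screen position thus falls in exactly one target's Voronoi cell,
    effectively expanding each target's activation area."""
    return min(range(len(targets)), key=lambda i: math.dist(cursor, targets[i]))

targets = [(100, 100), (400, 120), (250, 300)]
print(nearest_target((380, 90), targets))  # → 1 (cursor lies in target 1's cell)
```

Drawing the cell boundaries themselves (e.g. with scipy.spatial.Voronoi) is what makes the assistance *static*: the user sees the activation areas before the ballistic phase starts.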
GlideCursor: pointing with an inertial cursor BIBAFull-Text 49-56
  Michel Beaudouin-Lafon; Stéphane Huot; Halla Olafsdottir; Pierre Dragicevic
Pointing on large displays with an indirect, relative pointing device such as a touchpad often requires clutching. This article introduces gliding, where the cursor continues to move during the clutching gestures. The effect is that of controlling the cursor as a detached object that can be pushed, with inertia and friction similar to a puck being pushed on a table. We analyze gliding from a practical and a theoretical perspective and report on two studies. The first controlled experiment establishes that gliding reduces clutching and can improve pointing performance for large distances. We introduce cursor efficiency to capture the effects of gliding on clutching. The second experiment demonstrates that participants use gliding even when an efficient acceleration function lets them perform the task without it, without degrading performance.
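The puck metaphor can be sketched as a velocity that decays under per-frame friction once the finger lifts; the constants here are illustrative assumptions, not values from the paper:

```python
FRICTION = 0.9  # fraction of velocity retained each frame (assumed value)

def glide(pos, vel, min_speed=0.5):
    """Advance a detached, inertial cursor until friction brings its speed
    below min_speed; returns the final resting position."""
    x, y = pos
    vx, vy = vel
    while (vx * vx + vy * vy) ** 0.5 >= min_speed:
        x, y = x + vx, y + vy
        vx, vy = vx * FRICTION, vy * FRICTION
    return x, y
```

During a clutch, the real system would keep integrating this motion, so the cursor covers ground while the finger repositions; this is how gliding reduces clutching on large displays.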

Tangibles

Back to tangibility: a post-WIMP perspective on control room design BIBAFull-Text 57-64
  Jens Müller; Tobias Schwarz; Simon Butscher; Harald Reiterer
In today's digital control rooms, desktop computers represent the most common interface for process control. Compared to their predecessors -- manual control actuators -- desktop computers enable quick and effective process intervention, but they lack process-related interaction qualities such as haptic feedback and the involvement of motor skills. Design trade-offs therefore have to be made to combine the strengths of both paradigms: today's processing power with the interaction qualities of former control room interfaces. In this paper, related interaction concepts are presented and evaluated. In a control room scenario, participants were tasked with adjusting numerical values -- so-called process variables -- under two traditional conditions (mouse, keyboard) and two post-WIMP conditions (touch, tangible). Task completion time and recall accuracy of the adjusted values were measured. Traditional desktop interaction proved to be faster, whereas control actions could be recalled significantly better with the tangible control elements. We therefore suggest providing both tangible controls for process maintenance and traditional desktop interaction for critical situations that require quick intervention.
ArtVis: combining advanced visualisation and tangible interaction for the exploration, analysis and browsing of digital artwork collections BIBAFull-Text 65-72
  Bruno Dumas; Bram Moerman; Sandra Trullemans; Beat Signer
We present ArtVis, an application combining advanced visualisation techniques and tangible interaction to explore a large digital collection of almost 28 000 European artworks managed by the Web Gallery of Art. In order to get new insights by exploring, analysing and browsing the artworks, our graphical ArtVis user interface offers three complementary but synchronised visualisation components. We further developed a tangible ArtVis user interface for the playful exploration and seamless integration of the digital artwork collection with physical artefacts. A formative evaluation of the ArtVis prototype revealed that users are able to answer relatively difficult questions as well as get some new insights based on the vast amount of data. A second user evaluation of the tangible ArtVis interface has shown that this sort of physical interaction attracts users and stimulates them to further explore the digital artwork collection.
PuppetX: a framework for gestural interactions with user constructed playthings BIBAFull-Text 73-80
  Saikat Gupta; Sujin Jang; Karthik Ramani
We present PuppetX, a framework for both constructing playthings and playing with them using spatial body and hand gestures. The framework allows users to construct puppet-like playthings from modular components representing basic geometric shapes. It is topologically aware: depending on its configuration, PuppetX automatically determines its own topological construct. Once a plaything is made, users can interact with it naturally via body and hand gestures detected by depth-sensing cameras. This gives users the freedom to create playthings from our components and the ability to control them using full-body interactions. Our framework creates affordances for a new variety of gestural interactions with physically constructed objects. As a by-product, a virtual 3D model is created, which can be animated as a proxy for the physical construct. Our algorithms can recognize hand and body gestures in various configurations of the playthings. Through our work, we push the boundaries of interaction with user-constructed objects using large gestures involving the whole body or fine gestures involving the fingers. We discuss the results of a study of how users interact with the playthings and conclude with a demonstration of PuppetX's gestural-interaction abilities across a variety of interaction scenarios.
T4 -- transparent and translucent tangibles on tabletops BIBAFull-Text 81-88
  Wolfgang Büschel; Ulrike Kister; Mathias Frisch; Raimund Dachselt
In many cases, Tangible User Interfaces allow the manipulation of digital content with physical objects recognized by an interactive tabletop. Usually, such tangible objects are made of opaque wood or synthetic materials, thereby occluding the display. In this paper, we systematically investigate the promising potential of tangibles entirely made of transparent or translucent materials. Besides visualizing content directly below a manipulable tangible, transparent objects also facilitate direct touch interaction with the content below, dynamic illumination and glowing effects. We propose a comprehensive design space for transparent tangibles on tabletops based on a thorough review of existing work. By reporting on our own experiments and prototypes, we address several gaps in this design space, regarding aspects of both interaction and visualization. These include the illumination of tangibles as well as the precise input with transparent tangibles for which we also present the promising results of an initial user study. Finally, benefits and shortcomings of transparent tangibles are discussed and resulting design considerations are presented.

Connection and collaboration

Paper vs. tablets: the effect of document media in co-located collaborative work BIBAFull-Text 89-96
  Jonathan Haber; Miguel A. Nacenta; Sheelagh Carpendale
With new computer technologies, portable devices are rapidly approaching the dimensions and characteristics of traditional pen-and-paper tools. Text and graphic documents are now commonly viewed on small tablet computers. We conducted a study with small groups of participants to better understand how paper-based text and graphics are used by small collaborative groups, compared to how these groups make use of documents presented on digital tablets with digital styluses. Our results indicate that digital tools, compared to paper tools, can affect the levels of verbal communication and participants' gaze engagement with other group members. Additionally, we observed how participants spatially arranged paper-based and digital tools during collaborative group activities, how often they switched from digital to paper, and that they still preferred paper overall.
Sandboxed interaction for casual users in shared spaces BIBAFull-Text 97-104
  Andrea Albarelli; Augusto Celentano
Simple and natural interaction is considered the most important feature of an interface for multiple impromptu users in public spaces. Intuitiveness and forthright feedback are key factors in enabling untrained users to grasp the interaction model in a short time span. The lack of proper constraints, designed to restrict and guide user actions, might hinder such intuitiveness, especially when the number of users grows or their behavior is exceedingly unrestrained.
   In this paper we introduce the idea of sandboxed interaction, a general concept that groups many flavors of physical and software-based measures aiming at guaranteeing a smooth and fitting interaction. To this end, we propose different types of sandboxes, suitable to handle different kind of interaction problems, and discuss a case study where several sandboxing measures have been put into use and evaluated within a real-world application scenario.
EyeGaze: enabling eye contact over video BIBAFull-Text 105-112
  Jesper Kjeldskov; Jacob H. Smedegård; Thomas S. Nielsen; Mikael B. Skov; Jeni Paay
Traditional video communication systems offer a very limited experience of eye contact due to the offset between cameras and the screen. In response, we present EyeGaze, which uses multiple Kinect cameras to generate a 3D model of the user, and then renders a virtual camera angle giving the user an experience of eye contact. As a novel approach, we use concepts from KinectFusion, such as a volumetric voxel data representation and GPU accelerated ray tracing for viewpoint rendering. This achieves detail from a noisy source, and allows the real-time video output to be a composite of old and new data. We frame our work in literature on eye contact and previous approaches to supporting it over video. We then describe EyeGaze, and an empirical study comparing it with communication face-to-face or over traditional video. The study shows that while face-to-face is still superior, EyeGaze has added value over traditional video in terms of eye contact, involvement, turn-taking and co-presence.

Designing for special needs

Designing accessible ICT products and services: the VERITAS accessibility testing platform BIBAFull-Text 113-116
  Fotios Spyridonis; Panagiotis Moschonas; Katerina Touliou; Athanasios Tsakiris; Gheorghita Ghinea
Accessibility testing and support are among the key components of designing accessible products and services for disabled users. The VERITAS FP7 project has developed a platform consisting of several tools that provide automatic simulation feedback and reporting for built-in accessibility support at all stages of ICT product development. In this explorative pilot study, we evaluated the usability and technology acceptance of three of these tools in the design of accessible GUI-based ICT products in five application domains. A sample of 80 designers/developers (12 female; 68 male) evaluated the three tools by filling in the standard SUS and TAM questionnaires. Results revealed good usability and technology acceptance for all three tools as a novel accessibility testing method. The VERITAS platform can offer an intuitive solution for accessibility design and can help ensure that ICT products are designed for all.
Motion-based touchless interaction for ASD children: a case study BIBAFull-Text 117-120
  Franca Garzotto; Mirko Gelsomini; Luigi Oliveto; Matteo Valoriani
As of 2013, autism spectrum disorder (ASD) was the fastest-growing disability in the United States. The disorder is characterized by a triad of symptoms related to lack of social interaction, deficits in the acquisition and expression of language, and repetitive patterns of behavior, often accompanied by sensorimotor impairments. In our research, we explore the use of motion-based touchless games for ASD children and develop innovative tools that can be used autonomously by teachers and therapists in school classes or therapeutic activities. The paper describes the design and preliminary evaluation of "Pixel Balance", a motion-based touchless game conceived to promote imitative capability, body schema awareness, and social skills in ASD children.

Adaptive and context aware interfaces

BeatBox: end-user interactive definition and training of recognizers for percussive vocalizations BIBAFull-Text 121-124
  Kyle Hipke; Michael Toomim; Rebecca Fiebrink; James Fogarty
Interactive end-user training of machine learning systems has received significant attention as a tool for personalizing recognizers. However, most research limits end users to training a fixed set of application-defined concepts. This paper considers additional challenges that arise in end-user support for defining the number and nature of concepts that a system must learn to recognize. We develop BeatBox, a new system that enables end-user creation of custom beatbox recognizers and interactive adaptation of recognizers to an end user's technique, environment, and musical goals. BeatBox supports rapid end-user exploration of variations in the number and nature of learned concepts, and provides end users with feedback on the reliability of recognizers learned for different potential combinations of percussive vocalizations. In a preliminary evaluation, we observed that end users were able to quickly create usable classifiers, that they explored different combinations of concepts to test alternative vocalizations and to refine classifiers for new musical contexts, and that learnability feedback was often helpful in alerting them to potential difficulties with a desired learning concept.
RouteLens: easy route following for map applications BIBAFull-Text 125-128
  Jessalyn Alvina; Caroline Appert; Olivier Chapuis; Emmanuel Pietriga
Millions of people go to the Web to search for geographical itineraries. Inspecting those map itineraries remains tedious because they seldom fit on screen, requiring much panning & zooming to see details. Focus+context techniques address this problem by displaying routes at a scale that allows them to fully fit on screen: users see the entire route at once, and perform magnified steering using a lens to navigate along the path, revealing additional detail. Navigation based on magnified steering has been shown to outperform pan & zoom for large steering tasks. Yet, this task remains challenging, in part because paths have a tendency to "slip off" the side of the lens. RouteLenses automatically adjust their position based on the geometry of the path that users steer through. RouteLenses make it easier for users to follow a route, yet do not constrain movements too strictly, leaving them free to move the lens away from the path to explore its surroundings.
Pupil-canthi-ratio: a calibration-free method for tracking horizontal gaze direction BIBAFull-Text 129-132
  Yanxia Zhang; Andreas Bulling; Hans Gellersen
Eye tracking is compelling for hands-free interaction with pervasive displays. However, most existing eye tracking systems require specialised hardware and explicit calibration of equipment and individual users, which inhibits their widespread adoption. In this work, we present a light-weight, calibration-free gaze estimation method that leverages only an off-the-shelf camera to track users' gaze horizontally. We introduce pupil-canthi-ratio (PCR), a novel measure for estimating gaze direction. Using the displacement vector between the inner eye corner and the pupil centre of each eye, PCR is calculated as the ratio of the displacement vectors of the two eyes. We establish a mapping from PCR to gaze direction by Gaussian process regression, which inherently infers users' averted horizontal gaze directions. We present a study to identify the characteristics of PCR. The results show that PCR achieved an average accuracy of 3.9 degrees across different people. Finally, we show examples of real-time applications of PCR that allow users to interact with a display by moving only their eyes.
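One plausible reading of the PCR computation (my interpretation for illustration; the paper's exact formula may differ) is the ratio of the two eyes' horizontal pupil-to-inner-canthus displacements:

```python
def pupil_canthi_ratio(left_pupil, left_canthus, right_pupil, right_canthus):
    """Ratio of the horizontal pupil offsets from each inner eye corner.
    As gaze moves horizontally the two offsets change in opposite directions,
    so the ratio varies monotonically with gaze angle."""
    d_left = left_pupil[0] - left_canthus[0]     # horizontal offset, left eye
    d_right = right_pupil[0] - right_canthus[0]  # horizontal offset, right eye
    return abs(d_left) / abs(d_right)
```

A Gaussian process regression would then map this scalar onto a gaze angle, as the abstract describes; no per-user calibration of absolute pupil positions is needed because the measure is a ratio.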

Designing for specific domains

Visualizing food ingredients for children by utilizing glyph-based characters BIBAFull-Text 133-136
  Patrick Riehmann; Wieland Möbus; Bernd Froehlich
We present a system for visualizing food ingredients with a glyph-based approach, aimed at children approximately four to eight years old. The intention is to visually explain, and to visually argue, whether a certain food a child is eager to eat is healthy or, more often, not. To this end, we introduce two comic-like characters whose shape and features depend on the main ingredients of food products. These characters can be displayed directly on a parent's smartphone by scanning the barcode of a food product. Our study showed that children are able to recognize several ingredient manifestations encoded as visual attributes and thus to judge whether a food product is healthy.
Visual and tactile engagement: designing projected touch-surfaces for community use in a rural context BIBAFull-Text 137-140
  Alan Chamberlain; Alessio Malizia; Alan J. Dix
This paper discusses the design, development and deployment of a large projected multi-touch surface in a rural context. Building on our previous work, we deployed the system in a real-world setting in order to understand how it might be used in this particular scenario and to further understand the issues involved in using such systems in a rural context. Such systems are often deployed in urban or lab-based scenarios; however, rural settings come with their own set of issues that can make them very different from urban settings in terms of design implications, and it is important to understand how these issues can impact both the design of the interface and the whole system. This research engaged the local community, which enabled us to further understand how we might advance the use of the system in terms of its interface, software architecture and physical design. This paper discusses the impetus for the design of the system, the methods we used to frame our studies, the technical development and design of the system, and some findings from the actual deployment.
New materials = new expressive powers: smart material interfaces and arts, an interactive experience made possible thanks to smart materials BIBAFull-Text 141-144
  Andrea Minuto; Fabio Pittarello; Anton Nijholt
It is not easy for a developing artist to find their own poetic voice. Smart materials could be an answer for those looking for new forms of art. Smart Material Interfaces (SMI) define a new interaction paradigm based on dynamic modification of the innovative materials' properties. SMI can be applied in different domains and used for different purposes: functional, communicative and creative.
   In this paper we focus on experimenting in the art and creative communication domain. In particular we describe the results of a workshop held with 15 students of the Fine Arts Academy in Venice who learned how to make and program SMI and took advantage of their new skills to design a variety of interesting creative artifacts.

Evaluation

Analyzing intended use effects in target acquisition BIBAFull-Text 145-152
  Jaime Ruiz; Edward Lank
Recent work by Mandryk and Lough demonstrated that the movement time of Fitts-style pointing tasks varies based on intended use of a target, suggesting major implications for HCI research that models pointing using Fitts' Law. We replicate the study of Mandryk and Lough to determine exactly how and why observed movement times vary. We demonstrate that any variation in movement time is the result of differences in additive factors (a in Fitts' equation) and can be attributed to changes in the time a user spends over their primary target.
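For readers outside HCI, Fitts' law in its Shannon formulation is MT = a + b · log2(D/W + 1); the finding above locates the intended-use effect in the additive constant a rather than the slope b. Coefficients in this sketch are placeholders, not values from the paper:

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Predicted movement time for a pointing task with amplitude `distance`
    and target width `width` (Shannon formulation of Fitts' law)."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# e.g. fitts_movement_time(0.2, 0.1, 700, 100) ≈ 0.5 s (ID = 3 bits)
```

An intended-use effect confined to a means the predicted times shift uniformly across difficulties, leaving the slope of the regression line unchanged.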
Quantification of interface visual complexity BIBAFull-Text 153-160
  Aliaksei Miniukovich; Antonella De Angeli
Designers strive for enjoyable user experience (UX) and put a significant effort into making graphical user interfaces (GUI) both usable and beautiful. Our goal is to minimize their effort: with this purpose in mind, we have been studying automatic metrics of GUI qualities. These metrics could enable designers to iterate their designs more quickly. We started from the psychological findings that people tend to prefer simpler things. We then assumed visual complexity determinants also determine visual aesthetics and outlined eight of them as belonging to three dimensions: information amount (visual clutter and color variability), information organization (symmetry, grid, ease-of-grouping and prototypicality), and information discriminability (contour density and figure-ground contrast). We investigated five determinants (visual clutter, symmetry, contour density, figure-ground contrast and color variability) and proposed six associated automatic metrics. These metrics take screenshots of GUI as input and can thus be applied to any type of GUI. We validated the metrics through a user study: we gathered the ratings of immediate impressions of GUI visual complexity and aesthetics, and correlated them with the output of the metrics. The output explained up to 51% of aesthetics ratings and 50% of complexity ratings. This promising result could be further extended towards the creation of tLight, our automatic GUI evaluation tool.
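Of the determinants listed, contour density is among the simplest to operationalize: the fraction of edge pixels in an edge-detected screenshot. A toy version on a binary edge map (my reading of the metric; the paper's exact operationalization may differ):

```python
def contour_density(edge_map):
    """Fraction of pixels marked as edges in a binary edge map
    (rows of 0/1 values, e.g. from an edge detector run on a GUI screenshot)."""
    total_pixels = sum(len(row) for row in edge_map)
    edge_pixels = sum(sum(row) for row in edge_map)
    return edge_pixels / total_pixels

print(contour_density([[0, 1, 0], [1, 1, 0]]))  # 3 of 6 pixels → 0.5
```

Because the input is just a screenshot, metrics of this kind can be applied to any GUI without access to its widget tree, which is the property the abstract emphasizes.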
Readability of a background map layer under a semi-transparent foreground layer BIBAFull-Text 161-168
  Saturnino Luz; Masood Masoodian
This study investigates the readability (interpretability) of information presented on a geographical map onto which a semi-transparent multivariate selection layer has been overlaid. The investigation is based on an information visualization prototype developed for a mobile (tablet) platform, aimed at supporting epidemiologists and medical staff in field data collection and epidemiological interpretation tasks. Different factors are analysed under varying transparency (alpha blending) levels, including map interpretation task (covering "seeing map" and "reading map" tasks), legend symbol, and map area type. Our results complement other studies that focused on the readability of items displayed on semi-transparent foreground layers in the context of "toolglass" interfaces. The implications of these results for the usability of variable-transparency selection layers in geographical map applications are also discussed.
An evaluation of Dasher with a high-performance language model as a gaze communication method BIBAFull-Text 169-176
  Daniel Rough; Keith Vertanen; Per Ola Kristensson
Dasher is a promising fast assistive gaze communication method. However, previous evaluations of Dasher have been inconclusive: the studies have been too short, involved too few participants, suffered from sampling bias, lacked a control condition, used an inappropriate language model, or some combination of the above. To rectify this, we report results from two new evaluations of Dasher carried out using a Tobii P10 assistive eye tracker. We also present a method of modifying Dasher so that it can use a state-of-the-art long-span statistical language model. Our experimental results show that, compared to a baseline eye-typing method, Dasher resulted in significantly faster entry rates (12.6 wpm versus 6.0 wpm in Experiment 1, and 14.2 wpm versus 7.0 wpm in Experiment 2). These faster entry rates were achieved while maintaining error rates comparable to the baseline eye-typing method. Participants' perceived physical demand, mental demand, effort and frustration were all significantly lower for Dasher. Finally, participants rated Dasher as significantly more likeable, more fun, and requiring less concentration.

Touch and multitouch

Multi-touch gestures for discrete and continuous control BIBAFull-Text 177-184
  Halla Olafsdottir; Caroline Appert
Touchscreen interaction currently relies on a limited set of multi-touch gestures and a wide range of graphical widgets that are often difficult to manipulate and consume much screen real-estate. Many tasks remain tedious to perform on touchscreens: selecting text over multiple views, manipulating different degrees of freedom of a graphical object, invoking a command and setting its parameter values in a row. We propose a design space of simple multi-touch gestures that designers of user interfaces can systematically explore to propose more gestures to users. We further consider a set of 32 gestures for tablet-sized devices, by proposing an incremental recognition engine that works with current hardware technology, and empirically testing the usability of those gestures. In our experiment, individual gestures are recognized with an average accuracy of 90%, and users successfully achieve some of the transitions between gestures without the use of explicit delimiters. The goal of our contribution is to assist designers in optimizing the use of the rich multi-touch input channel for the activation of discrete and continuous controls, and enable fluid transitions between controls.
BezelCopy: an efficient cross-application copy-paste technique for touchscreen smartphones BIBAFull-Text 185-192
  Chen Chen; Simon T. Perrault; Shengdong Zhao; Wei Tsang Ooi
Copy-Paste (CP) operations on touchscreen smartphones are not as easy to perform as similar operations on desktop computers. The smaller screen size and input area make both text selection and application switching more difficult to perform. To enable faster copy-paste on touchscreen smartphones, we introduce BezelCopy, a copy-paste technique that uses a bezel-swipe gesture to determine a rough area of interest in the document. Chosen text is magnified in a new panel to enable fast and precise selection. With the new panel, users can perform easy tap-and-drag gestures to select the exact content, and tap the application icon at the bottom of the panel to paste it into the target application. Users can further adjust the location of the pasted text in the target application using drag and drop. We conducted two experiments to compare the performance of BezelCopy with alternative approaches, and our results show that BezelCopy outperforms existing copy-paste techniques for a number of commonly performed copy-paste tasks.
Tracing and sketching performance using blunt-tipped styli on direct-touch tablets BIBAFull-Text 193-200
  Sriram Karthik Badam; Senthil Chandrasegaran; Niklas Elmqvist; Karthik Ramani
Direct-touch tablets are quickly replacing traditional pen-and-paper tools in many applications, but not in the case of the designer's sketchbook. In this paper, we explore the tradeoffs inherent in replacing such paper sketchbooks with digital tablets in terms of two major tasks: tracing and free-hand sketching. Given the importance of the pen for sketching, we also study the impact of using a blunt-and-soft-tipped capacitive stylus in tablet settings. We thus conducted experiments to evaluate three sketch media: pen-paper, finger-tablet, and stylus-tablet based on the above tasks. We analyzed the tracing data with respect to speed and accuracy, and the quality of the free-hand sketches through a crowdsourced survey. The pen-paper and stylus-tablet media both performed significantly better than the finger-tablet medium in accuracy, while the pen-paper sketches were rated significantly higher in quality than those from both tablet interfaces. A follow-up study comparing the performance of this stylus with a sharp, hard-tip version showed no significant difference in tracing performance, though participants preferred the sharp tip for sketching.
Match-up & conquer: a two-step technique for recognizing unconstrained bimanual and multi-finger touch input BIBAFull-Text 201-208
  Yosra Rekik; Radu-Daniel Vatavu; Laurent Grisoni
We present a simple, two-step technique for recognizing multi-touch gesture input that is invariant to how users articulate gestures, i.e., by using one or two hands, one or multiple fingers, one or multiple strokes, synchronous or asynchronous stroke input. We introduce, for the first time in the gesture literature, a preprocessing step specific to multi-touch gestures (Match-Up) that clusters together similar strokes produced by different fingers, before running a gesture recognizer (Conquer). We report gains in recognition accuracy of up to 10% leveraged by our new preprocessing step, which manages to construct a more adequate representation of multi-touch gestures in terms of key strokes. It is our hope that the Match-Up technique will add to the practitioners' toolkit of gesture preprocessing techniques, as a first step toward filling today's lack of algorithmic knowledge for processing multi-touch input, and toward the design of more efficient and accurate recognizers for touch surfaces.

Interface metaphors

SpaceFold and PhysicLenses: simultaneous multifocus navigation on touch surfaces BIBAFull-Text 209-216
  Simon Butscher; Kasper Hornbæk; Harald Reiterer
Many tasks performed in multiscale visual spaces require the user to have several foci. Using bimanual interaction, multitouch devices can facilitate the simultaneous definition and exploration of several foci. However, multitouch is rarely used for multifocus navigation, and may limit the interaction to a sequential definition of areas of interest. We introduce two novel navigation techniques that combine multiple foci and bimanual touch, and therefore enable the isochronic definition of areas of interest, leading to simultaneous multifocus navigation. SpaceFold folds the visual space in the third dimension, allowing users to bring objects closer to each other. Our technique enables a direct, bimanual manipulation of a folded space and therefore provides high flexibility. PhysicLenses uses multiple magnification lenses to compare objects. Using a physics model, PhysicLenses introduces a general solution for the arrangement of multiple lenses within the viewport. We conducted a controlled experiment with 24 participants to compare the techniques with split screen. The results show that SpaceFold significantly outperformed all other techniques, whereas PhysicLenses was just as fast as split screen.
Two-finger 3D rotations for novice users: surjective and integral interactions BIBAFull-Text 217-224
  Élisabeth Rousset; François Bérard; Michaël Ortega
Now that 3D interaction is available on tablets and smart phones, it becomes critical to provide efficient 3D interaction techniques for novice users. This paper investigates interaction techniques for 3D rotation with two fingers of a single hand, on multitouch mobile devices.
   We introduce two new rotation techniques that allow integral control of the 3 axes of rotation. These techniques also satisfy a new criterion that we introduce: surjection. We ran a study to compare the new techniques with two widely used rotation techniques from the literature. Results indicate that surjection and integration lead to a performance improvement of a group of participants who had no prior experience in 3D interaction. Qualitative results also indicate participants' preference for the new interaction techniques.

Social interaction

Inconvenient interactions: an alternative interaction design approach to enrich our daily activities BIBAFull-Text 225-228
  Jun Rekimoto; Hitomi Tsujita
While most traditional user interfaces are intended to pursue "convenience" by eliminating user operations and by typically automating tasks, some new categories of HCI, such as health support, may require explicit human participation and effort to achieve long-term benefits. In these areas, interfaces that require interactions prompting users to perform explicit activities, rather than interfaces that solely perform tasks on behalf of users, are becoming increasingly important. This trend can be a further challenge of interaction design, and we refer to it as "inconvenient interactions". In this paper, we discuss why carefully designed inconveniences can enrich our lives, and provide preliminary but concrete examples. We also propose guidelines for the design of these inconvenient interactions.
Digitally augmented narratives for physical artifacts BIBAFull-Text 229-232
  Andrea Bellucci; Paloma Diaz; Ignacio Aedo
In this working paper we present the design and setup of a workshop environment that empowers users with tools to create and share digital narratives associated with physical artifacts. The approach focuses on two main design choices: (1) physical embodiment, by having people directly interact with the physical objects we want to augment, and (2) situated social interactions, by offering the possibility to share personal ideas and comment on others', to collaboratively produce stories about augmented objects. We also report the main evaluation results from a workshop in which we studied users' needs with respect to tangible interactive experiences in museum exhibitions.
A multimodal tablet-based interface for designing and reviewing 3D engineering models BIBAFull-Text 233-236
  Pedro Campos; Hildegardo Noronha
The usage of multimodal user interfaces has revolutionized many different activities. However, most of the interactive technologies deployed in real-world engineering contexts are still difficult to use, especially when engineering teams need to collaboratively visualize and review large-scale 3D CAD (Computer-Aided Design) models. This is the case in the oil platform industry, which necessarily involves the review and manipulation of large CAD models. In this paper we present a novel solution, based on multitouch and accelerometer input, which was designed and evaluated in close cooperation with researchers and engineers of a large oil industry company. We evaluated two different conditions: using multitouch-only input and using multitouch coupled with accelerometer-based input. Statistical analysis of quantitative data suggests that the second condition is faster and less error-prone than simply using multitouch-only input. Additionally, qualitative data showed that users perceive the multitouch-only interface as being more accurate, but more difficult to understand and use.

Visual analytics

Advanced visual analytics interfaces for adverse drug event detection BIBAFull-Text 237-244
  Sebastian Mittelstädt; Ming C. Hao; Umeshwar Dayal; Mei-Chun Hsu; Joseph Terdiman; Daniel A. Keim
Adverse reactions to drugs are a major public health care issue. Currently, the Food and Drug Administration (FDA) publishes quarterly reports that typically contain on the order of 200,000 adverse incidents. Among such a large number of incidents, low-frequency events that are clinically highly significant often remain undetected. In this paper, we introduce a visual analytics system to solve this problem using (1) highly scalable interfaces for analyzing correlations between a number of complex variables (e.g., drug and reaction); (2) enhanced statistical computations and interactive relevance filters to quickly identify significant events, including those with a low frequency; and (3) a tight integration of expert knowledge for detecting and validating adverse drug events. We applied these techniques to the FDA Adverse Event Reporting System and were able to identify important adverse drug events, such as the known association of the drug Avandia with myocardial infarction and Seroquel with diabetes mellitus, as well as low-frequency events such as the association of Boniva with femur fracture. In our evaluation, we found over 90% of the adverse drug events that were published in the Institute for Safe Medication Practices (ISMP) reports from 2009 to 2012. In addition, our domain expert was able to identify some previously unknown adverse drug events.
Mimic: visual analytics of online micro-interactions BIBAFull-Text 245-252
  Simon Breslav; Azam Khan; Kasper Hornbæk
We present Mimic, an input capture and visual analytics system that records online user behavior to facilitate the discovery of micro-interactions that may affect problem understanding and decision making. As aggregate statistics and visualizations can mask important behaviors, Mimic can help interaction designers to improve the usability of their designs by going beyond aggregates to examine many individual user sessions in detail. To test Mimic, we replicate a recent crowd-sourcing experiment to better understand why participants consistently perform poorly in answering a canonical conditional probability question called the Mammography Problem. To analyze the micro-interactions, the Mimic web application is used to play back user sessions collected through remote logging of client-side events. We use Mimic to demonstrate the value of using advanced visual interfaces to interactively study interaction data. In the Mammography Problem, issues like user confusion, low confidence, and divided-attention were found based on participants changing their answers, doing repeated scrolling, and overestimating a base rate. Mimic shows how helpful detailed observational data can be and how important the careful design of micro-interactions is in helping users to successfully understand a problem, find a solution, and achieve their goals.

Software engineering approaches

CoDICE: balancing software engineering and creativity in the co-design of digital encounters with cultural heritage BIBAFull-Text 253-256
  Paloma Díaz; Ignacio Aedo; Jaime Cubas
Software engineering analysis and design methods are mainly guided by objective principles and criteria, whilst design thinking is usually fueled by subjective and people-based criteria like emotions, aesthetics or cultural and personal resonance. Complex interactive systems require a mixed approach where the benefits of both disciplines are respected: engineering and creativity, rationality and emotions, quality-centered and people-centered. This is the main purpose of a software tool named CoDICE (COdesigning DIgital Cultural Encounters), which is being developed within the meSch EU project, aimed at co-designing smart objects for enhanced encounters with cultural heritage. CoDICE adds persistence and traceability to the co-design activities and makes it possible to extend the discussion of ideas beyond collocated meetings. It also provides mechanisms to revisit design products and generate informed design rationale.
Visual composition of data sources by end users BIBAFull-Text 257-260
  Carmelo Ardito; M. Francesca Costabile; Giuseppe Desolda; Rosa Lanzilotti; Maristella Matera; Matteo Picozzi
There is a huge and ever increasing amount of data sources available on the Web, which provide content through programmatic interfaces. Unfortunately, such data sources are accessible only through programming and therefore it is difficult for non-technical users to take advantage of such enormous data assets. The need therefore arises for paradigms to let laypeople, i.e., users without expertise in programming, explore and compose data sources. This paper discusses mechanisms for data source exploration and integration, which emerged from a study where laypeople were involved in discussions to gather their requirements about accessing and composing services. The paper also describes the prototypes that we defined to respond to the requirements highlighted by end users.
From desktop to touchless interfaces: a model based approach BIBAFull-Text 261-264
  Franca Garzotto; Mirko Gelsomini; Roberto Mangano; Luigi Oliveto; Matteo Valoriani
With the increasingly low cost of motion-sensing technology, touchless interactive interfaces may become a new ingredient in the evolution of content-intensive web applications from single-platform (desktop) to multi-platform use. While the migration from desktop to mobile devices has been widely studied, there is limited understanding of how to include touchless interfaces in this "going multi-channel" evolution. The paper focuses on the design issues that are induced by this process. We propose a model-based design approach that supports information reuse and exploits a systematic mapping from content structures to interaction tasks and touchless gestures. We then describe a case study in the cultural heritage domain to exemplify our method.

Information visualization

Designing and implementing an interactive scatterplot visualization for a tablet computer BIBAFull-Text 265-272
  Ramik Sadana; John Stasko
Tablet computers now offer screen sizes and computing capabilities that are competitive with traditional desktop PCs. Their popularity has grown tremendously, but we are just beginning to see information visualization applications designed for this platform. One potential reason for this limited development is the challenge of designing and implementing a multi-touch interface for visualizations on mobile, tablet devices. In this work, we identify the primary challenges that touch screen interactions pose for information visualization applications. We explore the design space of multi-touch interactions for visualizations and present a prototype information visualization application using a specific technique, a dynamic scatterplot, for an iPad.
Qualizon graphs: space-efficient time-series visualization with qualitative abstractions BIBAFull-Text 273-280
  Paolo Federico; Stephan Hoffmann; Alexander Rind; Wolfgang Aigner; Silvia Miksch
In several application fields, the joint visualization of quantitative data and qualitative abstractions can help analysts make sense of complex time series data by associating precise numeric values with corresponding domain-specific interpretations, such as good, bad, high, low, normal. At the same time, the need to analyse large multivariate time-oriented datasets often calls for keeping visualizations as compact as possible. In this paper, we introduce Qualizon Graphs, a compact visualization that combines quantitative data and qualitative abstractions. It is based on the well-known Horizon Graphs, but instead of a predefined number of equally sized bands, it uses as many bands as qualitative categories, with correspondingly different sizes. In this way, Qualizon Graphs increase the data density of visualized quantitative values and inherently integrate qualitative abstractions. A user study shows that Qualizon Graphs are as fast and accurate as Horizon Graphs for quantitative data, and are an alternative to state-of-the-art visualizations for both quantitative and qualitative data, enabling a trade-off between speed and accuracy.
Touching transport -- a case study on visualizing metropolitan public transit on interactive tabletops BIBAFull-Text 281-288
  Till Nagel; Martina Maitan; Erik Duval; Andrew Vande Moere; Joris Klerkx; Kristian Kloeckl; Carlo Ratti
Due to recent technical developments, urban systems generate large and complex data sets. While visualizations have been used to make these accessible, often they are tailored to one specific group of users, typically the public or expert users. We present Touching Transport, an application that allows a diverse group of users to visually explore public transit data on a multi-touch tabletop. It provides multiple perspectives of the data and consists of three visualization modes conveying tempo-spatial patterns as map, time-series, and arc view. We exhibited our system publicly, and evaluated it in a lab study with three distinct user groups: citizens with knowledge of the local environment, experts in the domain of public transport, and non-experts with neither local nor domain knowledge. Our observations and evaluation results show that we achieved our goals of attracting visitors to explore the data while enabling both citizens and experts to gather insights. We discuss the design considerations in developing our system, and describe our lessons learned in designing engaging tabletop visualizations.
Effect of lateral chromatic aberration for chart reading in information visualization on display devices BIBAFull-Text 289-292
  Kyle Koh; Bohyoung Kim; Jinwook Seo
In this paper, we explain the effect of lateral chromatic aberration (LCA) when reading information displayed on a screen and how it leads to misinterpretation of charts and the values represented. Although the effect can be observed in natural scenes, we focus on LCA on modern display devices. We highlight the significance of these issues for users of corrective lenses, especially high-diopter eyeglasses. First, we explain the basics of LCA. Then, we present a user study observing the effect on users' judgment when reading charts on display devices. We also introduce a prototype software-based correction method with promising results. Lastly, we suggest guidelines for information visualization designers to avoid such issues.
A graph based abstraction of textual concordances and two renderings for their interactive visualisation BIBAFull-Text 293-296
  Saturnino Luz; Shane Sheehan
Concordancing, or the arranging of passages of a textual corpus in alphabetical order according to user-defined keywords, is one of the oldest and still most widely used forms of text analysis. It finds applications in areas such as lexicography, computational linguistics, translation studies and computer-assisted machine translation. Yet, the basic form of visualisation employed in the analysis of textual concordances has remained essentially the same since the keyword-in-context technique was introduced, over fifty years ago. This paper presents a generalisation of this technique as an analytical abstraction of concordances represented as undirected graphs, and then characterises keywords in terms of graph eccentricity properties. We illustrate this proposal with two distinct visual renderings: a mosaic (space-filling) display and a bi-directional hierarchical display. These displays can be used in isolation or in conjunction with traditional keyword-in-context components in an overview-plus-detail pattern, or as synchronised views. We discuss scenarios of use for these arrangements in lexicographical corpus analysis, in translation studies and in text comparison tasks.

Search and information management

A study on human-generated tag structures to inform tag cloud layout BIBAFull-Text 297-304
  Daniela Oelke; Iryna Gurevych
Tag clouds are popular features on web pages, not only to support browsing but also to provide an overview of the content of the page or to summarize search retrieval results. Commonly, the arrangement of tags is based on a random layout or an alphabetic ordering of the tags. Previous research suggests further structuring the tag clouds according to semantics, typically employing co-occurrence-based relations to assess the semantic relatedness of two tags. Regarding the layout of the resulting structure, a wide variety of representations has been proposed. However, only a few papers motivate their design choice or evaluate its performance from the perspective of a user, leaving it open whether the approach meets users' expectations. In this paper we present the results of a study in which we observed how humans structure user-generated tags of a social bookmarking system, given the task that the resulting layout should provide a quick overview of a search retrieval result. We examine the participants' layouts based on the final arrangement of tags and a detailed interview conducted after the task. Thereby, we analyze and characterize the different term relations employed as well as the higher-level structures generated. A deeper understanding of what criteria are considered important by humans can inform the design of automatic algorithms as well as future studies evaluating their performance.
Visual SPARQL querying based on extended filter/flow graphs BIBAFull-Text 305-312
  Florian Haag; Steffen Lohmann; Steffen Bold; Thomas Ertl
SPARQL is currently the major query language for the Semantic Web. However, writing SPARQL queries is not an easy task and requires some understanding of technologies like RDF. In order to enable users without this knowledge to query linked data, visual interfaces are required that hide the SPARQL syntax and provide graphical support for query building. Based on the concept of extended filter/flow graphs, we present a novel approach for visual querying that addresses the unique specifics of SPARQL and RDF. In particular, it enables the creation of SELECT and ASK queries, though it can also be used for other query forms. In contrast to related work, the users do not need to provide any structured text input but can create the queries entirely with graphical elements. Our approach supports most features of SPARQL and hence also the construction of complex query expressions. It has been implemented in a visual querying framework and tested on different RDF datasets, including DBpedia that is used as an example in this paper. Since the filter/flow concept is empirically well-founded, we expect our approach to be very usable, which is additionally supported by the results of a qualitative user study we conducted.
Engineering information management tools by example BIBAFull-Text 313-320
  Michael Nebeling; Matthias Geel; Moira C. Norrie
While there are many established methodologies for information systems development, designing by example has not been formally explored and applied previously. Our work is also motivated by the desire to explore interface-driven development techniques that could complement existing approaches such as model-driven engineering with the goal of reducing the need for modelling and reengineering of existing applications and interfaces, while still supporting the development task. We explore the example-based technique for rapid development of powerful and flexible information management tools based on the example of Adobe Photoshop Lightroom, a system that was originally designed to support the workflow of digital photographers in a flexible way. We analyse experiments in which two new systems -- one for managing collections of research papers and another for software project management -- were developed based on the Lightroom paradigm. We derive a conceptual framework for engineering by example and assess the method by comparing it to traditional model-driven engineering.

Gestural interaction

Novel interaction techniques for object manipulation on tabletops: scoop net and pinch helper BIBAFull-Text 321-324
  Mirko de Almeida Madeira Clemente; Hannes Leitner; Dietrich Kammer; Rainer Groh; André Pinkert
Interactive tabletops foster collaborative work by supporting face-to-face communication and by offering an interactive surface to continually visualize and integrate the current state of work. Through multi-touch technology, multiple fingers and even parts of the hand can be recognized. Hence, natural interaction with graphical objects can be enhanced. However, ergonomic constraints in shape, size and agility of the hand and fingers can reduce effective and efficient object manipulation. By virtually extending the representation of the fingers and the transformed objects themselves we seek to overcome these limitations. Common collaborative tasks on large touch surfaces are transformation and spatial grouping of graphical objects. Therefore, we present two novel interaction techniques: Scoop Net and Pinch Helper. Scoop Net is based on naive physics and allows users to select and move multiple objects in one seamless action. Pinch Helper supports the execution of a pinch gesture by transforming objects that are too small to be targeted by multiple fingers.
Gesturemote: interacting with remote displays through touch gestures BIBAFull-Text 325-328
  Hao Lü; Matei Negulescu; Yang Li
We present Gesturemote, a technique for interacting with remote displays through touch gestures on a handheld touch surface. By combining a variety of different touch gestures and connecting them smoothly, Gesturemote supports a wide range of interaction behaviors, from low-level pixel interaction such as pointing and clicking, to medium-level interaction such as structured navigation of a user interface, to high-level interaction such as invoking a function directly (e.g. shortcuts). Gesturemote requires no visual attention to use and thus is eyes-free. We received positive initial feedback for Gesturemote from the participants in an interview where we walked them through the design. In addition, we investigated the usability of our gesture-based target acquisition technique by comparing it with a trackpad in a target acquisition task. The results indicate that Gesturemote performs better when visual search is required and is preferable to a general-purpose trackpad.

Posters

sense.me: a EUD environment for social products BIBAFull-Text 329-330
  Alessandro Acerbis; Daniela Fogli; Elisa Giaccardi
This paper describes a framework that supports the physical prototyping of innovative interactive artifacts. Specifically, the framework allows designing, implementing, and testing "social products," that is, physical artifacts able to interact with social media platforms such as Facebook, Twitter, Google+ and others. Since the target users of the framework are not experts in software programming, an End-User Development (EUD) approach has been adopted, which aims at facilitating the ideation process and providing simple mechanisms for automatic code generation and testing. User tests proved the usefulness and validity of the framework, and provided indications on how to extend it.
NUICursorTools: cursor behaviors for indirect-pointing BIBAFull-Text 331-332
  Said Achmiz; Davide Bolchini
Designing touchless control for six degrees of freedom (6DOF) input devices (e.g., Kinect®) is fundamental for natural user interfaces, but it raises unsolved challenges. These challenges stem from inherent limitations of human motor control in mid-air, and include undesirable jitter from continuous hand tremor, arm fatigue when traversing very large displays, and limited motor control on pixel-accurate selections. To address these problems, we contribute NUICursorTools, a nimble and flexible toolkit that provides a device-agnostic, driver- and middleware-agnostic solution that eliminates touchless cursor jitter, allows optimization of control-display gain and pointer acceleration for touchless control, and enables design of custom complex cursor behaviors. Due to its high-level, device-independent design, NUICursorTools has broad applicability for current and next-generation interaction contexts in which an onscreen cursor is indirectly controlled by any input device.
Evaluating accuracy of perception in an adaptive stereo vision interface BIBAFull-Text 333-334
  Andrea Albarelli; Luca Cosmo; Augusto Celentano
We evaluate the accuracy of perception of a viewer-dependent system implemented through a simple augmentation of basic shutter glasses for stereoscopic setups. The evaluation is based on length measurements performed by a group of users on two different scenes, rendered through different perspectives computed from the dynamic user position and from a fixed point of view.
Design and evaluation of a platform to support co-design with children BIBAFull-Text 335-336
  Diego Alvarado; Paloma Díaz
Novel interfaces are always welcome as means to engage children in learning activities in different ways. Multi-touch interactive systems have been increasingly exploited for learning purposes in recent years, because they are able to foster collaboration among children. However, other factors need special attention when trying to get children to collaborate and work in teams, such as the level of fun and the simplicity of the tools provided. This paper describes the design, implementation and evaluation of a platform aimed at supporting children in the co-design of a game conceived to teach them how to respond to an emergency.
City sensing: visualising mobile and social data about a city scale event BIBAFull-Text 337-338
  Fabrizio Antonelli; Matteo Azzi; Marco Balduini; Paolo Ciuccarelli; Emanuele Della Valle; Roberto Larcher
Streams of information flow through our cities thanks to their progressive instrumentation with diverse sensors, the wide adoption of smartphones and social networks, and the growing open release of datasets. The City Data Fusion project investigates techniques to visualise the pulse of our cities in real time by fusing and making sense of all those information flows. It exploits visual data analytics, semantic technologies, and streaming databases. In this poster, we offer insights into City Sensing, an early result of City Data Fusion that allows users to visually analyse city-scale events such as Milano Design Week.
ECCE toolkit: prototyping UbiComp device ecologies BIBAFull-Text 339-340
  Andrea Bellucci; Ignacio Aedo; Paloma Diaz
The tremendous number of different input devices and interactive environments envisioned by researchers poses a severe challenge for the development of ubiquitous interaction. Toolkits that support the rapid setup of ubiquitous environments reduce the effort of arranging the technological medium and have the potential to lower prerequisite knowledge and automate low-level programming tasks. In this paper, we present our work-in-progress approach: a toolkit that combines physical and digital components into a single environment to allow the rapid setup of device ecologies.
Integrating human-robot and motion-based touchless interaction for children with intellectual disability BIBAFull-Text 341-342
  Andrea Bonarini; Franca Garzotto; Mirko Gelsomini; Matteo Valoriani
Our research explores the integration of motion-based touchless interaction with human-robot interaction to support game-based learning for children with intellectual disability. The paper discusses the design challenges of this novel approach and presents the design concepts of our initial prototypes.
Visualizing collaborative traces in distributed teams BIBAFull-Text 343-344
  Paolo Buono; Giuseppe Desolda
The evolution of communication technologies supports the collaboration of people who work in distributed teams. Group awareness is an important requirement for activity coordination, since understanding the activities of others provides the context for one's own activities and indicates how individual contributions are relevant to the team. This poster proposes a novel information visualization technique that aims at supporting awareness in distributed teams. Collaborative traces of team members are visualized in order to show which members are the most available and responsive.
Collaborative multimedia content creation and sharing by older adults BIBAFull-Text 345-346
  Miguel Ceriani; Paolo Bottoni; Stefano Ventura; Alessandra Talamo
We describe an ongoing co-design project for a collaborative distributed space, based on multi-touch technologies, through which older adults can explore self-produced and on-line multimedia resources for the production of short video clips. The project's main goal is to foster the active participation of older people, already experiencing collaboration in daily activities in care centres, as producers of content related to their experience and know-how and to the activities carried out in the centres, to be shared in a community context. An interface, based on an interactive table, some tablets and a television set, is sketched.
Designing emotion awareness interface for group recommender systems BIBAFull-Text 347-348
  Yu Chen; Pearl Pu
Group recommender systems help users to find items of interest collaboratively. Support for such collaboration has been mainly provided by interfaces that visualize membership awareness, preference awareness and decision awareness. In this paper, we are interested in investigating the roles of emotion awareness interfaces and how they may enable positive group influence. We first describe the design process behind an emotion annotation tool, which we call CoFeel. We then show that it allows users to annotate and visualize group members' emotions in GroupFun, a group music recommender.
Augmented reality for TLC network operation and maintenance support BIBAFull-Text 349-350
  Massimo Chiappone; Danilo Gotta; Elio Paschetta; Paolo Pellegrino; Tiziana Trucco
This paper describes preliminary solutions to support technicians working on NGAN (Next Generation Access Network) apparatuses. These solutions are based on AR (Augmented Reality) techniques which enhance the traditional smartphone visual interface.
A low cost tracking system for position-dependent 3D visual interaction BIBAFull-Text 351-352
  Luca Cosmo; Andrea Albarelli; Filippo Bergamasco
In many visual interaction applications the user needs to explore a scene by moving with respect to the virtual environment. Using a fixed camera viewpoint leads to visual inconsistencies, which can be avoided only if the exact pose of the user's head is known and can be used to produce a perspective-correct rendering. To this end, tracking devices are often used; however, many of them are relatively expensive or require the user to wear special apparel. In this paper we present a tracking system that can be implemented with a simple and very low-cost modification of standard shutter glasses. The accuracy of this approach has been evaluated quantitatively with a specially crafted experimental setup.
Personalized interaction on large displays: the StreetSmart project approach BIBAFull-Text 353-354
  Paolo Cremonesi; Antonella Di Rienzo; Cristina Frà; Franca Garzotto; Luigi Oliveto; Massimo Valla
The StreetSmart Project develops information services that integrate multiple (touch and touchless) interaction paradigms on personal devices and large public displays. It exploits personalization techniques in order to offer new engaging user experiences involving large amounts of multimedia contents.
Polarized review summarization as decision making tool BIBAFull-Text 355-356
  Paolo Cremonesi; Raffaele Facendola; Franca Garzotto; Matteo Guarnerio; Mattia Natali; Roberto Pagano
When choosing a hotel, a restaurant or a movie, many people rely on the reviews available on the Web. However, this huge amount of opinions makes it difficult for users to gain a comprehensive view of the crowd's judgments and to make an optimal decision. In this work we provide evidence that automatic text summarization of reviews can be used to design Web applications that effectively reduce the decision-making effort in domains where decisions are based upon the opinion of the crowd.
TagStar: a glyph-based interface for indexing and visual analysis BIBAFull-Text 357-358
  Mirko de Almeida Madeira Clemente; Mandy Keck; Rainer Groh
TagStar is designed to support users during the classification of data items based on a multidimensional classification scheme. We also intend to support experts when analyzing the database. Analytic features of the presented visual indexing system allow experts to discover insufficiently described resources with little effort. The resulting interface concept is based on star glyphs to visualize the multivariate data.
Visually integrating databases at conceptual level BIBAFull-Text 359-360
  Vincenzo Deufemia; Mara Moscariello; Giuseppe Polese
We present the Conceptual Data Integration Language (CoDIL), a visual language capable of turning the data integration process into a conceptual level activity on source data. In particular, CoDIL provides icon operators that are applicable to constructs of conceptual data models, specifying how to merge and map them onto constructs of a reconciled conceptual schema.
Exploratory computing: a challenge for visual interaction BIBAFull-Text 361-362
  Nicoletta Di Blas; Mirjana Mazuran; Paolo Paolini; Elisa Quintarelli; Letizia Tanca
The advent of the Big Data challenge has stimulated research on methods to deal with the problem of managing data abundance. Many approaches have been developed, but for the most part they attack one specific side of the problem: e.g. efficient querying, analysis techniques that summarize data or reduce its dimensionality, data visualization, etc. The approach proposed in this poster aims instead at taking a comprehensive view. First, it supports human exploration as an iterative, multi-step process, allowing each query to build upon the previous one in a sort of "dialogue" between the user and the system. Second, it aims at supporting a variety of user experiences, such as investigation, inspiration seeking, monitoring, comparison, decision-making, and research. Third, and probably most important, it adds to the notion of "big" the notion of "rich": Exploratory Computing (EC) aims at dealing with datasets of semantically complex items, whose inspection may reach beyond the user's previous knowledge or expectations; an exploratory experience basically consists in creating, refining, modifying, and comparing various datasets in order to "make sense" of their meanings. A crucial challenge of EC lies at the user interface level (data visualization, feedback, relevance of the results, interaction possibilities): how to convey, in an effective manner, all the possible turn-takings of this "dialogue" between the user and the system.
Guidelines for using color blending in data visualization BIBAFull-Text 363-364
  Sandra Gama; Daniel Gonçalves
Visualization is a powerful way to convey data, showing potential for joining and interrelating data items. However, when dealing with large amounts of data, visually merging different classes of information poses several challenges. Color, due to its effectiveness for labeling and categorizing information, may be a solution to this shortcoming. Merging items with different colors may suggest mixing their original colors. This approach generates an immediately perceivable way to represent merged items. It also keeps context through the association of the mixed color to its original colors. We studied to what extent color blending provides users with the means to understand the provenance of data items by conducting two user studies using CIE-LCh, CMYK and HSV blending to ascertain (i) to what extent people are able to, given a particular color, understand its provenance, and (ii) the color model in which to perform color blending so that users find it intuitive. Results showed which colors are most suitable for blending so that users understand their provenance, and indicated that the CIE-LCh model is the most effective for representing color blending.
Multi-level visualization of interrelated data entities BIBAFull-Text 365-366
  Sandra Gama; Daniel Gonçalves
Nowadays, electronic devices are part of our daily routines, resulting in information generation at virtually any time and context. Due to different styles of interaction, data produced by human activities is not only in considerable quantities, but it is also extremely rich, which makes it difficult to manage and analyze. Visualization has the potential to overcome this limitation: not only is it an excellent means to display large quantities of information, but it also alleviates cognitive load associated with data interpretation. We created an interactive multi-level layered visualization, in which time may be represented sequentially through layers. Data entities are displayed as circles with size proportional to a particular data feature we need to highlight, allowing immediate comparison between entities. By selecting an entity, we may see, through visual connectors, all the interrelated entities over the different time layers. User tests have shown that our visualization makes important information immediately perceivable, in a way that is easy to navigate and analyze.
Web3D representation and cultural heritage: from annotations to narrations BIBAFull-Text 367-368
  Ivano Gatto; Fabio Pittarello
Storytelling is a powerful means for teaching, often used for engaging pupils while educating them. This paper describes an innovative web tool for creating engaging narrations for educational purposes that can be shared on the web. The tool is integrated with ToBoA-3D, a web platform for annotating 3D environments, and takes advantage of the crowdsourced effort of its users to create linear stories that can be shared on the web. An example focused on an educational narration about Renaissance architecture is shown.
Filter dials: combine filter criteria, see how much data is available BIBAFull-Text 369-370
  Florian Haag; Thomas Ertl
Increasing amounts of information are being made available as RDF datasets. While a variety of techniques for finding and browsing single resources from such datasets exists, getting an idea of what data is generally available in a given RDF graph is often problematic. With filter dials, we propose a novel visualization that shows the sizes of various result sets based on different combinations of filter criteria in a compact way. Result sets of one filter dial can be used as input for another filter dial, thereby forming complex chained filter expressions. After the description of the concept, a brief example based on bibliographical data is presented.
ImgWordle: image and text visualization for events in microblogging services BIBAFull-Text 371-372
  Chong Kuang; Jiayu Tang; Zhiyuan Liu; Maosong Sun
With the wide usage of microblogging services, microposts grow at a high rate and provide a rich source of information related to important social events and trends. However, analyzing microposts is challenging due to their high complexity and large volume. In this paper, we present ImgWordle, an interactive visualization prototype to help people perceive and analyze the image and text information in microblogging services. The prototype extends the tag cloud by involving images as well as words, and provides multiple coordinated views for a given event, including keywords of various topics, representative images, geographic sentiments and amounts of microposts, and popular microposts over time. We implement ImgWordle on Sina Weibo, the most popular microblogging service in China, and illustrate the usefulness of this visual interface.
How collective intelligence emerges: knowledge creation process in Wikipedia from microscopic viewpoint BIBAFull-Text 373-374
  Kyungho Lee
Wikipedia, one of the richest human knowledge repositories on the Internet, has been developed through collective intelligence. To gain insight into Wikipedia, one may ask how initial ideas emerge and develop into a concrete article through the online collaborative process. Led by this question, the author performed a microscopic observation of the knowledge creation process for the recent article "Fukushima Daiichi nuclear disaster." The author not only collected the revision history of the article but also investigated interactions between collaborators by building a user-paragraph network to reveal the intellectual interventions of multiple authors. The knowledge creation process of the Wikipedia article was categorized into 4 major steps and 6 phases, from the beginning to the intellectual balance point where only revisions were made. To represent this phenomenon, the author developed a visaphor (digital visual metaphor) to digitally represent the article's evolving concepts and characteristics. The author then created a dynamic digital information visualization using particle effects and network graph structures. The visaphor reveals the interaction between users and their collaborative efforts as they created and revised paragraphs and debated aspects of the article.
Multidimensional sort of lists in mobile devices BIBAFull-Text 375-376
  Emanuele Panizzi; Giuseppe Marzo
Many apps for mobile devices show lists of elements that the user can sort by a single metric, e.g. price, date, or alphabetical order. Often, none of these sortings is ideal; elements that are more interesting to the user end up spread along the list, as no single sorting can group them together at the top. We propose letting the user sort the list by combining two or more metrics to create a personalized ranking. Sorting is thus multidimensional, and the order of the chosen metrics defines a different ranking according to predefined weights. We designed and developed an iOS framework that implements a multidimensional sortable list, with a drag-and-drop interface that lets the user choose a personal order of metrics. We performed qualitative tests with 8 users.
An end-user interface for behaviour change intervention development BIBAFull-Text 377-378
  Daniel Rough; Aaron Quigley
Traditional behaviour change interventions are typically delivered with a fixed set of components, providing identical content to all participants in a trial. The disregard of personal differences often leads to weak effects and inconclusive results. Tools are required that let researchers identify effective components for specific users and contexts. This paper presents a system design incorporating user models and a visual programming language to allow end-users with varying technical expertise to develop tailored interventions using feedback from a series of visual and non-visual interfaces.
Static hand poses for gestural interaction: a study BIBAFull-Text 379-380
  Hassan Saidinejad; Mahsa Teimourikia; Sara Comai; Fabio Salice
Gestural interaction leveraging the expressiveness of the human hand, either through touch or in-air gestures, has been the subject of much research. In this work, a user study focusing mainly on static hand poses was conducted with a heterogeneous group of participants, covering different aspects of this interaction method: the role of participants in creating the hand poses, context-free pose-action mapping, learnability and memory issues, and physical comfort. Results of the study are discussed.
PODD: a portable diary data collection system BIBAFull-Text 381-382
  Katerina Vrotsou; Mathias Bergqvist; Matthew Cooper; Kajsa Ellegård
Activity diaries are a powerful data source for studying the time use of individuals and for creating awareness of individuals' daily activity patterns. The presented project is concerned with the development of an easily accessible method for collecting and analyzing diary data that will be applicable across a wide range of industrial, governmental, social science and medical domains. The PODD (POrtable Diary Data collection) system is composed of a smartphone application for data registration, a web interface for user registration and an administration system for configuring the application according to the focus of the data collection.
Automatic street view system synchronized with TV program using geographical metadata from closed captions BIBAFull-Text 383-384
  Yuanyuan Wang; Daisuke Kitayama; Yukiko Kawai; Kazutoshi Sumiya
Various TV programs, such as travel and educational programs, often introduce tourist spots or historical places. However, viewers find it difficult to grasp the surroundings of these spots or places, how the locations are related, and the distances between them, when instantaneously moving between scenes by switching video streams. Therefore, we built an interface, called TV-Milan, that uses the geographical metadata of the video streams to automatically present geographic contents (i.e., maps, photos, Street View, etc.) synchronized with the TV program.
Semantize: visualizing the sentiment of individual document BIBAFull-Text 385-386
  Alan J. Wecker; Joel Lanir; Osnat Mokryn; Einat Minkov; Tsvi Kuflik
A plethora of tools exists for extracting and visualizing key sentiment information from a corpus of text documents. Often, however, there is a need for quickly assessing the sentiment and feelings that arise from an individual document. We describe an interactive tool that visualizes the sentiment of a specific document, such as an online opinion, blog, or transcript, by visually highlighting the sentiment features while leaving the document text intact.

Workshop papers

Culture of participation in the digital age: social computing for learning, working, and living BIBAFull-Text 387-390
  Barbara Rita Barricelli; Ali Gheitasy; Anders Mørch; Antonio Piccinno; Stefano Valtolina
Cultures of participation are oriented towards providing end users with the means to actively participate in problems that are personally meaningful to them. An overall aim of cultures of participation is to apply collective knowledge to address major problems that our societies are facing today. The CoPDA Workshop is in its second edition, the first having been held in 2013 during the International Symposium on End-User Development in Copenhagen (Denmark) [4]. In the 2014 edition the focus is on social computing and its contributions to learning, working and living.
Knowledge artifacts within knowing communities to foster collective knowledge BIBAFull-Text 391-394
  Federico Cabitza; Andrea Cerroni; Carla Simone
In this paper we present a novel model of knowledge creation and diffusion (viz. the Knowledge-Stream model) in communities of knowledgeable citizens (viz. knowing communities). This model takes into account the individual, social and cultural dimensions of knowledge (what we denote as co-knowledge) to account for the various ways knowledge is "circulated" among people (i.e., members of any social structure). We also propose the concept of the IT Knowledge Artifact as the technological driver enabling such circulation, and exemplify its main roles in a citizen science project that we are going to undertake in the domains of urban cultural heritage and food- and diet-related traditions.
Understanding citizen participation in crisis and disasters: the point of view of governmental agencies BIBAFull-Text 395-397
  Paloma Díaz; Ignacio Aedo; Sergio Herranz
Ubiquitous computing combined with Web 2.0 technologies might contribute to developing a culture of participation in emergency management (EM) by aligning the efforts and capabilities of official agencies and citizens. For citizen participation to be possible, organizations in charge of EM need to realize that involving citizens does not interfere with their protocols, and citizens need to be empowered to move beyond the role of passive informants. In this paper we describe how organizations perceive this participation, as a step toward understanding how to reach a more participative model and the benefits that technology and sociotechnical systems might bring to such a model.
First International Workshop on User Interfaces for Crowdsourcing and Human Computation BIBAFull-Text 398-400
  Alessandro Bozzon; Lora Aroyo; Paolo Cremonesi
Recent years witnessed an explosion in the number and variety of data crowdsourcing initiatives. From OpenStreetMap to Amazon Mechanical Turk, developers and practitioners have been striving to create user interfaces able to effectively and efficiently support the creation, exploration, and analysis of crowdsourced information.
   The extensive usage of crowdsourcing techniques brings a major change of paradigm with respect to traditional user interface for data collection and exploration, as effectiveness, speed, and interaction quality concerns play a central role in supporting very demanding incentives, including monetary ones.
   The First International Workshop on User Interfaces for Crowdsourcing and Human Computation (CrowdUI 2014), co-located with the AVI 2014 conference, brought together researchers and practitioners from a wide range of areas interested in discussing the user interaction challenges posed by crowdsourcing systems.
Toward effective tasks navigation in crowdsourcing BIBAFull-Text 401-404
  Pavel Kucherbaev; Florian Daniel; Maurizio Marchese; Fabio Casati; Brian Reavey
Crowdsourcing platforms are changing the way people can work and earn money. The population of workers on crowdsourcing platforms already counts millions and keeps growing. Workers on these platforms face several usability challenges, which we identify in this work by running two surveys on the CrowdFlower platform. Our surveys show that the majority of workers spend more than 25% of their time searching for tasks to work on. Limitations in the current user interface of the task listing page prevent workers from focusing more on task execution. In this work we present an attempt to design and implement a specific user interface for task listing, aimed at helping workers spend less time searching for tasks and thus navigate among them more easily.
User interface design for crowdsourcing systems BIBAFull-Text 405-408
  Bahareh Rahmanian; Joseph G. Davis
Harnessing human computation through crowdsourcing offers an alternative approach to solving complex problems, especially those that are relatively easy for humans but difficult for computers. Micro-tasking platforms such as Amazon Mechanical Turk have attracted a large, on-demand workforce of millions of workers as well as hundreds of thousands of job requesters. Achieving high-quality results by putting humans in the loop is one of the main goals of these crowdsourcing systems. We study the effects of different user interface designs on the performance of crowdsourcing systems. Our results indicate that user interface design choices have a significant effect on crowdsourced worker performance.
Fostering smart energy applications through advanced visual interfaces BIBAFull-Text 409-412
  Masood Masoodian; Elisabeth André; Saturnino Luz; Thomas Rist
There is an increasing need for technologies that assist people with more effective monitoring and management of their energy generation and consumption. In recent years, a considerable number of research activities have resulted in a multitude of new ICT-supported tools and services both for the private energy consumer market and for energy-related businesses and industries (e.g., utility and grid companies, facility management, etc.). This workshop focuses on advanced interaction, interface, and visualization techniques for energy-related applications, tools, and services. It brings together researchers and practitioners from a diverse range of backgrounds, including interaction design, human-computer interaction, visualization, computer games, and other fields concerned with the development of advanced visual interfaces for smart energy applications.
Smart energy interfaces for electric vehicles BIBAFull-Text 413-416
  Paul Monigatti; Mark Apperley; Bill Rogers
Electric vehicle charging strategies rely on knowledge of future vehicle usage, or implicitly make assumptions about a vehicle's usage. For example, a naïve charging strategy may assume that a full charge is required as soon as possible and simply charge at the maximum rate when plugged in, whereas a smart strategy might make use of the knowledge that the vehicle is not needed for a number of hours and optimise its charging behaviour to minimise its impact on the electricity grid. These charging strategies may also offer vehicle-to-grid services.
   To achieve this functionality, a driver needs to specify the details of the next trip -- or sequence of trips -- in order for the charging strategy to perform optimally. This paper explores the value of next-trip information, and presents a potential user interface to assist a driver with providing these details.
Interactive visual tools for the planning and monitoring of power grids BIBAFull-Text 417-420
  Thomas Rist; Michael Wiest
In this contribution we argue that power grid design and monitoring is an application domain that could greatly benefit from novel visualization techniques as well as from advances in interactive graphics. To support our point of view we refer to some selected works including an interactive tool for planning grid extensions, and a simulation environment for micro grids.