
Proceedings of the 2015 Conference on Graphics Interface

Fullname: Proceedings of the 2015 Graphics Interface Conference
Editors: Hao (Richard) Zhang; Tony Tang
Location: Halifax, Canada
Dates: 2015-Jun-03 to 2015-Jun-05
Publisher: ACM
Standard No: ISBN: 978-0-9947868-0-7; ACM DL: Table of Contents; hcibib: GI15
Papers: 36
Pages: 284
Links: Conference Website
  1. Invited paper
  2. Modelling and synthesis
  3. Seeing, hearing, and visualizing interactions
  4. Learning for search, design, and simulation
  5. Making it interactive
  6. Rendering and simulation
  7. Working with others
  8. Understanding people and ourselves
  9. Interacting with others
  10. Gestures and mobility
  11. Interaction techniques
  12. Novel application designs
  13. Using your body

Invited paper

User studies and usability evaluations: from research to products BIBAFull-Text 1-8
  I. Scott MacKenzie
Six features of user studies are presented and contrasted with the same features in another assessment method, usability evaluation. The connection between these assessment methods and the disciplines of research, engineering, and design is analysed. The three disciplines are presented in a timeline chart showing their inter-relationship, with the creation of computing products as the final goal. Background discussions explore three definitions of research as well as three methodologies for conducting research: experimental, observational, and correlational. It is demonstrated that a user study is an example of experimental research and that a usability evaluation is an example of observational research. In terms of the timeline, a user study is performed early (after research but before engineering and design), whereas a usability evaluation is performed late (after engineering and design but before product release).

Modelling and synthesis

Terrain synthesis using curve networks BIBAFull-Text 9-16
  Maryam Ariyan; David Mould
We present a procedural technique for the controllable synthesis of detailed terrains. We generate terrains based on a sparse curve network representation, where interconnected curves are distributed in the plane and can be procedurally assigned height. We employ path planning to procedurally generate irregular curves around user-designated peaks. Optionally, the user can specify base signals for the curves. Then we assign height to the curves using random walks with controlled probability distributions, a process which can produce signals with a variety of shapes. The curve network partitions space into individual patches. We interpolate patch heights using mean value coordinates, after which we have a complete terrain heightfield. Our algorithm enables users to obtain prominent features with lightweight interaction. Increasing the density of curves and roughness of curve profiles adds detail to the synthetic terrains. The curves in a network are organized into a hierarchy, where the major curves are created first and the curves constructed at later stages are affected by earlier curves. Our approach is capable of producing a variety of landscapes with prominent ridges and distinct shapes.
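The height-assignment step lends itself to a short sketch. Below is a minimal, illustrative random walk with an endpoint-seeking drift term; the function name, drift scheme, and parameters are assumptions for illustration, not the paper's controlled probability distributions:

```python
import random

def random_walk_heights(n, start, end, step_sigma=0.05, seed=0):
    """Assign n heights along a curve with a random walk whose drift
    pulls the signal toward the target endpoint."""
    rng = random.Random(seed)
    heights = [start]
    for i in range(1, n):
        remaining = n - i
        drift = (end - heights[-1]) / remaining  # steer toward the endpoint
        heights.append(heights[-1] + drift + rng.gauss(0.0, step_sigma))
    heights[-1] = end  # pin the final sample exactly
    return heights

profile = random_walk_heights(50, start=0.0, end=1.0)
```

Varying `step_sigma` changes the roughness of the resulting profile, loosely mirroring how the paper controls curve-profile roughness to add detail.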
Dynamic on-mesh procedural generation BIBAFull-Text 17-24
  Cyprien Buron; Jean-Eudes Marvie; Gaël Guennebaud; Xavier Granier
We present a method to synthesize procedural models with global structures, such as growing plants, on existing surfaces at interactive rates. More generally, our approach extends shape grammars to enable context-sensitive procedural generation on the GPU. Central to our framework is the unified representation of external contexts as texture maps. These generic contexts can be spatially varying parameters controlling the grammar expansion through very fast texture fetches (e.g., a density map). External contexts also include the shape of the underlying surface itself, which we represent as a texture atlas of geometry images. Extrusion along the surface is then performed by a marching rule working in texture space using indirection pointers. We also introduce a lightweight deformation mechanism for the generated geometry that maintains C1 continuity between the terminal primitives while taking into account shape and trajectory variations. Our method is entirely implemented on the GPU and dynamically generates highly detailed models on surfaces at interactive rates. Finally, by combining marching rules and generic contexts, users can easily guide the growing process by directly painting on the surface with live feedback of the generated model. This provides user-friendly editing in production environments.
Model-driven indoor scenes modeling from a single image BIBAFull-Text 25-32
  Zicheng Liu; Yan Zhang; Wentao Wu; Kai Liu; Zhengxing Sun
In this paper, we present a new approach to 3D indoor scene modeling from a single image. Given a single input indoor image (including a sofa, tea table, etc.), a 3D scene can be reconstructed using an existing model library in two stages: image analysis and model retrieval. In the image analysis stage, we obtain object information from the input image using geometric reasoning combined with image segmentation. In the model retrieval stage, line drawings are extracted from the 2D objects and the 3D models using different line rendering methods. We exploit various tokens to represent local features and then organize them together as a star-graph to form a global description. Finally, by comparing the similarity of the encoded line drawings, models are retrieved from the model library and the scene is reconstructed. Experimental results show that, driven by the given model library, indoor scene modeling from a single image can be achieved automatically and efficiently.

Seeing, hearing, and visualizing interactions

Integrated multimodal interaction using normal maps BIBAFull-Text 33-40
  Auston Sterling; Ming C. Lin
In this paper, we explore texture mapping as a unified representation for enabling realistic multimodal interaction with finely-detailed surfaces. We first present a novel approach to modifying collision handling between textured rigid-body objects; we then show how normal maps can be adopted as a unified representation to synthesize complex sound effects from long-lasting collisions and to render haptic textures. The resulting multimodal display system allows a user to see, hear, and feel complex interactions with textured surfaces. By using normal maps as a unified representation for seamlessly integrated multimodal interaction, instead of the complex triangular meshes otherwise required, this work achieves up to a 25-times performance speedup and reduces memory storage by up to six orders of magnitude. We further validate the results through a user study which demonstrates that subjects are able to correctly identify the material texture of a surface through interaction with its normal map.
AOI transition trees BIBAFull-Text 41-48
  Kuno Kurzhals; Daniel Weiskopf
The analysis of transitions between areas of interest (AOIs) in eye tracking data provides insight into visual reading strategies followed by participants. We present a new approach to investigate eye tracking data of multiple participants, recorded from video stimuli. Our new transition trees summarize sequence patterns of all participants over complete videos. Shot boundary information from the video is used to divide the dynamic eye tracking information into time spans of similar semantics. AOI transitions within such a time span are modeled as a tree and visualized by an extended icicle plot that shows transition patterns and frequencies of transitions. Thumbnails represent AOIs in the visualization and allow for an interpretation of AOIs and transitions between them without detailed knowledge of the video stimulus. A sequence of several shots is visualized by connecting the respective icicle plots with curved links that indicate the correspondence of AOIs. We compare the technique with other approaches that visualize AOI transitions. With our approach, common transition patterns in eye tracking data recorded for several participants can be identified easily. In our use case, we demonstrate the scalability of our approach concerning the number of participants and investigate a video data set with the transition tree visualization.
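The per-shot tree construction can be sketched as a prefix tree over participants' AOI sequences. This is a simplified reading of the paper's transition trees; the function name, node layout, and AOI labels below are illustrative assumptions:

```python
def build_transition_tree(sequences):
    """Build a prefix tree over per-participant AOI sequences within one
    shot; each node counts how many scanpaths pass through that prefix."""
    tree = {}
    for seq in sequences:
        node = tree
        for aoi in seq:
            child = node.setdefault(aoi, {'count': 0, 'children': {}})
            child['count'] += 1
            node = child['children']
    return tree

# three participants' AOI sequences within one shot
shot = build_transition_tree([['face', 'text'], ['face', 'logo'], ['face', 'text']])
```

The per-node counts are what an icicle-plot visualization would map to segment widths, so common transition patterns show up as wide branches.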
Hand grasp and motion for intent expression in mid-air virtual pottery BIBAFull-Text 49-57
  A Vinayak; Karthik Ramani
We describe the design and evaluation of a geometric interaction technique for bare-hand mid-air virtual pottery. We model the shaping of a pot as a gradual and progressive convergence of the pot-profile to the shape of the user's hand represented as a point-cloud (PCL). Our pottery-inspired application served as a platform for systematically revealing how users use their hands to express the intent of deformation during a pot shaping process. Through our approach, we address two specific problems: (a) determining start and end of deformation without explicit clutching and declutching, and (b) identifying user's intent by characterizing grasp and motion of the hand on the pot. We evaluated our approach's performance in terms of intent classification, users' behavior, and users' perception of controllability. We found that the expressive capability of hand articulation can be effectively harnessed for controllable shaping by organizing the deformation process in broad classes of intended operations such as pulling, pushing and fairing. After minimal practice with the pottery application, users could figure out their own strategy for reaching, grasping and deforming the pot. Further, the use of PCL as mid-air input allows for using common physical objects as tools for pot deformation. Users particularly enjoyed this aspect of our method for shaping pots.

Learning for search, design, and simulation

Learning style similarity for searching infographics BIBAFull-Text 59-64
  Babak Saleh; Mira Dontcheva; Aaron Hertzmann; Zhicheng Liu
Infographics are complex graphic designs integrating text, images, charts and sketches. Despite the increasing popularity of infographics and the rapid growth of online design portfolios, little research investigates how we can take advantage of these design resources. In this paper we present a method for measuring the style similarity between infographics. Based on human perception data collected from crowdsourced experiments, we use computer vision and machine learning algorithms to learn a style similarity metric for infographic designs. We evaluate different visual features and learning algorithms and find that a combination of color histograms and Histograms-of-Gradients (HoG) features is most effective in characterizing the style of infographics. We demonstrate our similarity metric on a preliminary image retrieval test.
Efficient trajectory extraction and parameter learning for data-driven crowd simulation BIBAFull-Text 65-72
  Aniket Bera; Sujeong Kim; Dinesh Manocha
We present a trajectory extraction and behavior-learning algorithm for data-driven crowd simulation. Our formulation is based on incrementally learning pedestrian motion models and behaviors from crowd videos. We combine this learned crowd-simulation model with an online tracker based on particle filtering to compute accurate, smooth pedestrian trajectories. We refine this motion model using an optimization technique to estimate the agents' simulation parameters. We highlight the benefits of our approach for improved data-driven crowd simulation, including crowd replication from videos and merging the behavior of pedestrians from multiple videos. We highlight our algorithm's performance in various test scenarios containing tens of human-like agents.
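The online tracking component can be illustrated with a generic particle-filter step. The function below is a textbook predict-weight-resample cycle, not the authors' implementation; in the paper, the motion model would come from the learned crowd-simulation model, and all names here are placeholders:

```python
import math
import random

def particle_filter_step(particles, weights, observe, motion, rng):
    """One predict-weight-resample cycle: `motion` perturbs a particle,
    `observe` scores how well it matches the current measurement."""
    particles = [motion(p, rng) for p in particles]                # predict
    weights = [w * observe(p) for p, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]                         # normalize
    particles = rng.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)     # resample
```

With a Gaussian-shaped observation likelihood centered on a detected pedestrian position, repeated steps concentrate the particle cloud around the track, yielding the smooth trajectories the abstract describes.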

Making it interactive

Cover-it: an interactive system for covering 3D prints BIBAFull-Text 73-80
  Ali Mahdavi-Amiri; Philip Whittingham; Faramarz Samavati
The ubiquity of 3D printers has made it possible to print various types of objects, from toys to mechanical parts. However, most available 3D printers can only print in one or two colors. Even printers that can produce objects with multiple colors do not offer the ability to cover an object with a desired material, such as a piece of cloth or fur. In this paper, we propose a system that produces simple 2D patches that can be used as a reference for cutting material to cover a 3D printed object. The system allows for user interaction to correct and modify the patches, and provides guidelines on how to wrap the printed object via small curves illustrating the patch boundaries etched on the printed object, as well as an animation showing how the 2D patches should be folded together. To avoid wasting material, a heuristic method is also employed to pack the 2D patches in the layout. To compensate for the inflation that results from covering objects with thick materials, an offsetting tool is provided in Cover-it. In addition, since many small-scale details of an object are not visible after covering, a mesh can be simplified in Cover-it to reduce the number of 2D patches.
Fast image segmentation on mobile phone using multi-level graph cut BIBAFull-Text 81-88
  Steven Garcia; Patrick Gage Kelley; Yin Yang
This paper presents a system for efficient image segmentation on mobile phones using multi-level graph cut. Because the computational capacity of mobile devices is limited, fluent and smooth image segmentation is challenging with existing algorithms, and the difficulty grows as mobile phone cameras are continually upgraded to take photos at higher resolutions. Our solution carefully tweaks the classic graph cut algorithm for interactive image cutout, enhancing performance without compromising segmentation quality. This is achieved by down-sampling the original high-resolution image and selecting a rough cutout region on this low-resolution image with a superpixel-based pre-segmentation. The segmented foreground is then mapped back to the full-size image, which undergoes an adaptive boundary refinement. This second segmentation performs the optimization locally and can be accomplished within milliseconds. We test our system on an Apple iPhone 6, and our experiments show that a high-quality segmentation can be achieved in a lag-free manner on the mobile phone, even for multi-megapixel images.
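The coarse-to-fine bookkeeping can be sketched independently of any particular graph-cut solver. In this illustrative fragment, `coarse_segment` stands in for the actual solver, and the boundary band marks where the local second-pass refinement would run; all names and parameters are assumptions:

```python
import numpy as np

def coarse_to_fine_mask(image, coarse_segment, factor=4, band=2):
    """Segment a downsampled copy, map the mask back to full resolution,
    and mark a narrow boundary band where local refinement would run."""
    small = image[::factor, ::factor]                  # cheap downsample
    coarse = coarse_segment(small)                     # boolean mask from solver
    full = np.kron(coarse, np.ones((factor, factor), dtype=bool))
    full = full[:image.shape[0], :image.shape[1]]      # crop to image size
    # boundary band = dilation of the mask minus its erosion
    pad = np.pad(full, band, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(pad, (2 * band + 1, 2 * band + 1))
    dilated = win.any(axis=(2, 3))
    eroded = win.all(axis=(2, 3))
    return full, dilated & ~eroded
```

Restricting the expensive optimization to the narrow band is what keeps the refinement pass within milliseconds even on multi-megapixel inputs.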
Interactive shading of 2.5D models BIBAFull-Text 89-96
  João Paulo Gois; Bruno A. D. Marques; Harlen C. Batagelo
Advances in computer-assisted methods for designing and animating 2D artistic models have incorporated depth and orientation cues such as shading and lighting effects. These features improve the visual perception of the models while increasing the artists' flexibility to achieve distinctive design styles. An advance that has gained particular attention in recent years is 2.5D modeling, which simulates 3D rotations from a set of 2D vector arts. This creates not only the perception of animated 3D orientation, but also automates the process of inbetweening. However, previous 2.5D modeling techniques do not allow the use of interactive shading effects. In this work, we tackle the problem of providing interactive 3D shading effects for 2.5D modeling. Our technique relies on the graphics pipeline to infer relief and to simulate the 3D rotation of the shading effects inside the 2D models in real-time. We demonstrate the application on Phong, Gooch and cel shadings, as well as environment mapping, fur simulation, animated texture mapping, and (object-space and screen-space) texture hatchings.

Rendering and simulation

Visibility sweeps for joint-hierarchical importance sampling of direct lighting for stochastic volume rendering BIBAFull-Text 97-104
  Thomas Kroes; Martin Eisemann; Elmar Eisemann
Physically-based light transport in heterogeneous volumetric data is computationally expensive because the rendering integral (particularly visibility) has to be stochastically solved. We present a visibility estimation method in concert with an importance-sampling technique for efficient and unbiased stochastic volume rendering. Our solution relies on a joint strategy, which involves the environmental illumination and visibility inside of the volume. A major contribution of our method is a fast sweeping-plane algorithm to progressively estimate partial occlusions at discrete locations, where we store the result using an octahedral representation. We then rely on a quadtree-based hierarchy to perform a joint importance sampling. Our technique is unbiased, requires little precomputation, is highly parallelizable, and is applicable to various volume data sets, dynamic transfer functions, and changing environmental lighting.
6D frictional contact for rigid bodies BIBAFull-Text 105-114
  C. Bouchard; M. Nesme; M. Tournier; B. Wang; F. Faure; P. G. Kry
We present a new approach to modeling contact between rigid objects that augments an individual Coulomb friction point-contact model with rolling and spinning friction constraints. Starting from the intersection volume, we compute a contact normal from the volume gradient. We compute a contact position from the first moment of the intersection volume, and approximate the extent of the contact patch from the second moment of the intersection volume. By incorporating knowledge of the contact patch into a point contact Coulomb friction formulation, we produce a 6D constraint that provides appropriate limits on torques to accommodate displacement of the center of pressure within the contact patch, while also providing a rotational torque due to dry friction to resist spinning. A collection of examples demonstrate the power and benefits of this simple formulation.
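The moment computations can be illustrated on a voxelized intersection volume. This is a simplified stand-in for the paper's formulation (the centroid from the first moment, the patch extent from the second moment); the voxel discretization and all names are assumptions for illustration:

```python
import numpy as np

def contact_from_intersection(voxels, spacing=1.0):
    """Centroid = normalized first moment of the intersection volume;
    per-axis extents come from the second central moment's eigenvalues."""
    pts = np.argwhere(voxels) * spacing          # occupied voxel centers
    centroid = pts.mean(axis=0)                  # first moment / volume
    centered = pts - centroid
    cov = centered.T @ centered / len(pts)       # second central moment
    extents = np.sqrt(np.linalg.eigvalsh(cov))   # spread along principal axes
    return centroid, extents
```

The estimated extents are what bound how far the center of pressure may shift inside the patch, which in turn limits the admissible rolling and spinning friction torques.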
Stylized scattering via transfer functions and occluder manipulation BIBAFull-Text 115-121
  Oliver Klehm; Timothy R. Kol; Hans-Peter Seidel; Elmar Eisemann
Volumetric light scattering is an effect that is used increasingly in feature movies as well as games. It enables rendering scenes more realistically and is often used as an artistic tool to achieve a certain mood in the scene or emphasize certain objects. Thus far, however, little research has focused on artistically influencing the air-light integral and scattering process, which are both very complex. We propose a novel solution to help artists in changing the appearance of single scattering effects. Our approach offers techniques based on occluder manipulation to remove or add apparent complexity to the resulting light shafts and to emphasize the object's shape by enhancing the light shaft borders. Furthermore, we adapt an existing shading technique to control the effect of the light integral intuitively through the use of easily modifiable transfer functions. Our solution is easy to use, is compatible with standard rendering pipelines, and can be executed interactively in real time to provide the artist with quick feedback.

Working with others

Exploiting analysis history to support collaborative data analysis BIBAFull-Text 123-130
  Ali Sarvghad; Melanie Tory
Coordination is critical in distributed collaborative analysis of multidimensional data. Collaborating analysts need to understand what each person has done and what avenues of analysis remain uninvestigated in order to effectively coordinate their efforts. Although visualization history has the potential to communicate such information, common history representations typically show sequential lists of past work, making it difficult to understand the analytic coverage of the data dimension space (i.e. which data dimensions have been investigated and in what combinations). This makes it difficult for collaborating analysts to plan their next steps, particularly when the number of dimensions is large and team members are distributed. We introduce the notion of representing past analysis history from a dimension coverage perspective to enable analysts to see which data dimensions have been explored in which combinations. Through two user studies, we investigated whether 1) a dimension oriented view improves understanding of past coverage information, and 2) the addition of dimension coverage information aids coordination. Our findings demonstrate that a representation of dimension coverage reduces the time required to identify and investigate unexplored regions and increases the accuracy of this understanding. In addition, it results in a larger overall coverage of the dimension space, one element of effective team coordination.
Effects of arm embodiment on implicit coordination, co-presence, and awareness in mixed-focus distributed tabletop tasks BIBAFull-Text 131-138
  Andre Doucette; Carl Gutwin; Regan Mandryk
Mixed-focus collaboration occurs when people work on individual tasks in a shared space -- and although their tasks may not be directly linked, they still need to maintain awareness and manage access to shared resources. This kind of collaboration is common on tables, where people often use the same space to carry out work that is only loosely coupled. At physical tables, people easily manage to coordinate access to the table surface and the artifacts on it, because people have years of experience interacting around other physical bodies. At distributed digital tabletops, however, where there is no physical body for the remote person, many of the natural cues used to manage mixed-focus collaboration are missing. To compensate, distributed groupware often uses digital embodiments. On digital touch tables, however, we know little about how these embodiments affect coordination and awareness. We carried out an empirical study of how four factors in an arm embodiment (transparency, input technique, visual fidelity, and tactile feedback) affected implicit coordination, awareness, and co-presence. We found that although some embodiments affected subjective feelings of co-presence or awareness, there were no changes in table behavior -- people acted as if the other person did not exist. These findings show the possibilities and limitations of digital arm embodiments, and suggest that the natural advantages of tables for collaboration may not extend to distributed tables.
TandemTable: supporting conversations and language learning using a multi-touch digital table BIBAFull-Text 139-146
  Erik Paluka; Christopher Collins
We present TandemTable, a multi-touch tabletop system designed to break down communication barriers between partners, with a special focus on supporting those who are learning languages. The design was guided by a grounding study of a real-world tandem language learning (TLL) environment and refined with an exploratory study of an early prototype. TandemTable facilitates and supports conversations by suggesting topics of discussion and presenting partners with a variety of conversation-focused collaborative activities, which consist of shared digital topical content that is dynamically downloaded from the web. Through a formal study comparing TandemTable to the baseline TLL condition of no support, our system was shown to increase communication between learning partners, reduce social discomfort, and was the preferred way of engaging in TLL.

Understanding people and ourselves

How and why personal task management behaviors change over time BIBAFull-Text 147-154
  Mona Haraty; Joanna McGrenere; Charlotte Tang
Personal task management (PTM) is a common human activity that is supported by a plethora of dedicated e-PTM tools. Yet, little is known about how and why PTM behaviors change over time, and how PTM tools can accommodate such changes. We studied changes in 178 participants' PTM behaviors in a survey to inform the design of personalizable e-PTM tools that can accommodate changes over time. In follow-up interviews with 12 of the survey respondents, we deepened our understanding of the changes reported in the survey. Based on the reasons behind the reported changes, we identified factors that contributed to changes in PTM behaviors: changing needs, dissatisfaction caused by unmet needs, and opportunities revealing unnoticed needs. Grounded in our findings, we offer implications for design of PTM tools that support changes in behaviors as well as implications for future PTM research.
Moving towards user-centered government: community information needs and practices of families BIBAFull-Text 155-162
  Carolyn Pang; Carman Neustaedter; Jason Procyk; Daniel Hawkins; Kate Hennessy
Government organizations have begun to consider how to provide families with information about their communities, yet their current design strategies focus on providing any and all of their information. This makes it difficult for families to find what is relevant to them. To help address this problem, we conducted a diary and interview study to explore what community information families are actually interested in, how and when they acquire it, and what challenges they face in doing so. Results show that location-based information in their environments triggered people to want to know more about their community while time-based information helped people plan family activities. Family members also wanted to have information resurface at particular places and points in time to support face-to-face interactions. Our analysis suggests design opportunities to leverage the affordances of print and online media and the use of in-home technologies to support the interactions between family members. We also suggest considerations for location-based experiences within communities.
Gendered or neutral?: considering the language of HCI BIBAFull-Text 163-170
  Adam Bradley; Cayley MacArthur; Mark Hancock; Sheelagh Carpendale
In this paper, we present a Mechanical Turk study that explores how the most common words that have been used to refer to people in recent HCI literature are received by non-experts. The top five CHI 2014 people words are: user, participant, person, designer, and researcher. We asked participants to think about one of these words for ten seconds and then to draw an image of it. After the drawing was done, we asked simple demographic questions about both the participant and the created image. Our results show that while our participants generally perceived most of these words as predominantly male, there were two notable exceptions. Women appear to perceive the terms "person" and "participant" as gender neutral. That is, they were just as likely to draw a person or a participant as male or female. So while these two words are not exactly gender neutral in that men largely perceived them as male, at least women did not appear to feel excluded by these terms. We offer an increased understanding of the perception of HCI's people words and discuss the challenges this poses to our community in striving toward gender inclusiveness.

Interacting with others

IIS you is my digital baby: an intimate interface system for persons with disabilities BIBAFull-Text 171-178
  D. I. Fels; D. H. Smith; R. Baffa da Silva; D. Aybar; M. Whitfield
Virtual worlds, avatars and cybersex are becoming more commonplace and acceptable. Virtual environments such as Second Life™ allow for the construction and exploration of virtual selves or agents that are bounded only by the imagination and fantasy of their participants. However, they are also informed by the attitudes, limits and agendas of the real-life participants that invade these worlds. For people with disabilities, virtual environments may allow the crossing of boundaries around taboo subjects such as disability and sex, and intimate technologies. The Intimate Interface System (IIS) was designed to support and encourage intimacy and cybersex discovery for people with disabilities in an inclusive manner. It is composed of a virtual world component, replete with customizable avatars, animations and sound, combined with physical devices including a vibrating chair and a pressure pad. Results were derived from an initial focus group with four persons with motoric disabilities. Notions of positive and negative aspects of cybersex that reflect the literature were found, such as the ability to do things in virtual life that cannot be done in real life, and the view that spending time in virtual relationships is for people who are socially inept. However, unique viewpoints, such as the desire for more realism, were also brought to bear on the discussion. Reaction to IIS was generally positive; however, participants wanted more features such as temperature control and enhanced realism.
Amateur ice hockey coaching and the role of video feedback BIBAFull-Text 179-186
  Jason Procyk; Carman Neustaedter; Thecla Schiphorst
Amateur minor hockey coaches have recently begun to capture and play back video recordings to provide their teams with visual feedback of their play as a learning tool. Yet what is not clear is whether such video feedback is useful and how video feedback systems could be designed to better match the needs of amateur hockey coaches and players. As such, we wanted to understand coaches' current practices for communicating and teaching and their current use of video feedback (if at all). We observed games and practices and conducted in situ interviews with amateur coaches. Our results show that teaching and learning at highly competitive levels of minor hockey focuses on decision-making and comprehension of the game rather than individual physical movement. One-on-one teaching happens opportunistically and in very short time periods throughout games and practices. However, video feedback is currently used in a much different context, often away from the ice because of technological limitations. Based on these findings, we suggest video feedback systems be designed for use within the context of games and practices while balancing the individual needs of players with coaching goals.
Synchronous yoga and meditation over distance using video chat BIBAFull-Text 187-194
  Reese Muntean; Carman Neustaedter; Kate Hennessy
Community and social relationships are an important part of yoga and meditation despite the fact that they are commonly perceived as solitary activities. Family members and loved ones often share activities and experiences over video chat technology to sustain their relationships across distance, and we wondered if similar technology could allow for yoga and meditation partners to share their practice remotely. In our study, sixteen participants completed yoga and meditation sessions over distance and participated in semi-structured interviews about their experience. Our results show that video chat can support synchronous yoga and meditation over distance through seeing and hearing one's remote partner. Both video and audio play an important role in creating a sense of remote presence. Yet there are space issues, camera challenges, and issues with a lack of touch for instructional purposes. Future video chat systems for synchronous yoga should consider ways to improve these issues while balancing the need to keep technology in the background.

Gestures and mobility

Penny pincher: a blazing fast, highly accurate $-family recognizer BIBAFull-Text 195-202
  Eugene M. Taranta II; Joseph J. LaViola Jr.
The $-family of recognizers ($1, Protractor, $N, $P, 1¢, and variants) are easy-to-understand, easy-to-implement, accurate gesture recognizers designed for non-experts and rapid prototyping. They use template matching to classify candidate gestures, and as the number of available templates increases, so does their accuracy. This, of course, comes at the cost of higher latencies, which can be prohibitive in certain cases. Our recognizer, Penny Pincher, achieves high accuracy by being able to process a large number of templates in a short amount of time. If, for example, a recognition task is given a 50μs budget to complete its work, a fast recognizer that can process more templates within this constraint can potentially outperform its rivals. Penny Pincher achieves this goal by reducing the template matching process to merely addition and multiplication: it avoids translation, scaling, and rotation, as well as calls to expensive geometric functions. Despite its deceptive simplicity, our recognizer performs remarkably well even with a limited number of templates. In an evaluation against four other $-family recognizers, Penny Pincher achieves the highest accuracy of all recognizers in three of our six datasets, reaching 97.5%, 99.8%, and 99.9% user-independent recognition accuracy, while remaining competitive on the three remaining datasets. Further, when a time constraint is imposed, our recognizer always exhibits the highest accuracy, realizing a reduction in recognition error of between 83% and 99% in most cases.
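The core matching idea, comparing resampled between-point vectors with nothing but additions and multiplications at match time, can be sketched as follows. This is a simplified reading of the recognizer; the resampling details, normalization placement, and function names are illustrative assumptions:

```python
import math

def resample_vectors(points, n=16):
    """Resample a stroke to n equidistant points and return the n-1
    normalized between-point vectors (computed once per gesture)."""
    d = [0.0]  # cumulative arc length at each input point
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    step = d[-1] / (n - 1)
    out, j = [points[0]], 1
    for i in range(1, n - 1):
        t = i * step
        while d[j] < t:
            j += 1
        (x0, y0), (x1, y1) = points[j - 1], points[j]
        a = (t - d[j - 1]) / (d[j] - d[j - 1])   # interpolate along segment
        out.append((x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    out.append(points[-1])
    vecs = []
    for (x0, y0), (x1, y1) in zip(out, out[1:]):
        vx, vy = x1 - x0, y1 - y0
        norm = math.hypot(vx, vy) or 1.0
        vecs.append((vx / norm, vy / norm))
    return vecs

def match(candidate, template):
    """Match-time cost is additions and multiplications only."""
    return sum(cx * tx + cy * ty
               for (cx, cy), (tx, ty) in zip(candidate, template))
```

Because the expensive work (resampling, square roots) happens once per gesture rather than once per template comparison, the score loop stays cheap enough to sweep many templates within a tight latency budget.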
Hands, hover, and nibs: understanding stylus accuracy on tablets BIBAFull-Text 203-210
  Michelle Annett; Walter F. Bischof
Although tablets and styli have become pervasive, styli have not seen widespread adoption for precise input tasks such as annotation, note-taking, algebra, and so on. While many have identified that stylus accuracy is a problem, much remains unknown about how the user and the stylus itself influence accuracy. The present work identifies a multitude of factors relating to the user, the stylus, and the tablet hardware that contribute to the inaccuracy experienced today. Further, we report on a two-part user study that evaluated the interplay between the motor and visual systems (i.e., hand posture and visual feedback) and an increasingly important feature of the stylus, the nib diameter. The results determined that the presence of visual feedback and the dimensions of the stylus nib are crucial to the accuracy attained and the pressure exerted with the stylus. The ability to rest one's hand on the screen, while providing comfort and support, was found to have surprisingly little influence on accuracy.
A comparison of visual and textual city portal designs on desktop and mobile interfaces BIBAFull-Text 211-218
  Carolyn Pang; Carman Neustaedter; Jason Procyk; Bernhard E. Riecke
Cities have recently begun to focus on how digital technology can better inform and engage people through an online presence containing web portals for desktop computers and mobile devices. Yet we do not know whether common user interface design strategies apply to government portal design, given the vast repositories of information these portals hold for citizens of varying ages. This mixed-methods study compares the usability of desktop and mobile interfaces for two types of city portals, textual and visual, using the System Usability Scale, a standardized usability questionnaire. Using a set of twelve tasks, we evaluated three usability aspects of the two city portals: effectiveness, efficiency, and satisfaction. Our results suggest a main effect of design type, with users rating the textual design on a mobile device higher than the visual design. From this, we suggest that responsive design may not be the best fit when designing city portals for use on both desktop and mobile devices.

Interaction techniques

Twist and pulse: ephemeral adaptation to improve icon selection on smartphones BIBAFull-Text 219-222
  Antoine Ponsard; Kamyar Ardekani; Kailun Zhang; Frederic Ren; Matei Negulescu; Joanna McGrenere
The concept of ephemeral adaptation was introduced to reduce visual search time in GUI menus while preserving spatial consistency and minimizing distraction. We extend this concept to the visual search of app icons on smartphones, in order to speed up launching apps from a homescreen. We created ephemeral highlighting effects based on preattentive visual properties including size, orientation, color, opacity, and blur. We then conducted informal design and evaluation cycles, from which Twist (icon rotates back and forth) and Pulse (icon grows and shrinks) emerged as the most promising effects. An experiment comparing these two effects to a control condition showed that they improve search time performance by 8-10% and that Pulse is subjectively preferred.
Testing the rehearsal hypothesis with two FastTap interfaces BIBAFull-Text 223-231
  Carl Gutwin; Andy Cockburn; Benjamin Lafreniere
Rehearsal-based interfaces such as Marking Menus or FastTap are designed to enable smooth transitions from novice to expert performance by making the novice's visually-guided actions a physical rehearsal of the expert's feedback-free actions. However, these interfaces have not been extensively tested in real use. We carried out studies of the adoption of rehearsal-based expert methods in two dissimilar applications -- a game that directly rewards rapid selections, and a drawing program that has no particular need for urgency. Results showed very different patterns of use for the guidance-free expert method. In the game, participants quickly switched to sustained use of expert selections, whereas few users regularly used the expert method in the drawing program, even after ten weeks and more than 1800 selections. These studies show that rehearsal alone does not guarantee that users will switch to expert methods, and that additional factors affect users' decisions about what methods to use. Our studies also revealed several issues that should be considered by designers of rehearsal-based techniques -- such as perceived risk in making selections without visual guidance, the value of guidance that shows possible options in the UI, and training that reminds users of an expert method and motivates its use.
Crossets: manipulating multiple sliders by crossing BIBAFull-Text 233-240
  Charles Perin; Pierre Dragicevic; Jean-Daniel Fekete
Crossets are new interactive instruments, or widgets, based on crossing gestures: they exploit the dimension orthogonal to a slider's axis to manipulate multiple aligned sliders simultaneously. We propose a taxonomy of Crossets that generalizes the properties of sliders to those of other standard widgets. We introduce and illustrate the constrained crossing gesture with Crossets in an interface for the visual exploration of numerical tables. We then discuss alternative strategies to Crossets before exploring persistent unconstrained crossing gestures compatible with Crossets, introducing Spline as a persistent, reusable interactive instrument. This paper highlights promising perspectives for crossing-based widgets. We hope future interfaces will make use of this simple technique, which can improve the efficiency of standard widgets and lead to new styles of interfaces.
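The central idea — one stroke drawn across a row of aligned sliders sets every slider it crosses, using the crossing point along each track as the new value — can be made concrete with a small geometric sketch. The slider geometry, class, and function names here are hypothetical illustrations of the crossing principle, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Slider:
    x: float        # horizontal position of the vertical track
    y_min: float    # track end mapped to value 0.0
    y_max: float    # track end mapped to value 1.0
    value: float = 0.0

def apply_crossing(sliders, p0, p1):
    """Set every slider whose track the stroke segment p0 -> p1 crosses.

    The height of the intersection point along each track becomes that
    slider's new normalized value; sliders not crossed are untouched.
    """
    (x0, y0), (x1, y1) = p0, p1
    crossed = []
    for s in sliders:
        if x1 == x0:
            continue  # a vertical stroke cannot cross a vertical track transversally
        t = (s.x - x0) / (x1 - x0)       # parameter where the stroke meets the track's x
        if 0.0 <= t <= 1.0:              # the segment actually spans this track
            y = y0 + t * (y1 - y0)       # intersection height
            if s.y_min <= y <= s.y_max:  # within the track's extent
                s.value = (y - s.y_min) / (s.y_max - s.y_min)
                crossed.append(s)
    return crossed
```

A horizontal stroke sets all crossed sliders to the same value, while a diagonal stroke sets a ramp of values in a single gesture — the orthogonal dimension doing the work the abstract describes.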

Novel application designs

CheatSheet: a contextual interactive memory aid for web applications BIBAFull-Text 241-248
  Laton Vermette; Parmit Chilana; Michael Terry; Adam Fourney; Ben Lafreniere; Travis Kerr
We present CheatSheet, a novel contextual interactive memory aid that helps users track their learning progress and refind information when working with complex web applications. Unlike most refinding systems, which rely on background monitoring of search sessions or browsing histories to automatically suggest content to users, our approach actively engages users in assessing and curating helpful content for later use. Users create application-specific notes using CheatSheet that capture the visual state of the application overlaid with any text or diagram annotations. Users can also extract snippets of relevant help and tips from other web resources (or other users) and link them to their application-specific CheatSheet. Instead of having to remember or scour through previous notes, bookmarks, or folders, users find that CheatSheet automatically retrieves recently added notes within the application's user interface. We discuss findings from formative interviews that we used to derive a set of design goals for an interactive memory aid, present the design and implementation of CheatSheet, and report on an observational user study that sheds light on the range of users' note-taking and refinding strategies that CheatSheet successfully supported.
Postulater: the design and evaluation of a time-delayed media sharing system BIBAFull-Text 249-256
  Daniel Hawkins; Carman Neustaedter; Jason Procyk
Personal media sharing of photos and video has become a spectacle of the immediate, yet it may come at the cost of meaning and significance. To explore this design space, we created a new tool, Postulater, that supports time-delayed photo and video sharing. Our goal was to understand how media sharing tools should be designed and how they might be used if users were able to select the delivery time explicitly. We conducted a six-week field evaluation of Postulater and found that participants valued time-based messages for sending reminders, sharing personal memories and reflections, affecting future time periods, and sending social greetings. Yet these messaging acts often elicited strong emotions from our participants. The implication is that time-based messaging systems should be designed cautiously, balancing the desire to send messages 'into the future' against the complex human emotions such practices can create.
Think different: how we completely changed the visualization of pseudo-pilot BIBAFull-Text 257-264
  Jean-Paul Imbert; Christophe Hurter; Yannick Jestin
During their initial and on-the-job training, air traffic controllers communicate with human operators called pseudo-pilots, who act as pilots for several simulated aircraft. With the expected increase in air traffic, a significantly higher number of aircraft will be handled during simulations. The existing tools and working methods of the pseudo-pilots do not allow them to handle more traffic without increasing the number of operators, and increasing the number of pseudo-pilots induces problems of cost, logistics, and collaboration (distribution of traffic and radio frequency congestion). This article describes the design process and improvement of a pseudo-pilot HMI, which led us to radically change both the visualization and the interaction. This user-centered process aims to optimize the visualization, the effectiveness of interaction, and the level of realism of the simulations. We also integrated voice recognition into the visualization in a seamless and robust way.

Using your body

The performance of indirect foot pointing using discrete taps and kicks while standing BIBAFull-Text 265-272
  William Saunders; Daniel Vogel
We investigate the performance of indirect foot pointing while standing using discrete taps and kicks. Two experiments show that left and right feet perform at similar levels, there is little difference in selection time across target configurations or directions, but targets with an angular size under 22.5° or radial size under 5cm should be avoided due to high error rates. There is a detectable advantage to tapping compared to kicking, but little practical difference. Although cursor feedback is optimal, we show that eyes-free foot pointing achieves an error rate of 27% for 45° angular targets. We translate our results into ten design guidelines and we illustrate their application by designing foot interaction techniques to control desktop applications at a standing desk.
Palpebrae superioris: exploring the design space of eyelid gestures BIBAFull-Text 273-280
  Ricardo Jota; Daniel Wigdor
In this paper, we explore the design space of eyelid gestures. We first present a framework for the design space based on the anatomy of the eye, human perception, and the complexity of the eyelid gesture. Based on this framework, we propose an algorithm to detect eyelid gestures with commodity cameras already present in laptops and mobile devices. We then populate the design space by demonstrating prototypes for three form factors: mobile devices, desktops, and horizontal surfaces. These prototypes demonstrate the breadth of eyelid gestures as an input modality. We follow these prototypes with a discussion of how eyelid gestures can contribute to an interactive environment, and conclude with insights, design recommendations, and limitations of the technique.
DynoFighter: exploring a physical activity incentive mechanism to support exergaming BIBAFull-Text 281-284
  Sergiu Veazanchin; Joseph J. LaViola Jr.
We present a study and game design exploring how motion-controlled physical activity levels can be used to support exergaming and improve the user experience in a traditional game genre. We developed DynoFighter, a two-player, full-body competitive fighting game in which a player's activity level influences their strength in the game, making it advantageous for players to exert themselves in order to win. We conducted a between-subjects experiment comparing DynoFighter with and without its physical activity incentive mechanism and examined players' heart rates and activity levels, as well as their overall user experience with the game. Our results show that although there were no significant differences in physical exertion levels, players reported significantly more immersion and enjoyment, as well as increased perceived exertion, with DynoFighter's physical activity incentive mechanism.