
Proceedings of the 2013 Conference on Graphics Interface

Fullname: Proceedings of the 2013 Graphics Interface Conference
Editors: Faramarz F. Samavati; Kirstie Hawkey
Location: Regina, Saskatchewan, Canada
Dates: 2013-May-29 to 2013-May-31
Publisher: ACM
Standard No: ISBN 978-1-4822-1680-6; hcibib: GI13
Papers: 29
Pages: 228
  1. Invited paper
  2. Understanding data
  3. Image-based rendering and tracking
  4. Input 1: pens and consistency
  5. Modeling and animation
  6. Health, wellness, and snippets
  7. Rendering
  8. Input 2: haptic and gestures

Invited paper

Innovations in visualization (pp. 1-8)
  Sheelagh Carpendale
While information is a crucial part of people's everyday lives, many people find that access to information via today's technologies is awkward, stressful, and overly intrusive. The problem is not the information itself, but rather its volume and the unwieldy ways currently provided for interacting with digital content. My research focus is to create interactive information visualizations that support people's everyday work and social practices as they interact with information. In this paper I provide an eclectic overview of my research, particularly featuring the research done by my PhD students.

Understanding data

A model of navigation for very large data views (pp. 9-16)
  Michael Glueck; Tovi Grossman; Daniel Wigdor
Existing user performance models of navigation for very large documents describe trends in movement time over the entire navigation task. However, these navigation tasks are in fact a combination of many sub-tasks, the details of which are lost when aggregated. Thus, existing models provide insight neither into the navigation choices implicit in a navigation task nor into how strategy ultimately affects user performance. Within the domain of data visualization, the very large documents we investigate are very large data views. We present an algorithmic decision process and descriptive performance model of zooming and panning navigation strategy, parameterized to account for speed-accuracy trade-offs, using common mouse-based interaction techniques. Our model is fitted and validated against empirical data, and used to evaluate proposed optimal strategies. Further, we use our model to inform interaction design considerations for achieving performant navigation techniques for very large data views.
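   To make the notion of a navigation strategy concrete, the sketch below implements a minimal one-dimensional zoom-then-pan decision rule. It is a hypothetical illustration only; the function, thresholds, and doubling factors are our assumptions, not the authors' fitted model:
```python
def navigation_step(view_center, view_width, target_center, target_width):
    """One step of a hypothetical zoom/pan strategy: zoom out until the
    target is visible, pan to center it, then zoom in to the target scale."""
    offset = target_center - view_center
    if abs(offset) > view_width / 2:
        return ("zoom_out", view_width * 2)   # double the visible range
    if abs(offset) > target_width / 2:
        return ("pan", offset)                # recenter on the target
    if view_width > target_width:
        return ("zoom_in", view_width / 2)    # halve the visible range
    return ("done", 0)
```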
FacetClouds: exploring tag clouds for multi-dimensional data (pp. 17-24)
  Manuela Waldner; Johann Schrammel; Michael Klein; Katrín Kristjánsdóttir; Dominik Unger; Manfred Tscheligi
Tag clouds are simple yet very widespread representations of how often certain words appear in a collection. In conventional tag clouds, only a single visual text variable is actively controlled: the tags' font size. Previous work has demonstrated that font size is indeed the most influential visual text variable. However, there are other variables, such as text color, font style and tag orientation, that could be manipulated to encode additional data dimensions.
   FacetClouds manipulate intrinsic visual text variables to encode multiple data dimensions within a single tag cloud. We conducted a series of experiments to detect the most appropriate visual text variables for encoding nominal and ordinal values in a cloud with tags of varying font size. Results show that color is the most expressive variable for both data types, and that a combination of tag rotation and background color range leads to the best overall performance when showing multiple data dimensions in a single tag cloud.
The effects of display fidelity, visual complexity, and task scope on spatial understanding of 3D graphs (pp. 25-32)
  Felipe Bacim; Eric Ragan; Siroberto Scerbo; Nicholas F. Polys; Mehdi Setareh; Brett D. Jones
Immersive display features can improve performance for tasks involving 3D, but determining which types of spatial analysis tasks are affected by immersive display features in different applications is not simple. This research adds to the knowledge of how the level of display fidelity (i.e., the realism provided by the display output) affects task performance for a variety of 3D spatial understanding tasks. In this study, we control visual display fidelity through the combination of stereoscopy, head-based rendering, and display area, and study spatial analysis performance with 3D graphs. Through a controlled study, we evaluated the relationships among display fidelity, visual complexity, task scope, and a user's personal spatial ability. Over a variety of task types, our results show significantly better overall task performance with higher display fidelity. We also found that visual complexity and task scope affect speed, with higher levels of either type of complexity leading to slower performance. These results show the importance of considering multiple factors when assessing the overall difficulty and complexity of a spatial task, and they suggest that visual clutter has a greater impact on speed than on correctness. Further, the study of different task types suggests that enhanced virtual reality displays offer more benefits for spatial search and fine-grained component distinction, but may provide little gain for sense-of-scale or size-comparison tasks.
Evaluating the readability of extended filter/flow graphs (pp. 33-36)
  Florian Haag; Steffen Lohmann; Thomas Ertl
The filter/flow model is a graph-based query visualization capable of representing arbitrary Boolean expressions. However, the resulting graphs quickly become large and hard to handle when representing complex search queries. We developed an extended filter/flow model that allows the display of complex queries in a more compact form. This paper reports on a user study we conducted to evaluate the readability of the extended model. The results indicate that it is as readable as the basic one and slightly preferred by users. Therefore, we regard the extended model as a good alternative to the basic one, especially when it comes to the visualization of complex queries.

Image-based rendering and tracking

Non-linear normalized entropy based exposure blending (pp. 37-44)
  Neil D. B. Bruce
In this paper we consider the problem of dynamic range compression from multiple exposures in the absence of raw images, radiometric response functions, or irradiance information. This is achieved in a rapid and relatively simple fashion by merging image content across the provided exposures. The premise of the proposal is that one important goal of tone mapping is to make visible any contrast appearing across a dynamic range that exceeds display capabilities, while preserving the nature of the image structure and lighting, and avoiding the introduction of discontinuities in illumination or image artifacts. The strategy adopted for this purpose appeals to the local entropy evident in each exposure, and employs cross-exposure normalization of entropy with a non-linearity characterized by a single parameter providing a trade-off between detail and smoothness of the result.
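   The core weighting scheme lends itself to a compact sketch. The following is a rough, unofficial rendering of the idea; the window size, histogram binning, and the exponent alpha (standing in for the paper's single trade-off parameter) are all our assumptions:
```python
import numpy as np
from scipy.ndimage import generic_filter

def local_entropy(gray, size=9):
    """Shannon entropy of intensity values in a sliding window."""
    def ent(w):
        hist, _ = np.histogram(w, bins=32, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    return generic_filter(gray, ent, size=size)

def blend_exposures(exposures, alpha=2.0):
    """Blend grayscale exposures (float arrays in [0,1]) with per-pixel
    weights given by local entropy, normalized across exposures and
    sharpened by a single non-linearity parameter alpha."""
    weights = np.stack([local_entropy(e) ** alpha for e in exposures])
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * np.stack(exposures)).sum(axis=0)
```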
Simulated bidirectional texture functions with silhouette details (pp. 45-54)
  Mohamed Yessine Yengui; Pierre Poulin
The representation of material appearance requires an understanding of the underlying structures of real surfaces, light-material interaction, and the human visual system. The Bidirectional Texture Function (BTF) describes real-world materials as a spatial variation of reflectance that depends on view and light directions. Real BTFs integrate all optical phenomena occurring in a complex material, such as self-occlusions, interreflections, and subsurface scattering, independently of the mesoscopic surface geometry.
   In this paper, we revisit BTF simulation to improve the modeling of surface appearance. In recent years, computer graphics has achieved very good levels of image realism for the geometrical appearance of 3D scenes. It is therefore logical to think that using this technology to simulate visual effects at the level of the mesoscopic geometry should provide even more realistic simulated BTFs. Our ultimate goal is thus to produce material appearance as rich as, and as close as possible to, that of reality, relying on the intuition and skills of artists and on the rendering capacity of today's computer graphics.
   We have designed a virtual parallel-projection/directional incident illumination framework that exploits rendering coherency in order to produce BTFs of complex mesoscopic geometry, in reasonable rendering times and with good compression ratios, even at grazing angles. Our current framework can efficiently simulate local interreflection effects within mesoscopic structures, as well as effects due to transparency, silhouettes, and surface curvature. Our general simulation framework should also prove extensible to several other visual phenomena.
Efficient reconstruction, decomposition and editing for spatially-varying reflectance data (pp. 55-62)
  Yong Hu; Shan Wang; Yue Qi
We present a new method for modeling real-world surface reflectance, described with non-parametric spatially-varying bidirectional reflectance distribution functions (SVBRDFs). Our method seeks to achieve high reconstruction accuracy, compactness, and "editability" of representation while speeding up the SVBRDF modeling process. For a planar surface, we 1) design a capturing device to acquire reflectance samples at dense surface locations; 2) propose a Laplacian-based angular interpolation scheme for a 2D slice of the BRDF at a given surface location, and then a Kernel Nyström method for SVBRDF data matrix reconstruction; 3) propose a practical algorithm to extract linearly independent basis BRDFs, and to calculate blending weights by projecting the reconstructed reflectance onto these bases. Results demonstrate that our approach models real-world reflectance with both high accuracy and high visual fidelity for real-time virtual environment rendering.
Dynamics based 3D skeletal hand tracking (pp. 63-70)
  Stan Melax; Leonid Keselman; Sterling Orsten
Tracking the full skeletal pose of the hands and fingers is a challenging problem with a plethora of applications in user interaction. Existing techniques either require wearable hardware, restrict user pose, or demand significant computational resources. This research explores a new approach to tracking hands, or any articulated model, by using an augmented rigid body simulation. This allows us to phrase 3D object tracking as a linear complementarity problem with a well-defined solution. Based on a depth sensor's samples, the system generates constraints that limit motion orthogonal to the rigid body model's surface. These constraints, along with prior motion, collision/contact constraints, and joint mechanics, are resolved with a projected Gauss-Seidel solver. Due to camera noise properties and attachment errors, the numerous surface constraints are impulse-capped to avoid overpowering the mechanical constraints. To improve tracking accuracy, multiple simulations are spawned at each frame and fed a variety of heuristics, constraints, and poses. A 3D error metric selects the best-fit simulation, helping the system handle challenging hand motions. This approach enables real-time, robust, and accurate 3D skeletal tracking of a user's hand on a variety of depth cameras, while utilizing only a single x86 CPU core for processing.
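   The projected Gauss-Seidel step at the heart of such solvers is standard in rigid body simulation; a textbook sketch (not the paper's implementation) that clamps each impulse to a box, as impulse capping requires, is:
```python
import numpy as np

def projected_gauss_seidel(A, b, lo, hi, iters=30):
    """Generic projected Gauss-Seidel solve of the boxed linear
    complementarity problem A x + b, clamping each impulse x[i] to
    [lo[i], hi[i]]. Assumes a nonzero (ideally positive) diagonal,
    as produced by typical constraint formulations."""
    x = np.zeros(len(b))
    for _ in range(iters):
        for i in range(len(b)):
            r = b[i] + A[i] @ x - A[i, i] * x[i]   # residual excluding x[i]
            x[i] = np.clip(-r / A[i, i], lo[i], hi[i])
    return x
```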

Input 1: pens and consistency

Motion and context sensing techniques for pen computing (pp. 71-78)
  Ken Hinckley; Xiang 'Anthony' Chen; Hrvoje Benko
We explore techniques for a slender and untethered stylus prototype enhanced with a full suite of inertial sensors (three-axis accelerometer, gyroscope, and magnetometer). We present a taxonomy of enhanced stylus input techniques and consider a number of novel possibilities that combine motion sensors with pen stroke and touchscreen inputs on a pen + touch slate. These inertial sensors enable motion-gesture inputs, as well as sensing the context of how the user is holding or using the stylus, even when the pen is not in contact with the tablet screen. Our initial results suggest that sensor-enhanced stylus input offers a potentially rich modality to augment interaction with slate computers.
User perceptions of drawing logic diagrams with pen-centric user interfaces (pp. 79-86)
  Bo Kang; Jared N. Bott; Joseph J. LaViola, Jr.
Researchers hypothesize that pen-based interfaces are the input method of choice for structured 2D languages, as they are natural for users. In our research we asked whether naturalness (similarity to pen and paper) is more important than speed of entry and ease of use, by performing a study comparing interfaces for creating logic diagrams. We compared a Wizard of Oz based sketch interface with 100% recognition accuracy, a drag-and-drop interface, and a hybrid interface combining features from sketch and drag-and-drop. Eighteen college students with logic gate diagram backgrounds participated in the study. We found that participants finished fastest with the hybrid interface, but ten out of eighteen participants felt that the sketch interface was fastest. Ten participants ranked the sketch interface easiest to use, while the hybrid interface was also rated highly on ease-of-use metrics. Participants showed a significant inclination towards the sketch interface as being natural. While the hybrid and sketch interfaces were ranked best for overall preference, neither was consistently ranked above the other. Even though the hybrid interface was empirically faster, user preferences for the interfaces varied, with many participants favoring the sketch interface. Finally, we tested for correlations between the overall ranking of the interfaces and the other rankings, and found the strongest correlation to be with ease of use. Based on our results, we believe that combining sketching with other interface paradigms could lead to better interfaces for structured 2D languages.
Understanding the consistency of users' pen and finger stroke gesture articulation (pp. 87-94)
  Lisa Anthony; Radu-Daniel Vatavu; Jacob O. Wobbrock
Little work has been done on understanding the articulation patterns of users' touch and surface gestures, despite the importance of such knowledge to inform the design of gesture recognizers and gesture sets for different applications. We report a methodology to analyze user consistency in gesture production, both between-users and within-user, by employing articulation features such as stroke type, stroke direction, and stroke ordering, and by measuring variations in execution with geometric and kinematic gesture descriptors. We report results on four gesture datasets (40,305 samples of 63 gesture types by 113 users). We find a high degree of consistency within-users (.91), lower consistency between-users (.55), higher consistency for certain gestures (e.g., less geometrically complex shapes are more consistent than complex ones), and a loglinear relationship between number of strokes and consistency. We highlight implications of our results to help designers create better surface gesture interfaces informed by user behavior.
Effects of hand drift while typing on touchscreens (pp. 95-98)
  Frank Chun Yat Li; Leah Findlater; Khai N. Truong
On a touchscreen keyboard, it can be difficult to type continuously without frequently looking at the keys. One factor contributing to this difficulty is hand drift, where a user's hands gradually misalign with the touchscreen keyboard due to limited tactile feedback. Although the phenomenon is intuitive, empirical data describing the effect of hand drift remain scarce. A formal understanding of it can provide insights for improving soft keyboards. To formally quantify the degree (magnitude and direction) of hand drift, we conducted a 3-session study with 13 participants. We measured hand drift with two typing interfaces: a visible conventional keyboard and an invisible adaptive keyboard. To expose drift patterns, both keyboards used relaxed letter disambiguation to allow for unconstrained movement. Findings show that hand drift occurred in both interfaces, at an average rate of 0.25 mm/min on the conventional keyboard and 1.32 mm/min on the adaptive keyboard. Participants were also more likely to drift up and/or left rather than down or right.
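   As an aside, a drift rate like those reported can be estimated as the least-squares slope of touch offsets over time. A hypothetical analysis sketch (not the authors' exact procedure):
```python
import numpy as np

def drift_rate_mm_per_min(times_s, offsets_mm):
    """Least-squares slope of touch offset (mm, relative to the intended
    key center) over time (s), reported in mm/min."""
    t = np.asarray(times_s) / 60.0          # minutes
    d = np.asarray(offsets_mm)
    slope, _intercept = np.polyfit(t, d, 1) # mm per minute
    return slope
```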

Modeling and animation

ACM: atlas of connectivity maps for semiregular models (pp. 99-107)
  Ali Mahdavi Amiri; Faramarz Samavati
Semiregular models are an important subset of models in computer graphics. They are typically obtained by applying repetitive regular refinements to an initial arbitrary model. As a result, their connectivity is largely regular due to these refinement operations. Although data structures exist for regular and irregular models, a data structure designed to take advantage of this semiregularity is desirable. In this paper, we introduce such a data structure, called an atlas of connectivity maps, for semiregular models resulting from arbitrary refinements. This atlas maps the connectivity information of vertices and faces onto separate 2D domains called connectivity maps. The connectivity information between adjacent connectivity maps is determined by a linear transformation between their 2D domains. We also demonstrate the effectiveness of our data structure in subdivision and multiresolution applications.
Least-squares Hermite radial basis functions implicits with adaptive sampling (pp. 109-116)
  Harlen Costa Batagelo; João Paulo Gois
We investigate the use of Hermite Radial Basis Functions (HRBF) Implicits with least squares for the implicit surface reconstruction of scattered first-order Hermitian data. Instead of interpolating all pairs of point-normals, we select a small subset of point-normals as centers of the HRBF Implicits while considering all pairs as least-squares constraints. Centers are adaptively sampled via a novel greedy algorithm that takes into account Hermitian data and distances between points. This approach produces sets of centers that are globally well distributed and preserves local features. We show that this yields accurate surface reconstructions with small sets of centers.
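   The greedy selection idea can be sketched as follows; the scoring rule (nearest-center distance plus a normal-disagreement penalty weighted by w_normal) is our assumption, standing in for the authors' exact criterion:
```python
import numpy as np

def greedy_centers(points, normals, k, w_normal=0.5):
    """Greedily pick k HRBF centers from (n,3) points with unit (n,3)
    normals: repeatedly take the sample least covered by the centers
    chosen so far, scoring candidates by distance to the nearest center
    plus a penalty for normal disagreement."""
    chosen = [0]                                   # seed with the first sample
    for _ in range(k - 1):
        d_pos = np.min(np.linalg.norm(points[:, None] - points[chosen], axis=2), axis=1)
        d_nrm = np.min(1.0 - normals @ normals[chosen].T, axis=1)
        score = d_pos + w_normal * d_nrm
        score[chosen] = -np.inf                    # never re-pick a center
        chosen.append(int(np.argmax(score)))
    return chosen
```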
Local fairing with local inverse (pp. 117-124)
  Javad Sadeghi; Faramarz Samavati
Local fairing techniques are extensively used in the geometry processing of curves and surfaces. They also play an important role in multiresolution shape editing and synthesis applications. However, due to the inter-dependency of the vertices after applying current fairing techniques, their inverses are not local. A local fairing operation with a local inverse provides a well-defined relationship between the smoothed vertices and the initial vertices. This paper introduces a new fairing operation for curves and surfaces that is local and smoothing, yet has a local inverse. In the curve domain, we find a class of banded smoothing matrices with banded inverses. Then, using the geometric interpretation of the corresponding local operation, this class is extended to surfaces. We discuss the advantages of using this new fairing operation in different applications. The resulting operation is also used to derive novel subdivision schemes with well-defined reverse subdivisions.
Target particle control of smoke simulation (pp. 125-132)
  Jamie Madill; David Mould
User control over fluid simulations is a long-standing research problem in computer graphics. Applications in games and films often require recognizable creatures or objects formed from smoke, water, or flame. This paper describes a two-layer approach to the problem, in which a bulk velocity drives a particle system towards a target distribution, while simultaneously a vortex particle simulation adds recognizable fluid motion.
   A bulk velocity field is obtained by distributing target particles within a mesh, then matching control particles with target particles; control particles are given a trajectory bringing them to their targets, and a field is obtained by interpolating values from the control particles. A detail velocity field is obtained by traditional vortex particle simulation. We render the final particle system using stochastic shadow mapping. We spend some effort optimizing our processes for speed, obtaining simulations at interactive or near-interactive rates: from 70 to 500 milliseconds per frame depending on the configuration.
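   The matching step can be sketched with a greedy nearest-neighbor assignment; the paper's actual matching and field interpolation may differ, and the gain factor and greedy strategy below are our assumptions:
```python
import numpy as np
from scipy.spatial import cKDTree

def bulk_velocities(control_pos, target_pos, gain=1.0):
    """Greedily match each control particle to its nearest unclaimed
    target and return straight-line velocities toward the matches
    (greedy nearest-neighbor rather than an optimal assignment)."""
    control_pos = np.asarray(control_pos, dtype=float)
    target_pos = np.asarray(target_pos, dtype=float)
    tree = cKDTree(target_pos)
    claimed = set()
    vel = np.zeros_like(control_pos)
    for i, p in enumerate(control_pos):
        for k in range(1, len(target_pos) + 1):  # widen search until a free target appears
            idx = np.atleast_1d(tree.query(p, k=k)[1])
            free = [j for j in idx if j not in claimed]
            if free:
                claimed.add(free[0])
                vel[i] = gain * (target_pos[free[0]] - p)
                break
    return vel
```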

Health, wellness, and snippets

Is movement better?: comparing sedentary and motion-based game controls for older adults (pp. 133-140)
  Kathrin M. Gerling; Kristen K. Dergousoff; Regan L. Mandryk
Providing cognitive and physical stimulation for older adults is critical for their well-being. Video games offer the opportunity of engaging seniors, and research has shown a variety of positive effects of motion-based video games for older adults. However, little is known about the suitability of motion-based game controls for older adults and how their use is affected by age-related changes. In this paper, we present a study evaluating sedentary and motion-based game controls with a focus on differences between younger and older adults. Our results show that older adults can apply motion-based game controls efficiently, and that they enjoy motion-based interaction. We present design implications based on our study, and demonstrate how our findings can be applied both to motion-based game design and to general interaction design for older adults.
Adaptive difficulty in exergames for Parkinson's disease patients (pp. 141-148)
  Jan Smeddinck; Sandra Siegel; Marc Herrlich
Parkinson's disease (PD) patients can benefit from regular physical exercises, which may ease their symptoms and slow the progression of the disease. Motion-based video games can provide motivation to carry out the often repetitive exercises, as long as they establish a suitable balance between the level of difficulty and each player's skills. We present an adaptive game system concept based on separate difficulty parameters for speed, accuracy, and range of motion. We then describe the heuristic performance-evaluation and adjustment mechanisms in a prototypical implementation, which was applied in a case study with three PD patients over a period of three weeks. Results indicate that the system facilitated a challenging yet suitable game experience, and a detailed analysis of the results informed a number of follow-up research questions.
Personal informatics in chronic illness management (pp. 149-156)
  Haley MacLeod; Anthony Tang; Sheelagh Carpendale
Many people with chronic illness suffer from debilitating symptoms or episodes that inhibit normal day-to-day function. Pervasive tools offer the possibility to help manage these conditions, particularly by helping people understand their conditions. But, it is unclear how to design these tools, as prior designs have focused on effortful tracking and many see those tools as a burden to use. We report here on an interview study with 12 individuals with chronic illnesses who collect personal data. We learn that these people are motivated through self-discovery and curiosity. We explore how these concepts may support the design of tools that engage curiosity and encourage self-discovery, rather than emphasize the behaviour change aspect of chronic illness management.
Improving form-based data entry with image snippets (pp. 157-164)
  Nicola Dell; Nathan Breit; Jacob O. Wobbrock; Gaetano Borriello
This paper describes Snippets, a novel method for improving computerized data entry from paper forms. Using computer vision techniques, Snippets segments an image of the form into small snippets that each contain the content for a single form field. Data entry is performed by looking at the snippets on the screen and typing values directly on the same screen. We evaluated Snippets through a controlled user study in Seattle, Washington, USA, comparing the performance of Snippets on desktop and mobile platforms to the baseline method of reading the form and manually entering the data. Our results show that Snippets improved the speed of data entry by an average of 28.3% on the desktop platform and 10.8% on the mobile platform without any detectable loss of accuracy. In addition, findings from a preliminary field study with five participants in Bangalore, India support these empirical results. We conclude that Snippets is an efficient and practical method that could be widely used to aid data entry from paper forms.

Rendering

A micro 64-tree structure for accelerating ray tracing on a GPU (pp. 165-172)
  Xin Liu; Jon G. Rokne
The uniform grid is a well-known acceleration structure for ray tracing. It is fast to build, but slow to traverse. In this paper, we propose a novel micro 64-tree structure to speed up grid traversals on a GPU. A micro 64-tree is a compact 64-way full tree that summarizes the occupancy of an underlying uniform grid in a hierarchy. A node of the tree stands for a voxel, whose occupancy is represented by a single bit. A node is subdivided into a 64-subgrid that is stored in a 64-bit word. The micro 64-tree is built on top of a uniform grid. We improve the GPU grid construction algorithm by computing precise triangle-cell intersections and precluding non-overlapping triangle-cell pairs before sorting. The micro 64-tree is then built bottom-up from the uniform grid by parallel reductions. The top levels of the micro 64-tree are pre-loaded into the shared memory of the GPU, which supports on-chip traversals across the coarse levels. The traversal algorithm navigates the ray through the 64-subgrids at different levels, with a concise context for each level stored in the GPU's registers to facilitate vertical moves. With small overheads in memory and building time, the micro 64-tree reduces traversal steps, decreases memory bandwidth consumption, and hence significantly improves the efficiency of ray tracing on a GPU.
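   One bottom-up reduction step of such a hierarchy is easy to sketch on the CPU; the bit layout and data types below are our assumptions, not the paper's exact format:
```python
import numpy as np

def pack_level(occ):
    """Pack a boolean voxel grid (dimensions multiples of 4) into 64-bit
    words, one per 4x4x4 block: bit b encodes the occupancy of cell
    (b>>4, (b>>2)&3, b&3) within its block. Returns the packed words and
    the occupancy of the next coarser level, i.e. one bottom-up reduction
    step of a micro-64-tree-style hierarchy."""
    nx, ny, nz = (s // 4 for s in occ.shape)
    words = np.zeros((nx, ny, nz), dtype=np.uint64)
    for b in range(64):
        x, y, z = b >> 4, (b >> 2) & 3, b & 3       # cell within the block
        words |= occ[x::4, y::4, z::4].astype(np.uint64) << np.uint64(b)
    return words, words != 0                        # coarser level: any bit set
```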
Partition of unity parametrics for texture synthesis (pp. 173-179)
  Jack Caron; David Mould
Partition of unity parametrics (PUPs) are a recent framework designed for geometric modeling. We propose employing PUPs for procedural texture synthesis, taking advantage of the framework's guarantees of high continuity and local support. Using PUPs to interpolate among data values distributed through the plane, the problem of texture synthesis can be approached from the perspective of point placement and attribute assignment. We present several alternative mechanisms for point distribution and demonstrate how the system is able to produce a variety of distinct classes of texture, including analogs to cellular texture, Perlin noise, and progressively-variant textures.
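   For intuition, a generic partition-of-unity blend of scattered data values in the plane looks like the following; the Wendland-style compactly supported weight is a stand-in of our choosing, not the PUPs basis itself:
```python
import numpy as np

def pu_blend(x, y, centers, values, radius):
    """Evaluate a partition-of-unity blend of scattered values at (x, y):
    each of the (m,2) centers carries a compactly supported weight, and
    weights are normalized to sum to one wherever any center covers the
    point, giving local support and smooth blending."""
    d = np.hypot(x - centers[:, 0], y - centers[:, 1]) / radius
    w = np.where(d < 1.0, (1.0 - d) ** 4 * (4.0 * d + 1.0), 0.0)  # local support
    s = w.sum()
    return (w @ values) / s if s > 0 else 0.0
```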
Structure and aesthetics in non-photorealistic images (pp. 181-188)
  Hua Li; David Mould; Jim Davies
Non-photorealistic rendering (NPR) has been used to produce stylized images, e.g., in a stippled or painted style. To evaluate NPR algorithms, similarity measurements used in image processing have been employed to assess the quality of rendered images. However, there is no standard objective measurement of stylization quality. In many cases, raw side-by-side comparisons are used to demonstrate improvements in aesthetic quality. This means of comparison often fails to be persuasive due to the small size of demonstrations and the subjective choice of images. We conducted a user study and examined responses of 30 subjects in order to determine two things: whether there exists a relationship between the structural quality and aesthetic quality of non-colored non-photorealistic images; and whether the choice of images matters for side-by-side comparisons.
   Our study revealed a statistically significant correlation between the aesthetic and structure ratings given by participants: increases in structural rating coincided with increases in aesthetic rating. Furthermore, participants' ratings of structure and aesthetics were influenced by image content: that is, the choice of input images influenced the results of side-by-side comparisons.
Rendering in shift-invariant spaces (pp. 189-196)
  Usman R. Alim
We present a novel image representation method based on shift-invariant spaces. Unlike existing rendering methods, our proposed approach consists of two steps: an analog acquisition step that traces rays through the scene, and a subsequent digital processing step that filters the intermediate digital image to obtain the coefficients of a minimum-error continuous image approximation. Our approach can be easily incorporated in existing renderers with very little change and with little-to-no computational overhead. Additionally, we introduce the necessary tools needed to analyze the smoothing and post-aliasing properties of the minimum-error approximations.
   We provide examples of spaces -- generated by the uniform B-splines -- that can be readily used in conjunction with the two-dimensional Cartesian grid. Our experimental results demonstrate that minimum-error approximations significantly enhance image quality by preserving high-frequency details that are usually smoothed out by existing image anti-aliasing approaches.
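   In the standard shift-invariant-space setting, the minimum-error approximation is the orthogonal projection onto the space spanned by integer shifts of a generator; a textbook statement of the framework (not the paper's specific derivation) is:
```latex
% Space spanned by integer shifts of a generator \varphi (e.g., a uniform
% B-spline) on the 2D Cartesian grid:
\[
  f_{\mathrm{app}}(\mathbf{x}) \;=\; \sum_{\mathbf{k} \in \mathbb{Z}^2}
      c[\mathbf{k}]\,\varphi(\mathbf{x}-\mathbf{k}),
  \qquad
  c[\mathbf{k}] \;=\; \big\langle f,\ \tilde{\varphi}(\cdot-\mathbf{k}) \big\rangle,
\]
% where \tilde{\varphi} is the dual generator; the inner products reduce to
% digitally filtering the sampled (ray-traced) image, which is the
% post-processing step described above.
```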

Input 2: haptic and gestures

Understanding touch selection accuracy on flat and hemispherical deformable surfaces (pp. 197-204)
  Felipe Bacim; Mike Sinclair; Hrvoje Benko
Touch technology is rapidly evolving, and soon deformable, movable, and malleable touch interfaces may be part of everyday computing. While there has been a lot of work on understanding touch interactions on flat surfaces, as well as recent work on pointing on curved surfaces, little is known about how surface deformation affects touch interactions. This paper presents a study of how different features of deformable surfaces affect touch selection accuracy, both in terms of touch position and in terms of control of the deformation distance, i.e., the distance traveled by the finger when deforming the surface. We conducted three separate user studies, investigating how touch interactions on a deformable surface are affected not only by the compliant force feedback generated by the elastic surface, but also by the use of visual feedback, the use of a tactile delimiter to indicate the maximum deformation distance, and the use of a hemispherical surface shape. The results indicate that, when provided with visual feedback, users can achieve sub-millimeter precision for deformation distance. In addition, without visual feedback, users tend to overestimate deformation distance, especially in conditions that require less deformation and therefore provide less surface tension. While the use of a tactile delimiter to indicate maximum deformation improves distance estimation accuracy, it does not eliminate overestimation. Finally, the shape of the surface also affects touch selection accuracy, for both touch position and deformation distance.
It's alive!: exploring the design space of a gesturing phone (pp. 205-212)
  Jessica Q. Dawson; Oliver S. Schneider; Joel Ferstay; Dereck Toker; Juliette Link; Shathel Haddad; Karon MacLean
Recent technical developments with flexible display materials have diversified the possible forms of near-future handheld devices. We envision smartphones that will deploy these materials for physical, device-originated gestural display as expressive channels for user communication. Over several iterations, we designed both human-actuated and mechanized prototypes that animate the standard block-like smartphone form-factor with evocative life-like gestures. We present three basic prototypes developed through an exploratory study, and a medium fidelity prototype developed in a second study, which enact a combination of visual and haptic gestural displays including breathing, curling, crawling, ears, and vibration. Through two evaluations we find that (a) users are receptive to the use of gestural displays to enrich their communications; and (b) smartphone-embodied gestural displays are capable of expressing both common notifications (e.g., incoming calls) and emotional content through the dimensions of arousal and, to a small extent, valence. Finally, we conclude with several guidelines for the design of gestural mobile devices.
Haptic target acquisition to enable spatial gestures in nonvisual displays (pp. 213-219)
  Alexander Fiannaca; Tony Morelli; Eelke Folmer
Nonvisual natural user interfaces can facilitate gesture-based interaction without having to rely on a physical display. Consequently, this may significantly increase the available interaction space on mobile devices, where screen real estate is limited. Interacting with invisible objects is challenging, though, as such techniques do not provide any spatial feedback but rely entirely on users' visuospatial memory. This paper presents an interaction technique that appropriates a user's arm, using haptic feedback, to point out the location of nonvisual objects, thereby allowing for spatial interaction with them. User studies evaluate the effectiveness of two different single-arm target-scanning strategies for selecting an object in 3D and two bimanual target-scanning strategies for selecting an object in 2D. Potentially useful applications of our techniques are outlined.
Extending the vocabulary of touch events with ThumbRock (pp. 221-228)
  David Bonnet; Caroline Appert; Michel Beaudouin-Lafon
Compared with mouse-based interaction on a desktop interface, touch-based interaction on a mobile device is quite limited: most applications only support tapping and dragging to perform simple gestures. Finger rolling provides an alternative to tapping, but existing recognition processes rely on per-user calibration, explicit delimiters, or extra hardware, making them difficult to integrate into current touch-based mobile devices. This paper introduces ThumbRock, a ready-to-use micro gesture that consists of rolling the thumb back and forth on the touchscreen. Our algorithm recognizes ThumbRocks with more than 96% accuracy, without calibration or explicit delimiters, by analyzing the data provided by the touchscreen at low computational cost. The full trace of the gesture is analyzed incrementally to ensure compatibility with other events and to support real-time feedback. This also makes it possible to create a continuous control space, as we illustrate with our MicroSlider, a 1D slider manipulated with thumb-rolling gestures.
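   For intuition only, a toy detector for the back-and-forth shape of the gesture, applied to the horizontal trace of the contact point, might look like this; the thresholds are hypothetical, and the actual recognizer works incrementally on the full trace rather than offline:
```python
def looks_like_thumb_rock(xs, min_amplitude=8.0):
    """Toy back-and-forth detector on a list of horizontal contact
    positions (pixels): require one excursion away from the start of
    sufficient amplitude followed by a return near the start, i.e., a
    single direction reversal."""
    if len(xs) < 3:
        return False
    start = xs[0]
    extreme = max(xs, key=lambda v: abs(v - start))
    went_out = abs(extreme - start) >= min_amplitude
    came_back = abs(xs[-1] - start) <= min_amplitude / 2
    return went_out and came_back
```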