
Proceedings of the 2009 Conference on Graphics Interface

Fullname: Proceedings of the 2009 Conference on Graphics Interface
Editors: Amy Gooch; Melanie Tory
Location: Kelowna, British Columbia, Canada
Dates: 2009-May-25 to 2009-May-27
Publisher: Canadian Information Processing Society
Standard No: ISSN 0713-5424; ISBN 1-56881-470-4, 978-1-56881-470-4
Papers: 38
Pages: 257
  1. Awards
  2. Invited speakers
  3. Geometry processing
  4. Surfaces and meshes
  5. Image editing: depth, focus, and balance
  6. Rendering: moonbeams, mist, and iridescent gems
  7. Graphs, paths, and rigs
  8. Best student papers
  9. Haptics and novel interaction techniques
  10. Pen and touch interfaces
  11. Contextual design
  12. HCI notes
  13. Pointing, selection, and text input

Awards

Michael A. J. Sweeney Award: Graphics 2009 Award Winner "Parallax Photography: Creating 3D Cinematic Effects from Stills" by Ke Zheng, Alex Colburn, Aseem Agarwala, Maneesh Agrawala, Brian Curless, David Salesin, Michael Cohen BIBFull-Text 01
 
Michael A. J. Sweeney Award: HCI 2009 Award Winner "Determining the Benefits of Direct-Touch, Bimanual, and Multifinger Input on a Multitouch Workstation" by Kenrick Kin, Maneesh Agrawala, Tony DeRose BIBFull-Text 02
 
Alain Fournier Award 2008: Samuel Hasinoff, University of Toronto, Canada, CHCCS/SCDHM Alain Fournier Award Recipient 2008 BIBFull-Text 03
 
Achievement Award 2009: Przemyslaw Prusinkiewicz, University of Calgary, Canada, CHCCS/SCDHM Achievement Award Recipient 2009 BIBFull-Text 04
 

Invited speakers

Semantic graphics for more effective visual communication BIBAFull-Text 05
  Vidya Setlur
Computers are becoming faster, smaller, and more interconnected, creating a shift in their primary function from computation to communication. This trend is exemplified by ubiquitous devices such as mobile phones with cameras, personal digital assistants with video, and information displays in automobiles. As communication devices and viewing situations become more plentiful, we need imagery that facilitates visual communication across a wide range of display devices. In addition, producing effective and expressive visual content currently requires considerable artistic skill and can consume days. There is a growing need to develop new techniques and user interfaces that enhance visual communication while making it fast and easy to generate compelling content. New algorithms in semantic graphics, which combine concepts and methods from visual art, perceptual psychology, information processing, and cognitive science, help users create, understand, and interpret computer imagery. In this talk, Vidya Setlur will present the use of semantic graphics for various information visualization goals.
Graphics hardware & GPU computing: past, present, and future BIBAFull-Text 06
  David Luebke
Modern GPUs have emerged as the world's most successful parallel architecture. GPUs provide a level of massively parallel computation that was once the preserve of supercomputers like the MasPar and Connection Machine. For example, NVIDIA's GeForce GTX 280 is a fully programmable, massively multithreaded chip with up to 240 cores and 30,720 concurrent threads, capable of performing up to a trillion operations per second. The raw computational horsepower of these chips has expanded their reach well beyond graphics. Today's GPUs not only render video game frames, they also accelerate physics computations, video transcoding, image processing, astrophysics, protein folding, seismic exploration, computational finance, radio astronomy -- the list goes on and on. Enabled by platforms like the CUDA architecture, which provides a scalable programming model, researchers across science and engineering are accelerating applications in their discipline by up to two orders of magnitude. These success stories, and the tremendous scientific and market opportunities they open up, imply a new and diverse set of workloads that in turn carry implications for the evolution of future GPU architectures.
   In this talk I will discuss the evolution of GPUs from fixed-function graphics accelerators to general-purpose massively parallel processors. I will briefly motivate GPU computing and explore the transition it represents in massively parallel computing: from the domain of supercomputers to that of commodity "manycore" hardware available to all. I will discuss the goals, implications, and key abstractions of the CUDA architecture. Finally I will close with a discussion of future workloads in games, high-performance computing, and consumer applications, and their implications for future GPU architectures.

Geometry processing

Preserving sharp edges in geometry images BIBAFull-Text 1-6
  Mathieu Gauthier; Pierre Poulin
A geometry image offers a simple and compact way of encoding the geometry of a surface and its implicit connectivity in an image-like data structure. It has been shown to be useful in multiple applications because of its suitability for efficient hardware rendering, level of detail, filtering, etc. Most existing algorithms generate geometry images by parameterizing the surface onto a domain, and by performing a regular resampling in this domain. Unfortunately, this regular resampling fails to capture sharp features present on the surface. In this paper, we propose to slightly alter the grid to align sample positions with corners and sharp edges in the geometric model. While doing so, our goal is to keep the resulting geometry images simple to interpret, while producing higher-quality reconstructions. We demonstrate an implementation in the planar domain and show results on a range of common geometric models.
Fast visualization of complex 3D models using displacement mapping BIBAFull-Text 7-14
  The-Kiet Lu; Kok-Lim Low; Jianmin Zheng
We present a simple method to render complex 3D models at interactive rates using real-time displacement mapping. We use an octree to decompose the 3D model into a set of height fields and display the model by rendering the height fields using per-pixel displacement mapping. By simply rendering the faces of the octree voxels to produce fragments for ray-casting on the GPU, and with straightforward transformation of view rays to the displacement map's local space, our method is able to accurately render the object's silhouettes with very little special handling. The algorithm is especially suitable for fast visualization of high-detail point-based models, and models made up of unprocessed triangle meshes that come straight from range scanning. This is because our method requires much less preprocessing time compared to the traditional triangle-based rendering approach, which usually needs a large amount of computation to preprocess the input model into one that can be rendered more efficiently. Unlike the point-based rendering approach, the rendering efficiency of our method is not limited by the number of input points. Our method can achieve interactive rendering of models with more than 300 million points on standard graphics hardware.

Surfaces and meshes

Fast low-memory streaming MLS reconstruction of point-sampled surfaces BIBAFull-Text 15-22
  Gianmauro Cuccuru; Enrico Gobbetti; Fabio Marton; Renato Pajarola; Ruggero Pintus
We present a simple and efficient method for reconstructing triangulated surfaces from massive oriented point sample datasets. The method combines streaming and parallelization, moving least-squares (MLS) projection, adaptive space subdivision, and regularized isosurface extraction. Besides presenting the overall design and evaluation of the system, our contributions include methods for keeping the complexity of in-core data structures purely locally output-sensitive and for exploiting both the explicit and implicit data produced by an MLS projector to produce tightly fitting regularized triangulations using a primal isosurface extractor. Our results show that the system is fast, scalable, and accurate. We are able to process models with several hundred million points in about an hour and outperform current fast streaming reconstructors in terms of geometric accuracy.
Interactive part selection for mesh and point models using hierarchical graph-cut partitioning BIBAKFull-Text 23-30
  Steven Brown; Bryan Morse; William Barrett
This paper presents a method for interactive part selection for mesh and point set surface models that combines scribble-based selection methods with hierarchically accelerated graph-cut segmentation. Using graph-cut segmentation to determine optimal intuitive part boundaries enables easy part selection on complex geometries and allows for a simple, scribble-based interface that focuses on selecting within visible parts instead of precisely defining part boundaries that may be in difficult or occluded regions. Hierarchical acceleration is used to maintain interactive speed on large models and to provide connectivity when extending the technique to point set models.
Keywords: graph cut, interactive modeling tools, mesh, model partitioning, point set, scribble interface
Computing surface offsets and bisectors using a sampled constraint solver BIBAFull-Text 31-37
  David E. Johnson; Elaine Cohen
This paper describes SCSolver, a geometric constraint solver based on adaptive sampling of an underlying constraint space. The solver is demonstrated on the computation of the offset to a surface as well as the computation of the bisector between two surfaces. The adaptive constraint sampling generates a solution manifold through a generalized dual-contouring approach appropriate for higher-dimensional problems. Experimental results show that the SCSolver approach can compute solutions for complex input geometry at interactive rates for each example application.

Image editing: depth, focus, and balance

Depth of field postprocessing for layered scenes using constant-time rectangle spreading BIBAFull-Text 39-46
  Todd J. Kosloff; Michael W. Tao; Brian A. Barsky
Control over what is in focus and what is not in focus in an image is an important artistic tool. The range of depth in a 3D scene that is imaged in sufficient focus through an optics system, such as a camera lens, is called depth of field. Without depth of field, the entire scene appears completely in sharp focus, leading to an unnatural, overly crisp appearance. Current techniques for rendering depth of field in computer graphics are either slow or suffer from artifacts, or restrict the choice of point spread function (PSF). In this paper, we present a new image filter based on rectangle spreading which is constant time per pixel. When used in a layered depth of field framework, our filter eliminates the intensity leakage and depth discontinuity artifacts that occur in previous methods. We also present several extensions to our rectangle spreading method to allow flexibility in the appearance of the blur through control over the PSF.
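The constant-time spreading idea above can be sketched with an accumulation buffer: each pixel deposits four signed corner impulses scaled by its intensity, and two prefix-sum passes turn those impulses into filled rectangles. A minimal grayscale sketch follows; the function name and the fixed per-image radius are our simplifications (the paper varies the rectangle size per pixel with depth):

```python
import numpy as np

def spread_rectangles(img, radius):
    """Blur a grayscale image with a square PSF in O(1) work per pixel.

    Each source pixel deposits four signed corner impulses of its
    normalized intensity into an accumulation buffer; two cumulative-sum
    passes then turn the impulses into filled rectangles.
    """
    h, w = img.shape
    side = 2 * radius + 1
    weight = img / (side * side)                 # conserve PSF energy
    acc = np.zeros((h + side + 1, w + side + 1))
    acc[:h, :w] += weight                        # top-left corner (+)
    acc[:h, side:side + w] -= weight             # top-right corner (-)
    acc[side:side + h, :w] -= weight             # bottom-left corner (-)
    acc[side:side + h, side:side + w] += weight  # bottom-right corner (+)
    out = np.cumsum(np.cumsum(acc, axis=0), axis=1)
    return out[radius:radius + h, radius:radius + w]
```

A layered depth-of-field pipeline would run a spread like this once per depth layer, with the radius derived from each layer's circle of confusion, then composite the layers back to front.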
3D-aware image editing for out of bounds photography BIBAFull-Text 47-54
  Amit Shesh; Antonio Criminisi; Carsten Rother; Gavin Smyth
In this paper, we propose algorithms to manipulate 2D images in a way that is consistent with the 3D geometry of the scene that they capture. We present these algorithms in the context of creating "Out of Bounds" (OOB) images -- compelling, depth-rich images generated from single, conventional 2D photographs (fig. 1). Starting from a single image our tool enables rapid OOB prototyping; i.e. the ability to quickly create and experiment with many different variants of the OOB effect before deciding which one best expresses the users' artistic intentions. We achieve this with a flexible work-flow driven by an intuitive user interface.
   The rich 3D perception of the final composition is achieved by exploiting two strong cues -- occlusions and shadows. A realistic-looking 3D frame is interactively inserted in the scene between segmented foreground objects and the background to generate novel occlusions and enhance the scene's perception of depth. This perception is further enhanced by adding new, realistic cast shadows. The key contributions of this paper are: (i) new algorithms for inserting simple 3D objects like frames in 2D images requiring minimal camera calibration, and (ii) new techniques for the realistic synthesis of cast shadows, even for complex 3D objects. These algorithms, although presented for OOB photography, may be directly used in general image composition tasks.
   With our tool, untrained users can turn ordinary photos into compelling OOB images in seconds. In contrast with existing workflows, at any time the artist can modify any aspect of the composition while avoiding time-consuming pixel painting operations. Such a tool has important commercial applications, and is much more suitable for OOB prototyping than existing image editors.
One-click white balance using human skin reflectance BIBAKFull-Text 55-62
  Jeremy Long; Amy A. Gooch
Existing methods for white balancing photographs tend to rely on skilled interaction from the user, which is prohibitive for most amateur photographers. We propose a minimal interaction system for white balancing photographs that contain humans. Many of the pictures taken by amateur photographers fall into this category. Our system matches a user-selected patch of skin in a photograph to an entry in a skin reflectance function database. The estimate of the illuminant that emerges from the skin matching can be used to white balance the photograph, allowing users to compensate for biased illumination in an image with a single click. We compare the quality of our results to output from three other low-interaction methods, including commercial approaches such as Google Picasa's one-click relighting [19], a whitepoint-based algorithm [16], and Ebner's localized gray-world algorithm [7]. The comparisons indicate that our approach offers several advantages for amateur photographers.
Keywords: color constancy, computational photography, white balance
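Once an illuminant estimate has been recovered from the matched skin patch, the correction itself amounts to a diagonal, von Kries-style channel scaling. The sketch below assumes a simple per-channel model (observed = illuminant x reflectance); the function names and the normalization to the green channel are our choices, not the paper's reflectance-function matching:

```python
import numpy as np

def estimate_illuminant(skin_patch, skin_reflectance):
    """Recover the illuminant color from a user-selected skin patch.

    Assumes observed = illuminant * reflectance per channel, so dividing
    the patch's mean RGB by a matched database reflectance entry yields
    the illuminant (a deliberate simplification of the paper's matching).
    """
    observed = skin_patch.reshape(-1, 3).mean(axis=0)
    return observed / skin_reflectance

def white_balance(image, illuminant):
    """Von Kries-style correction: divide out the estimated illuminant,
    with gains normalized so the green channel is left unchanged."""
    gains = illuminant[1] / illuminant
    return np.clip(image * gains, 0.0, 1.0)
```

With this model, a gray surface photographed under a biased illuminant maps back to neutral gray after correction.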

Rendering: moonbeams, mist, and iridescent gems

Rendering lunar eclipses BIBAFull-Text 63-69
  Theodore C. Yapo; Barbara Cutler
Johannes Kepler first attributed the visibility of lunar eclipses to refraction in the Earth's atmosphere in his Astronomiae Pars Optica in 1604. We describe a method for rendering images of lunar eclipses including color contributions due to refraction, dispersion, and scattering in the Earth's atmosphere. We present an efficient model of refraction and scattering in the atmosphere, including the effects of suspended volcanic dust, which contributes to the observed variation in eclipse brightness and color. We propose a method for simulating camera exposure to allow direct comparison between rendered images and digital photographs. Images rendered with our technique are compared to photographs of the total lunar eclipse of February 21, 2008.
An analytical approach to single scattering for anisotropic media and light distributions BIBAFull-Text 71-77
  Vincent Pegoraro; Mathias Schott; Steven G. Parker
Despite their numerous applications, efficiently rendering participating media remains a challenging task due to the intricacy of the radiative transport equation. While numerical techniques remain the method of choice for addressing complex problems, a closed-form solution to the air-light integral in optically thin isotropic media was recently derived. In this paper, we extend this work and present a novel analytical approach to single scattering from point light sources in homogeneous media. We propose a combined formulation of the air-light integral which allows both anisotropic phase functions and light distributions to be adequately handled. The technique relies neither on precomputation nor on storage, and we provide a robust and efficient implementation allowing explicit control over the accuracy of the results. Finally, the performance characteristics of the method on graphics hardware are evaluated, demonstrating its suitability for real-time applications.
Rendering the effect of labradorescence BIBAFull-Text 79-85
  Andrea Weidlich; Alexander Wilkie
Labradorescence is a complex optical phenomenon that can be found in certain minerals, such as Labradorite or Spectrolite. Because of their unique colour properties, these minerals are often used as gemstones and decorative objects. Since the phenomenon is strongly orientation dependent, such minerals need a special cut to make the most of their unique type of colourful sheen, which makes it desirable to be able to predict the final appearance of a given stone prior to the cutting process. Also, the peculiar properties of the effect make a believable reproduction with an ad-hoc shader difficult even for normal, non-predictive rendering purposes.
   We provide a reflectance model for labradorescence that is directly derived from the physical characteristics of such materials. Due to its inherent accuracy, it can be used for predictive rendering purposes, but also for generic rendering applications.

Graphs, paths, and rigs

Structural differences between two graphs through hierarchies BIBAFull-Text 87-94
  Daniel Archambault
This paper presents a technique for visualizing the differences between two graphs. The technique assumes that a unique labeling of the nodes for each graph is available, where if a pair of labels match, they correspond to the same node in both graphs. Such labeling often exists in many application areas: IP addresses in computer networks, namespaces, class names, and function names in software engineering, to name a few. As many areas of the graph may be the same in both graphs, we visualize large areas of difference through a graph hierarchy. We introduce a path-preserving coarsening technique for degree one nodes of the same classification. We also introduce a path-preserving coarsening technique based on betweenness centrality that is able to illustrate major differences between two graphs.
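The degree-one coarsening described above can be illustrated by collapsing leaf nodes that share both a neighbour and a classification into a single metanode, which preserves every path through the graph. This is a simplified sketch; the grouping key and metanode naming are ours, and the betweenness-centrality coarsening is not shown:

```python
from collections import defaultdict

def coarsen_degree_one(edges, classification):
    """Collapse degree-one nodes that share a neighbour and a
    classification into single metanodes.

    Returns a node -> metanode mapping; nodes not merged map to
    themselves. Merging leaves this way cannot break any path, since a
    leaf lies on no path between other nodes.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    mapping = {n: n for n in adj}
    groups = defaultdict(list)
    for node, nbrs in adj.items():
        if len(nbrs) == 1:                      # degree-one (leaf) node
            (nbr,) = nbrs
            groups[(nbr, classification[node])].append(node)
    for (nbr, cls), members in groups.items():
        if len(members) > 1:                    # only merge real groups
            meta = "meta:{}:{}".format(nbr, cls)
            for m in members:
                mapping[m] = meta
    return mapping
```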
Sketch-based path design BIBAFull-Text 95-102
  James McCrae; Karan Singh
We present Drive, a system for the conceptual layout of 3D path networks. Our sketch-based interface allows users to efficiently author path layouts with minimal instruction. Our system incorporates some new and noteworthy components. We present the break-out lens, a novel widget for interactive graphics, inspired by break-out views used in engineering visualization. We also make three contributions specific to path curve design: First, we extend our previous work to fit aesthetic paths to sketch strokes with constraints, using piecewise clothoid curves. Second, we determine the height of paths above the terrain using a constraint optimization formulation of the occlusion relationships between sketched strokes. Finally, we illustrate examples of terrain sensitive path construction in the context of road design: automatically removing foliage, building bridges and tunnels across topographic features and constructing road signs appropriate to the sketched paths.
Rig retargeting for 3D animation BIBAFull-Text 103-110
  Martin Poirier; Eric Paquette
This paper presents a new approach to facilitate reuse and remixing in character animation. It demonstrates a method for automatically adapting existing skeletons to different characters. While the method can be applied to simple skeletons, it also proposes a new approach that is applicable to high quality animation as it is able to deal with complex skeletons that include control bones (those that drive deforming bones). Given a character mesh and a skeleton, the method adapts the skeleton to the character by matching topology graphs between the two. It proposes specific multiresolution and symmetry approaches as well as a simple yet effective shape descriptor. Together, these provide a robust retargeting that can also be tuned between the original skeleton shape and the mesh shape with intuitive weights. Furthermore, the method can be used for partial retargeting to directly attach skeleton parts to specific limbs. Finally, it is efficient as our prototype implementation generally takes less than 30 seconds to adapt a skeleton to a character.

Best student papers

Parallax photography: creating 3D cinematic effects from stills BIBAKFull-Text 111-118
  Ke Colin Zheng; Alex Colburn; Aseem Agarwala; Maneesh Agrawala; David Salesin; Brian Curless; Michael F. Cohen
We present an approach to convert a small portion of a light field with extracted depth information into a cinematic effect with simulated, smooth camera motion that exhibits a sense of 3D parallax. We develop a taxonomy of the cinematic conventions of these effects, distilled from observations of documentary film footage and organized by the number of subjects of interest in the scene. We present an automatic, content-aware approach to apply these cinematic conventions to an input light field. A face detector identifies subjects of interest. We then optimize for a camera path that conforms to a cinematic convention, maximizes apparent parallax, and avoids missing information in the input. We describe a GPU-accelerated, temporally coherent rendering algorithm that allows users to create more complex camera moves interactively, while experimenting with effects such as focal length, depth of field, and selective, depth-based desaturation or brightening. We evaluate and demonstrate our approach on a wide variety of scenes and present a user study that compares our 3D cinematic effects to their 2D counterparts.
Keywords: image-based rendering, photo and image editing
Determining the benefits of direct-touch, bimanual, and multifinger input on a multitouch workstation BIBAKFull-Text 119-124
  Kenrick Kin; Maneesh Agrawala; Tony DeRose
Multitouch workstations support direct-touch, bimanual, and multifinger interaction. Previous studies have separately examined the benefits of these three interaction attributes over mouse-based interactions. In contrast, we present an empirical user study that considers these three interaction attributes together for a single task, such that we can quantify and compare the performances of each attribute. In our experiment users select multiple targets using either a mouse-based workstation equipped with one mouse, or a multitouch workstation using either one finger, two fingers (one from each hand), or multiple fingers. We find that the fastest multitouch condition is about twice as fast as the mouse-based workstation, independent of the number of targets. Direct-touch with one finger accounts for an average of 83% of the reduction in selection time. Bimanual interaction, using at least two fingers, one on each hand, accounts for the remaining reduction in selection time. Further, we find that for novice multitouch users there is no significant difference in selection time between using one finger on each hand and using any number of fingers for this task. Based on these observations we conclude with several design guidelines for developing multitouch user interfaces.
Keywords: bimanual input, direct-touch input, mouse, multifinger input, multitarget selection, multitouch

Haptics and novel interaction techniques

Heart rate control of exercise video games BIBAKFull-Text 125-132
  Tadeusz Stach; T. C. Nicholas Graham; Jeffrey Yim; Ryan E. Rhodes
Exercise video games combine entertainment and physical movement in an effort to encourage people to be more physically active. Multiplayer exercise games take advantage of the motivating aspects of group activity by allowing people to exercise together. However, people of significantly different fitness levels can have a hard time playing together, as large differences in performance can be demotivating. To address this problem, we present heart rate scaling, a mechanism where players' in-game performance is based on their effort relative to their fitness level. Specifically, heart rate monitoring is used to scale performance relative to how closely a person adheres to his/her target heart rate zone. We demonstrate that heart rate scaling reduces the performance gap between people of different fitness levels, and that the scaling mechanism does not significantly affect engagement during gameplay.
Keywords: active games, exertion interfaces, heart rate input, kinetic interfaces, multiplayer exercise video games
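Heart rate scaling, as described, multiplies raw in-game performance by how closely a player stays within their target heart-rate zone, so players of different fitness levels compete on effort rather than output. The linear falloff below is an illustrative stand-in, not the paper's exact formula:

```python
def heart_rate_scale(raw_score, current_hr, zone_low, zone_high):
    """Scale in-game performance by adherence to a target heart-rate zone.

    Returns the raw score unchanged inside the zone; outside it, the
    score falls off linearly with distance from the nearest zone edge
    (an illustrative falloff, not the paper's exact scaling).
    """
    if zone_low <= current_hr <= zone_high:
        return raw_score
    distance = min(abs(current_hr - zone_low), abs(current_hr - zone_high))
    adherence = max(0.0, 1.0 - distance / zone_high)
    return raw_score * adherence
```

Because each player's zone is derived from their own fitness level, two players pedaling at very different absolute intensities can still earn comparable in-game scores.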
Exploring melodic variance in rhythmic haptic stimulus design BIBAKFull-Text 133-140
  Bradley A. Swerdfeger; Jennifer Fernquist; Thomas W. Hazelton; Karon E. MacLean
Haptic icons are brief, meaningful tactile or force stimuli designed to support the communication of information through the often-underutilized haptic modality. Challenges to producing large, reusable sets of haptic icons include technological constraints and the need for broadly-applicable and validated design heuristics to guide the process. The largest set of haptic stimuli to date was produced through systematic use of heuristics for monotone rhythms. We hypothesized that further extending signal expressivity would continue to enhance icon learnability. Here, we introduce melody into the design of rhythmic stimuli as a means of increasing expressiveness while retaining the principle of systematic design, as guided by music theory. Haptic melodies are evaluated for their perceptual distinctiveness; experimental results from grouping tasks indicate that rhythm dominates user categorization of melodies, with frequency and amplitude potentially left available as new dimensions for the designer to control within-group variation.
Keywords: haptic UIs, multi-modal interfaces, user studies
Improving simulated borescope inspection with constrained camera motion and haptic feedback BIBAFull-Text 141-148
  Deepak Vembar; Andrew T. Duchowski; Anand K. Gramopadhye; Carl Washburn
Results are presented from empirical evaluation of a borescope simulator developed for non-destructive inspection training. Two experiments were conducted, manipulating camera rotation constraint and provision of haptic feedback. Performance of experienced borescope inspectors is measured in terms of speed and accuracy, with accuracy clearly shown to improve by placing constraints on the simulator's camera tip rotation and by providing haptic response. This is important as damage avoidance of a real borescope is a critical criterion of borescope inspection training. These are likely to be the first such experiments to have been conducted with aircraft engine inspectors evaluating the potential of haptics in borescope simulation.

Pen and touch interfaces

Who dotted that 'i'?: context free user differentiation through pressure and tilt pen data BIBAFull-Text 149-156
  Brian David Eoff; Tracy Hammond
With the proliferation of tablet PCs and multi-touch computers, collaborative input on a single sketched surface is becoming more and more prevalent. The ability to identify which user draws a specific stroke on a shared surface is widely useful in a) security/forensics research, by effectively identifying a forgery, b) sketch recognition, by providing the ability to employ user-dependent recognition algorithms on a multi-user system, and c) multi-user collaborative systems, by effectively discriminating whose stroke is whose in a complicated diagram. To ensure an adaptive user interface, we cannot expect nor require that users will self-identify nor restrict themselves to a single pen. Instead, we prefer a system that can automatically determine a stroke's owner, even when strokes by different users are drawn with the same pen, in close proximity, and near in timing. We present the results of an experiment showing that the creator of an individual pen stroke can be determined with high accuracy, without supra-stroke context (such as timing, pen ID, or location), based solely on the physical mechanics of how the stroke is drawn (specifically, pen tilt, pressure, and speed). Results from free-form drawing data, including text and doodles, but not signature data, show that our methods differentiate a single stroke (such as the dot of an 'i') between two users at an accuracy of 97.5% and between ten users at an accuracy of 83.5%.
Recognizing interspersed sketches quickly BIBAFull-Text 157-166
  Tracy A. Hammond; Randall Davis
Sketch recognition is the automated recognition of hand-drawn diagrams. When allowing users to sketch as they would naturally, users may draw shapes in an interspersed manner, starting a second shape before finishing the first. In order to provide freedom to draw interspersed shapes, an exponential combination of subshapes must be considered. Because of this, most sketch recognition systems either choose not to handle interspersing, or handle only a limited pre-defined amount of interspersing. Our goal is to eliminate such interspersing drawing constraints from the sketcher. This paper presents a high-level recognition algorithm that, while still exponential, allows for complete interspersing freedom, running in near real-time through early effective sub-tree pruning. At the core of the algorithm is an indexing technique that takes advantage of geometric sketch recognition techniques to index each shape for efficient access and fast pruning during recognition. We have stress-tested our algorithm to show that the system recognizes shapes in less than a second even with over a hundred candidate subshapes on screen.
Handle Flags: efficient and flexible selections for inking applications BIBAKFull-Text 167-174
  Tovi Grossman; Patrick Baudisch; Ken Hinckley
There are a number of challenges associated with content selection in pen-based interfaces. Supplementary buttons to enter a selection mode may not be available, and selections may require a careful and error prone lasso stroke. In this paper we describe the design and evaluation of Handle Flags, a new localized technique used to select and perform commands on ink strokes in pen-operated interfaces. When the user positions the pen near an ink stroke, Handle Flags are displayed for the potential selections that the ink stroke could belong to (such as proximal strokes comprising a word or drawing). Tapping the handle allows the user to access the corresponding selection, without requiring a complex lasso stroke. Our studies show that Handle Flags offer significant benefits in comparison to traditional techniques, and are a promising technique for pen-based applications.
Keywords: Handle Flag, ink, lasso, pen input, selection
Separability of spatial manipulations in multi-touch interfaces BIBAKFull-Text 175-182
  Miguel A. Nacenta; Patrick Baudisch; Hrvoje Benko; Andy Wilson
Multi-touch interfaces allow users to translate, rotate, and scale digital objects in a single interaction. However, this freedom represents a problem when users intend to perform only a subset of manipulations. A user trying to scale an object in a print layout program, for example, might find that the object was also slightly translated and rotated, interfering with what was already carefully laid out earlier.
   We implemented and tested interaction techniques that allow users to select a subset of manipulations. Magnitude Filtering eliminates transformations (e.g., rotation) that are small in magnitude. Gesture Matching attempts to classify the user's input into a subset of manipulation gestures. Handles adopts a conventional single-touch handles approach for touch input. Our empirical study showed that these techniques significantly reduce errors in layout, while the Handles technique was slowest. A variation of the Gesture Matching technique presented the best combination of speed and control, and was favored by participants.
Keywords: multi-touch interaction, separability, tabletops

Contextual design

Presenting identity in a virtual world through avatar appearances BIBAKFull-Text 183-190
  Carman Neustaedter; Elena Fedorovskaya
One of the first tasks that people must do when entering a virtual world (VW) is create a virtual representation for themselves. In many VWs, this means creating an avatar that represents some desired appearance, whether a reflection of one's real life self, or a different identity. We investigate the variety of ways in which people create and evolve avatar appearances in the VW of Second Life® (SL) through contextual interviews. Our findings reveal that users balance pressures from the societal norms of SL with the need to create an appearance that matches a desired virtual identity. These identity needs differ based on four types of users -- Realistics, Ideals, Fantasies, and Roleplayers -- where each presents unique challenges for avatar design. Current research tends to focus on the needs of only one of these user types.
Keywords: appearance, avatar, identity, virtual worlds
Understanding and improving flow in digital photo ecosystems BIBAKFull-Text 191-198
  Carman Neustaedter; Elena Fedorovskaya
Families use a range of devices and locations to capture, manage, and share digital photos as part of their digital photo ecosystem. The act of moving media between devices and locations is not always simple, however, and can easily become time-consuming. We conducted interviews and design sessions in order to better understand the movement of media in digital photo ecosystems and to investigate ways to improve it. Our results show that users must manage multiple entry points into their ecosystem, avoid segmentation in their collections, and explicitly select and move photos between desired devices and locations. Through design sessions, we present and evaluate design ideas that overcome these challenges by utilizing multipurpose devices, always-accessible photo collections, and sharing from any device. These designs show how automation can be combined with recommendation and user interaction to improve flow within digital photo ecosystems.
Keywords: capture, digital photos, display, ecosystems, sharing

HCI notes

A multi-level pressure-sensing two-handed interface with finger-mounted pressure sensors BIBAFull-Text 199-202
  Masaki Omata; Manabu Kajino; Atsumi Imamiya
This paper proposes detaching the pressure sensor from the input device and attaching it directly to the user's finger, allowing the user to input pressure values with a variety of devices and in a variety of places. Because the sensor is attached to the finger that presses rather than to the device being pressed, this removes the need for a separate pressure sensor in every pressure-sensing input device. As a demonstration, we developed a multi-level pressure-sensing two-handed user interface that measures the positions and pressure values of both of the user's hands. The user manipulates a screen object with the dominant hand and assists the manipulation by adjusting the position and pressure intensity of both the dominant and non-dominant hands. We developed several GUI functions: a cursor aura that expands the cursor's sphere of influence, a non-dominant-hand cursor for picking up hidden windows, and pressure-sensing keyboard input that adds arousal to text. The advantages of our system are that (1) a user can use a favorite device and still supply pressure values, and (2) a user can enter multi-level values by pressing heavily or lightly without looking at the hands.
Potential field approach for haptic selection BIBAKFull-Text 203-206
  Jean Simard; Mehdi Ammi; Flavien Picon; Patrick Bourdot
In a number of 3D applications, and especially in Computer Aided Design (CAD), the accuracy of the selection process is important for subsequent operations. In this paper, we propose a mathematical model for haptic selection of the topological entities (vertices, edges, faces...) used in CAD. We developed an analytical expression with a generic and unified representation based on potential fields. The result is a model that is simple to implement in software. Moreover, these functions produce a smooth, accurate and stable force profile.
Keywords: CAD, haptic, potential field, selection, virtual reality
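The abstract does not specify the paper's potential functions; a toy sketch of a potential-field snap force toward a single CAD vertex, with an entirely illustrative force profile and parameters, could look like:

```python
import math

# Illustrative potential-field attraction toward a target vertex (not the
# paper's actual model): a smooth attractive well of radius R around the
# target, with force vanishing at both the well boundary and the centre,
# giving a continuous, stable force profile.

def attraction_force(probe, target, radius=1.0, k=2.0):
    """Return the 2D force vector pulling the haptic probe toward `target`."""
    dx = target[0] - probe[0]
    dy = target[1] - probe[1]
    d = math.hypot(dx, dy)
    if d == 0.0 or d >= radius:
        return (0.0, 0.0)
    # Smooth scalar profile: peaks mid-well, vanishes at d = 0 and d = radius.
    magnitude = k * (d / radius) * (1.0 - d / radius)
    return (magnitude * dx / d, magnitude * dy / d)

print(attraction_force((0.5, 0.0), (0.0, 0.0)))  # → (-0.5, 0.0)
```

The key property, which the abstract emphasizes, is that the force is continuous everywhere, so the haptic device never receives a sudden jump as the probe approaches or leaves an entity.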
Haptic conviction widgets BIBAKFull-Text 207-210
  Gerry Chu; Tomer Moscovich; Ravin Balakrishnan
We introduce a haptic mousewheel as a platform for design exploration of haptic conviction widgets. Conviction is how strongly one wants to do something, or how strongly one desires a parameter to be as it is. Using the haptic mousewheel, the widgets allow users to communicate conviction using force, where greater conviction requires greater force. These widgets include buttons that take varying amounts of force to click, a trash can that requires overcoming force to delete files, an instant message client that requires more force to communicate a stronger emotion, and widgets that allow parameters to be locked using force.
Keywords: affect, conviction, haptic
MR Tent: a place for co-constructing mixed realities in urban planning BIBAKFull-Text 211-214
  Valérie Maquil; Markus Sareika; Dieter Schmalstieg; Ina Wagner
This paper describes how mixed reality (MR) technology is applied in the urban renewal process to help mixed groups of stakeholders collaboratively construct, explore and discuss their vision of a particular urban project on site. It introduces the MR Tent, a physical enclosure for a collection of MR prototyping tools. We report findings from the most recent participatory workshop with users on an urban planning site, concerning the interaction space, views, tangibility and representational formats.
Keywords: architecture, mixed reality, participatory design, tangible user interfaces, urban planning

Pointing, selection, and text input

QuickSelect: history-based selection expansion BIBAKFull-Text 215-221
  Sara L. Su; Sylvain Paris; Frédo Durand
When editing a graphical document, it is common to apply a change to multiple items at once, and a variety of tools exist for selecting sets of items. However, directly selecting large sets can be cumbersome and repetitive. We propose a method for helping users reuse complex selections by expanding the set of currently selected items. We analyze a document's operation history to determine which items have been frequently edited together. When the user requests it, items that have previously been edited with the current selection can be added to it. The new selection can then be manipulated like any other selection. This approach does not require a semantic model of the document or of relations between items; rather, each expansion is based on what the user has done so far to create the document. We demonstrate this approach in the context of vector graphics editing. Results from a pilot study were encouraging: when reusing selections with pre-existing histories, users completed editing tasks more efficiently with our QuickSelect tool. Subjective preferences from a usability study in a free drawing context indicate that selection expansion is easy for users to learn and to apply.
Keywords: 2D drawing, grouping, operation history, selection
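The abstract describes scoring items by how often they were edited together in the operation history; a toy sketch of that idea, with co-occurrence counting and a threshold that are assumptions rather than QuickSelect's actual scoring, might be:

```python
from collections import Counter
from itertools import combinations

# Toy sketch of history-based selection expansion: count how often pairs of
# items were edited in the same operation, then expand the current selection
# with items frequently co-edited with it. Details are illustrative.

def co_edit_counts(history):
    """history: list of sets of item ids edited together in one operation."""
    counts = Counter()
    for edited in history:
        for a, b in combinations(sorted(edited), 2):
            counts[(a, b)] += 1
    return counts

def expand_selection(selection, history, min_count=2):
    """Add items whose total co-edit count with the selection meets a threshold."""
    counts = co_edit_counts(history)
    expanded = set(selection)
    all_items = set().union(*history)
    for item in all_items - expanded:
        score = sum(counts[tuple(sorted((item, s)))] for s in selection)
        if score >= min_count:
            expanded.add(item)
    return expanded

history = [{"a", "b", "c"}, {"a", "b"}, {"c", "d"}]
# "b" was edited with "a" twice, so selecting "a" and expanding pulls in "b".
print(sorted(expand_selection({"a"}, history)))  # → ['a', 'b']
```

Note how no semantic model of the document is needed: the expansion falls out of the edit history alone, which is the property the abstract highlights.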
ISO 9241-9 evaluation of video game controllers BIBAKFull-Text 223-230
  Daniel Natapov; Steven J. Castellucci; I. Scott MacKenzie
Fifteen participants completed a study comparing video game controllers for point-select tasks. We used a Fitts' law task, as per ISO 9241-9, with the Nintendo Wii Remote for infrared pointing, the Nintendo Classic Controller for analogue stick pointing, and a standard mouse as a baseline condition. The mouse had the highest throughput at 3.78 bps. Both game controllers performed poorly by comparison: the Wii Remote's throughput was 31.5% lower, at 2.59 bps, and the Classic Controller's 60.8% lower, at 1.48 bps. Comparing just the video game controllers, the Wii Remote offers a 75% increase in throughput over the Classic Controller. Error rates for the mouse, Classic Controller, and Wii Remote were 3.53%, 6.58%, and 10.2%, respectively. Fourteen of 15 participants expressed a preference for the Wii Remote over the Classic Controller for pointing tasks in a home entertainment environment.
Keywords: Wiimote, Fitts' task, analogue stick, infrared, performance comparison, target acquisition, video game controller
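ISO 9241-9 throughput is computed as TP = ID_e / MT, where ID_e = log2(D / W_e + 1) is the effective index of difficulty; the abstract reports only the resulting bps figures, so the sketch below (helper name is ours) simply verifies that the quoted relative differences follow from those figures:

```python
# Check the relative throughput comparisons reported in the abstract
# from the raw bps values (mouse 3.78, Wii Remote 2.59, Classic 1.48).

def ratio_drop(baseline_bps, device_bps):
    """Percentage by which device throughput falls below the baseline."""
    return (1 - device_bps / baseline_bps) * 100

mouse, wiimote, classic = 3.78, 2.59, 1.48
print(round(ratio_drop(mouse, wiimote), 1))    # → 31.5 (% lower than mouse)
print(round(ratio_drop(mouse, classic), 1))    # → 60.8 (% lower than mouse)
print(round((wiimote / classic - 1) * 100))    # → 75 (% higher than Classic)
```

The three reported percentages are mutually consistent with the three throughput values, which is a quick sanity check worth doing when reading comparative pointing studies.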
Mid-air text input techniques for very large wall displays BIBAKFull-Text 231-238
  Garth Shoemaker; Leah Findlater; Jessica Q. Dawson; Kellogg S. Booth
Traditional text input modalities, namely keyboards, are often not appropriate for use when standing in front of very large wall displays. Direct interaction techniques, such as handwriting, are better, but are not well suited to situations where users are not in close physical proximity to the display. We discuss the potential of mid-air interaction techniques for text input on very large wall displays, and introduce two factors, distance-dependence and visibility-dependence, which are useful for segmenting the design space of mid-air techniques. We then describe three techniques that were designed with the goal of exploring the design space, and present a comparative evaluation of those techniques. Questions raised by the evaluation were investigated further in a second evaluation focusing on distance-dependence. The two factors of distance- and visibility-dependence can guide the design of future text input techniques, and our results suggest that distance-independent techniques may be best for use with very large wall displays.
Keywords: interaction techniques, text input, wall displays