
Proceedings of the 2008 Conference on Graphics Interface

Fullname: Proceedings of the 2008 Conference on Graphics Interface
Editors: Lyn Bartram; Chris Shaw
Location: Windsor, Ontario, Canada
Dates: 2008-May-28 to 2008-May-30
Publisher: Canadian Information Processing Society
Standard No: ISBN: 1-56881-423-2, 978-1-56881-423-0; ISSN: 0713-5424; hcibib: GI08
Papers: 35
Pages: 301
  1. Large Displays
  2. Image Input
  3. Input
  4. Visualization 1
  5. Evaluation
  6. Shadows
  7. Faces and Web
  8. Geometric Techniques
  9. Pointing & Tracking
  10. Visualization 3

Large Displays

The Effects of Co-Present Embodiments on Awareness and Collaboration in Tabletop Groupware BIBA 1-8
  David Pinelle; Miguel Nacenta; Carl Gutwin; Tadeusz Stach
Most current tabletop groupware systems use direct touch, where people manipulate objects by touching them with a pen or a fingertip. The use of people's real arms and hands provides obvious awareness information, but workspace access is limited by the user's reach. Relative input techniques, where users manipulate a cursor rather than touching objects directly, allow users to reach all areas of the table. However, the only available awareness information comes from the virtual embodiment of the user (e.g., their cursor). This presents designers with a tradeoff: direct-touch techniques have advantages for group awareness; relative input techniques offer additional power but less awareness information. In this paper, we explore this tradeoff, and we explore the design space of virtual embodiments to determine whether factors such as size, realism, and visibility can improve awareness and coordination. We conducted a study in which seven groups carried out a picture-categorizing task using seven techniques: direct touch and relative input with six different virtual embodiments. Our results provide both valuable information to designers of tabletop groupware, and a number of new directions for future research.
The Effects of Peripheral Vision and Physical Navigation on Large Scale Visualization BIBA 9-16
  Robert Ball; Chris North
Large high-resolution displays have been shown to improve user performance over standard displays on many large-scale visualization tasks. But what is the reason for the improvement?
   The two most cited reasons for the advantage are (1) the wider field of view that exploits peripheral vision to provide context, and (2) the opportunity for physical navigation (e.g. head turning, walking, etc.) to visually access information. Which of these two factors is the key to the advantage? Or, do they both work together to produce a combined advantage? This paper reports on an experiment that separates peripheral vision and physical navigation as independent variables. Results indicate that, for most of the tasks tested, increased physical navigation opportunity is more critical to improving performance than increased field of view. Some evidence indicates a valuable combined role.
Lightweight Task/Application Performance using Single versus Multiple Monitors: A Comparative Study BIBA 17-24
  Youn-ah Kang; John Stasko
It is becoming increasingly common to see computers with two or even three monitors being used today. People seem to like having more display space available, and intuition tells us that the added space should be beneficial to work. Little research has been done to examine the effects and potential utility of multiple monitors for work on everyday tasks with common applications, however. We compared how people completed a trip planning task that involved different applications and included interjected interruptions when they worked on a computer with one monitor as compared to a computer with two monitors. Results showed that participants who used the computer with two monitors performed the task set faster and with less workload, and they also expressed a subjective preference for the multiple monitor computer.

Image Input

Dynamic Correction of Color Appearance on Mobile Displays BIBA 25-32
  Clifford Lindsay; Emmanuel Agu; Fan Wu
Technological advances in mobile devices have made them attractive for many previously infeasible image synthesis applications. Mobile users may roam between a wide range of environmental lighting including dim theaters, lit offices, and sunlight. Viewing images in diverse lighting situations poses a challenge: while our eyes can adapt to environmental lighting, changes in lighting can affect the perceived hue, brightness contrast and colorfulness of colors on electronic displays or in print. Consequently, colors in displayed images may appear bleached or be perceived differently from one lighting scenario to another. If these adverse lighting effects are unmitigated, the fidelity of color reproduction could suffer and limit the use of mobile devices in sensitive applications such as medical imaging, visualization, and watching movies. Many mobile devices currently include simple compensation schemes, which adjust their display's brightness in response to the environmental illumination sensed by built-in light sensors. These displays compensate for brightness but neglect the changes in perceived color and contrast caused by environmental lighting. We propose a novel technique to dynamically compensate for changes in color, colorfulness, and hue as mobile users roam. Our adaptation technique is based on the iCam06 color appearance model and uses the mobile device's sensor to continuously sample and feed back environmental lighting information.
Background Estimation from Non-Time Sequence Images BIBA 33-40
  Miguel Granados; Hans-Peter Seidel; Hendrik P. A. Lensch
We address the problem of reconstructing the background of a scene from a set of photographs featuring several occluding objects. We assume that the photographs are obtained from the same viewpoint and under similar illumination conditions. Our approach is to define the background as a composite of the input photographs. Each possible composite is assigned a cost, and the resulting cost function is minimized. We penalize deviations from the following two model assumptions: background objects are stationary, and background objects are more likely to appear across the photographs. We approximate object stationariness using a motion boundary consistency term, and object likelihood using probability density estimates. The penalties are combined using an entropy-based weighting function. Furthermore, we constrain the solution space in order to avoid composites that cut through objects. The cost function is minimized using graph cuts, and the final result is composed using gradient domain fusion. We demonstrate the application of our method to recovering clean, unoccluded shots of crowded public places, as well as to the removal of ghosting artifacts in the reconstruction of high dynamic range images from multi-exposure sequences. Our contribution is the definition of an automatic method for consistent background estimation from multiple exposures featuring occluders, and its application to the problem of ghost removal in high dynamic range image reconstruction.
A GPU-friendly Method for High Dynamic Range Texture Compression using Inverse Tone Mapping BIBA 41-48
  Francesco Banterle; Kurt Debattista; Patrick Ledda; Alan Chalmers
In recent years, High Dynamic Range Textures (HDRTs) have been frequently used in real-time applications and video-games to enhance realism. Unfortunately, HDRTs consume a considerable amount of memory, and efficient compression methods are not straightforward to implement on modern GPUs. We propose a framework for efficient HDRT compression using tone mapping and its dual, inverse tone mapping. In our method, encoding is performed by compressing the dynamic range using a tone mapping operator followed by a traditional encoding method for low dynamic range imaging. Our decoding method decodes the low dynamic range image and expands its range with the inverse tone mapping operator. We present results using the Photographic Tone Reproduction tone mapping operator and its inverse, encoded with S3TC and running in real-time on current programmable GPU hardware, resulting in compressed HDRTs at 4-8 bits per pixel (bpp) with a fast shader program for decoding. We show how our approach compares favorably to other existing methods.
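The encode/decode cycle described above can be illustrated with the simple global form of the photographic operator, whose dual has a closed form. This is only a sketch: the paper uses the full Photographic Tone Reproduction operator and S3TC block compression, both omitted here.

```python
def tonemap(l_world):
    """Simplified global photographic operator: compresses an HDR
    luminance value into [0, 1) so it fits a low-dynamic-range format."""
    return l_world / (1.0 + l_world)

def inverse_tonemap(l_display):
    """Dual of tonemap: expands a stored LDR value back to HDR range."""
    return l_display / (1.0 - l_display)

# The round trip is exact before quantization; in the actual method, the
# S3TC encoding step (omitted here) is where compression loss occurs.
restored = inverse_tonemap(tonemap(4.0))
```

Because the stored values stay in [0, 1), any standard LDR texture codec can be applied between the two operators.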

Input

A Model of Non-Preferred Hand Mode Switching BIBA 49-56
  Jaime Ruiz; Andrea Bunt; Edward Lank
Effective mode-switching techniques provide users of tablet interfaces with access to a rich set of behaviors. While many researchers have studied the relative performance of mode-switching techniques in these interfaces, these metrics tell us little about the behavior of one technique in the absence of a competitor. Differing from past comparison-based research, this paper describes a temporal model of the behavior of a common mode switching technique, non-preferred hand mode switching. Using the Hick-Hyman Law, we claim that the asymptotic cost of adding additional non-preferred hand modes to an interface is a logarithmic function of the number of modes. We validate the model experimentally, and show a strong correlation between experimental data and values predicted by the model. Implications of this research for the design of mode-based interfaces are highlighted.
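The Hick-Hyman Law invoked above models choice reaction time as logarithmic in the number of equally likely alternatives; a minimal sketch, where the coefficients a and b are illustrative placeholders rather than values fit in the paper:

```python
import math

def hick_hyman_rt(n_modes, a=0.2, b=0.15):
    """Predicted decision time (seconds) for choosing among n_modes
    equally likely modes: RT = a + b * log2(n). The coefficients a and b
    are illustrative, not empirical values from the paper."""
    return a + b * math.log2(n_modes)

# Doubling the number of modes adds a constant increment b, so the
# marginal cost of each extra mode shrinks as the mode set grows.
costs = [hick_hyman_rt(n) for n in (2, 4, 8)]
```

This is the sense in which the cost of adding modes is "asymptotically logarithmic": going from 2 to 4 modes costs as much time as going from 4 to 8.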
Evaluating One Handed Thumb Tapping on Mobile Touchscreen Devices BIBA 57-64
  Keith B. Perry; Juan Pablo Hourcade
In spite of the increasing popularity of handheld touchscreen devices, little research has been conducted on how to evaluate and design one handed thumb tapping interactions. In this paper, we present a study that researched three issues related to these interactions: 1) whether it is necessary to evaluate these interactions with the preferred and the non-preferred hand; 2) whether participants evaluating these interactions should be asked to stand and walk during evaluations; 3) whether targets on the edge of the screen enable participants to be more accurate in selection than targets not on the edge. Half of the forty participants in the study used their non-preferred hand and half used their preferred hand. Each participant conducted half of the tasks while walking and half while standing. We used 25 different target positions (16 on the edge of the screen) and five different target sizes. The participants who used their preferred hand completed tasks more quickly and accurately than the participants who used their non-preferred hand, with the differences being large enough to suggest it is necessary to evaluate this type of interaction with both hands. We did not find differences in the performance of participants when they walked versus when they stood, suggesting it is not necessary to include this as a variable in evaluations. In terms of target location, participants rated targets near the center of the screen as easier and more comfortable to tap, but the highest accuracy rates were for targets on the edge of the screen.
Perceptibility and Utility of Sticky Targets BIBA 65-72
  Regan L. Mandryk; Carl Gutwin
Researchers have suggested that dynamically increasing control-to-display (CD) gain can assist in targeting, by increasing the effective width of targets in motor space, which makes targets feel sticky. Although this method has been shown to be effective, there are several unexplored issues that could affect its use in real-world interfaces. One of these is perceptibility: in particular, the difference between the perceptibility and the utility of the technique. If CD gain changes are highly noticeable even at levels that are not helpful, the technique could be seen as overly intrusive. If CD gain changes are more useful than noticeable, however, the technique could be applied more widely. To explore this issue, we carried out a study that tested both the utility and the perceptibility of CD gain in single-target selection tasks. We found that although even small amounts of gain reduction significantly improved targeting times, participants did not consistently notice the effect until the gain difference was much higher. Our results provide new understanding of how changes in CD gain are experienced by users, and provide initial evidence to suggest that sticky targets can benefit users without a high perceptual cost.

Visualization 1

Single-Pass GPU Solid Voxelization for Real-Time Applications BIBA 73-80
  Elmar Eisemann; Xavier Décoret
In this paper, we present a single-pass technique to voxelize the interior of watertight 3D models with high-resolution grids in real time during a single rendering pass. Further, we develop a filtering algorithm to build a density estimate that allows the deduction of normals from the voxelized model. This is achieved via a dense packing of information using bitwise arithmetic. We demonstrate the versatility of the method by presenting several applications like translucency effects, CSG operations, interaction for particle simulations, and morphological operations. The speed of our method opens the door to previously impossible real-time approaches: 300,000 polygons are voxelized into a grid of one billion voxels at > 90Hz with a recent graphics card.
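The bitwise packing can be sketched on the CPU as follows. Each texel word encodes a column of depth slices; rasterizing the mesh with XOR blending of a "depth mask" flips every slice behind each fragment, so voxels behind an odd number of surface crossings end up interior. This is an assumed reformulation of the standard trick, not the paper's shader code:

```python
DEPTH_BITS = 32  # one 32-bit word encodes 32 depth slices per column

def depth_mask(z):
    """Bitmask with every depth slice from z through the far plane set
    (bits z..31), mirroring the mask texture sampled per fragment."""
    return (((1 << DEPTH_BITS) - 1) >> z) << z

def voxelize_column(fragment_depths):
    """XOR the masks of all fragments rasterized into one column. Voxels
    behind an odd number of surface crossings come out set, which is
    exactly the interior of a watertight model."""
    column = 0
    for z in fragment_depths:
        column ^= depth_mask(z)
    return column

# A watertight surface crossing a column at depth slices 3 and 7 fills
# the interior voxels 3..6.
column = voxelize_column([3, 7])
inside = [z for z in range(DEPTH_BITS) if (column >> z) & 1]
```

On the GPU the XOR is done by the blending hardware, which is why a single rendering pass suffices.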
LiveSync++: Enhancements of an Interaction Metaphor BIBA 81-88
  Peter Kohlmann; Stefan Bruckner; Armin Kanitsar; M. Eduard Gröller
The LiveSync interaction metaphor allows an efficient and nonintrusive integration of 2D and 3D visualizations in medical workstations. This is achieved by synchronizing the 2D slice view with the volumetric view. The synchronization is initiated by simply picking a structure of interest in the slice view. In this paper we present substantial enhancements of the existing concept to improve its usability. First, an efficient parametrization for the derived parameters is presented, which allows hierarchical refinement of the search space for good views. Second, the extraction of the feature of interest is performed in a way that adapts to the volumetric extent of the feature. The properties of the extracted features are utilized to adjust a predefined transfer function in a feature-enhancing manner. Third, a new interaction mode is presented, which allows the integration of more knowledge about the user-intended visualization without increasing the interaction effort. Finally, a new clipping technique is integrated, which guarantees an unoccluded view on the structure of interest while keeping important contextual information.
Context-Controlled Flow Visualization in Augmented Reality BIBA 89-96
  Mike Eissele; Matthias Kreiser; Thomas Ertl
A major challenge of novel scientific visualization using Augmented Reality is the accuracy of the user/camera position tracking. Many alternative techniques have been proposed, but still there is no general solution. Therefore, this paper presents a system that copes with different conditions and makes use of context information, e.g. available tracking quality, to select adequate Augmented Reality visualization methods. This way, users will automatically benefit from high quality visualizations if the system can estimate the pose of the real-world camera accurately enough. Otherwise, specially-designed alternative visualization techniques which require a less accurate positioning are used for the augmentation of real-world views. The proposed system makes use of multiple tracking systems and a simple estimation of the currently available overall accuracy of the pose estimation, used as context information to control the resulting visualization. Results of a prototypical implementation for visualization of 3D scientific flow data are presented to show the practicality.
Vector Field Contours BIBA 97-105
  Thomas Annen; Holger Theisel; Christian Rössl; Gernot Ziegler; Hans-Peter Seidel
We describe an approach to define contours of 3D vector fields and employ them as an interactive flow visualization tool. Although contours are well-defined and commonly used for surfaces and 3D scalar fields, they have no straightforward extension in vector fields. Our approach is to extract and visualize specific stream lines which show the most similar behavior to contours on surfaces. This way, the vector field contours are a particular set of isolated stream line segments that depend on the view direction and few additional parameters. We present an analysis of the usefulness of vector field contours by demonstrating their application to linear vector fields. In order to achieve interactive visualization, we develop an efficient GPU-based implementation for real-time extraction and rendering of vector field contours. We show the potential of our approach by applying it to a number of example data sets.
2^5 Years Ago I Couldn't Even Spell Canadian, Now I Are One: Momentos of Collaborating on, with, and about Technology BIBA 107-114
  Kellogg S. Booth
I've been in Canada doing research and teaching for just about half of my life. It's been fun. I will share some of the lessons I've learned and things I have discovered, but mostly I want to convey a sense of my delight in having had the good fortune over the years to always have worked within a collaborative setting, both in terms of the actual research that often involved systems to support communication between people and in terms of how the research itself was undertaken by a team of people.

Evaluation

Order and Entropy in Picture Passwords BIBA 115-122
  Saranga Komanduri; Dugald R. Hutchings
Previous efforts involving picture-based passwords have not focused on maintaining a measurably high level of entropy. Since password systems usually allow user selection of passwords, their true entropy remains unknown. A 23-participant study was performed in which picture and character-based passwords of equal strength were randomly assigned. Memorability was tested with up to one week between sessions. The study found that both character and picture passwords of very high entropy were easily forgotten. However, when password inputs were analyzed to determine the source of input errors, serial ordering was found to be the main cause of failure. This supports a hypothesis stating that picture-password systems which do not require ordered input may produce memorable, high-entropy passwords. Input analysis produced another interesting result, that incorrect inputs by users are often duplicated. This reduces the number of distinct guesses users can make when authentication systems lock out users after a number of failed logins. A protocol for ignoring duplicate inputs is presented here. A shoulder-surfing resistant input method was also evaluated, with six out of 15 users performing an insecure behavior.
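The entropy tradeoff behind the unordered-input hypothesis above can be computed directly: requiring k distinct pictures in a fixed order gives a password space of permutations, while ignoring order shrinks it to combinations, costing exactly log2(k!) bits. The alphabet and password sizes below are illustrative, not the study's:

```python
import math

def entropy_bits(n, k, ordered=True):
    """Entropy (bits) of a randomly assigned password made of k distinct
    pictures from an n-picture alphabet. Dropping the serial-order
    requirement shrinks the password space by a factor of k!."""
    count = math.perm(n, k) if ordered else math.comb(n, k)
    return math.log2(count)

# Illustrative sizes: a password of 4 pictures from a set of 16.
ordered_bits = entropy_bits(16, 4)
unordered_bits = entropy_bits(16, 4, ordered=False)
```

For these sizes the order requirement contributes log2(4!) ≈ 4.6 bits, which an unordered scheme must recover with a larger alphabet or longer password.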
An Empirical Characterisation of Electronic Document Navigation BIBA 123-130
  Jason Alexander; Andy Cockburn
To establish an empirical foundation for analysis and redesign of document navigation tools, we implemented a system that logs all user actions within Microsoft Word and Adobe Reader. We then conducted a four month longitudinal study of fourteen users' document navigation activities. The study found that approximately half of all documents manipulated are reopenings of previously used documents and that recent document lists are rarely used to return to a document. The two most used navigation tools (by distance moved) are the mousewheel and scrollbar thumb, accounting for 44% and 29% of Word movement and 17% and 31% of Reader navigation. Participants were grouped into stereotypical navigator categories based on the tools they used the most. The majority of the navigation actions observed were short, both in distance (less than one page) and in time (less than one second). We identified three types of within document hunting, with the scrollbar identified as the greatest contributor.
Evaluation of Techniques for Visualizing Mathematical Expression Recognition Results BIBA 131-138
  Joseph J. LaViola, Jr.; Anamary Leal; Timothy S. Miller; Robert C. Zeleznik
We present an experimental study that evaluates four different techniques for visualizing the machine interpretation of handwritten mathematics. Typeset in Place puts a printed form of the recognized expression in the same location as the handwritten mathematics. Adjusted Ink replaces what was written with scaled-to-fit, cleaned up handwritten characters using an ink font. The Large Offset technique scales a recognized printed form to be just as wide as the handwritten input, and places it below the handwritten mathematical expression. The Small Offset technique is similar to Large Offset but the printed form is set to be a fixed size which is generally small compared to the written expression. Our experiment explores how effective each technique is with assisting users in identifying and correcting recognition mistakes with different types and quantities of mathematical expressions. Our evaluation is based on task completion time and a comprehensive post-questionnaire used to solicit reactions on each technique. The results of our study indicate that, although each technique has advantages and disadvantages depending on the complexity of the handwritten mathematics, subjects took significantly longer to complete the recognition task with Typeset in Place and generally preferred Adjusted Ink or Small Offset.

Shadows

Layered Variance Shadow Maps BIBA 139-146
  Andrew Lauritzen; Michael McCool
Shadow maps are commonly used in real-time rendering, but they cannot be filtered linearly like standard color, resulting in severe aliasing. Variance shadow maps resolve this problem by representing the depth distribution using moments, which can be linearly filtered. However, variance shadow maps suffer from "light bleeding" artifacts and require high-precision texture filtering hardware. We introduce layered variance shadow maps, which provide simultaneous solutions to both of these limitations. By partitioning the shadow map depth range into multiple layers, we eliminate all light bleeding between different layers. Using more layers increases the quality of the shadows at the expense of additional storage. Because each of these layers covers a reduced depth range, they can be stored in lower precision than would be required with typical variance shadow maps, enabling their use on a much wider range of graphics hardware. We also describe an iterative optimization algorithm to automatically position layers so as to maximize the utility of each. Our algorithm is easy to implement on current graphics hardware and provides an efficient, scalable solution to the problem of shadow map filtering.
Quality Scalability of Soft Shadow Mapping BIBA 147-154
  Michael Schwarz; Marc Stamminger
Recently, several soft shadow mapping algorithms have been introduced which extract micro-occluders from a shadow map and backproject them on the light source to approximately determine light visibility. To maintain real-time frame rates, these algorithms often have to resort to coarser levels of a multi-resolution shadow map representation which can lead to visible quality degradations. In particular, discontinuity artifacts can appear when having to use different shadow map levels across pixels. In this paper, we discuss several aspects of soft shadow quality. First, we motivate and propose a scheme that allows for varying soft shadow quality in screen-space in a visually smooth way and also for its adaptation based on local features like assigned importance. Second, we suggest a generalization of micropatches which yields a better occluder geometry approximation at coarser shadow map levels, thus helping to reduce occluder overestimation. Third, we introduce a new hybrid acceleration structure for pruning the search space of potential micro-occluders that enables employing finer shadow map levels and hence increasing quality. Finally, we address multisampled rendering and suggest a simple scheme for interpolating light visibility that only adds a negligible cost compared to single-sample rendering.
Exponential Shadow Maps BIBA 155-161
  Thomas Annen; Tom Mertens; Hans-Peter Seidel; Eddy Flerackers; Jan Kautz
Rendering high-quality shadows in real-time is a challenging problem. Shadow mapping has proved to be an efficient solution, as it scales well for complex scenes. However, it suffers from aliasing problems. Filtering the shadow map alleviates aliasing, but unfortunately, native hardware-accelerated filtering cannot be applied, as the shadow test has to take place beforehand. We introduce a simple approach to shadow map filtering, by approximating the shadow test using an exponential function. This enables us to pre-filter the shadow map, which in turn allows for high quality hardware-accelerated filtering. Compared to previous filtering techniques, our technique is faster, consumes less memory and produces less artifacts.
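The exponential approximation of the shadow test can be sketched as follows. The hard comparison step(d - t) between stored occluder depth d and receiver depth t is replaced by exp(c*(d - t)), which factors into a receiver term and an occluder term exp(c*d) that can be pre-filtered on its own. The sharpness constant c below is illustrative:

```python
import math

def esm_visibility(filtered_exp_depth, receiver_depth, c=80.0):
    """Exponential shadow map test: approximates step(d - t) with
    exp(c * (d - t)) = exp(-c * t) * exp(c * d). Only exp(c * d) depends
    on the shadow map, so it can be filtered with ordinary hardware
    texture filtering before this function runs. c = 80 is illustrative."""
    return min(1.0, math.exp(-c * receiver_depth) * filtered_exp_depth)

# Single unfiltered occluder at depth 0.3:
occ = math.exp(80.0 * 0.3)
lit = esm_visibility(occ, 0.3)       # receiver at the occluder: fully lit
shadowed = esm_visibility(occ, 0.5)  # receiver behind it: near zero
```

The clamp to 1.0 handles receivers in front of the occluder, where the exponential would otherwise exceed one.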

Faces and Web

A Scalability Study of Web-Native Information Visualization BIBA 163-168
  Donald W. Johnson; T. J. Jankun-Kelly
Several web-native information visualization methods (SVG, HTML5's Canvas, native HTML) are studied to contrast their performance at different data scales. Using Java implementations of parallel coordinates and squarified treemaps for comparison, we explore the design space of these web-based technologies in order to determine what design trade-offs are required.
Effects of Avatar's Blinking Animation on Person Impressions BIBA 169-176
  Kazuki Takashima; Yasuko Omori; Yoshiharu Yoshimoto; Yuichi Itoh; Yoshifumi Kitamura; Fumio Kishino
Blinking is one of the most important cues for forming person impressions. We focus on the eye blinking rate of avatars and investigate its effect on viewer subjective impressions. Two experiments are conducted. The stimulus avatars included humans with generic reality (male and female), cartoon-style humans (male and female), animals, and unidentified life forms that were presented as a 20-second animation with various blink rates: 9, 12, 18, 24 and 36 blinks/min. Subjects rated their impressions of the presented stimulus avatars on a seven-point semantic differential scale. The results showed a significant effect of the avatar's blinking on viewer impressions, and the effect was larger with the human-style avatars than the others. The results also lead to several implications and guidelines for the design of avatar representation. Blink animation of 18 blinks/min with a human-style avatar produces the friendliest impression. Higher blink rates, i.e., 36 blinks/min, give inactive impressions, while lower blink rates, i.e., 9 blinks/min, give intelligent impressions. Through these results, guidelines are derived for managing the attractiveness of an avatar by changing its blinking rate.
Interactive 3D Facial Expression Posing through 2D Portrait Manipulation BIBA 177-184
  Tanasai Sucontphunt; Zhenyao Mo; Ulrich Neumann; Zhigang Deng
Sculpting various 3D facial expressions from a static 3D face model is a process with intensive manual tuning efforts. In this paper, we present an interactive 3D facial expression posing system through 2D portrait manipulation, where a manipulated 2D portrait serves as a metaphor for automatically inferring its corresponding 3D facial expression with fine details. Users either rapidly assemble a face portrait through a pre-designed portrait component library or intuitively modify an initial portrait. During the editing procedure, when the users move one or a group of 2D control points on the portrait, other portrait control points are adjusted in order to automatically maintain the faceness of the edited portrait if the automated propagation function (switch) is optionally turned on. Finally, the 2D portrait is used as a query input to search for and reconstruct its corresponding 3D facial expression from a pre-recorded facial motion capture database. We showed that this system is effective for rapid 3D facial expression sculpting through a comparative user study.

Geometric Techniques

Interactive Global Illumination Based on Coherent Surface Shadow Maps BIBA 185-192
  Tobias Ritschel; Thorsten Grosch; Jan Kautz; Hans-Peter Seidel
Interactive rendering of global illumination effects is a challenging problem. While precomputed radiance transfer (PRT) is able to render such effects in real time the geometry is generally assumed static. This work proposes to replace the precomputed lighting response used in PRT by precomputed depth. Precomputing depth has the same cost as precomputing visibility, but allows visibility tests for moving objects at runtime using simple shadow mapping. For this purpose, a compression scheme for a high number of coherent surface shadow maps (CSSMs) covering the entire scene surface is developed. CSSMs allow visibility tests between all surface points against all points in the scene. We demonstrate the effectiveness of CSSM-based visibility using a novel combination of the lightcuts algorithm and hierarchical radiosity, which can be efficiently implemented on the GPU. We demonstrate interactive n-bounce diffuse global illumination, with a final glossy bounce and many high frequency effects: general BRDFs, texture and normal maps, and local or distant lighting of arbitrary shape and distribution -- all evaluated per-pixel. Furthermore, all parameters can vary freely over time -- the only requirement is rigid geometry.
Geometric Displacement on Plane and Sphere BIBA 193-202
  Elodie Fourquet; William Cowan; Stephen Mann
This paper describes a new algorithm for geometric displacement mapping. Its key idea is that all occluded solutions for an eye ray lie in two-dimensional manifolds perpendicular to the underlying surface to which the height map is applied. The manifold depends only on the eye position and surface geometry, and not on the height field. A simple stepping algorithm, moving along the surface within a manifold, renders a curve of pixels to the view plane, which reduces height map rendering to a set of one-dimensional computations that can be done in parallel. The curves on the view plane for two specific underlying manifolds, a plane and a sphere, are straight lines. In this paper we focus on the specific geometry of simple underlying surfaces for which the geometry is more intuitive and the sampling of the rendered image direct.
Convex Hull Covering of Polygonal Scenes for Accurate Collision Detection in Games BIBA 203-210
  Rong Liu; Hao Zhang; James Busby
Decomposing a complex object into simpler pieces, e.g., convex patches or convex polyhedra, is a well-studied geometry problem. A well constructed decomposition can greatly accelerate collision detection since intersections with and between convex objects are fast to compute. In this paper, we look at a particular instance of the convex decomposition problem which arises from real-world game development. Given a collection of polyhedral surfaces (possibly with boundaries, holes, and complex interior structures) that model the scene geometry in a game environment, we wish to find a small set of convex hulls such that colliding objects in the scene against such a set of convex hulls produces the same game behavior as colliding against the original surfaces. The vague formulation of the problem is due to the difficulty of defining the space accessible by the objects involved in the game play. Under reasonable assumptions, we arrive at a set of conditions for valid convex decomposition and develop a construction algorithm via greedy merging driven by patch compactness. We show that our validity conditions ensure valid collision-related game behavior. The effectiveness of our decomposition algorithm is demonstrated through real examples from game development. To the best of our knowledge, no previous convex hull decomposition or surface decomposition algorithms were designed to handle the type of models we consider or be able to compute a set of convex hulls that ensure accurate collision detection results.
Multiresolution Point-set Surfaces BIBA 211-218
  François Duranleau; Philippe Beaudoin; Pierre Poulin
Multiresolution representations of 3D surfaces make it possible to concentrate the efforts of a modification at the appropriate level of detail. This paper introduces a multiresolution representation for point-set surfaces. At each level, the point set is smoothed and downsampled, and the geometric details are encoded along the smoothed surface normal. The resulting structure is only slightly larger than the original point set and allows it to be reconstructed precisely. We demonstrate how it can be used for surface deformation and for frequency band scaling.
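The detail encoding can be sketched as a signed offset along the smoothed surface normal (a simplifying assumption: this scalar form reconstructs the point exactly only when the displacement is purely normal; the actual representation may store offsets in a local frame):

```python
def encode_detail(p, s, n):
    """Signed offset of original point p from its smoothed position s,
    measured along the unit smoothed-surface normal n."""
    return sum((pi - si) * ni for pi, si, ni in zip(p, s, n))

def decode_detail(s, n, d):
    """Reconstruct the fine point from the coarse point and its detail."""
    return tuple(si + d * ni for si, ni in zip(s, n))
```

Storing one scalar per point (plus the coarse level) is what keeps the structure only slightly larger than the original point set.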
PNG1 Triangles for Tangent Plane Continuous Surfaces on the GPU BIBA 219-226
  Christoph Fünfzig; Kerstin Müller; Dianne Hansford; Gerald Farin
Improving the visual appearance of coarse triangle meshes is usually done on graphics hardware with per-pixel shading techniques. Improving the appearance at silhouettes is inherently hard, as shading has only a small influence there and the geometry must be corrected. With the new geometry shader stage released with DirectX 10, the functionality to generate new primitives from an input primitive is available. The shader can also access a restricted primitive neighborhood. In this paper, we present a curved surface patch that can deal with this restricted data available in the geometry shader. A surface patch is defined over a triangle with its vertex normals and the three edge neighbor triangles. Compared to PN triangles, which define a curved patch using just the triangle with its vertex normals, our surface patch is G1 continuous with its three neighboring patches. The patch is obtained by blending two cubic Bézier patches for each triangle edge. In this way, our surface is especially suitable for efficient, high-quality tessellation on the GPU. We show the construction of the surface and how to add special features such as creases. Thus, the appearance of the surface patch can be fine-tuned easily. The surface patch is easy to integrate into existing polygonal modeling and rendering environments. We give some examples using Autodesk Maya.
Surface-based Growth Simulation for Opening Flowers BIBA 227-234
  Takashi Ijiri; Mihoshi Yokoo; Saneyuki Kawabata; Takeo Igarashi
We propose a biologically motivated method for creating animations of opening flowers. We simulate the development of petals based on the observation that flower opening is mainly caused by cell expansion. We use an elastic triangular mesh to represent a petal and emulate its growth by developing each triangular region. Our simulation process consists of two steps. The system first grows each triangle independently according to user-specified parameters and derives target rest edge lengths and dihedral angles. The system then updates the global shape to satisfy the rest lengths and dihedral angles as much as possible by means of energy minimization. We repeat these two processes to obtain keyframes of the flower opening animation. Our system can generate an animation in about 11.5 minutes. Applications include the creation of graphics animations, designing 3D plant models, and simulation for aiding biological study. In contrast to existing systems that simulate the development of flattened 2D petals, our system simulates the growth of petals as 3D surfaces. We show the feasibility of our method by creating animations of Asiatic lily and Eustoma grandiflorum.
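The two-step loop above can be sketched in 2D: grow each edge's rest length by a per-edge factor, then minimise the squared deviation of edge lengths from their rests by gradient descent (the function names, the simplified energy without dihedral terms, and the step size are illustrative assumptions):

```python
import math

def grow_and_relax(verts, edges, growth, iters=2000, step=0.05):
    """Step 1: scale each edge's rest length by its growth factor.
    Step 2: minimise sum((|vi - vj| - rest)^2) by gradient descent."""
    rest = {}
    for (i, j), g in zip(edges, growth):
        (xi, yi), (xj, yj) = verts[i], verts[j]
        rest[(i, j)] = g * math.hypot(xi - xj, yi - yj)
    verts = [list(v) for v in verts]
    for _ in range(iters):
        grad = [[0.0, 0.0] for _ in verts]
        for (i, j), r in rest.items():
            dx = verts[i][0] - verts[j][0]
            dy = verts[i][1] - verts[j][1]
            d = math.hypot(dx, dy) or 1e-9
            c = 2.0 * (d - r) / d          # derivative of (d - r)^2 w.r.t. d
            grad[i][0] += c * dx; grad[i][1] += c * dy
            grad[j][0] -= c * dx; grad[j][1] -= c * dy
        for v, g in zip(verts, grad):
            v[0] -= step * g[0]; v[1] -= step * g[1]
    return verts
```

Repeating grow-then-relax and saving the intermediate meshes yields the keyframes of the opening animation; the full method additionally penalises deviation from target dihedral angles so the petal bends in 3D.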

Pointing & Tracking

SurfaceFusion: Unobtrusive Tracking of Everyday Objects in Tangible User Interfaces BIBA 235-242
  Alex Olwal; Andrew D. Wilson
Interactive surfaces and related tangible user interfaces often involve everyday objects that are identified, tracked, and augmented with digital information. Traditional approaches for recognizing these objects typically rely on complex pattern recognition techniques, or the addition of active electronics or fiducials that alter the visual qualities of those objects, making them less practical for real-world use. Radio Frequency Identification (RFID) technology provides an unobtrusive method of sensing the presence and identity of nearby tagged objects, but has no inherent means of determining their position. Computer vision, on the other hand, is an established approach to track objects with a camera. While shapes and movement on an interactive surface can be determined with classic image processing techniques, object recognition tends to be complex, computationally expensive, and sensitive to environmental conditions. We present a set of techniques in which movement and shape information from the computer vision system is fused with RFID events that identify what objects are in the image. By synchronizing these two complementary sensing modalities, we can associate changes in the image with events in the RFID data, in order to recover position, shape and identification of the objects on the surface, while avoiding complex computer vision processes and exotic RFID solutions.
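The fusion idea can be sketched as temporal association: bind an RFID arrival to a newly appeared vision blob when the two events fall within a small time window (the event format and the first-match policy are illustrative assumptions, not the system's actual matching logic):

```python
def fuse_events(vision_events, rfid_events, window=0.5):
    """Associate each new blob (time, blob_id) with an unconsumed RFID
    arrival (time, tag_id) occurring within `window` seconds of it."""
    bindings = {}
    unmatched = list(rfid_events)
    for t_v, blob_id in vision_events:
        for event in unmatched:
            t_r, tag_id = event
            if abs(t_v - t_r) <= window:   # temporal coincidence => same object
                bindings[blob_id] = tag_id
                unmatched.remove(event)    # each tag binds to one blob
                break
    return bindings
```

Once a blob is bound to a tag, the vision system supplies position and shape while the RFID reader supplies identity, so neither modality has to do the other's job.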
Semantic Pointing for Object Picking in Complex 3D Environments BIBA 243-250
  Niklas Elmqvist; Jean-Daniel Fekete
Today's large and high-resolution displays coupled with powerful graphics hardware offer the potential for highly realistic 3D virtual environments, but also cause increased target acquisition difficulty for users interacting with these environments. We present an adaptation of semantic pointing to object picking in 3D environments. Essentially, semantic picking shrinks empty space and expands potential targets on the screen by dynamically adjusting the ratio between movement in visual space and motor space for relative input devices such as the mouse. Our implementation operates in the image-space using a hierarchical representation of the standard stencil buffer to allow for real-time calculation of the closest targets for all positions on the screen. An informal user study indicates that subjects perform more accurate pointing with semantic 3D pointing than without.
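The core mechanism can be sketched as a distance-dependent control-display gain: low gain near targets (expanding them in motor space) and high gain over empty space (shrinking it). The gain values, falloff radius, and linear ramp below are illustrative assumptions, not the paper's parameters:

```python
import math

def semantic_gain(cursor, targets, g_target=0.5, g_empty=2.0, radius=100.0):
    """CD gain for semantic pointing: the cursor moves slowly over
    targets and fast over empty space."""
    d = min(math.hypot(cursor[0] - tx, cursor[1] - ty) for tx, ty in targets)
    t = min(d / radius, 1.0)        # 0 on a target, 1 in open space
    return g_target + (g_empty - g_target) * t

def move_cursor(cursor, motor_delta, targets):
    """Apply one relative input step, scaled by the local gain."""
    g = semantic_gain(cursor, targets)
    return (cursor[0] + g * motor_delta[0], cursor[1] + g * motor_delta[1])
```

The paper's contribution is making the "distance to the closest target" query fast in a 3D scene, via a hierarchical stencil-buffer representation evaluated in image space.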
Analyzing the Kinematics of Bivariate Pointing BIBA 251-258
  Jaime Ruiz; David Tausky; Andrea Bunt; Edward Lank; Richard Mann
Despite the importance of pointing-device movement to efficiency in interfaces, little is known about how target shape impacts speed, acceleration, and other kinematic properties of motion. In this paper, we examine which kinematic characteristics of motion are impacted by amplitude and directional target constraints in Fitts-style pointing tasks. Our results show that instantaneous speed, acceleration, and jerk are most affected by target constraint. Results also show that the effects of target constraint are concentrated in the first 70% of movement distance. We demonstrate that we can discriminate between the two classes of target constraint using machine learning with accuracy greater than chance. Finally, we highlight future work in designing techniques that make use of target constraint to improve pointing efficiency in computer interfaces.
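The kinematic measures can be computed from position samples by repeated finite differencing, a minimal 1D sketch (the study's trajectories are 2D and presumably smoothed before differencing):

```python
def kinematics(positions, dt):
    """Finite-difference speed, acceleration, and jerk from
    equally spaced 1D position samples."""
    diff = lambda xs: [(b - a) / dt for a, b in zip(xs, xs[1:])]
    speed = diff(positions)   # first derivative of position
    accel = diff(speed)       # second derivative
    jerk = diff(accel)        # third derivative
    return speed, accel, jerk
```

For a quadratic trajectory (constant acceleration), jerk is identically zero, which makes these features sensitive to how abruptly a user corrects motion under different target constraints.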

Visualization 3

Cascaded Treemaps: Examining the Visibility and Stability of Structure in Treemaps BIBA 259-266
  Hao Lü; James Fogarty
Treemaps are an important and commonly-used approach to hierarchy visualization, but an important limitation of treemaps is the difficulty of discerning the structure of a hierarchy. This paper presents cascaded treemaps, a new approach to treemap presentation that is based on cascaded rectangles instead of the traditional nested rectangles. Cascading uses less space to present the same containment relationship, and the space savings enable a depth effect and natural padding between siblings in complex hierarchies. In addition, we discuss two general limitations of existing treemap layout algorithms: disparities between node weight and relative node size that are introduced by layout algorithms ignoring the space dedicated to presenting internal nodes, and a lack of stability when generating views of different levels of treemaps as a part of supporting interactive zooming. We finally present a two-stage layout process that addresses both concerns, computing a stable structure for the treemap and then using that structure to consider the presentation of internal nodes when arranging the treemap. All of this work is presented in the context of two large real-world hierarchies, the Java package hierarchy and the eBay auction hierarchy.
Towards A Model Human Cochlea: Sensory substitution for crossmodal audio-tactile displays BIBA 267-274
  Maria Karam; Frank Russo; Carmen Branje; Emily Price; Deborah I. Fels
We present a Model Human Cochlea (MHC): a sensory substitution technique for creating a crossmodal audio-touch display. This research is aimed at designing a chair-based interface to support deaf and hard of hearing users in experiencing musical content associated with film, and seeks to develop this multisensory crossmodal display as a framework for supporting research in enhancing sensory entertainment experiences for universal design. The MHC uses audio speakers as vibrotactile devices placed along the body to facilitate the expression of emotional elements that are associated with music. We present the results of our formative study, which compared the MHC to conventional audio speaker displays for communicating basic emotional information through touch. Results suggest that separating the audio signal onto multiple vibrotactile channels is more effective at expressing emotional content than using the complete audio signal as a single vibrotactile stimulus.
The Cost of Supporting References in Collaborative Augmented Reality BIBA 275-282
  Jeff Chastine; Ying Zhu
For successful collaboration to occur, a fundamental requirement is the ability for participants to refer to artifacts within the shared environment. This task is often straightforward in traditional collaborative desktop applications, yet the spatial properties found in mixed reality environments greatly impact the complexity of generating and interpreting meaningful reference cues. Although awareness is a very active area of research, little focus has been given to the environmental and contextual factors that influence referencing, or to the costs associated with supporting it in mixed reality environments. The work presented here is a compilation of the understanding we have gained through user observation, participant feedback, and system development. We begin by summarizing our findings from several user studies in collaborative augmented reality (AR). To organize the complexity associated with referencing in AR, we enumerate contextual and environmental factors that influence referential awareness, integrating discussion of user preferences and the impact they have on the underlying technological requirements. Finally, we discuss how these factors can impact the design space of collaborative systems and describe the cost associated with supporting references in collaborative AR.