
Proceedings of the 2007 Conference on Graphics Interface

Fullname: Proceedings of the 2007 Conference on Graphics Interface
Editors: Christopher G. Healey; Edward Lank
Location: Montreal, Canada
Dates: 2007-May-28 to 2007-May-30
Publisher: Canadian Information Processing Society
Standard No: ISBN 978-1-56881-337-0; 1-56881-337-6; ACM DL: Table of Contents; hcibib: GI07
Papers: 44
Pages: 352
Links: Conference Series Home Page
  1. Guest talk
  2. Shape
  3. NPR and sketching
  4. Input and interaction
  5. User interfaces and UI design
  6. Real-time and rendering
  7. Collaboration and communication
  8. Images
  9. Visualization
  10. Meshes and compression

Guest talk

Animating dance BIBAFull-Text 1-2
  Tom Calvert
Since the inception of the field, researchers in human figure animation have had an active interchange with those interested in representing and visualizing dance. In 1967, A. Michael Noll at Bell Labs and Merce Cunningham, the father of modern dance, speculated independently about the possibility of creating dancing stick figures on a computer display.

Shape

Twinned meshes for dynamic triangulation of implicit surfaces BIBAFull-Text 3-9
  Antoine Bouthors; Matthieu Nesme
We introduce a new approach to meshing an animated implicit surface for rendering. Our contribution is a method that solves the stability issues of implicit triangulation in the scope of real-time rendering. This method is robust; moreover, it provides interactive, quality rendering of animated or manipulated implicit surfaces.
   This approach is based on a double triangulation of the surface, a mechanical one and a geometric one. In the first triangulation, the vertices are the nodes of a simplified mechanical finite element model. The aim of this model is to uniformly and dynamically sample the surface. It is robust, efficient and prevents the inversion of triangles. The second triangulation is dynamically created from the first one at each frame. It is used for rendering and provides details in regions of high curvature. We demonstrate this technique with skeleton-based and volumetric animated surfaces.
Constrained planar remeshing for architecture BIBAFull-Text 11-18
  Barbara Cutler; Emily Whiting
Material limitations and fabrication costs generally run at odds with the creativity of architectural design, producing a wealth of challenging computational geometry problems. We have developed an algorithm for solving an important class of fabrication constraints: those associated with planar construction materials such as glass or plywood.
   Starting with a complex curved input shape, defined as a NURBS or subdivision surface, we use an iterative clustering method to remesh the surface into planar panels following a cost function that is adjusted by the designer. We solved several challenging connectivity issues to ensure that the topology of the resulting mesh matches that of the input surface.
   The algorithm described in this paper has been implemented and developed in conjunction with an architectural design seminar, and we examined how the participants incorporated this tool into their design process. Their feedback led to key algorithmic and implementation insights as well as many exciting ideas for future exploration. This prototype tool has the potential to impact not only architectural design but also the engineering of general fabrication problems.
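   As a rough illustration of the kind of iterative clustering such a remeshing can build on (a hedged sketch only, not the authors' algorithm: the cost is plain point-to-plane distance rather than the designer-adjustable cost function, and all data below are made up):
      import numpy as np
      def fit_plane(points):
          """Best-fit plane (centroid, unit normal) of 3D points via SVD."""
          c = points.mean(axis=0)
          _, _, vt = np.linalg.svd(points - c)
          return c, vt[-1]                     # least-variance direction = normal
      def planar_clustering(centroids, k=6, iters=20, seed=0):
          """Lloyd-style clustering of face centroids into k near-planar panels;
          cost = |point-to-plane distance| (a stand-in for the paper's cost)."""
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(centroids), k, replace=False)
          proxies = [(centroids[i], np.array([0.0, 0.0, 1.0])) for i in idx]
          for _ in range(iters):
              # Assignment step: each face joins the cheapest proxy plane
              cost = np.stack([np.abs((centroids - c) @ n) for c, n in proxies], axis=1)
              labels = cost.argmin(axis=1)
              # Fitting step: refit each proxy plane to its current cluster
              proxies = [fit_plane(centroids[labels == j]) if np.any(labels == j)
                         else proxies[j] for j in range(k)]
          return labels, proxies
      labels, panels = planar_clustering(np.random.rand(200, 3), k=6)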
Analysis of segmented human body scans BIBAFull-Text 19-26
  Pengcheng Xi; Won-Sook Lee; Chang Shu
Analysis of a dataset of 3D scanned surfaces presents problems because of incompleteness of the surfaces and because of variances in shape, size, and pose. In this paper, a high-resolution generic model is aligned to data in the Civilian American and European Surface Anthropometry Resources (CAESAR) database in order to obtain a consistent parameterization. A Radial Basis Function (RBF) network is built for rough deformation by using landmark information from the generic model, anatomical landmarks provided by the CAESAR dataset, and virtual landmarks created automatically for geometric deformation. Fine mapping then applies a weighted sum of errors over both the surface data and the smoothness of the deformation. Compared with previous methods, our approach achieves robust alignment with higher efficiency. This consistent parameterization also makes Principal Component Analysis (PCA) possible on the whole body as well as on human body segments. Our analysis of segmented bodies displays richer variation than that of the whole body. This analysis indicates that a wider application of human body reconstruction with segments is possible in computer animation.
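   The RBF step can be pictured with a small sketch (illustrative only, not the authors' implementation: the Gaussian kernel, its width, and the toy landmark data are assumptions):
      import numpy as np
      def rbf_deform(vertices, src_landmarks, dst_landmarks, sigma=0.1):
          """Deform 'vertices' so that src_landmarks map approximately onto
          dst_landmarks, using a Gaussian RBF network fitted to the landmark
          displacements (kernel and width are arbitrary choices)."""
          phi = lambda r: np.exp(-(r / sigma) ** 2)
          d = np.linalg.norm(src_landmarks[:, None] - src_landmarks[None, :], axis=-1)
          W = np.linalg.solve(phi(d), dst_landmarks - src_landmarks)   # (L, 3) weights
          dv = np.linalg.norm(vertices[:, None] - src_landmarks[None, :], axis=-1)
          return vertices + phi(dv) @ W
      src = np.random.rand(5, 3); dst = src + 0.05 * np.random.randn(5, 3)
      deformed = rbf_deform(np.random.rand(100, 3), src, dst)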
Improved skeleton extraction and surface generation for sketch-based modeling BIBAFull-Text 27-33
  Florian Levet; Xavier Granier
For the generation of freeform models, sketching interfaces have attracted increasing interest due to their intuitive approach. It is now possible to infer a 3D model directly from a sketched curve. Unfortunately, a limitation of current systems is the poor quality of the skeleton automatically extracted from this silhouette, leading to low-quality meshes for the resulting objects.
   In this paper, we present new solutions that improve surface generation for sketch-based modeling systems. First, we propose a new algorithm that extracts a smoother skeleton than previous approaches. Then, we present a new sampling scheme for the creation of good-quality 3D meshes. Finally, we propose the use of a profile curve composed of disconnected components in order to create models whose genus is greater than 0.
Surface distance maps BIBAFull-Text 35-42
  Avneesh Sud; Naga Govindaraju; Russell Gayle; Erik Andersen; Dinesh Manocha
We present a new parameterized representation called surface distance maps for distance computations on piecewise 2-manifold primitives. Given a set of orientable 2-manifold primitives, the surface distance map represents the (non-zero) signed distance-to-closest-primitive mapping at each point on a 2-manifold. The distance mapping is computed from each primitive to the set of remaining primitives. We present an interactive algorithm for computing the surface distance map of triangulated meshes using graphics hardware. We precompute a surface parameterization and use it to define an affine transformation for each mesh primitive. Our algorithm efficiently computes the distance field by applying this affine transformation to the distance functions of the primitives and evaluating these functions using texture mapping hardware. In practice, our algorithm can compute very high resolution surface distance maps at interactive rates and provides tight error bounds on their accuracy. We use surface distance maps for path planning and proximity query computation among complex models in dynamic environments. Our approach can perform planning and proximity queries in a dynamic environment with hundreds of objects at interactive rates and offers significant speedups over prior algorithms.

NPR and sketching

Calligraphic packing BIBAFull-Text 43-50
  Jie Xu; Craig S. Kaplan
There are many algorithms in non-photorealistic rendering for representing an image as a composition of small objects. In this paper, we focus on the specific case where the objects to be assembled into a composition are letters rather than images or abstract geometric forms. We develop a solution to the "calligraphic packing" problem based on dividing up a target region into pieces and warping a letter into each piece. We define an energy function that chooses a warp that best represents the original letter. We discuss variations in rendering style and show results produced by our system.
A method for cartoon-style rendering of liquid animations BIBAFull-Text 51-55
  Ashley M. Eden; Adam W. Bargteil; Tolga G. Goktekin; Sarah Beth Eisinger; James F. O'Brien
In this paper we present a visually compelling and informative cartoon rendering style for liquid animations. Our style is inspired by animations such as Futurama, The Little Mermaid, and Bambi. We take as input a liquid surface obtained from a three-dimensional physically based liquid simulation system and output animations that evoke a cartoon style and convey liquid movement. Our method is based on four cues that emphasize properties of the liquid's shape and motion. We use bold outlines to emphasize depth discontinuities and patches of constant color to highlight near-silhouettes and areas of thinness, and we optionally place temporally coherent oriented textures on the liquid surface to help convey motion.
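   The bold-outline cue, for instance, reduces to a depth-discontinuity test; a minimal CPU-side sketch (the threshold and the 4-neighbor comparison are assumptions, and the paper's actual cue extraction may differ):
      import numpy as np
      def depth_outlines(depth, threshold=0.05):
          """Mark pixels whose depth differs from a 4-neighbor by more than
          'threshold'; 'depth' is a 2D float array (one depth value per pixel)."""
          mask = np.zeros_like(depth, dtype=bool)
          dx = np.abs(np.diff(depth, axis=1)) > threshold   # right-neighbor jumps
          dy = np.abs(np.diff(depth, axis=0)) > threshold   # bottom-neighbor jumps
          mask[:, :-1] |= dx; mask[:, 1:] |= dx
          mask[:-1, :] |= dy; mask[1:, :] |= dy
          return mask   # True where a bold outline would be drawn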
GPU-based rendering and animation for Chinese painting cartoon BIBAFull-Text 57-61
  Manli Yuan; Xubo Yang; Shuangjiu Xiao; Zheng Ren
This paper presents a real-time rendering system for generating Chinese ink-and-wash cartoons. The objective is to free animators from laboriously designing a traditional Chinese painting appearance. The system comprises a morphing animation framework and a rendering process. The whole rendering process runs on the Graphics Processing Unit (GPU), including interior shading, silhouette extraction, and background shading. Moreover, the morphing framework automatically generates Chinese painting cartoons from a set of surface mesh models. These techniques can be applied to real-time Chinese-style entertainment applications.
Magic canvas: interactive design of a 3-D scene prototype from freehand sketches BIBAFull-Text 63-70
  HyoJong Shin; Takeo Igarashi
Construction of a 3-D scene consisting of multiple objects can be tedious work. Existing 3-D editing tools require the user to choose an appropriate model in a database first and then carefully place it in the scene at a desired position combining various operations such as translation, rotation, and scaling. To simplify the process, we propose a system that takes simple 2D sketches of models in a scene as input for 3D scene construction. The system then automatically identifies corresponding models in a database and puts them in the appropriate location and posture so that their appearance matches the user's input sketches. The system combines a 3-D model search and a 3-D posture estimation to obtain the result. This system allows the user to construct a prototype of a 3-D scene quickly and intuitively.
   We conducted a user study to compare our interface with traditional menu-based UI and verified that our system was useful for constructing a 3-D scene prototype, especially for facilitating the exploration of various alternative designs. We expect our system to be useful as a prototyping tool for 3-D scene construction in various application areas such as interior design, communication, education, and entertainment.
Can smooth view transitions facilitate perceptual constancy in node-link diagrams? BIBAFull-Text 71-78
  Maruthappan Shanmugasundaram; Pourang Irani; Carl Gutwin
Many visualizations use smoothly animated transitions to help the user interact with information structures. These transitions are intended to preserve perceptual constancy during viewpoint transformations. However, animated transitions also have costs -- they increase the transition time, and they can be complicated to implement -- and it is not clear whether the benefits of smooth transitions outweigh the costs. In order to quantify these benefits, we carried out two experiments that explore the effects of smooth transitions. In the first study, subjects were asked to determine whether graph nodes were connected, and navigated the graph either with or without smooth scene transitions. In the second study, participants were asked to identify the overall structure of a tree after navigating the tree through a viewport that either did or did not use smooth transitions for view changes. The results of both experiments show that smooth transitions can have dramatic benefits for user performance -- for example, participants in smooth transition conditions made half the errors of the discrete-movement conditions. In addition, short transitions were found to be as effective as long ones, suggesting that some of the costs of animations can be avoided. These studies give empirical evidence on the benefits of smooth transitions, and provide guidelines about when designers should use them in visualization systems.

Input and interaction

Design as traversal and consequences: an exploration tool for experimental designs BIBAFull-Text 79-86
  Christopher G. Jennings; Arthur E. Kirkpatrick
We present a design space explorer for the space of experimental designs. For many design problems, design decisions are determined by the consequences of the design rather than its elemental parts. To support this need, the explorer is constructed to make the designer aware of design-level options, provide a structured context for design, and provide feedback on the consequences of design decisions. We argue that this approach encourages the designer to consider a wider variety of designs, which will lead to more effective designs overall. In a qualitative study, experiment designers using the explorer were found to consider a wider variety of designs and more designs overall than they reported considering in their normal practice.
A mixing board interface for graphics and visualization applications BIBAFull-Text 87-94
  Matthew Crider; Steven Bergner; Thomas N. Smyth; Torsten Möller; Melanie K. Tory; Arthur E. Kirkpatrick; Daniel Weiskopf
We use a haptically enhanced mixing board with a video projector as an interface to various data visualization tasks. We report results of an expert review with four participants, qualitatively evaluating the board for three different applications: dynamic queries (abstract task), parallel coordinates interface (multi-dimensional combinatorial search), and ExoVis (3D spatial navigation). Our investigation sought to determine the strengths of this physical input given its capability to facilitate bimanual interaction, constraint maintenance, tight coupling of input and output, and other features. Participants generally had little difficulty with the mappings of parameters to sliders. The graspable sliders apparently reduced the mental exertion needed to acquire control, allowing participants to attend more directly to understanding the visualization. Participants often designated specific roles for each hand, but only rarely moved both hands simultaneously.
Eyes on the road, hands on the wheel: thumb-based interaction techniques for input on steering wheels BIBAFull-Text 95-102
  Iván E. González; Jacob O. Wobbrock; Duen Horng Chau; Andrew Faulring; Brad A. Myers
The increasing quantity and complexity of in-vehicle systems creates a demand for user interfaces which are suited to driving. The steering wheel is a common location for the placement of buttons to control navigation, entertainment, and environmental systems, but what about a small touchpad? To investigate this question, we embedded a Synaptics StampPad in a computer game steering wheel and evaluated seven methods for selecting from a list of over 3000 street names. Selection speed was measured while stationary and while driving a simulator. Results show that the EdgeWrite gestural text entry method is about 20% to 50% faster than selection-based text entry or direct list-selection methods. They also show that methods with slower selection speeds generally resulted in faster driving speeds. However, with EdgeWrite, participants were able to maintain their speed and avoid incidents while selecting and driving at the same time. Although an obvious choice for constrained input, on-screen keyboards generally performed quite poorly.
TwoStick: writing with a game controller BIBAFull-Text 103-110
  Thomas Költringer; Poika Isokoski; Thomas Grechenig
We report the design and evaluation of a novel game controller text entry method called TwoStick. The design is based on a review of previous work and several rounds of pilot testing. We compared user performance with TwoStick experimentally to a selection keyboard, the de facto standard of game controller text entry. Eight participants completed 20 fifteen-minute sessions with both text entry methods. In the beginning TwoStick was slower (4.3 wpm, uncorrected error rate = 0.68%) than the selection keyboard (5.6 wpm, 0.85%). During the last session TwoStick was faster (14.9 wpm, 0.86% vs. 12.9 wpm, 0.27%). Qualitative results indicated that TwoStick was more fun and easier to use than the selection keyboard.
Pointer warping in heterogeneous multi-monitor environments BIBAFull-Text 111-117
  Hrvoje Benko; Steven Feiner
Warping the pointer across monitor bezels has previously been demonstrated to be both significantly faster and preferred to the standard mouse behavior when interacting across displays in homogeneous multi-monitor configurations. Complementing this work, we present a user study that compares the performance of four pointer-warping strategies, including a previously untested frame-memory placement strategy, in heterogeneous multi-monitor environments, where displays vary in size, resolution, and orientation. Our results show that a new frame-memory pointer warping strategy significantly improved targeting performance (up to 30% in some cases). In addition, our study showed that, when transitioning across screens, the mismatch between the visual and the device space has a significantly bigger impact on performance than the mismatch in orientation and visual size alone. For mouse operation in a highly heterogeneous multi-monitor environment, all our participants strongly preferred using pointer warping over the regular mouse behavior.

User interfaces and UI design

BlueTable: connecting wireless mobile devices on interactive surfaces using vision-based handshaking BIBAFull-Text 119-125
  Andrew D. Wilson; Raman Sarin
Associating and connecting mobile devices for the wireless transfer of data is often a cumbersome process. We present a technique of associating a mobile device to an interactive surface using a combination of computer vision and Bluetooth technologies. Users establish the connection of a mobile device to the system by simply placing the device on a table surface. When the computer vision process detects a phone-like object on the surface, the system follows a handshaking procedure using Bluetooth and vision techniques to establish that the phone on the surface and the wirelessly connected phone are the same device. The connection is broken simply by removing the device. Furthermore, the vision-based handshaking procedure determines the precise position of the device on the interactive surface, thus permitting a variety of interactive scenarios which rely on the presentation of graphics co-located with the device. As an example, we present a prototype interactive system which allows the exchange of automatically downloaded photos by selecting and dragging photos from one cameraphone device to another.
Jump: a system for interactive, tangible queries of paper BIBAFull-Text 127-134
  Michael Terry; Janet Cheung; Justin Lee; Terry Park; Nigel Williams
This paper introduces Jump, a prototype computer vision-based system that transforms paper-based architectural documents into tangible query interfaces. Specifically, Jump allows a user to obtain additional information related to a given architectural document by framing a portion of the drawing with physical brackets. The framed area appears in a magnified view on a separate display and applies the principle of semantic zooming to determine the appropriate level of detail to show. Filter tokens can be placed on the paper to modify the digital presentation to include information not on the original drawing itself, such as electrical, mechanical, and structural information related to the given space. These filter tokens serve as tangible sliders in that their relative location on the paper controls the degree to which their information is blended with the original document. To address the issue of recognition errors, Jump introduces the notion of a reflection window, or an inset window that serves to reproduce Jump's current interpretation of the visual scene. The system's overall design is informed by a set of in situ studies of architectural technologists and formative evaluations with the same group.
Animation in a peripheral display: distraction, appeal, and information conveyance in varying display configurations BIBAFull-Text 135-142
  Christopher Plaue; John Stasko
Peripheral displays provide secondary awareness of news and information to people. When such displays are static, the amount of information that can be presented is limited and the display may become boring or routine over time. Adding animation to peripheral displays can allow them to show more information and can potentially enhance visual interest and appeal, but it may also make the display very distracting. Is it possible to employ animation for visual benefit without increasing distraction? We have created a peripheral display system called BlueGoo that visualizes RSS news feeds as animated photographic collages. We present an empirical study in which participants did not find the system to be distracting, and many found it to be appealing. The study also explored how different display sizes and positions affect information conveyance and distraction. Animations on an angled second monitor appeared to be more distracting than three other configurations.
Should I call now? understanding what context is considered when deciding whether to initiate remote communication via mobile devices BIBAFull-Text 143-150
  Edward S. De Guzman; Moushumi Sharmin; Brian P. Bailey
Requests for communication via mobile devices can be disruptive to the current task or social situation. To reduce the frequency of disruptive requests, one promising approach is to provide callers with cues of a receiver's context through an awareness display, allowing informed decisions of when to call. Existing displays typically provide cues based on what can be readily sensed, which may not match what is needed during the call decision process. In this paper, we report results of a four week diary study of mobile phone usage, where users recorded what context information they considered when making a call, and what information they wished others had considered when receiving a call. Our results were distilled into lessons that can be used to improve the design of awareness displays for mobile devices, e.g., show frequency of a receiver's recent communication and distance from a receiver to her phone. We discuss technologies that can enable cues indicated in these lessons to be realized within awareness displays, as well as discuss limitations of such displays and issues of privacy.
Location-dependent information appliances for the home BIBAFull-Text 151-158
  Kathryn Elliot; Mark Watson; Carman Neustaedter; Saul Greenberg
Ethnographic studies of the home have revealed the fundamental roles that physical locations and context play in how household members understand and manage conventional information. Yet we also know that digital information is becoming increasingly important to households. The problem is that this digital information is almost always tied to traditional computer displays, which inhibits its incorporation into household routines. Our solution, location-dependent information appliances, exploits both home location and context (as articulated in ethnographic studies) to enhance the role of ambient displays in the home setting; these displays provide home occupants with both background awareness of an information source and foreground methods to gain further details if desired. The novel aspect is that home occupants assign particular information to locations within a home in a way that makes sense to them. As a device is moved to a particular home location, information is automatically mapped to that device along with hints on how it should be displayed.

Real-time and rendering

Fitted virtual shadow maps BIBAFull-Text 159-168
  Markus Giegl; Michael Wimmer
Insufficient shadow map resolution and the resulting undersampling artifacts, perspective and projection aliasing, have long been a fundamental problem of shadowing scenes with shadow mapping.
   We present a new smart, real-time shadow mapping algorithm that virtually increases the resolution of the shadow map beyond the GPU hardware limit where needed. We first sample the scene from the eye-point on the GPU to get the needed shadow map resolution in different parts of the scene. We then process the resulting data on the CPU and finally arrive at a hierarchical grid structure, which we traverse in kd-tree fashion, shadowing the scene with shadow map tiles where needed.
   Shadow quality can be traded for speed through an intuitive parameter, with a homogeneous quality reduction in the whole scene, down to normal shadow mapping. This allows the algorithm to be used on a wide range of hardware.
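   A loose sketch of the refinement idea (hedged: the paper builds a hierarchical grid traversed in kd-tree fashion, whereas this toy version simply splits square light-space tiles quadtree-style, and required_resolution() is a hypothetical stand-in for the eye-space analysis):
      MAX_HW = 4096                        # assumed hardware shadow-map size limit
      def split_into_four(tile):
          x, y, s = tile; h = s / 2.0
          return [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]
      def refine(tile, required_resolution, depth=0, max_depth=6):
          """Split a light-space tile (x, y, size) until the shadow-map resolution
          it needs fits the hardware limit; each leaf gets its own shadow map."""
          if required_resolution(tile) <= MAX_HW or depth == max_depth:
              return [tile]
          leaves = []
          for child in split_into_four(tile):
              leaves += refine(child, required_resolution, depth + 1, max_depth)
          return leaves
      # Toy demand function: resolution need falls off away from the viewer at (0, 0)
      tiles = refine((0.0, 0.0, 1.0), lambda t: 16384 / (1.0 + 10 * (t[0] + t[1])))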
Wavelet encoding of BRDFs for real-time rendering BIBAFull-Text 169-176
  Luc Claustres; Loïc Barthe; Mathias Paulin
Acquired data often provides the best knowledge of a material's bidirectional reflectance distribution function (BRDF). Its integration into most real-time rendering systems requires both data compression and the implementation of the decompression and filtering stages on contemporary graphics processing units (GPUs). This paper improves the quality of real-time per-pixel lighting on GPUs using a wavelet decomposition of acquired BRDFs. Three-dimensional texture mapping with indexing allows us to efficiently compress the BRDF data by exploiting much of the coherency between hemispherical data. We apply built-in hardware filtering and pixel shader flexibility to perform filtering in the full 4D BRDF domain. Anti-aliasing of specular highlights is performed via a progressive level-of-detail technique built upon the multiresolution of the wavelet encoding. This technique increases rendering performance on distant surfaces while maintaining accurate appearance of close ones.
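   The compress-by-truncation idea can be illustrated in miniature (not the paper's encoder: a plain 1D Haar transform over a toy BRDF slice, with an arbitrary keep-ratio):
      import numpy as np
      def haar_1d(x):
          """Full Haar decomposition of a length-2^k signal."""
          coeffs, approx = [], x.astype(float)
          while len(approx) > 1:
              even, odd = approx[0::2], approx[1::2]
              coeffs.append((even - odd) / np.sqrt(2))     # detail coefficients
              approx = (even + odd) / np.sqrt(2)           # running averages
          return approx, coeffs[::-1]                      # coarsest level first
      def haar_1d_inverse(approx, coeffs):
          x = approx.astype(float)
          for d in coeffs:
              out = np.empty(2 * len(x))
              out[0::2], out[1::2] = (x + d) / np.sqrt(2), (x - d) / np.sqrt(2)
              x = out
          return x
      def compress(signal, keep=0.05):
          """Zero all but the largest 'keep' fraction of detail coefficients."""
          approx, coeffs = haar_1d(signal)
          cutoff = np.quantile(np.abs(np.concatenate(coeffs)), 1.0 - keep)
          return haar_1d_inverse(approx, [np.where(np.abs(d) >= cutoff, d, 0.0) for d in coeffs])
      brdf_slice = np.abs(np.sin(np.linspace(0, 8, 1024))) ** 10   # toy reflectance samples
      max_err = np.abs(compress(brdf_slice) - brdf_slice).max()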
Packet-based whitted and distribution ray tracing BIBAFull-Text 177-184
  Solomon Boulos; Dave Edwards; J. Dylan Lacewell; Joe Kniss; Jan Kautz; Peter Shirley; Ingo Wald
Much progress has been made toward interactive ray tracing, but most research has focused specifically on ray casting. A common approach is to use "packets" of rays to amortize cost across sets of rays. Whether "packets" can be used to reduce the cost of reflection and refraction rays is unclear. The issue is complicated since such rays do not share common origins and often have less directional coherence than viewing and shadow rays. Since the primary advantage of ray tracing over rasterization is the computation of global effects, such as accurate reflection and refraction, this lack of knowledge should be corrected. We are also interested in exploring whether distribution ray tracing, due to its stochastic properties, further erodes the effectiveness of techniques used to accelerate ray casting. This paper addresses the question of whether packet-based ray tracing algorithms can be effectively used for more than visibility computation. We show that by choosing an appropriate data structure and a suitable packet assembly algorithm we can extend the idea of "packets" from ray casting to Whitted-style and distribution ray tracing, while maintaining efficiency.
Interactive refractions with total internal reflection BIBAFull-Text 185-190
  Scott T. Davis; Chris Wyman
A requirement for rendering realistic images interactively is efficiently simulating material properties. Recent techniques have improved the quality for interactively rendering dielectric materials, but have mostly neglected a phenomenon associated with refraction, namely, total internal reflection. We present an algorithm to approximate total internal reflection on commodity graphics hardware using a ray-depth map intersection technique that is interactive and requires no precomputation. Our results compare favorably with ray traced images and improve upon approaches that avoid total internal reflection.
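   The total-internal-reflection test itself is just Snell's law bookkeeping; a minimal sketch (independent of the paper's ray-depth-map intersection scheme):
      import numpy as np
      def refract_or_reflect(incident, normal, eta):
          """Refract a unit direction through a surface whose unit normal faces the
          ray; fall back to mirror reflection on total internal reflection.
          eta = n_incident / n_transmitted."""
          cos_i = -np.dot(incident, normal)
          k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
          if k < 0.0:                                   # total internal reflection
              return incident + 2.0 * cos_i * normal
          return eta * incident + (eta * cos_i - np.sqrt(k)) * normal
      # Glass-to-air at 50 degrees: beyond the ~41.8 degree critical angle, so TIR
      d = np.array([np.sin(np.radians(50)), -np.cos(np.radians(50)), 0.0])
      out_dir = refract_or_reflect(d, np.array([0.0, 1.0, 0.0]), 1.5 / 1.0)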

Collaboration and communication

The effects of interaction technique on coordination in tabletop groupware BIBAFull-Text 191-198
  Miguel A. Nacenta; David Pinelle; Dane Stuckel; Carl Gutwin
The interaction techniques that are used in tabletop groupware systems (such as pick-and-drop or pantograph) can affect the way that people collaborate. However, little is known about these effects, making it difficult for designers to choose appropriate techniques when building tabletop groupware. We carried out an exploratory study to determine how several different types of interaction techniques (pantograph, telepointers, radar views, drag-and-drop, and laser beam) affected coordination and awareness in two tabletop tasks (a game and a storyboarding activity). We found that the choice of interaction technique significantly affected coordination measures, performance measures, and preference -- but that the effects were different for the two different tasks. Our study shows that the choice of tabletop interaction technique does indeed matter, and provides insight into how tabletop systems can better support group work.
A digital family calendar in the home: lessons from field trials of LINC BIBAFull-Text 199-206
  Carman Neustaedter; A. J. Bernheim Brush; Saul Greenberg
Digital family calendars have the potential to help families coordinate, yet they must be designed to easily fit within existing routines or they will simply not be used. To understand the critical factors affecting digital family calendar design, we extended LINC, an inkable family calendar, to include ubiquitous access, and then conducted a month-long field study with four families. Adoption and use of LINC during the study demonstrated that LINC successfully supported the families' existing calendaring routines without disrupting existing successful social practices. Families also valued the additional features enabled by LINC. For example, several primary schedulers felt that ubiquitous access positively increased involvement by additional family members in the calendaring routine. The field trials also revealed some unexpected findings, including the importance of mobility -- both within and outside the home -- for the Tablet PC running LINC.
Understanding the design space of referencing in collaborative augmented reality environments BIBAFull-Text 207-214
  Jeffrey W. Chastine; Kristine Nagel; Ying Zhu; Luca Yearsovich
For collaborative environments to be successful, it is critical that participants have the ability to generate effective references. Given the heterogeneity of the objects and the myriad of possible scenarios for collaborative augmented reality environments, generating meaningful references within them can be difficult. Participants in co-located physical spaces benefit from non-verbal communication, such as eye gaze, pointing and body movement; however, when geographically separated, this form of communication must be synthesized using computer-mediated techniques. We have conducted an exploratory study using a collaborative building task of constructing both physical and virtual models to better understand inter-referential awareness -- or the ability for one participant to refer to a set of objects, and for that reference to be understood. Our contributions are not necessarily in presenting novel techniques, but in narrowing the design space for referencing in collaborative augmented reality. This study suggests collaborative reference preferences are heavily dependent on the context of the workspace.
PrivateBits: managing visual privacy in web browsers BIBAFull-Text 215-223
  Kirstie Hawkey; Kori M. Inkpen
Privacy can be an issue during collaboration around a personal display when previous browsing activities become visible within web browser features (e.g., AutoComplete). Users currently lack methods to present only appropriate traces of prior activity in these features. In this paper we explore a semi-automatic approach to privacy management that allows users to classify traces of browsing activity and filter them appropriately when their screen is visible to others. We developed PrivateBits, a prototype web browser that instantiates previously proposed general design guidelines for privacy management systems as well as those specific to web browser visual privacy. A preliminary evaluation found this approach to be flexible enough to meet participants' varying privacy concerns, privacy management strategies, and viewing contexts. However, the results also emphasized the need for additional security features to increase trust in the system and raised questions about how to best manage the tradeoff between ease of use and system concealment.
Progressive multiples for communication-minded visualization BIBAFull-Text 225-232
  Doantam Phan; Andreas Paepcke; Terry Winograd
This paper describes a communication-minded visualization called progressive multiples that supports both the forensic analysis and presentation of multidimensional event data. We combine ideas from progressive disclosure, which reveals data to the user on demand, and small multiples [21], which allows users to compare many images at once. Sets of events are visualized as timelines. Events are placed in temporal order on the x-axis, and a scalar dimension of the data is mapped to the y-axis. To support forensic analysis, users can pivot from an event in an existing timeline to create a new timeline of related events. The timelines serve as an exploration history, which has two benefits. First, this exploration history allows users to backtrack and explore multiple paths. Second, once a user has concluded an analysis, these timelines serve as the raw visual material for composing a story about the analysis. A narrative that conveys the analytical result can be created for a third party by copying and reordering timelines from the history. Our work is motivated by working with network security administrators and researchers in political communication. We describe a prototype that we are deploying with administrators and the results of a user study where we applied our technique to the visualization of a simulated epidemic.

Images

Robust pixel classification for 3D modeling with structured light BIBAFull-Text 233-240
  Yi Xu; Daniel G. Aliaga
Modeling 3D objects and scenes is an important part of computer graphics. One approach to modeling is projecting binary patterns onto the scene in order to obtain correspondences and reconstruct a densely sampled 3D model. In such structured light systems, determining whether a pixel is directly illuminated by the projector is essential to decoding the patterns. In this paper, we introduce a robust, efficient, and easy to implement pixel classification algorithm for this purpose. Our method correctly establishes the lower and upper bounds of the possible intensity values of an illuminated pixel and of a non-illuminated pixel. Based on the two intervals, our method classifies a pixel by determining whether its intensity is within one interval and not in the other. Experiments show that our method improves both the quantity of decoded pixels and the quality of the final reconstruction, producing a dense set of 3D points even for complex scenes with indirect lighting effects. Furthermore, our method does not require newly designed patterns; therefore, it can be easily applied to previously captured data.
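   A simplified version of the interval test (hedged: it assumes a fully-lit and a fully-dark reference image are available per pixel, which is only a stand-in for how the paper establishes its bounds):
      import numpy as np
      def classify(intensity, lit_ref, dark_ref, margin=0.1):
          """Label each pixel of a captured pattern image as illuminated (1),
          not illuminated (0), or uncertain (-1), by testing whether its intensity
          falls inside exactly one of the two per-pixel intervals."""
          span = lit_ref - dark_ref
          lit_lo = dark_ref + (1.0 - margin) * span    # lower bound if illuminated
          dark_hi = dark_ref + margin * span           # upper bound if in shadow
          out = np.full(intensity.shape, -1, dtype=int)
          out[(intensity >= lit_lo) & (intensity > dark_hi)] = 1
          out[(intensity <= dark_hi) & (intensity < lit_lo)] = 0
          return out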
Real-time backward disparity-based rendering for dynamic scenes using programmable graphics hardware BIBAFull-Text 241-248
  Minglun Gong; Jason M. Selzer; Cheng Lei; Yee-Hong Yang
This paper presents a backward disparity-based rendering algorithm, which runs at real-time speed on programmable graphics hardware. The algorithm requires only a handful of image samples of the scene and estimated noisy disparity maps, whereas most existing techniques need either dense samples or accurate depth information. To color a given pixel in the novel view, a backward searching process is conducted to find the corresponding pixels from the closest four reference images. The use of a backward searching process makes the algorithm more robust to errors in estimated disparity maps than existing forward warping-based approaches. In addition, since the computations for different pixels are independent, they can be performed in parallel on the Graphics Processing Units of modern graphics hardware. Experimental results demonstrate that our algorithm can synthesize accurate novel views for dynamic real scenes at a high frame rate.
Optimized tile-based texture synthesis BIBAFull-Text 249-256
  Weiming Dong; Ning Zhou; Jean-Claude Paul
One significant problem in tile-based texture synthesis is the presence of conspicuous seams in the tiles. The reason is that the sample patches employed as primary patterns of the tile set may not be well stitched if carelessly picked. In this paper, we introduce an optimized approach that can stably generate an ω-tile set of high pattern diversity and high quality. Firstly, an extendable rule is introduced to increase the number of sample patches so as to vary the patterns in an ω-tile set. Secondly, in contrast to other concurrent techniques that randomly choose sample patches for tile construction, our technique uses a Genetic Algorithm to select the feasible patches from the input example. This operation ensures the quality of the whole tile set. Experimental results verify the high quality and efficiency of the proposed algorithm.
Improved image quilting BIBAFull-Text 257-264
  Jeremy Long; David Mould
In this paper, we present an improvement to the minimum error boundary cut, a method of shaping texture patches for non-parametric texture-synthesis-from-example algorithms such as Efros and Freeman's Image Quilting [4]. Our method uses an alternate distance metric for Dijkstra's algorithm [3], and as a result we are able to prevent the path from taking shortcuts through high-cost areas, as can sometimes be seen in traditional image quilting. Furthermore, our method is able to reduce both the maximum error in the resulting texture and the visibility of the remaining defects by spreading them over a longer path. Post-process methods such as pixel re-synthesis [9] can easily be modified and applied to our minimum boundary cut to increase the quality of the results.
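   For context, the baseline minimum error boundary cut being improved here can be computed by dynamic programming over the overlap error surface (this sketch shows that standard formulation, not the paper's modified Dijkstra metric):
      import numpy as np
      def min_error_boundary_cut(overlap_a, overlap_b):
          """Column index, per row, of a top-to-bottom seam through the overlap
          region that minimizes accumulated squared color difference."""
          err = (overlap_a.astype(float) - overlap_b.astype(float)) ** 2
          if err.ndim == 3:
              err = err.sum(axis=2)                     # sum over color channels
          cost = err.copy()
          h, w = cost.shape
          for i in range(1, h):                         # accumulate best predecessor
              for j in range(w):
                  lo, hi = max(j - 1, 0), min(j + 2, w)
                  cost[i, j] += cost[i - 1, lo:hi].min()
          seam = [int(cost[-1].argmin())]               # backtrack the cheapest path
          for i in range(h - 2, -1, -1):
              lo, hi = max(seam[-1] - 1, 0), min(seam[-1] + 2, w)
              seam.append(lo + int(cost[i, lo:hi].argmin()))
          return seam[::-1]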
On visual quality of optimal 3D sampling and reconstruction BIBAFull-Text 265-272
  Tai Meng; Benjamin Smith; Alireza Entezari; Arthur E. Kirkpatrick; Daniel Weiskopf; Leila Kalantari; Torsten Möller
This paper presents a user study of the visual quality of an imaging pipeline employing the optimal body-centered cubic (BCC) sampling lattice. We provide perceptual evidence supporting the theoretical expectation that sampling and reconstruction on the BCC lattice offer superior imaging quality over the traditionally popular Cartesian cubic (CC) sampling lattice. We asked 12 participants to choose the better of two images: one rendered from data sampled on the CC lattice and one rendered from data sampled on the BCC lattice. We used both synthetic and CT volumetric data, and confirmed that the theoretical advantages of BCC sampling carry over to the perceived quality of rendered images. Using 25% to 35% fewer samples, BCC-sampled data result in images that exhibit visual quality comparable to their CC counterparts.
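   For reference, a BCC lattice is simply two interleaved Cartesian grids, the second offset by half the grid spacing along all three axes; a generation sketch:
      import numpy as np
      def bcc_samples(n, spacing=1.0):
          """Sample positions of a body-centered cubic lattice over an n^3 block:
          a Cartesian grid plus a copy shifted by half the spacing."""
          g = np.arange(n) * spacing
          corner = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
          return np.vstack([corner, corner + 0.5 * spacing])
      points = bcc_samples(8)          # 2 * 8^3 = 1024 sample positions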

Visualization

Feature peeling BIBAFull-Text 273-280
  Muhammad Muddassir Malik; Torsten Möller; M. Eduard Gröller
We present a novel rendering algorithm that analyses the ray profiles along the line of sight. The profiles are subdivided according to encountered peaks and valleys at so-called transition points. The sensitivity of these transition points is calibrated via two thresholds. The slope threshold is based on the magnitude of a peak following a valley, while the peeling threshold measures the depth of the transition point relative to the neighboring rays. This technique separates the dataset into a number of feature layers. The user can scroll through the layers, inspecting various features from the current view position. While our technique was inspired by the opacity peeling approach, we demonstrate that we can reveal detectable features even in the third and fourth layers of both CT and MRI datasets.
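   The per-ray analysis can be pictured with a 1D profile scan (a hedged sketch that applies only the slope threshold; the neighbor-relative peeling threshold is omitted and the threshold value is arbitrary):
      import numpy as np
      def transition_points(profile, slope_threshold=0.3):
          """Indices of valleys along one ray profile that are followed by a peak
          at least 'slope_threshold' higher -- candidate feature-layer boundaries."""
          transitions, valley, armed = [], 0, True
          for i in range(1, len(profile)):
              if profile[i] < profile[valley]:
                  valley, armed = i, True               # deeper valley re-arms detection
              elif armed and profile[i] - profile[valley] >= slope_threshold:
                  transitions.append(valley)            # rise is big enough: new layer
                  valley, armed = i, False              # wait for the next descent
          return transitions
      # Toy ray profile with two 'features' separated by a dip
      ray = np.concatenate([np.linspace(0, 1, 20), np.linspace(1, 0.2, 20),
                            np.linspace(0.2, 0.9, 20)])
      layer_starts = transition_points(ray)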
Visualization and exploration of time-varying medical image data sets BIBAFull-Text 281-288
  Zhe Fang; Torsten Möller; Ghassan Hamarneh; Anna Celler
In this work, we propose and compare several methods for the visualization and exploration of time-varying volumetric medical images based on the temporal characteristics of the data. The principal idea is to consider a time-varying data set as a 3D array where each voxel contains a time-activity curve (TAC). We define and appraise three different TAC similarity measures. Based on these measures we introduce three methods to analyze and visualize time-varying data. The first method relates the whole data set to one template TAC and creates a 1D histogram. The second method extends the 1D histogram into a 2D histogram by taking the Euclidean distance between voxels into account. The third method does not rely on a template TAC but rather creates a 2D scatter plot of all TAC data points via multi-dimensional scaling. These methods allow the user to specify transfer functions on the 1D and 2D histograms and on the scatter plot, respectively. We validate these methods on synthetic dynamic SPECT and PET data sets and a dynamic planar Gamma camera image of a patient. These techniques are designed to offer researchers and health care professionals a new tool to study time-varying medical imaging data sets.
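   The first method reduces to a per-voxel curve distance plus a histogram; assuming plain Euclidean distance as the TAC similarity measure (the paper defines and compares three), a sketch:
      import numpy as np
      def tac_histogram(volume_tac, template_tac, bins=64):
          """volume_tac has shape (X, Y, Z, T): one time-activity curve per voxel.
          Bin each voxel's Euclidean distance to the template TAC into a 1D
          histogram, on which a transfer function could then be specified."""
          dist = np.sqrt(((volume_tac - template_tac) ** 2).sum(axis=-1))
          hist, edges = np.histogram(dist, bins=bins)
          return dist, hist, edges
      vol = np.random.rand(16, 16, 16, 20)                 # toy dynamic volume
      template = vol.reshape(-1, 20).mean(axis=0)          # toy template TAC
      dist, hist, edges = tac_histogram(vol, template)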
Point-based stream surfaces and path surfaces BIBAFull-Text 289-296
  Tobias Schafhitzel; Eduardo Tejada; Daniel Weiskopf; Thomas Ertl
We introduce a point-based algorithm for computing and rendering stream surfaces and path surfaces of a 3D flow. The points are generated by particle tracing, and an even distribution of those particles on the surfaces is achieved by selective particle removal and creation. Texture-based surface flow visualization is added to show inner flow structure on those surfaces. We demonstrate that our visualization method is designed for steady and unsteady flow alike: both the path surface component and the texture-based flow representation are capable of processing time-dependent data. Finally, we show that our algorithms lend themselves to an efficient GPU implementation that allows the user to interactively visualize and explore stream surfaces and path surfaces, even when seed curves are modified and even for time-dependent vector fields.
Isochords: visualizing structure in music BIBAFull-Text 297-304
  Tony Bergstrom; Karrie Karahalios; John C. Hart
Isochords is a visualization of music that aids in the classification of musical structure. The Isochords visualization highlights the consonant intervals between notes and common chords in music. It conveys information about interval quality, chord quality, and the chord progression synchronously during playback of digital music. Isochords offers listeners a means to grasp the underlying structure of music that, without extensive training, would otherwise remain unobserved or unnoticed. In this paper we present the theory of the Isochords structure, the visualization, and comments from novice and experienced users.

Meshes and compression

A GPU based interactive modeling approach to designing fine level features BIBAFull-Text 305-311
  Xin Huang; Sheng Li; Guoping Wang
In this paper we propose a GPU-based interactive geometric modeling approach to designing fine-level features on subdivision surfaces. Displacement mapping is a technique for adding fine geometric detail to surfaces by using a two-dimensional height map to produce photo-realistic surfaces. Because displacement maps are space-inefficient and time-consuming to render, the technique has generally been limited to offline cinematic content-creation packages. We propose a new approach to designing fine-level features on subdivision surfaces via displacement mapping interactively on the latest GPUs. Our method reduces the bandwidth of the graphics channel by generating complex geometric detail on the GPU, without feeding a large number of vertices over the AGP or PCI-E bus. Moreover, we introduce feature-modification tools to flexibly control and adjust the created features. Designers can preview the features at the rendering stage, saving the time needed to generate satisfactory features on surfaces. The proposed approach is efficient and robust, and can be applied in many interactive graphics applications such as computer gaming, geometric modeling, and computer animation.
Adapting wavelet compression to human motion capture clips BIBAFull-Text 313-318
  Philippe Beaudoin; Pierre Poulin; Michiel van de Panne
Motion capture data is an effective way of synthesizing human motion for many interactive applications, including games and simulations. A compact, easy-to-decode representation is needed for the motion data in order to support the real-time motion of a large number of characters with minimal memory and minimal computational overheads. We present a wavelet-based compression technique that is specially adapted to the nature of joint angle data. In particular, we define wavelet coefficient selection as a discrete optimization problem within a tractable search space adapted to the nature of the data. We further extend this technique to take into account visual artifacts such as footskate. The proposed techniques are compared to standard truncated wavelet compression and principal component analysis based compression. The fast decompression times and our focus on short, recomposable animation clips make the proposed techniques a realistic choice for many interactive applications.
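   The selection step can be approximated greedily: given wavelet coefficients already computed for every joint-angle channel, keep only the largest-magnitude ones under a global budget (a simplification of the paper's discrete optimization; the data layout is assumed):
      import numpy as np
      def select_coefficients(channel_coeffs, budget):
          """channel_coeffs: list of 1D arrays, one per joint-angle channel.
          Zero everything except the 'budget' largest-magnitude coefficients
          taken across all channels together."""
          flat = np.abs(np.concatenate(channel_coeffs))
          cutoff = np.sort(flat)[-budget] if budget < flat.size else 0.0
          return [np.where(np.abs(c) >= cutoff, c, 0.0) for c in channel_coeffs]
      coeffs = [np.random.randn(256) for _ in range(50)]          # toy channels
      kept = select_coefficients(coeffs, budget=int(0.05 * 50 * 256))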
Stretch-based tetrahedral mesh manipulation BIBAFull-Text 319-325
  Wenhao Song; Ligang Liu
We present a novel least scaling distortion metric to measure the deformation distortion of tetrahedral meshes. The stretch-like metric combines the Jacobian matrix norm and the tetrahedron volume, and has the properties of good shape preservation and rotation invariance. Based on our metric, we propose a uniform non-linear optimization solution to a variety of tetrahedral mesh manipulation applications including shape deformation, interpolation, deformation transfer, and deformation learning. Our approach can produce volume-preserving and flip-free tetrahedral mesh results, performing much better than previous tetrahedral manipulation approaches. We also demonstrate an efficient and practical application using a free-form deformation technique. The object is embedded in a rough control tetrahedral mesh and deformed by editing the tetrahedral mesh with various constraints. Each vertex of the object can be recovered from its barycentric coordinates with respect to its embedding tetrahedron of the control tetrahedral mesh.
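   The reconstruction step at the end is plain barycentric interpolation within the embedding tetrahedron; a sketch with made-up data:
      import numpy as np
      def barycentric_in_tet(p, tet):
          """Barycentric coordinates of point p w.r.t. a (4, 3) tetrahedron."""
          a, b, c, d = tet
          w = np.linalg.solve(np.column_stack([b - a, c - a, d - a]), p - a)
          return np.array([1.0 - w.sum(), w[0], w[1], w[2]])
      tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
      bary = barycentric_in_tet(np.array([0.2, 0.3, 0.1]), tet)
      deformed_tet = tet + np.array([0.5, 0.0, 0.0])       # edit the control mesh
      p_deformed = bary @ deformed_tet                     # vertex follows the edit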
Spectral graph-theoretic approach to 3D mesh watermarking BIBAFull-Text 327-334
  Emad E. Abdallah; A. Ben Hamza; Prabir Bhattacharya
We propose a robust and imperceptible spectral watermarking method for high rate embedding of a watermark into 3D polygonal meshes. Our approach consists of four main steps: (1) the mesh is partitioned into smaller sub-meshes, and the watermark embedding and extraction algorithms are then applied to each sub-mesh, (2) mesh Laplacian spectral compression is applied to the sub-meshes, (3) the watermark data is distributed over the spectral coefficients of the compressed sub-meshes, (4) the modified spectral coefficients, together with some other basis functions, are used to obtain the uncompressed watermarked 3D mesh. The main attractive features of this approach are simplicity, flexibility in data embedding capacity, and fast implementation. Extensive experimental results show the improved performance of the proposed method, and also its robustness against the most common attacks including geometric transformations, adaptive random noise, mesh smoothing, mesh cropping, and combinations of these attacks.
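   Step (3) can be illustrated on a tiny sub-mesh: project the vertex coordinates onto the eigenvectors of the graph Laplacian, perturb low-frequency coefficients by the watermark bits, and reconstruct (the additive embedding rule and strength below are placeholder choices, not the paper's):
      import numpy as np
      def graph_laplacian(n, edges):
          """Combinatorial Laplacian L = D - A of the mesh vertex graph."""
          A = np.zeros((n, n))
          for i, j in edges:
              A[i, j] = A[j, i] = 1.0
          return np.diag(A.sum(axis=1)) - A
      def embed_watermark(vertices, edges, bits, strength=0.01):
          """Additively embed +/-1 bits into low-frequency spectral coefficients
          of the x-coordinate (illustrative rule only)."""
          _, evecs = np.linalg.eigh(graph_laplacian(len(vertices), edges))
          coeffs = evecs.T @ vertices                      # spectral coefficients
          w = 2.0 * np.asarray(bits, float) - 1.0          # {0,1} -> {-1,+1}
          coeffs[1:1 + len(w), 0] += strength * w          # skip the DC coefficient
          return evecs @ coeffs                            # watermarked positions
      V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
      E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
      V_marked = embed_watermark(V, E, bits=[1, 0, 1])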
Optimized subdivisions for preprocessed visibility BIBAFull-Text 335-342
  Oliver Mattausch; Jirí Bittner; Peter Wonka; Michael Wimmer
This paper describes a new tool for preprocessed visibility. It combines view-space and object-space partitioning in order to control the render cost and memory cost of the visibility description generated by a visibility solver. The presented method progressively refines the view-space and object-space subdivisions while minimizing the associated render and memory costs. In contrast to previous techniques, both subdivisions are driven by actual visibility information. We show that treating view space and object space together provides a powerful method for controlling the efficiency of the resulting visibility data structures.