3D Mesh Compression Using Fixed Spectral Bases (pp. 1-8)
Zachi Karni; Craig Gotsman
We show how to use fixed bases for efficient spectral compression of 3D meshes. In contrast with compression using variable bases, this permits efficient decoding of the mesh. The coding procedure involves efficient mesh augmentation and generation of a neighborhood-preserving mapping between the vertices of a 3D mesh with arbitrary connectivity and those of a 6-regular mesh.
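The spectral machinery behind this and the following paper can be illustrated with a short sketch. The code below is a minimal illustration of Laplacian-based spectral compression, assuming a small mesh given as a vertex array and an adjacency list; it is not the authors' fixed-basis scheme, which instead maps the mesh onto 6-regular connectivity so that the basis can be precomputed.

```python
import numpy as np

def graph_laplacian(adjacency):
    """Combinatorial Laplacian L = D - A from an adjacency list."""
    n = len(adjacency)
    L = np.zeros((n, n))
    for i, nbrs in enumerate(adjacency):
        L[i, i] = len(nbrs)
        for j in nbrs:
            L[i, j] = -1.0
    return L

def spectral_compress(vertices, adjacency, k):
    """Keep only the k lowest-frequency spectral coefficients of the geometry."""
    L = graph_laplacian(adjacency)
    _, basis = np.linalg.eigh(L)      # eigenvectors, sorted by eigenvalue
    coeffs = basis.T @ vertices       # project x, y, z coordinates onto the basis
    coeffs[k:] = 0.0                  # drop high-frequency detail
    return basis @ coeffs             # reconstructed (smoothed) geometry
```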
Watermarking 3D Polygonal Meshes in the Mesh Spectral Domain (pp. 9-18)
Ryutarou Ohbuchi; Shigeo Takahashi; Takahiko Miyazawa; Akio Mukaiyama
Digital watermarking embeds a structure called a watermark into target data such as images and 3D polygonal models. The watermark can be used, for example, to enforce copyright and to detect tampering. This paper presents a new robust watermarking method that embeds a watermark into a 3D polygonal mesh in the mesh's spectral domain. The algorithm computes the spectra of the mesh by eigenvalue decomposition of a Laplacian matrix derived only from the connectivity of the mesh. Mesh spectra are obtained by projecting the vertex coordinates onto the set of eigenvectors. A watermark is embedded by modifying the magnitudes of the spectra. Watermarks embedded by this method are resistant to similarity transformation, random noise added to vertex coordinates, mesh smoothing, and partial resection of the mesh.
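A minimal sketch of the embedding step, assuming the spectra have already been computed as in the previous example (coefficients of the vertex coordinates in the Laplacian eigenbasis). The additive modulation shown here is only illustrative; it is not the paper's exact scheme or parameterization.

```python
import numpy as np

def embed_watermark(coeffs, bits, strength=0.01, start=1):
    """Modulate spectral coefficient magnitudes with watermark bits.

    coeffs : (n, 3) spectral coefficients of the x, y, z coordinates
    bits   : sequence of {0, 1} watermark bits
    start  : skip the lowest coefficients to limit visible distortion
    """
    marked = coeffs.copy()
    for i, b in enumerate(bits):
        # shift the magnitude up for a 1 bit, down for a 0 bit
        marked[start + i] *= 1.0 + strength * (1.0 if b else -1.0)
    return marked
```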
Topological Noise Removal (pp. 19-26)
Igor Guskov; Zoe Wood
Meshes obtained from laser scanner data often contain topological noise due to inaccuracies in the scanning and merging process. This topological noise complicates subsequent operations such as remeshing, parameterization and smoothing. We introduce an approach that removes unnecessary nontrivial topology from meshes. Using a local wave front traversal, we discover the local topologies of the mesh and identify features such as small tunnels. We then identify non-separating cuts along which we cut and seal the mesh, reducing the genus and thus the topological complexity of the mesh.
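As background for why genus reduction matters: for a closed, connected, orientable mesh the genus can be read directly off the Euler characteristic. The hypothetical helper below only flags unexpected handles; it is not the paper's wave-front detection algorithm.

```python
def genus(num_vertices, num_edges, num_faces):
    """Genus of a closed, connected, orientable mesh via Euler's formula:
    V - E + F = 2 - 2g  =>  g = (2 - V + E - F) / 2."""
    return (2 - num_vertices + num_edges - num_faces) // 2

# A scanned model expected to be a topological sphere should report genus 0;
# a positive value indicates spurious handles or tunnels to remove.
```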
Motion Conversion based on the Musculoskeletal System (pp. 27-36)
Taku Komura; Yoshihisa Shinagawa
Today, using motion capture devices is the most common way to create realistic human motion data, and various methods have been proposed to edit, morph and retarget such motion. However, there are still few methods for adding physiological effects caused by fatigue, injuries, muscle training and muscle shrinking, because the innate structure of the human body, such as the musculoskeletal system, has been mostly neglected when handling human motion in computer graphics. In this paper, we propose a method that uses the musculoskeletal system of the human body to edit and retarget human motion captured with a motion-capture device. Using our method, not only physiological effects such as fatigue or injuries but also physical effects caused by external forces can be added to human motion. By changing the muscular parameters and the size of the body, it is also possible to retarget the motion to different bodies, such as a highly trained muscular body, a weak and narrow body, or a small child's body.
Geometry-based Muscle Modeling for Facial Animation (pp. 37-46)
Kolja Kahler; Jörg Haber; Hans-Peter Seidel
We present a muscle model and methods for muscle construction that make it easy to create animatable facial models from given face geometry. Using our editing tool, one can interactively specify coarse outlines of the muscles, which are then automatically created to fit the face geometry. Our muscle model incorporates different types of muscles and the effects of bulging and intertwining muscle fibers. The influence of muscle contraction on the skin is simulated using a mass-spring system that connects the skull, muscle, and skin layers of our model.
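A minimal mass-spring time step of the kind the abstract describes for the skull-muscle-skin layers. The layer structure, spring constants, and explicit Euler integration here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mass_spring_step(pos, vel, springs, rest, k, mass, dt, damping=0.98, pinned=()):
    """One explicit Euler step of a particle system connected by linear springs.

    pos, vel : (n, 3) particle positions and velocities
    springs  : list of (i, j) index pairs (e.g. skin-muscle, muscle-skull links)
    rest     : rest length per spring
    """
    force = np.zeros_like(pos)
    for (i, j), L0 in zip(springs, rest):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - L0) * d / max(length, 1e-9)   # Hooke's law along the spring
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force / mass)
    vel[list(pinned)] = 0.0                             # skull-attached nodes stay fixed
    return pos + dt * vel, vel
```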
Novel Solver for Dynamic Surfaces (pp. 47-54)
Sumantro Ray; Hong Qin
Physics-based modeling integrates dynamics and geometry. The standard methods for solving the Lagrangian equations use a direct approach in the spatial domain. Though extremely powerful, this requires time-consuming discrete-time integration. In this paper, we propose an indirect approach based on transform theory. In particular, we use the z-transform from digital signal processing and formulate a general, novel, unified solver that is applicable to various models and behaviors. The convergence and accuracy of the solver are guaranteed if the temporal sampling period is less than the critical sampling period, which is a function of the physical properties of the model. Our solver can seamlessly handle curves, surfaces and solids, and supports a wide range of dynamic behavior. The solver does not depend on the topology of the model, and hence supports non-manifold and arbitrary topology. Our numerical techniques are simple, easy to use, stable, and efficient. We develop an algorithm and a prototype system for simulating various models and behaviors. Our solver preserves physical properties such as energy, linear momentum, and angular momentum. This approach can serve as a foundation for applications in many fields.
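As a small illustration of transform-style solving, the sketch below turns a single damped oscillator m x'' + c x' + k x = f into the second-order recurrence implied by its z-domain transfer function (via central differences). The critical-sampling condition mirrors the abstract's claim, but this is generic background, not the authors' unified solver.

```python
def simulate_oscillator(forces, m, c, k, h, x0=0.0):
    """Recurrence equivalent to the z-domain form of m*x'' + c*x' + k*x = f.

    forces : list of force samples
    h      : sampling period; it must stay below the critical period
             (about 2*sqrt(m/k) in the undamped case) for stability.
    """
    a = m / h**2 + c / (2 * h)    # coefficient of x[n+1]
    b = 2 * m / h**2 - k          # coefficient of x[n]
    d = m / h**2 - c / (2 * h)    # coefficient of x[n-1]
    x_prev, x_curr = x0, x0
    out = []
    for fn in forces:
        x_next = (fn + b * x_curr - d * x_prev) / a
        out.append(x_next)
        x_prev, x_curr = x_curr, x_next
    return out
```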
Simplification and Real-time Smooth Transitions of Articulated Meshes (pp. 55-60)
Jocelyn Houle; Pierre Poulin
Simplification techniques have mainly been applied to static models. However, in the movie and game industries, many models are designed to be animated. We extend the progressive mesh technique to handle skeletally-articulated meshes in order to obtain a continuous level-of-detail (CLOD) representation that retains its ability to be animated. Our technique is not limited to any simplification metric, nor is it limited to generating models composed of a subset of the original vertices. It thus preserves the full simplification potential.
To further improve performance, we can use this CLOD representation to extract a discrete set of skeletally-articulated models. Each model can be independently optimized, for instance by using triangle strips. We can also morph between the different models in order to create smoother transitions. The result is a more accurate representation of animated articulated models, suitable for real-time applications.
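The smooth transition between two extracted LODs can be sketched as a geomorph-style interpolation. The mapping from fine to coarse vertices is assumed to be given (for example, recorded during the edge collapses); this is only an illustration of the morphing step, not the paper's full articulated pipeline.

```python
import numpy as np

def morph_lod(fine_pos, coarse_pos, fine_to_coarse, t):
    """Blend a fine LOD toward the positions its vertices collapse to in the
    coarse LOD.  t = 0 gives the fine mesh, t = 1 the coarse one.

    fine_to_coarse : for each fine vertex, the index of its surviving coarse vertex
    """
    target = coarse_pos[fine_to_coarse]       # where each fine vertex ends up
    return (1.0 - t) * fine_pos + t * target
```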
Hardware Accelerated Displacement Mapping for Image Based Rendering (pp. 61-70)
Jan Kautz; Hans-Peter Seidel
In this paper, we present a technique for rendering displacement mapped geometry using current graphics hardware. Our method renders a displacement by slicing through the enclosing volume. The alpha test is used to render only the appropriate parts of every slice. The slices need not be aligned with the base surface, e.g. it is possible to do screen-space aligned slicing. We then extend the method to render the intersection of several displacement mapped polygons. This is used to render a new kind of image-based object based on images with depth, which we call image-based depth objects. The technique can also be used directly to accelerate the rendering of objects using the image-based visual hull. Other warping-based IBR techniques can be accelerated in a similar manner.
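The slicing-plus-alpha-test idea can be mimicked in software as below: for every slice height, only the texels whose displacement reaches that height survive, which is what the hardware alpha test discards per fragment. This numpy form is only an illustration of the logic, not the hardware implementation.

```python
import numpy as np

def displacement_slices(disp, num_slices):
    """Yield one boolean coverage mask per slice through the displacement volume.

    disp : (h, w) displacement map with values in [0, 1]
    Each mask marks the texels that pass the 'alpha test' for that slice,
    i.e. the texels whose displaced surface reaches at least that height.
    """
    for i in range(num_slices):
        height = i / max(num_slices - 1, 1)
        yield disp >= height     # hardware: alpha test against the slice height
```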
The Rayset and Its Applications (pp. 71-80)
Minglun Gong; Yee-Hong Yang
In this paper, a novel concept, the rayset, is proposed. A new image-based representation, object-centered concentric mosaics (OCCM), is derived from this concept. The rayset is a parametric function consisting of two mapping relations: the first maps from a parameter space to the ray space, and the second maps from the parameter space to the attribute space. We show that different image-based scene representations can all be cast as different kinds of raysets, and that image-based rendering approaches can be regarded as attempts to sample and reconstruct the scene using different raysets. A collection of OCCM is a 3D rayset, which can be used to represent an object. The storage size for OCCM is about the same as that of an animation sequence generated by rotating a virtual camera around the object. However, compared with such an animation sequence, OCCM provide a much richer experience because the user can move back and forth freely in the scene and observe the changes in parallax.
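In code, a rayset reduces to a pair of mappings over a shared parameter domain. The tiny container below is a hypothetical rendering of that definition, not an interface from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class Rayset:
    """A parametric function made of two mappings over one parameter space."""
    ray_map: Callable[[Sequence[float]], Tuple]        # parameters -> ray (origin, direction)
    attribute_map: Callable[[Sequence[float]], Tuple]  # parameters -> attributes (e.g. RGB)

    def sample(self, params):
        return self.ray_map(params), self.attribute_map(params)
```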
Universal Rendering Sequences for Transparent Vertex Caching of Progressive Meshes (pp. 81-90)
Alexander Bogomjakov; Craig Gotsman
We present methods for generating rendering sequences for triangle meshes that preserve mesh locality as much as possible. This is useful for maximizing vertex reuse when rendering the mesh with a FIFO vertex buffer, such as those available in modern 3D graphics hardware. The sequences are universal in the sense that they perform well for all sizes of vertex buffers, and they generalize to progressive meshes. This has been verified experimentally.
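Vertex reuse under a FIFO cache is easy to measure. The sketch below computes the average cache miss ratio (misses per triangle) for a given index sequence and cache size, which is the kind of criterion such rendering sequences are evaluated against; it is an evaluation helper, not the authors' sequence-generation method.

```python
from collections import deque

def fifo_miss_ratio(indices, cache_size):
    """Average cache miss ratio for a triangle list rendered with a FIFO vertex cache.

    indices : flat list of vertex indices, three per triangle
    """
    cache = deque(maxlen=cache_size)
    misses = 0
    for v in indices:
        if v not in cache:
            misses += 1
            cache.append(v)     # the oldest entry is evicted automatically
    return misses / (len(indices) / 3)
```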
Tunneling for Triangle Strips in Continuous Level-of-Detail Meshes (pp. 91-100)
A. James Stewart
This paper describes a method of building and maintaining a good set of triangle strips for both static and continuous level-of-detail (CLOD) meshes. For static meshes, the strips are better than those computed by the classic SGI and STRIPE algorithms. For CLOD meshes, the strips are maintained incrementally as the mesh topology changes. The incremental changes are fast and the number of strips is kept very small.
Truly Selective Refinement of Progressive Meshes (pp. 101-110)
Junho Kim; Seungyong Lee
This paper presents a novel selective refinement scheme for progressive meshes. In previous schemes, topology information in the neighborhood of a collapsed edge is stored in the analysis phase, and a vertex split or edge collapse transformation is possible in the synthesis phase only if the configuration of neighborhood vertices in the current mesh corresponds to the stored topology information. In contrast, the proposed scheme makes it possible to apply a vertex split or an edge collapse to any selected vertex or edge in the current mesh without preconditions. Our main observation is that the concept of a dual piece can be used to clearly enumerate and visualize the set of all possible selectively refined meshes for a given mesh. Our refinement scheme is truly selective in the sense that each vertex split or edge collapse can be performed without incurring additional vertex split and/or edge collapse transformations.
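For reference, the basic transformation that any selective-refinement scheme applies is sketched below on a plain indexed face set. The bookkeeping is deliberately minimal and does not reproduce the paper's dual-piece machinery.

```python
def edge_collapse(faces, keep, remove):
    """Collapse vertex `remove` onto vertex `keep`, dropping degenerate faces."""
    new_faces = []
    for f in faces:
        g = tuple(keep if v == remove else v for v in f)
        if len(set(g)) == 3:          # skip faces that collapsed to an edge or point
            new_faces.append(g)
    return new_faces

# A vertex split is the stored inverse: it reintroduces `remove` next to `keep`
# and restores the faces that were dropped or re-indexed by the collapse.
```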
Interacting with Image Sequences: Detail-in-Context and Thumbnails (pp. 111-118)
Oliver Kuederle; Kori Inkpen; Stella Atkins; Sheelagh Carpendale
An image sequence is a series of interrelated images. To enable navigation of large image sequences, many current software packages display small versions of the images, called thumbnails. We observed radiologists during typical diagnosis sessions, where image sequences are examined using photographic films and sophisticated light screens. Based on these observations and on previous research, we have developed a new alternative to the presentation of image sequences on a desktop monitor, a variation of a detail-in-context technique. This paper describes a controlled experiment in which we examined the way users interact with detail-in-context and thumbnail techniques. Our results show that our detail-in-context technique accommodates many individual strategies whereas the thumbnail technique strongly encourages sequential examination of the images. Our findings can assist in the design and development of interactive systems that involve the navigation of large image sequences.
An Isometric Joystick as a Pointing Device for Handheld Information Terminals (pp. 119-126)
Miika Silfverberg; I. Scott MacKenzie; Tatu Kauppinen
Meeting the increasing demand for desktop-like applications on mobile products requires powerful interaction techniques. One candidate is GUI-style point-and-click interaction using an integrated pointing device that supports handheld use. We tested an isometric joystick for this purpose. Two prototypes were built; they were designed for thumb operation and included a separate selection button. Twelve participants performed point-and-select tasks. We tested both one-handed and two-handed interaction, and selection using either the separate selection button or the joystick's integrated press-to-select feature. A notebook configuration served as a reference. Results for the handheld conditions, both one-handed and two-handed, were only slightly worse than those for the notebook condition, suggesting that an isometric joystick is suitable as a pointing device for handheld terminals. Inadvertent selection while moving the pointer yielded high error rates for all conditions using press-to-select; a separate select button is therefore needed to ensure accurate selection.
Aiding Manipulation of Handwritten Mathematical Expressions through Style-Preserving Morphs (pp. 127-134)
Richard Zanibbi; Kevin Novins; Jim Arvo; Katherine Zanibbi
We describe a technique for enhancing a user's ability to manipulate hand-printed symbolic information by automatically improving legibility while simultaneously providing immediate feedback on the system's current structural interpretation of the information. Our initial application is a handwriting-based equation editor. Once the user has written a formula, the individual hand-drawn symbols can be gradually translated and scaled to closely approximate their relative positions and sizes in a corresponding typeset version. These transformations preserve the characteristics, or style, of the original user-drawn symbols. In applying this style-preserving morph, the system improves the legibility of the user-drawn symbols by correcting alignment and scaling, and also reveals the baseline structure of the symbols that has been inferred by the system. A preliminary user study indicates that this new method of feedback is a useful addition to a conventional interpretive interface. We believe this is because the style-preserving morph makes it easier to understand the correspondence between the original input and the interpreted output than methods that radically change the appearance of the original input.
3D Scene Manipulation with 2D Devices and Constraints (pp. 135-142)
Graham Smith; Wolfgang Stuerzlinger; Tim Salzman
Content creation for computer graphics applications is a laborious process that requires skilled personnel. One fundamental problem is that manipulation of 3D objects with 2D user interfaces is very difficult for non-experienced users. In this paper, we introduce a new system that uses constraints to restrict object motion in a 3D scene, making interaction much simpler and more intuitive. We compare three different 3D scene manipulation techniques based on a 2D user interface. We show that the presented techniques are significantly more efficient than commonly used solutions. To our knowledge, this is the first evaluation of 3D manipulation techniques with 2D devices and constraints.
The Lit Sphere: A Model for Capturing NPR Shading from Art (pp. 143-150)
Peter-Pike Sloan; William Martin; Amy Gooch; Bruce Gooch
While traditional graphics techniques provide for the realistic display of three-dimensional objects, these methods often lack the flexibility to emulate expressive effects found in the works of artists such as Michelangelo and Cezanne. We introduce a technique for capturing custom artistic shading models from sampled art work. Our goal is to allow users to easily generate shading models which give the impression of light, depth, and material properties as accomplished by artists. Our system provides real-time feedback to immediately illustrate aesthetic choices in shading model design, and to assist the user in the exploration of novel viewpoints. We describe rendering algorithms which are easily incorporated into existing shaders, making non-photorealistic rendering of materials such as skin, metal, or even painted objects fast and simple. The flexibility of these methods for generating shading models enables users to portray a large range of materials as well as to capture the look and feel of a work of art. (Color images can be found at http://www.cs.utah.edu/npr/papers/LitSphere_HTML.)
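At render time, a lit-sphere (matcap-style) shader is essentially a texture lookup indexed by the eye-space normal. The sketch below shades a single normal that way; it illustrates the general idea rather than the paper's capture pipeline, and the image layout is an assumption.

```python
import numpy as np

def lit_sphere_shade(normal_eye, sphere_map):
    """Look up the shading stored in a pre-rendered sphere image.

    normal_eye : unit surface normal in eye space
    sphere_map : (h, w, 3) image of a sphere shaded in the desired artistic style
    """
    nx, ny = normal_eye[0], normal_eye[1]
    h, w, _ = sphere_map.shape
    u = int((nx * 0.5 + 0.5) * (w - 1))   # map [-1, 1] to texture coordinates
    v = int((ny * 0.5 + 0.5) * (h - 1))
    return sphere_map[v, u]
```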
View-Dependent Particles for Interactive Non-Photorealistic Rendering (pp. 151-158)
Derek Cornish; Andrea Rowan; David Luebke
We present a novel framework for non-photorealistic rendering (NPR) based on view-dependent geometric simplification techniques. Following a common thread in NPR research, we represent the model as a system of particles, which will be rendered as strokes in the final image and which may optionally overlay a polygonal surface. Our primary contribution is the use of a hierarchical view-dependent clustering algorithm to regulate the number and placement of these particles. This algorithm unifies several tasks common in artistic rendering, such as placing strokes, regulating the screen-space density of strokes, and ensuring inter-frame coherence in animated or interactive rendering. View-dependent callback functions determine which particles are rendered and how to render the associated strokes. The resulting framework is interactive and extremely flexible, letting users easily produce and experiment with many different art-based rendering styles.
Realistic and Controllable Fire Simulation (pp. 159-166)
Philippe Beaudoin; Sébastien Paquet; Pierre Poulin
We introduce a set of techniques that are used together to produce realistic-looking animations of burning objects, including a new method for simulating the spreading of fire on polygonal meshes. A key component of our approach is the use of individual flames as primitives to animate and render the fire. This simplification enables rapid computation and gives more intuitive control over the simulation without compromising realism. It also scales well, making it possible to animate phenomena ranging from simple candle-like flames to complex, widespread fires.
Corrosion: Simulating and Rendering (pp. 167-174)
Stephane Merillou; Jean-Michel Dischler; Djamchid Ghazanfarpour
Weathering phenomena represent a topic of growing interest in computer graphics, and corrosion reactions are of great importance since they affect a large number of different fields. Previous investigations have essentially dealt with the modeling and rendering of metallic patinas. We propose an approach based on simple physical characteristics to simulate and render new forms of corrosion. We take into account "real-world" time and different categories of atmospheric conditions, supported by experimental data, which allows us to predict the evolution of corrosion over time. The reaction is simulated using a random walk technique adapted to our generic model. For realistic rendering, we propose a BRDF and color and bump texture models, thus affecting color, reflectance and geometry. We additionally propose a set of rules to automatically predict the preferential starting locations of corrosion.
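The random-walk spreading step can be sketched on a 2D surface grid. The start locations, step counts, and the way visits accumulate into a corrosion map are illustrative assumptions here, standing in for the paper's physically parameterized model.

```python
import random

def corrosion_random_walk(width, height, starts, steps_per_walk, walks):
    """Accumulate corrosion by releasing random walkers from starting spots."""
    corrosion = [[0.0] * width for _ in range(height)]
    for _ in range(walks):
        x, y = random.choice(starts)                  # preferential start location
        for _ in range(steps_per_walk):
            corrosion[y][x] += 1.0                    # each visit corrodes a little
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 0), width - 1)
            y = min(max(y + dy, 0), height - 1)
    return corrosion
```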
Surface Aging by Impacts (pp. 175-182)
Eric Paquette; Pierre Poulin; George Drettakis
We present a novel aging technique that simulates the deformation of an object caused by repetitive impacts over long periods of time. Our semi-automatic system deteriorates the surface of an object by hitting it with another object. An empirical simulation modifies the object surface to represent the small depressions caused by each impact. This is done by updating the vertices of an adaptively refined object mesh. The simulation is efficient, applying hundreds of impacts in a few seconds. The user controls the simulation through intuitive parameters. Because the simulation is rapid, the user can easily adjust the parameters and see the effect of impacts interactively. The models processed by our system exhibit the cumulative aging effects of repetitive impacts, significantly increasing their realism.
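A single impact can be applied as a local depression of the mesh, as sketched below: vertices inside the impact radius are pushed inward along their normals with a smooth falloff. The falloff shape and depth parameter are assumptions for illustration, not the paper's empirical simulation or its adaptive refinement.

```python
import numpy as np

def apply_impact(vertices, normals, hit_point, radius, depth):
    """Push vertices near `hit_point` inward to form a small depression."""
    out = vertices.copy()
    dist = np.linalg.norm(vertices - hit_point, axis=1)
    inside = dist < radius
    falloff = 0.5 * (1.0 + np.cos(np.pi * dist[inside] / radius))  # 1 at center, 0 at rim
    out[inside] -= depth * falloff[:, None] * normals[inside]
    return out
```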
3D-Interaction Techniques for Planning of Oncologic Soft Tissue Operations (pp. 183-190)
Bernhard Preim; Wolf Spindler; Karl Oldhafer; Heinz-Otto Peitgen
We discuss interaction tasks and interaction techniques for the planning of soft tissue operations, such as oncologic liver and lung surgery. We focus on techniques to explore the relevant structures, to integrate measurements directly into 3D visualizations, and to specify resection volumes. The main contribution of this paper is the introduction of new techniques for 3D measurements and for virtual resections. For both interaction tasks, dedicated widgets have been developed for direct manipulation. In contrast to surgical simulators, which are used for the education of future surgeons, we concentrate on surgeons in clinical routine and attempt to provide them with preoperative decision support on the basis of patient-individual data.
The selection of the interaction tasks to be supported is based on a questionnaire in which 13 surgeons described their practice of surgery planning and their requirements for computer support. All visualization and interaction techniques are integrated into a software system named SURGERYPLANNER, which exploits the results of image analysis achieved in an earlier project. With the SURGERYPLANNER, the anatomical and pathological structures of individual patients are used for surgery planning.
Accelerated Splatting using a 3D Adjacency Data Structure (pp. 191-200)
Jeff Orchard; Torsten Moeller
We introduce a new acceleration to the standard splatting volume rendering algorithm. Our method achieves full colour (32-bit), depth-sorted and shaded volume rendering significantly faster than standard splatting. The speedup is due to a 3-dimensional adjacency data structure that efficiently skips transparent parts of the data and stores only the voxels that are potentially visible. Our algorithm is robust and flexible, allowing for depth sorting of the data, including correct back-to-front ordering for perspective projections. This makes interactive splatting possible for applications such as medical visualizations that rely on structure and depth information.
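The core of the speedup, storing only potentially visible voxels and traversing them in depth order, can be illustrated in a few lines. The opacity threshold and the axis-aligned back-to-front sort below are simplifications of the paper's 3D adjacency structure.

```python
import numpy as np

def visible_voxels_back_to_front(volume, opacity, threshold, view_axis=2):
    """Return coordinates of non-transparent voxels in back-to-front order along one axis.

    volume   : (nx, ny, nz) scalar field
    opacity  : vectorized transfer function mapping scalar values to opacity in [0, 1]
    """
    coords = np.argwhere(opacity(volume) > threshold)   # skip transparent voxels entirely
    # sort by decreasing coordinate along the view axis
    # (back to front for a viewer on the negative side of that axis)
    order = np.argsort(-coords[:, view_axis])
    return coords[order]
```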
Assisted Visualization of E-Commerce Auction Agents (pp. 201-208)
Christopher Healey; Robert St. Amant; Jiae Chang
This paper describes the integration of perceptual guidelines from human vision with an AI-based mixed-initiative search technique. The result is a visualization assistant, a system that identifies perceptually salient visualizations for large, multidimensional collections of data. Understanding how the low-level human visual system "sees" visual information in an image allows us to: (1) evaluate a particular visualization, and (2) direct the search algorithm towards new visualizations that may be better than those seen to date. In this way we can limit search to locations that have the highest potential to contain effective visualizations. One testbed application for this work is the visualization of intelligent e-commerce auction agents participating in a simulated online auction environment. We describe how the visualization assistant was used to choose methods to effectively visualize this data.
Interactive Volume Rendering based on a "Bubble Model" (pp. 209-216)
Balázs Csébfalvi; Eduard Gröller
In this paper an interactive volume rendering technique is presented which is based on a novel visualization model. We call the basic method the "bubble model", since iso-surfaces are rendered as thin semi-transparent membranes, similar to blown soap bubbles. The primary goal is to develop a fast previewing technique for volumetric data that does not require a time-consuming transfer function specification to visualize internal structures. Our approach uses a very simple rendering model controlled by only two parameters. We also present an interactive rotation technique that does not rely on any specialized hardware and can therefore be widely used even on low-end machines. Because of the interactive display, fine tuning is also supported, since modifying the rendering parameters gives immediate visual feedback.
Efficient View-dependent Rendering of Terrains (pp. 217-222)
Yadong Wu; Yushu Liu; Shouyi Zhan; Chunxiao Gao
Though considerable progress has been made with view-dependent techniques in terrain visualization, the CPU overhead still precludes their wide application in many domains. The cost of view-dependent techniques mainly lies in evaluating the screen-space error of each node every frame: the time-consuming screen-space projection itself, the number of nodes whose projection error must be updated, and the evaluation of the valid lifetime of the projection error. In this paper we introduce block-priority-based traversal of the quadtree to reduce the traversal complexity and propose view-angle-based error metrics. This lets us speed up the valid-lifetime evaluation of the projection error by computing the spatial relation between the viewpoint and a simplified split zone. In addition, a constant frame rate is achieved by scaling the split zone accordingly. Experimental results show that our methods can render large-scale terrain in real time on a low-cost PC, satisfying the demands of most applications.
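For context, the per-node screen-space error that such LOD schemes evaluate each frame is typically the projection of an object-space error onto the screen, as in the hedged helper below. The paper's contribution is to avoid recomputing this projection every frame by using a view-angle-based metric instead; the formula here is standard background, not their metric.

```python
import math

def screen_space_error(object_error, distance, fov_y, viewport_height):
    """Project an object-space geometric error (in world units) to pixels.

    A quadtree node is split when the returned value exceeds a pixel tolerance.
    """
    if distance <= 0.0:
        return float("inf")
    pixels_per_world_unit = viewport_height / (2.0 * distance * math.tan(fov_y / 2.0))
    return object_error * pixels_per_world_unit
```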
Characterizing Image Fusion Techniques in Stereoscopic HTDs (pp. 223-232)
Zachary Wartell; Larry Hodges; William Ribarsky
Stereoscopic display is fundamental to many virtual reality systems. Stereoscopic systems render two perspective views of a scene, one for each eye of the user. Ideally, the user's visual system combines the stereo image pair into a single, 3D perceived image. In practice, however, users can have difficulty fusing the stereo image pair into a single 3D image. Researchers have used a number of software methods to reduce fusion problems. We are particularly concerned with the effects of these techniques on stereoscopic HTDs (head-tracked displays). In these systems the head is tracked but the display is stationary, attached to a desk, tabletop or wall. This paper comprehensively surveys software fusion techniques. We then geometrically characterize and classify the various techniques and illustrate how they relate to stereoscopic HTD application characteristics.