
Proceedings of the 2000 Conference on Graphics Interface

Fullname: Proceedings of the 2000 conference on Graphics Interface
Editors: Sidney S. Fels; Pierre Poulin
Location: Montreal, Quebec, Canada
Dates: 2000-May-15 to 2000-May-17
Publisher: Canadian Information Processing Society
Standard No: ISBN 0-9695338-9-6; hcibib: GI00
Papers: 29
Pages: 230
Links: Conference Series Home Page | Online Proceedings
  1. Invited Talk
  2. Issues and Techniques for Interactive Information Spaces
  3. Modeling
  4. Invited Speaker
  5. Animation
  6. Image-based Modeling and Rendering
  7. Collaborative and Community Spaces
  8. Rendering
  9. Image Processing and Visualization
  10. Advances in HCI Design and Applications
  11. Geometry

Invited Talk

Tangible Bits: Designing the Boundary between People, Bits, and Atoms (pp. 1-2)
  Hiroshi Ishii
People have developed sophisticated skills for sensing and manipulating their physical environments. However, most of these skills are not employed by traditional graphical user interfaces (GUIs). Tangible Bits, our vision of human-computer interaction (HCI), seeks to build upon these skills by giving physical form to digital information, seamlessly coupling the dual worlds of bits and atoms. Guided by the Tangible Bits vision, we are designing "tangible user interfaces" which employ physical objects, surfaces, and spaces as tangible embodiments of digital information. These involve foreground interactions with graspable objects and augmented surfaces, exploiting the human senses of touch and kinesthesia. We are also exploring background information displays which use "ambient media" such as ambient light, sound, airflow, and water movement. Here, we seek to communicate digitally mediated senses of activity and presence at the periphery of human awareness. Our goal is to realize seamless interfaces between humans, digital information, and the physical environment, taking advantage of the richness of multimodal human senses and the skills developed through a lifetime of interaction with the physical world.
   In this talk, I will present a variety of tangible user interfaces that the Tangible Media Group has designed and presented within the CHI, SIGGRAPH, UIST, and CSCW communities in recent years.

Issues and Techniques for Interactive Information Spaces

Multi-resolution Amplification Widgets (pp. 3-10)
  Kiril Vidimce; David Banks
We describe a 3D graphical interaction tool called an amplification widget that allows a user to control the position or orientation of an object at multiple scales. Fine and coarse adjustments are available within a single tool which gives visual feedback to indicate the level of resolution being applied. Amplification widgets have been included in instructional modules of The Optics Project, designed to supplement undergraduate physics courses. The user evaluation is being developed by the Institute of the Mid-South Educational Research Association under the sponsorship of a 2-year grant from the National Science Foundation.
Navigating Complex Information with the ZTree (pp. 11-18)
  Lyn Bartram; Axel Uhl; Tom Calvert
This paper discusses navigation issues in large-scale databases and proposes hypermap visualizations as effective navigational views. We describe the ZTree, a technique that allows users to explore both hierarchical and relational aspects of the information space. The ZTree uses a fisheye map layout that aids the user in current navigational decisions and provides a history of previous information retrieval paths.
The Effects of Feedback on Targeting Performance in Visually Stressed Conditions (pp. 19-26)
  Julie Fraser; Carl Gutwin
In most graphical user interfaces, a substantial proportion of the user's interaction involves targeting screen objects with the mouse cursor. Targeting tasks with small targets are visually demanding, and can cause users difficulty in some circumstances. These circumstances can arise either if the user has a visual disability or if factors such as fatigue or glare diminish acuity. One way of reducing the perceptual demands of targeting is to add redundant feedback to the interface that indicates when the user has successfully acquired a target. Under optimal viewing conditions, such feedback has not significantly improved targeting performance. However, we hypothesized that targeting feedback would be more beneficial in a visually stressed situation. We carried out an experiment in which normally-sighted participants in a reduced-acuity environment carried out targeting tasks with a mouse. We found that people were able to select targets significantly faster when they were given targeting feedback, and that they made significantly fewer errors. People also greatly preferred interfaces with feedback to those with none. The results suggest that redundant targeting feedback can improve the usability of graphical interfaces for low-vision users, and also for normally-sighted users in visually stressed environments.

Modeling

Fast and Controllable Simulation of the Shattering of Brittle Objects (pp. 27-34)
  Jeffrey Smith; Andrew Witkin; David Baraff
We present a method for the rapid and controllable simulation of the shattering of brittle objects under impact. An object to be broken is represented as a set of point masses connected by distance-preserving linear constraints. This use of constraints, rather than stiff springs, gains us a significant advantage in speed while still retaining fine control over the fracturing behavior. The forces exerted by these constraints during impact are computed using Lagrange multipliers. These constraint forces are then used to determine when and where the object will break, and to calculate the velocities of the newly created fragments. We present the details of our technique together with examples illustrating its use.
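
   To make the constraint-force idea concrete, the following minimal Python sketch (not the authors' implementation) computes the Lagrange multiplier for a single distance-preserving constraint between two point masses; the masses, forces, and breaking threshold are illustrative assumptions.

import numpy as np

def distance_constraint_lambda(x1, x2, v1, v2, f1, f2, m1, m2):
    """Lagrange multiplier for one distance-preserving constraint
    C = 0.5 * (|x1 - x2|^2 - L^2) between two point masses.
    Setting d^2C/dt^2 = 0 and writing the constraint forces as
    F1 = lam * d and F2 = -lam * d (with d = x1 - x2) gives lam."""
    d = x1 - x2
    v_rel = v1 - v2
    num = -(v_rel @ v_rel) - d @ (f1 / m1 - f2 / m2)
    den = (d @ d) * (1.0 / m1 + 1.0 / m2)
    return num / den

# Toy impact: mass 1 is struck while mass 2 carries no external force.
x1, x2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
v1 = v2 = np.zeros(3)
f1, f2 = np.array([50.0, 0.0, 0.0]), np.zeros(3)
lam = distance_constraint_lambda(x1, x2, v1, v2, f1, f2, m1=1.0, m2=1.0)
force_mag = abs(lam) * np.linalg.norm(x1 - x2)
print("constraint force:", force_mag, "break:", force_mag > 20.0)  # 20.0 is a made-up threshold
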
Skinning Characters using Surface Oriented Free-Form Deformations (pp. 35-42)
  Karan Singh; Evangelos Kokkevis
Effectively skinning geometry continues to be one of the more challenging and time-consuming aspects of character setup. While anatomic and physically based approaches to skinning have been investigated, many skinned objects have no physical equivalents. Geometric approaches, which are more general and provide finer control, are thus predominantly used in the animation industry. Free-form deformations (FFDs) are a powerful paradigm for the manipulation of deformable objects. Skinning objects indirectly using an FFD lattice reduces the geometric complexity that needs to be controlled by a skeleton. Many techniques have extended the original box-shaped FFD lattices to more general control lattice shapes and topologies, while preserving the notion of embedding objects within a lattice volume. This paper, in contrast, proposes a surface-oriented FFD, where the space deformed by the control surface is defined by a distance function around the surface. Surface-oriented control structures bear a strong visual resemblance to the geometry they deform and can be constructed automatically from the deformable geometry. They also allow localization of control lattice complexity and deformation detail, making them ideally suited to the automated skinning of characters. This approach has been successfully implemented within the Maya 2.0 animation system.
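
   The following Python sketch illustrates the general flavor of a surface-oriented deformer under a simple assumed distance falloff; it samples the control surface as a point set and is not the paper's (or Maya's) actual algorithm.

import numpy as np

def bind_weights(verts, ctrl_pts, max_dist):
    """Influence weights that fall off smoothly with distance from each vertex
    to the control surface (sampled here as a point set); zero beyond max_dist."""
    d = np.linalg.norm(verts[:, None, :] - ctrl_pts[None, :, :], axis=2)
    w = np.clip(1.0 - d / max_dist, 0.0, 1.0) ** 2
    total = w.sum(axis=1, keepdims=True)
    total[total == 0.0] = 1.0          # vertices out of range keep zero weights
    return w / total

def deform(verts, ctrl_rest, ctrl_posed, weights):
    """Move each vertex by the weighted average displacement of the control points."""
    return verts + weights @ (ctrl_posed - ctrl_rest)

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
ctrl_rest = np.array([[0.0, 0.1, 0.0], [1.0, 0.1, 0.0]])
ctrl_posed = ctrl_rest + np.array([0.0, 0.5, 0.0])      # lift the control "surface"
W = bind_weights(verts, ctrl_rest, max_dist=2.0)
print(deform(verts, ctrl_rest, ctrl_posed, W))           # the far vertex stays put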

Invited Speaker

Artificial Animals (and Humans): From Physics to Intelligence (pp. 43-44)
  Demetri Terzopoulos
The confluence of computer graphics and artificial life has produced virtual worlds inhabited by realistic ''artificial animals''. These synthetic organisms possess biomechanical bodies, sensors, and brains with locomotion, perception, behavior, learning, and cognition centers. Artificial animals, including artificial humans, are of interest because they are self-animating creatures that dramatically advance the state of the art of character animation and interactive games. As biomimetic autonomous agents situated in realistic virtual worlds, artificial animals also foster a deeper, computationally oriented understanding of complex living systems.

Animation

Dynamic Time Warp Based Framespace Interpolation for Motion Editing (pp. 45-52)
  Ashraf Golam; Kok Cheong Wong
Motion capture (MOCAP) data clips can be visualized as a sequence of densely spaced curves defining the joint angles of the articulated figure over a specified period of time. Current research has focused on frequency- and time-domain techniques to edit these curves, preserving the original qualities of the motion yet making it reusable in different spatio-temporal situations. We refine the framespace interpolation algorithm of Guo et al. [Guo96], which abstracts motion sequences as 1D signals and interpolates between them to create higher-dimensional signals. Our method is better suited than the existing algorithm to editing densely spaced MOCAP data, though it is not limited to it. It achieves consistent motion transition through motion-state based dynamic warping of framespaces and automatic transition timing via framespace frequency interpolation.
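
   As a point of reference, the sketch below shows classic dynamic time warping between two 1-D signals in Python; the paper's motion-state based warping of framespaces builds on this kind of alignment, but the clips and cost function here are purely illustrative.

import numpy as np

def dtw_path(a, b):
    """Classic dynamic time warping between two 1-D signals.
    Returns the alignment path as (i, j) index pairs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

walk = np.sin(np.linspace(0, 2 * np.pi, 30))    # stand-ins for two motion clips
run = np.sin(np.linspace(0, 2 * np.pi, 45))     # same motion sampled at a different rate
print(dtw_path(walk, run)[:5])
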
Automatic Joint Parameter Estimation from Magnetic Motion Capture Data (pp. 53-60)
  James O'Brien; Robert Bodenheimer; Gabriel Brostow; Jessica Hodgins
This paper describes a technique for using magnetic motion capture data to determine the joint parameters of an articulated hierarchy. This technique makes it possible to determine limb lengths, joint locations, and sensor placement for a human subject without external measurements. Instead, the joint parameters are inferred with high accuracy from the motion data acquired during the capture session. The parameters are computed by performing a linear least squares fit of a rotary joint model to the input data. A hierarchical structure for the articulated model can also be determined in situations where the topology of the model is not known. Once the system topology and joint parameters have been recovered, the resulting model can be used to perform forward and inverse kinematic procedures. We present the results of using the algorithm on human motion capture data, as well as validation results obtained with data from a simulation and a wooden linkage of known dimensions.
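
   A minimal sketch of the underlying linear least squares idea follows, assuming the per-frame rigid transform of each body is already known (in the paper these come from the magnetic sensors); the synthetic hinge data below is illustrative only.

import numpy as np

def rot(axis, angle):
    """Axis-angle rotation matrix (Rodrigues' formula)."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]], [a[2], 0.0, -a[0]], [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def estimate_joint(R1, t1, R2, t2):
    """Least-squares joint offsets c1, c2 (in the two body frames) from per-frame
    rigid transforms of two bodies sharing a rotary joint.  For every frame i the
    joint maps to the same world point:  R1[i] @ c1 + t1[i] = R2[i] @ c2 + t2[i],
    which stacks into one linear system in the six unknowns."""
    rows, rhs = [], []
    for Ra, ta, Rb, tb in zip(R1, t1, R2, t2):
        rows.append(np.hstack([Ra, -Rb]))
        rhs.append(tb - ta)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return x[:3], x[3:]

# Synthetic check: body 1 fixed at the origin, body 2 rotating about a joint.
joint_world, c2_true = np.array([1.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])
R2 = [rot([0, 0, 1], 0.3 * i) for i in range(5)] + [rot([0, 1, 0], 0.3 * i) for i in range(1, 6)]
t2 = [joint_world - R @ c2_true for R in R2]
R1 = [np.eye(3)] * len(R2)
t1 = [np.zeros(3)] * len(R2)
c1, c2 = estimate_joint(R1, t1, R2, t2)
print(c1.round(3), c2.round(3))   # recovers [1 0 0] and [0.5 0 0]
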
Animating Athletic Motion Planning By Example (pp. 61-68)
  Ronald Metoyer; Jessica Hodgins
Character animation is usually reserved for highly skilled animators and computer programmers because few of the available tools allow the novice or casual user to create compelling animated content. In this paper, we explore a partial solution to this problem which lets the user coach animated characters by sketching their trajectories on the ground plane. The details of the motion are then computed with simulation. We create memory-based control functions for the high-level behaviors from examples supplied by the user and from real-world data of the behavior. The control function for the desired behavior is implemented through a lookup table using a K-nearest neighbor approximation algorithm. To demonstrate this approach, we present a system for defining the behaviors of defensive characters playing American football. The characters are implemented using either point-masses or dynamically simulated biped robots. We evaluate the quality of the coached behaviors by comparing the resulting trajectories to data from human players. We also assess the influence of the user's coaching examples by demonstrating that a user can construct a particular style of play.
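
   A toy Python sketch of a memory-based control function answered by k-nearest-neighbor lookup; the state and action spaces here are illustrative stand-ins, not the football behaviors used in the paper.

import numpy as np

class KNNController:
    """Memory-based control function: store (state, action) examples and
    answer queries with the average action of the k nearest stored states."""
    def __init__(self, k=3):
        self.k, self.states, self.actions = k, [], []

    def add_example(self, state, action):
        self.states.append(np.asarray(state, float))
        self.actions.append(np.asarray(action, float))

    def query(self, state):
        S = np.stack(self.states)
        d = np.linalg.norm(S - np.asarray(state, float), axis=1)
        idx = np.argsort(d)[:self.k]
        return np.mean([self.actions[i] for i in idx], axis=0)

# Toy 2-D example: desired heading (action) as a function of position (state).
ctrl = KNNController(k=3)
for x in np.linspace(0, 10, 20):
    ctrl.add_example([x, 0.0], [np.cos(x), np.sin(x)])
print(ctrl.query([4.2, 0.1]))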

Image-based Modeling and Rendering

Image-Based Virtual Camera Motion Strategies (pp. 69-76)
  Éric Marchand; Nicolas Courty
This paper presents an original solution to the camera control problem in a virtual environment. Our objective is to present a general framework that allows the automatic control of a camera in a dynamic environment. The proposed method is based on the image-based control, or visual servoing, approach: it consists of positioning a camera according to the information perceived in the image, which makes it a very intuitive approach to animation. To be able to react automatically to modifications of the environment, we also introduce constraints into the control. This approach is thus adapted to highly reactive contexts (virtual reality, video games). Numerous examples dealing with classic problems in animation are considered within this framework and presented in this paper.
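
   For readers unfamiliar with visual servoing, the sketch below shows the standard point-feature control law v = -gain * pinv(L) * (s - s*) that image-based camera control builds on; the gain and feature values are illustrative, and the paper's constraint handling is not included.

import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y)
    at depth Z, relating camera velocity to image-point velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def servo_velocity(features, desired, depths, gain=0.5):
    """Classic image-based control law: v = -gain * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    err = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ err

# Drive four image points toward a centered square.
current = [(-0.3, -0.2), (0.3, -0.2), (0.3, 0.3), (-0.3, 0.3)]
desired = [(-0.25, -0.25), (0.25, -0.25), (0.25, 0.25), (-0.25, 0.25)]
v = servo_velocity(current, desired, depths=[2.0] * 4)
print(v)   # camera velocity (vx, vy, vz, wx, wy, wz)
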
Analysis and Synthesis of Structural Textures (pp. 77-86)
  Laurent Lefebvre; Pierre Poulin
With the advent of image-based modeling techniques, it becomes easier to apply textures extracted from reality onto virtual worlds. Many repetitive patterns (structural textures) in human constructions can be parametrized with procedural textures. These textures offer a powerful alternative to traditional color textures, but they require the artist to program the desired effects. We present a system that automatically extracts the values of structural texture parameters from photographs, while giving the user the possibility to guide the algorithms. Two common classes of procedural textures are studied: rectangular tilings and wood. The results demonstrate that synthesizing textures similar to their real counterparts can be very interesting for computer-augmented reality applications.
High-Quality Interactive Lumigraph Rendering Through Warping (pp. 87-94)
  Hartmut Schirmacher; Wolfgang Heidrich; Hans-Peter Seidel
We introduce an algorithm for high-quality, interactive light field rendering from only a small number of input images with dense depth information. The algorithm bridges the gap between image warping and interpolation from image databases, which represent the two major approaches in image based rendering. By warping and blending only the necessary parts of each reference image, we are able to generate a single view-corrected texture for every output frame at interactive rates. In contrast to previous light field rendering approaches, our warping-based algorithm is able to fully exploit per-pixel depth information in order to depth-correct the light field samples with maximum accuracy. The complexity of the proposed algorithm is nearly independent of the number of stored reference images and of the final screen resolution. It performs with only small overhead and very few visible artifacts. We demonstrate the visual fidelity as well as the performance of our method through various examples.

Collaborative and Community Spaces

Effects of Gaze on Multiparty Mediated Communication (pp. 95-102)
  Roel Vertegaal; Gerrit van der Veer; Harro Vons
We evaluated effects of gaze direction and other non-verbal visual cues on multiparty mediated communication. Groups of three participants (two actors, one subject) solved language puzzles in three audiovisual communication conditions. Each condition presented a different selection of images of the actors to subjects: (1) frontal motion video; (2) motion video with gaze directional cues; (3) still images with gaze directional cues. Results show that subjects used twice as many deictic references to persons when head orientation cues were present. We also found a linear relationship between the amount of actor gaze perceived by subjects and the number of speaking turns taken by subjects. Lack of gaze can decrease turn-taking efficiency of multiparty mediated systems by 25%. This is because gaze conveys whether one is being addressed or expected to speak, and is used to regulate social intimacy. Support for gaze directional cues in multiparty mediated systems is therefore recommended.
Towards Seamless Support of Natural Collaborative Interactions (pp. 103-110)
  Stacey D. Scott; Garth B. D. Shoemaker; Kori M. Inkpen
In order to effectively support collaboration it is important that computer technology seamlessly support users' natural interactions instead of inhibiting or constraining the collaborative process. The research presented in this paper examines the human-human component of computer supported cooperative work and how the design of technology can impact how people work together. In particular, this study examined children's natural interactions when working in a physical medium compared to two computer-based environments (a traditional desktop computer and a system augmented to provide each user with a mouse and a cursor). Results of this research demonstrate that given the opportunity, children will take advantage of the ability to interact concurrently. In addition, users' verbal interactions and performance can be constrained when they are forced to interact sequentially, as in the traditional computer setup. Supporting concurrent interactions with multiple input devices is a first step towards developing effective collaborative environments that support users' natural collaborative interactions.
The ChatterBox: Using Text Manipulation in an Entertaining Information Display (pp. 111-118)
  Johan Redström; Peter Ljungstrand; Patricija Jaksetic
The ChatterBox is an attempt to make use of the electronic "buzz" that exists in a modern workplace: the endless stream of emails, web pages, and electronic documents which fills the local ether(-net). The ChatterBox "listens" to this noise, transforms and recombines the texts in various ways, and presents the results in a public place. The goal is to provide a subtle reflection of the local activities and provide inspiration for new, unexpected combinations and thoughts. With the ChatterBox, we have tried to create something in between a traditional application and a piece of art: an entertaining and inspiring resource in the workplace. This poses several interesting questions concerning human-computer interaction design, e.g., information and display design. In this paper, we present the ChatterBox, its current implementation and experiences of its use.

Rendering

Approximation of Glossy Reflection with Prefiltered Environment Maps (pp. 119-126)
  Jan Kautz; Michael D. McCool
A method is presented that can render glossy reflections with arbitrary isotropic bidirectional reflectance distribution functions (BRDFs) at interactive rates using texture mapping. This method is based on the well-known environment map technique for specular reflections.
   Our approach uses a single- or multilobe representation of bidirectional reflectance distribution functions, where the shape of each radially symmetric lobe is also a function of view elevation. This approximate representation can be computed efficiently using local greedy fitting techniques. Each lobe is used to filter specular environment maps during a preprocessing step, resulting in a three-dimensional environment map. For many BRDFs, simplifications using lower-dimensional approximations, coarse sampling with respect to view elevation, and small numbers of lobes can still result in a convincing approximation to the true surface reflectance.
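
   The sketch below prefilters a small latitude-longitude environment map with a single radially symmetric cosine-power lobe, so that a glossy lookup becomes one texel fetch; it illustrates the prefiltering idea only, not the paper's view-dependent multilobe BRDF fitting, and the map size and exponent are arbitrary.

import numpy as np

def latlong_dirs(h, w):
    """Unit directions for the texel centres of an h x w latitude-longitude map."""
    theta = (np.arange(h) + 0.5) / h * np.pi
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi
    t, p = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)], axis=-1)

def prefilter(env, exponent):
    """Convolve an environment map with a radially symmetric cosine-power lobe.
    After prefiltering, a glossy reflection lookup is a single texel fetch in
    the lobe (reflection) direction."""
    h, w, _ = env.shape
    dirs = latlong_dirs(h, w)
    sin_theta = np.sin((np.arange(h) + 0.5) / h * np.pi)   # solid-angle weight per row
    flat_dirs = dirs.reshape(-1, 3)
    flat_env = env.reshape(-1, 3)
    flat_sa = np.repeat(sin_theta, w)
    out = np.zeros_like(env)
    for i in range(h):
        for j in range(w):
            lobe = np.clip(flat_dirs @ dirs[i, j], 0.0, None) ** exponent
            wgt = lobe * flat_sa
            out[i, j] = (wgt[:, None] * flat_env).sum(axis=0) / wgt.sum()
    return out

env = np.random.rand(16, 32, 3).astype(np.float32)    # stand-in environment map
print(prefilter(env, exponent=32.0).shape)
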
Adaptive Representation of Specular Light Flux (pp. 127-136)
  Normand Brière; Pierre Poulin
Caustics produce beautiful and intriguing illumination patterns. However, their complex behavior makes them difficult to simulate accurately in all but the simplest configurations. To capture their appearance, we present an adaptive approach based upon light beams. The coherence between light rays forming a light beam greatly reduces the number of samples required for precise illumination reconstruction. The light beams characterize the distribution of light due to interactions with specular surfaces (specular light flux) in 3D space, thus allowing for the treatment of illumination within single-scattering participating media. The hierarchical structure enclosing the light beams possesses inherent properties to detect efficiently every light beam reaching any 3D point, to adapt itself according to illumination effects in the final image, and to reduce memory consumption via caching.
Multiscale Shaders for the Efficient Realistic Rendering of Pine-Trees (pp. 137-144)
  Alexandre Meyer; Fabrice Neyret
The context of our work is the efficient, realistic rendering of scenes containing a huge amount of data for which a priori knowledge is available. In this paper, we present a new model able to render forests of pine trees efficiently in ray tracing and free of aliasing. This model is based on three scales of shaders representing the geometry (i.e., needles) that is smaller than a pixel. These shaders are computed by analytically integrating the illumination reflected by this geometry, using the a priori knowledge. They include the effects of local illumination, shadows, and opacity within the concerned volume of data.

Image Processing and Visualization

Anisotropic Feature-Preserving Denoising of Height Fields and Bivariate Data (pp. 145-152)
  Mathieu Desbrun; Mark Meyer; Peter Schröder; Alan H. Barr
In this paper, we present an efficient way to denoise bivariate data like height fields, color pictures or vector fields, while preserving edges and other features. Mixing surface area minimization, graph flow, and nonlinear edge-preservation metrics, our method generalizes previous anisotropic diffusion approaches in image processing, and is applicable to data of arbitrary dimension. Another notable difference is the use of a more robust discrete differential operator, which captures the fundamental surface properties. We demonstrate the method on range images and height fields, as well as greyscale or color images.
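
   For context, the sketch below runs classic Perona-Malik anisotropic diffusion on a noisy height field; the paper generalizes this family of methods with a curvature-based discrete operator, which this toy example does not implement, and the parameters below are arbitrary.

import numpy as np

def perona_malik(height, n_iter=50, kappa=0.1, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion on a 2-D height field:
    smoothing is suppressed where gradients are large, so sharp features survive."""
    z = height.astype(float)
    g = lambda d: np.exp(-(d / kappa) ** 2)        # edge-stopping conductance
    for _ in range(n_iter):
        dn = np.roll(z, -1, 0) - z                 # differences to the 4 neighbours
        ds = np.roll(z, 1, 0) - z                  # (periodic border, for brevity)
        de = np.roll(z, -1, 1) - z
        dw = np.roll(z, 1, 1) - z
        z = z + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return z

# Noisy step edge: the flat regions are smoothed while the step is preserved.
noisy = np.where(np.arange(64)[None, :] < 32, 0.0, 1.0) + 0.05 * np.random.randn(64, 64)
smooth = perona_malik(noisy)
print("noise std before/after:", noisy[:, :20].std(), smooth[:, :20].std())
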
A Fast, Space-Efficient Algorithm for the Approximation of Images by an Optimal Sum of Gaussians (pp. 153-162)
  Jeffrey Childs; Cheng-Chang Lu; Jerry Potter
Gaussian decomposition of images leads to many promising applications in computer graphics. Gaussian representations can be used for image smoothing, motion analysis, and feature selection for image recognition. Furthermore, image reconstruction from a Gaussian representation is fast, since the Gaussians only need to be added together. Optimal algorithms [3, 6, 7] minimize the number of Gaussians needed for decomposition, but they involve nonlinear least-squares approximations, e.g. the use of the Marquardt algorithm [10]. This presents a problem: the Marquardt algorithm requires enormous amounts of computation, and its matrices use a lot of space. In this work, we offer a method, which we call the Quickstep method, that substantially reduces the number of computations and the amount of space used. Unlike the Marquardt algorithm, each iteration has linear time complexity in the number of variables, and no Jacobian or Hessian matrices are formed. Yet Quickstep produces optimal results, similar to those produced by the Marquardt algorithm.
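
   The Quickstep method itself is not reproduced here, but the sketch below shows the kind of Jacobian-free fitting the abstract alludes to: a sum of 1-D Gaussians fit by plain gradient descent, with per-iteration cost linear in the number of parameters; the learning rate, initialization, and test signal are arbitrary choices.

import numpy as np

def fit_gaussian_sum(x, y, n_gauss=2, n_iter=20000, lr=0.05):
    """Fit y(x) with a sum of 1-D Gaussians  sum_k A_k exp(-(x - mu_k)^2 / (2 s_k^2))
    by plain gradient descent on the squared error.  Each iteration costs
    O(n_gauss * len(x)) and no Jacobian or Hessian matrix is ever formed."""
    A = np.full(n_gauss, y.max() / n_gauss)
    mu = np.linspace(x.min(), x.max(), n_gauss)
    s = np.full(n_gauss, (x.max() - x.min()) / (2.0 * n_gauss))
    for _ in range(n_iter):
        diff = x[None, :] - mu[:, None]                     # (n_gauss, n_samples)
        basis = np.exp(-diff ** 2 / (2.0 * s[:, None] ** 2))
        r = (A[:, None] * basis).sum(0) - y                 # residual
        A -= lr * (basis * r).mean(1)
        mu -= lr * (A[:, None] * basis * diff / s[:, None] ** 2 * r).mean(1)
        s = np.maximum(s - lr * (A[:, None] * basis * diff ** 2 / s[:, None] ** 3 * r).mean(1), 1e-3)
    return A, mu, s

x = np.linspace(0.0, 10.0, 200)
y = 2.0 * np.exp(-(x - 3.0) ** 2 / 2.0) + np.exp(-(x - 7.0) ** 2 / 0.5)
A, mu, s = fit_gaussian_sum(x, y)
recon = (A[:, None] * np.exp(-(x[None, :] - mu[:, None]) ** 2 / (2.0 * s[:, None] ** 2))).sum(0)
print("rms error:", np.sqrt(np.mean((recon - y) ** 2)))
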
Oriented Sliver Textures: A Technique for Local Value Estimation of Multiple Scalar Fields (pp. 163-170)
  Christopher Weigle; William G. Emigh; Geniva Liu; Russell M. Taylor; James T. Enns; Christopher G. Healey
This paper describes a texture generation technique that combines orientation and luminance to support the simultaneous display of multiple overlapping scalar fields. Our orientations and luminances are selected based on psychophysical experiments that studied how the low-level human visual system perceives these visual features. The result is an image that allows viewers to identify data values in an individual field, while at the same time highlighting interactions between different fields. Our technique supports datasets with both smooth and sharp boundaries. It is stable in the presence of noise and missing values. Images are generated in real-time, allowing interactive exploration of the underlying data. Our technique can be combined with existing methods that use perceptual colours or perceptual texture dimensions, and can therefore be seen as an extension of these methods to further assist in the exploration and analysis of large, complex, multidimensional datasets.

Advances in HCI Design and Applications

Using a 3D Puzzle as a Metaphor for Learning Spatial Relations (pp. 171-178)
  Felix Ritter; Bernhard Preim; Oliver Deussen; Thomas Strothotte
We introduce a new metaphor for learning spatial relations: the 3D puzzle. With this metaphor, users learn spatial relations by assembling a geometric model themselves. For this purpose, a 3D model of the subject at hand is enriched with docking positions which allow objects to be connected. Since complex 3D interactions are required to compose 3D objects, sophisticated 3D visualization and interaction techniques are included. Among these techniques are specialized shadow generation, snapping mechanisms, collision detection, and the use of two-handed interaction. The 3D puzzle, similar to a computer game, can be operated at different levels of difficulty. To simplify the task, a subset of the geometry, e.g., the skeleton of an anatomic model, can be given initially. Moreover, textual information concerning the parts of the model is provided to support the user. With this approach we motivate students to explore the spatial relations in complex geometric models and at the same time give them a goal to achieve while learning takes place. A prototype of a 3D puzzle, which is designed principally for use in anatomy education, is presented.
Affordances: Clarifying and Evolving a Concept (pp. 179-186)
  Joanna McGrenere; Wayne Ho
The concept of affordance is popular in the HCI community but not well understood. Donald Norman appropriated the concept of affordances from James J. Gibson for the design of common objects, and both implicitly and explicitly adjusted the meaning given by Gibson. There was, however, ambiguity in Norman's original definition and use of affordances, which he has subsequently made efforts to clarify. His definition germinated quickly, and through a review of the HCI literature we show that this ambiguity has led to widely varying uses of the concept. Norman has recently acknowledged the ambiguity; however, important clarifications remain. Using affordances as a basis, we elucidate the role of the designer and the distinction between usefulness and usability. We expand Gibson's definition into a framework for design.
Are We All In the Same "Bloat"? (pp. 187-196)
  Joanna McGrenere; Gale Moore
Bloat", a term that has existed in the technical community for many years, has recently received attention in the popular press. The term has a negative connotation implying that human, or system performance is diminished in some way when "bloat" exists. Yet "bloat" is seldom clearly defined and is often a catch-all phrase to suggest that software is filled with unnecessary features. However, to date there are no studies that explore how users actually experience complex functionality-filled software applications and most importantly, the extent to which they experience them in similar/different ways. The significance of understanding users' experience is in the implications this understanding has for design. Using both quantitative and qualitative methods, we carried out a study to gain a better understanding of the experiences of 53 members of the general population who use a popular word processor, Microsoft Word, Office 97. As a result we are able to further specify the term "bloat", distinguishing an objective and subjective dimension. It is the discovery of the subjective dimension that opens the design space and raises new challenges for interface designers. There is certainly more to "bloat" than meets the eye.

Geometry

Triangle Strip Compression (pp. 197-204)
  Martin Isenburg
In this paper we introduce a simple and efficient scheme for encoding the connectivity and the stripification of a triangle mesh. Since generating a good set of triangle strips is a hard problem, it is desirable to do this just once and store the computed strips with the triangle mesh. However, no previously reported mesh encoding scheme is designed to include triangle strip information in the compressed representation. Our algorithm encodes the stripification and the connectivity in an interwoven fashion that exploits the correlation between the two.
Incremental Triangle Voxelization (pp. 205-212)
  Frank Dachille IX; Arie Kaufman
We present a method to incrementally voxelize triangles into a volumetric dataset with pre-filtering, generating an accurate multivalued voxelization. Multivalued voxelization allows direct volume rendering of voxelized geometry as well as volumes with intermixed geometry, accurate multiresolution representations, and efficient antialiasing. Prior voxelization methods either computed only a binary voxelization or inefficiently computed a multivalued voxelization. Our method develops incremental equations to quickly decide which filter function to compute for each voxel value. The method requires eight additions per voxel of the triangle bounding box. Being simple and efficient, the method is suitable for implementation in a hardware volume rendering system.
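
   The sketch below illustrates the incremental evaluation that makes such a voxelizer cheap: the signed distance to the triangle's supporting plane is a linear function of the voxel coordinates, so stepping one voxel along an axis costs a single addition. The full method also handles the edge and vertex regions of the filter, which are omitted here, and the ramp filter below is a made-up stand-in.

import numpy as np

def plane_distances(n, d, lo, size):
    """Signed distance n.x + d (n a unit normal) at every voxel centre of a
    size[0] x size[1] x size[2] box whose corner is the integer point lo,
    using one addition per voxel step along each axis."""
    a, b, c = n
    dist = np.empty(tuple(size))
    dz = float(n @ (np.asarray(lo) + 0.5) + d)     # distance at the first voxel centre
    for k in range(size[2]):
        dy = dz
        for j in range(size[1]):
            dx = dy
            for i in range(size[0]):
                dist[i, j, k] = dx
                dx += a                            # step +x
            dy += b                                # step +y
        dz += c                                    # step +z
    return dist

n = np.array([0.0, 0.0, 1.0])                      # unit normal of the triangle's plane
D = plane_distances(n, d=-1.5, lo=(0, 0, 0), size=(4, 4, 4))
density = np.clip(0.5 - np.abs(D), 0.0, 1.0)       # toy pre-filter: linear ramp around the plane
print(density[0, 0, :])
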
Dynamic Plane Shifting BSP Traversal (pp. 213-220)
  Stan Melax
Interactive 3D applications require fast detection of objects colliding with the environment. One popular method for fast collision detection is to offset the geometry of the environment according to the dimensions of the object, and then represent the object as a point (and the object's movement as a line segment). Previously, this geometry offset has been done in a preprocessing step and therefore requires knowledge of the object's dimensions before runtime. Furthermore, an extra copy of the environment's geometry is required for each shape used in the application. This paper presents a variation of the BSP tree collision algorithm that shifts the planes in order to offset the geometry of the environment at runtime. To prevent unwanted cases where offset geometry protrudes too much, extra plane equations, which bevel solid cells of space during expansion, are added by simply inserting extra nodes at the bottom of the tree. A simple line segment check can be used for collision detection of a moving object of any size against the environment. Only one BSP tree is needed by the application. This paper also discusses successful application of this technique within a commercial entertainment software product.
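
   A simplified Python sketch of the traversal idea follows, shifting each node's plane by the object radius at query time; unlike the paper's algorithm, it does not split the segment at the shifted plane or add bevel planes, so it is conservative, and the scene below is a single made-up half-space.

class Node:
    """BSP node for the plane n.x + d = 0; 'front' and 'back' are either Nodes
    or the leaf labels "SOLID" / "EMPTY" (solid space lies on the back side)."""
    def __init__(self, n, d, front, back):
        self.n, self.d, self.front, self.back = n, d, front, back

def hits_solid(node, p0, p1, radius):
    """Does the segment p0 -> p1, swept as a sphere of the given radius, reach a
    solid leaf?  Each plane is shifted by the radius at query time rather than
    offsetting the world geometry in advance."""
    if node == "SOLID":
        return True
    if node == "EMPTY":
        return False
    d0 = sum(a * b for a, b in zip(node.n, p0)) + node.d
    d1 = sum(a * b for a, b in zip(node.n, p1)) + node.d
    if d0 >= radius and d1 >= radius:        # entirely in front of the shifted plane
        return hits_solid(node.front, p0, p1, radius)
    if d0 <= -radius and d1 <= -radius:      # entirely behind the shifted plane
        return hits_solid(node.back, p0, p1, radius)
    # Straddles the shifted planes: conservatively recurse into both sides.
    return (hits_solid(node.front, p0, p1, radius) or
            hits_solid(node.back, p0, p1, radius))

floor = Node((0.0, 0.0, 1.0), 0.0, "EMPTY", "SOLID")              # solid half-space z < 0
print(hits_solid(floor, (0, 0, 2.0), (0, 0, 0.4), radius=0.5))    # True: large sphere touches
print(hits_solid(floor, (0, 0, 2.0), (0, 0, 0.4), radius=0.25))   # False: small sphere clears
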
Model Simplification Through Refinement (pp. 221-228)
  Dmitry Brodsky; Benjamin Watson
As modeling and visualization applications proliferate, there arises a need to simplify large polygonal models at interactive rates. Unfortunately, existing polygon mesh simplification algorithms are not well suited for this task because they are either too slow (requiring the simplified model to be pre-computed) or produce models of insufficient quality. These shortcomings become particularly acute when models are extremely large.
   We present an algorithm suitable for simplification of large models at interactive speeds. The algorithm is fast, can guarantee displayable results within a given time limit, and produces results of good quality. Inspired by splitting algorithms from the vector quantization literature, we simplify models in reverse, beginning with an extremely coarse approximation and refining it. Approximations of surface curvature guide the simplification process. Previously produced simplifications can be further refined by using them as input to the algorithm.
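
   The splitting strategy borrowed from vector quantization can be illustrated on a raw point set: start from one representative and repeatedly split the cluster with the largest error. The Python sketch below does only this; it does not build meshes or use curvature, and the point data and cluster count are arbitrary.

import numpy as np

def split_refine(points, target_clusters, lloyd_iters=5):
    """Coarse-to-fine approximation of a point set, VQ-splitting style: begin with a
    single representative and repeatedly split the cluster with the largest error."""
    centers = [points.mean(axis=0)]
    while len(centers) < target_clusters:
        C = np.stack(centers)
        assign = np.argmin(((points[:, None] - C[None]) ** 2).sum(-1), axis=1)
        errs = [((points[assign == k] - C[k]) ** 2).sum() for k in range(len(centers))]
        worst = int(np.argmax(errs))
        centers.append(C[worst] + 1e-3 * np.random.randn(points.shape[1]))  # split it
        # A few Lloyd iterations settle the enlarged representative set.
        for _ in range(lloyd_iters):
            C = np.stack(centers)
            assign = np.argmin(((points[:, None] - C[None]) ** 2).sum(-1), axis=1)
            centers = [points[assign == k].mean(axis=0) if np.any(assign == k) else C[k]
                       for k in range(len(centers))]
    return np.stack(centers)

pts = np.random.rand(500, 3)                    # stand-in for mesh vertices
print(split_refine(pts, target_clusters=8).shape)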