
Proceedings of the 2002 Conference on Graphics Interface

Fullname: Proceedings of the 2002 Conference on Graphics Interface
Editors: Wolfgang Stuerzlinger; Michael D. McCool
Location: Calgary, Alberta, Canada
Dates: 2002-May-27 to 2002-May-29
Publisher: Canadian Information Processing Society
Standard No: ISBN 1-56881-183-7; hcibib: GI02
Papers: 26
Pages: 236
Links: Conference Series Home Page | Online Proceedings
  1. Invited Talk
  2. Non-Photorealistic and Image-Based Rendering
  3. Interaction
  4. Natural Phenomena
  5. The Body
  6. Input Devices
  7. Surfaces and Meshes
  8. Reflectance and Lighting

Invited Talk

Rapid Prototyping of Physical User Interfaces BIBPDF -
  Saul Greenberg

Non-Photorealistic and Image-Based Rendering

Layered Environment-Map Impostors for Arbitrary Scenes BIBAPDFAVIMPG 1-8
  Stefan Jeschke; Michael Wimmer; Heidrun Schumann
This paper presents a new impostor-based approach to accelerate the rendering of very complex static scenes. The scene is partitioned into viewing regions, and a layered impostor representation is precalculated for each of them. An optimal placement of impostor layers guarantees that our representation is indistinguishable from the original geometry. Furthermore, the algorithm exploits common graphics hardware both during preprocessing and rendering. Moreover, the impostor representation is compressed using several strategies to cut down on storage space.
Animation with Threshold Textures BIBAPDFMOVMOVAVI 9-16
  Oleg Veryovka
We present a method for frame coherent texturing and hatching of 3D models with a discrete set of colors. Our technique is inspired by various artistic styles that use a limited set of colors to convey surface shape and texture. In previous research discrete color shading was produced by modifying smooth shading with a threshold function. We extend this approach and specify threshold values with an image or a procedural texture. Texture values and mapping coordinates are adapted to surface orientation and scale. Aliasing artifacts are eliminated by a modified filtering technique. The threshold texturing approach enables an animator to control local shading and to display surface roughness and curvature with a limited set of colors.
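The thresholding idea the abstract describes can be sketched concretely: compare the diffuse term against per-pixel threshold values taken from a texture and emit one of a small set of colors. This is a minimal illustrative reconstruction, not the paper's algorithm; the three-color palette and the way the texture perturbs the quantization bands are assumptions.

```python
import numpy as np

def threshold_shade(n_dot_l, threshold_tex, palette=(0.2, 0.6, 1.0)):
    """Quantize diffuse shading to a discrete palette.

    n_dot_l: array of clamped N.L values in [0, 1].
    threshold_tex: per-pixel threshold values in [0, 1] (from an image
    or procedural texture) that locally shift the quantization bands.
    """
    d = np.clip(n_dot_l, 0.0, 1.0)
    k = len(palette)
    # Base quantization level, perturbed per pixel by the threshold texture.
    level = np.floor(d * k + (threshold_tex - 0.5)).astype(int)
    level = np.clip(level, 0, k - 1)
    return np.take(palette, level)
```

With a uniform threshold of 0.5 this reduces to plain posterization; varying the threshold texture shifts the band boundaries locally, which is what lets the texture convey roughness within a fixed palette.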
A Fresh Perspective BIBAPDFMPGGI Online Paper 17-24
  Karan Singh
Painting is an activity, and the artist will therefore tend to see what he paints rather than to paint what he sees. - E.H. Gombrich.
   While general trends in computer graphics continue to drive towards more photorealistic imagery, increasing attention is also being devoted to painterly renderings of computer generated scenes. Whereas artists using traditional media almost always deviate from the confines of a precise linear perspective view, digital artists struggle to transcend the standard pin-hole camera model in generating an envisioned image of a three dimensional scene. More specifically, a key limitation of existing camera models is that they inhibit the artistic exploration and understanding of a subject, which is essential for expressing it successfully. Past experiments with non-linear perspectives have primarily focused on abstract mathematical camera models for raytracing, which are both non-interactive and provide the artist with little control over seeing what he wants to see. We address this limitation with a cohesive, interactive approach for exploring non-linear perspective projections. The approach consists of a new camera model and a toolbox of interactive local and global controls for a number of properties, including regions of interest, distortion, and spatial relationship. Furthermore, the approach is incremental, allowing non-linear perspective views of a scene to be built gradually by blending and compositing multiple linear perspectives. In addition to artistic non-photorealistic rendering, our approach has interesting applications in conceptual design and scientific visualization.

Interaction

Constraint-Based Automatic Placement for Scene Composition BIBAPDFGI Online Paper 25-34
  Ken Xu; James Stewart; Eugene Fiume
The layout of large scenes can be a time-consuming and tedious task. In most current systems, the user must position each of the objects by hand, one at a time. This paper presents a constraint-based automatic placement system, which allows the user to quickly and easily lay out complex scenes.
   The system uses a combination of automatically-generated placement constraints, pseudo-physics, and a semantic database to guide the automatic placement of objects. Existing scenes can quickly be rearranged simply by reweighting the placement preferences. We show that the system enables a user to lay out a complex scene of 300 objects in less than 10 minutes.
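The reweighting idea (rearranging a scene simply by changing preference weights) can be illustrated with a toy scorer. The constraint functions, weights, and candidate positions below are entirely hypothetical, and the real system additionally uses pseudo-physics and a semantic database.

```python
def place_object(candidates, constraints, weights):
    """Score candidate positions by a weighted sum of soft placement
    constraints and return the best one.  `constraints` maps a name to
    a function position -> [0, 1]; changing `weights` re-ranks the same
    candidates, mirroring the paper's preference-reweighting idea."""
    def score(pos):
        return sum(weights[name] * fn(pos) for name, fn in constraints.items())
    return max(candidates, key=score)

# Hypothetical example: place a lamp near a desk but away from the door.
desk, door = (2.0, 2.0), (0.0, 0.0)
dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
constraints = {
    "near_desk": lambda p: 1.0 / (1.0 + dist(p, desk)),
    "far_from_door": lambda p: min(dist(p, door) / 5.0, 1.0),
}
candidates = [(0.5, 0.5), (2.5, 2.0), (4.0, 4.0)]
best = place_object(candidates, constraints,
                    {"near_desk": 1.0, "far_from_door": 0.5})
```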
FaST Sliders: Integrating Marking Menus and the Adjustment of Continuous Values BIBAPDFGI Online Paper 35-42
  Michael McGuffin; Nicolas Burtnyk; Gordon Kurtenbach
We propose a technique, called FaST Sliders, for selecting and adjusting continuous values using a fast, transient interaction much like pop-up menus. FaST Sliders combine marking menus and graphical sliders in a design that allows operation with quick ballistic movements for selection and coarse adjustment. Furthermore, additional controls can be displayed within the same interaction, for fine adjustments or other functions. We describe the design of FaST Sliders and a user study comparing FaST Sliders to other transient techniques. The results of our user study indicate that FaST Sliders hold potential. We observed that users found FaST Sliders easy to learn, and that they used and preferred their affordances for ballistic movement and additional controls.
Traces: Visualizing the Immediate Past to Support Group Interaction BIBAPDFGI Online Paper 43-50
  Carl Gutwin
Virtual embodiments of people in groupware systems provide a wealth of information to others in the group. They allow for explicit gestural communication, and they provide implicit awareness information about people's locations and activities. However, the constraints of current networked groupware limit the effectiveness of these kinds of communication. This paper investigates how embodiments can be augmented with traces - visualizations of past movements - to help others perceive and interpret bodily communication more clearly and more accurately. The paper presents a case study of traces applied to telepointers, and gives several examples of how the concept can be used to improve interaction effectiveness in groupware.
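A minimal sketch of the trace concept applied to telepointers, assuming a fixed lifetime and a linear alpha fade (both choices are ours, not the paper's):

```python
from collections import deque
import time

class TelepointerTrace:
    """Keep a short history of pointer positions and expose them with an
    age-based alpha, so the immediate past of a remote pointer can be
    drawn as a fading trail."""

    def __init__(self, lifetime=1.0):
        self.lifetime = lifetime      # seconds a trace point survives
        self.points = deque()         # (x, y, timestamp)

    def move(self, x, y, now=None):
        now = time.monotonic() if now is None else now
        self.points.append((x, y, now))
        # Drop points older than the trace lifetime.
        while self.points and now - self.points[0][2] > self.lifetime:
            self.points.popleft()

    def render_list(self, now=None):
        now = time.monotonic() if now is None else now
        # Newer points are more opaque; alpha falls linearly with age.
        return [(x, y, max(0.0, 1.0 - (now - t) / self.lifetime))
                for x, y, t in self.points]
```

The point of the structure is that it needs only the position updates groupware already transmits, so a trail can be reconstructed on each receiver without extra network traffic.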

Natural Phenomena

Image-Based Hair Capture by Inverse Lighting BIBAPDFGI Online Paper 51-58
  Stéphane Grabli; François X. Sillion; Stephen R. Marschner; Jerome E. Lengyel
We introduce an image-based method for modeling a specific subject's hair. The principle of the approach is to study the variations of hair appearance under controlled illumination. The use of a stationary viewpoint and the assumption that the subject is still allows us to work with perfectly registered images: all pixels in an image sequence represent the same portion of the hair, and the particular illumination profile observed at each pixel can be used to infer the missing directional information. This is accomplished by synthesizing reflection profiles using a hair reflectance model, for a number of candidate directions at each pixel, and choosing the orientation that provides the best profile match. Our results demonstrate the potential of this approach, by effectively reconstructing accurate hair strands that are well highlighted by a particular light source movement.
The Simulation of Paint Cracking and Peeling BIBAPDFMOVMOVGI Online Paper 59-68
  Eric Paquette; Pierre Poulin; George Drettakis
Weathering over long periods of time results in cracking and peeling of layers such as paint. To include these effects in computer graphics images it is necessary to simulate crack propagation, loss of adhesion, and the curling effect of paint peeling. We present a new approach which computes such a simulation on surfaces. Our simulation is inspired by the underlying physical properties. We use paint strength and tensile stress to determine where cracks appear on the surface. Cracks are then propagated through a 2D grid overlaid on the original surface, and we consider elasticity to compute the reduction of paint stress around the cracks. Simulation of the adhesion between the paint and the underlying material finally determines how the paint layer curls as it peels from the surface. The result of this simulation is rendered by generating explicit geometry to represent the peeling curls. We provide user control of the surface properties influencing the propagation of cracks. Results of our simulation and rendering method show that our approach produces convincing images of cracks and peels.
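The stress-versus-strength rule on a 2D grid can be sketched as a toy cellular simulation. This caricatures only the crack-initiation and stress-relief steps; the relief factor and 4-neighbourhood are assumptions, and the paper's elasticity computation, crack propagation directions, and curl geometry are omitted.

```python
import numpy as np

def simulate_cracks(strength, stress, steps=10, relief=0.5):
    """Toy stress-driven cracking on a 2D grid.  A cell cracks when its
    tensile stress exceeds its paint strength; cracking then relieves
    stress in the 4-neighbourhood (a crude stand-in for the elastic
    stress reduction around cracks)."""
    cracked = np.zeros(strength.shape, dtype=bool)
    stress = stress.copy()
    for _ in range(steps):
        new = (stress > strength) & ~cracked
        if not new.any():
            break
        cracked |= new
        # Relieve stress around freshly cracked cells (grid wraps at edges).
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(new, (dy, dx), axis=(0, 1))
            stress[shifted] *= relief
        stress[new] = 0.0
    return cracked
```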
Generating Spatial Distributions for Multilevel Models of Plant Communities BIBAPDFGI Online Paper 69-80
  Brendan Lane; Przemyslaw Prusinkiewicz
The simulation and visualization of large groups of plants has many applications. The extreme visual complexity of the resulting scenes can be captured using multilevel models. For example, in two-level models, plant distributions may be determined using coarse plant representations, and realistic visualizations may be obtained by substituting detailed plant models for the coarse ones. In this paper, we focus on the coarse aspect of modeling, the specification of plant distribution. We consider two classes of models: local-to-global models, rooted in the individual-based ecosystem simulations, and inverse, global-to-local models, in which positions of individual plants are inferred from a given distribution of plant densities. We extend previous results obtained using both classes of models with additional phenomena, including clustering and succession of plants. We also introduce the formalism of multiset L-systems to formalize the individual-based simulation models.
Simulation and Rendering of Liquid Foams BIBAPDFAVIGI Online Paper 81-88
  Hendrik Kück; Christian Vogelgsang; Günter Greiner
In this paper we present a technique for simulating and rendering liquid foams. We are aiming at a functional realism that allows our simulation to be consistent with the physical effects in real liquid foam while avoiding the prohibitive computational cost of a physically accurate simulation. To this end, we have to recreate two important attributes of foam. The dynamic behaviour of the simulated foam must be based on the physics of real foam, and the characteristic interior structures of foam and their optical properties must be reproduced. We tackle these requirements by introducing a two part hybrid rendering approach. The first stage is geometric and determines the dynamic behaviour of the foam by simulating structural forces on a set of spheres, which represent the foam bubbles. In the second stage we render these spheres using a special surface shader that implicitly reconstructs the foam surfaces and performs the shading calculations. This two step approach allows us to easily integrate our technique into existing ray-tracing systems. We include images of an example animation to demonstrate the visual quality.

The Body

Texturing Faces BIBAPDFPDFGI Online Paper 89-98
  Marco Tarini; Hitoshi Yamauchi; Jörg Haber; Hans-Peter Seidel
We present a number of techniques to facilitate the generation of textures for facial modeling. In particular, we address the generation of facial skin textures from uncalibrated input photographs as well as the creation of individual textures for facial components such as eyes or teeth. Apart from an initial feature point selection for the skin texturing, all our methods work fully automatically without any user interaction. The resulting textures show a high quality and are suitable for both photo-realistic and real-time facial animation.
A Direct Method for Positioning the Arms of a Human Model BIBAPDFGI Online Paper 99-106
  John McDonald; Karen Alkoby; Roymieco Carter; Juliet Christopher; Mary Jo Davidson; Dan Ethridge; Jacob Furst; Damien Hinkle; Glenn Lancaster; Lori Smallwood; Nedjla Ougouag-Tiouririne; Jorge Toro; Shuang Xu; Rosalee Wolfe
Many problems in computer graphics concern the precise positioning of a human figure, and in particular, the positioning of the joints in the upper body as a virtual character performs some action. We explore a new technique for precisely positioning the joints in the arms of a human figure to achieve a desired posture. We focus on an analytic solution for the IK chains of the model's arms and an interface for conveniently specifying a desired targeting point, or articulator, on the model's hand. Also, we consider the problem of specifying a target for that articulator in space or in contact with the model's own body. These methods recast the seven degrees of freedom in the arm to provide a more intuitive interface for animation. We demonstrate the efficacy and efficiency of these techniques in positioning a virtual American Sign Language interpreter.
Application-Specific Muscle Representations BIBAPDFMOVEPS 107-116
  Victor Ng-Thow-Hing; Eugene Fiume
The need to model muscles means different things to artistic and technical practitioners. Three different muscle representations are presented and the motivations behind their design are discussed. Each representation allows unique capabilities and operations to be performed on the model, yet the underlying mathematical foundation is the same for all. This is achieved by developing a data-fitting pipeline that allows samples that are generated from different data sources to be used in the guided construction of a B-spline solid. We show how B-spline solids can be used to create muscles from contour curves extracted out of medical images, digitized fibre sets from dissections of muscle specimens, and profile curves that can be interactively sketched and manipulated by an anatomical modeller.

Input Devices

A Model of Two-Thumb Text Entry BIBAPDFGI Online Paper 117-124
  I. Scott MacKenzie; R. William Soukoreff
Although text entry has been extensively studied for touch typing on standard keyboards and finger and stylus input on soft keyboards, no such work exists for two-thumb text entry on miniature Qwerty keyboards. In this paper, we propose a model for this mode of text entry. The model provides a behavioural description of the interaction as well as a predicted text entry rate in words per minute. The prediction obtained is 60.74 words per minute. The prediction is based solely on the linguistic and motor components of the task; thus, it is a peak rate for expert text entry. A detailed sensitivity analysis is included to examine the effect of changing the model's components and parameters over a broad range (+/-50% for the parameters). The model demonstrates reasonable stability - predictions remain within about 10% of the value just cited.
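The structure of such a prediction (movement time from Fitts' law, weighted by digraph probabilities, converted to words per minute) can be sketched as follows. The coefficients, key layout, and digraph table here are placeholders, not the paper's measured values, and the real model also distinguishes same-thumb from alternating-thumb key pairs.

```python
import math

# Assumed parameters -- the real model uses measured Fitts' law
# coefficients and English digraph probabilities.
FITTS_A, FITTS_B = 0.1, 0.2     # seconds, seconds/bit (placeholders)
KEY_WIDTH = 1.0                 # key width, in key units

def fitts_time(distance, width=KEY_WIDTH, a=FITTS_A, b=FITTS_B):
    """Movement time for one key press (Shannon formulation of Fitts' law)."""
    return a + b * math.log2(distance / width + 1.0)

def predicted_wpm(digraphs, key_pos):
    """Expected words per minute: mean inter-key time weighted by digraph
    probability, with the usual 5 characters per word."""
    t = sum(p * fitts_time(math.dist(key_pos[c1], key_pos[c2]))
            for (c1, c2), p in digraphs.items())
    return (1.0 / t) * 60.0 / 5.0
```

A sensitivity analysis like the paper's amounts to re-evaluating `predicted_wpm` while sweeping `FITTS_A` and `FITTS_B` over a range such as +/-50%.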
Virtual Sculpting with Haptic Displacement Maps BIBAPDFGI Online Paper 125-132
  Robert Jagnow; Julie Dorsey
This paper presents an efficient data structure that facilitates high-speed haptic (force feedback) interaction with detailed digital models. Models are partitioned into coarse slabs, which collectively define a piecewise continuous vector field over a thick volumetric region surrounding the surface of the model. Within each slab, the surface is represented as a displacement map, which uses the vector field to define a relationship between points in space and corresponding points on the model's surface. This representation facilitates efficient haptic interaction without compromising the visual complexity of the scene. Furthermore, the data structure provides a basis for interactive local editing of a model's color and geometry using the haptic interface. We describe implementation details and demonstrate the use of the data structure with a variety of digital models.
A Desktop Input Device and Interface for Interactive 3D Character Animation BIBAPDFGI Online Paper 133-140
  Sageev Oore; Demetri Terzopoulos; Geoffrey Hinton
We present a novel input device and interface for interactively controlling the animation of a graphical human character from a desktop environment. The trackers are embedded in a new physical design, which is simple yet provides significant benefits, and establishes a tangible interface with coordinate frames inherent to the character. A layered kinematic motion recording strategy accesses subsets of the total degrees of freedom of the character. We present the experiences of three novice users with the system, and that of a long-term user who has prior experience with other complex continuous interfaces.
Laser Pointers as Collaborative Pointing Devices BIBAPDFGI Online Paper 141-150
  Ji-Young Oh; Wolfgang Stuerzlinger
Single Display Groupware (SDG) is a research area that focuses on providing collaborative computing environments. Traditionally, most hardware platforms for SDG support only one person interacting at any given time, which limits collaboration. In this paper, we present laser pointers as input devices that can provide the concurrent input streams an SDG environment ideally requires. First, we discuss several issues related to the utilization of laser pointers and present the new concept of computer-controlled laser pointers. Then we briefly present a performance evaluation of laser pointers as input devices and a baseline comparison with the mouse according to the ISO 9241-9 standard. Finally, we describe a new system that uses multiple computer-controlled laser pointers as interaction devices for one or more displays. Several alternatives for distinguishing between different laser pointers are presented, and an implementation of one of them is demonstrated with SDG applications.

Surfaces and Meshes

Real-Time Extendible-Resolution Display of On-line Dynamic Terrain BIBAPDFGI Online Paper 151-160
  Yefei He; James Cremer; Yiannis Papelis
We present a method for multiresolution view-dependent real-time display of terrain undergoing on-line modification. In other words, the method does not assume static terrain geometry, nor does it assume that the terrain update sequence is known ahead of time. The method is both fast and space efficient. It is fast because it relies on local updates to the multiresolution structure as terrain changes. It is much more space efficient than many previous approaches because the multiresolution structure can be extended on-line, to provide higher resolution terrain only where needed. Our approach is especially well-suited for applications like real-time off-road driving simulation involving large terrain areas with localized high-resolution terrain updates.
Compressing Polygon Mesh Connectivity with Degree Duality Prediction BIBAPDFGI Online Paper 161-170
  Martin Isenburg
In this paper we present a coder for polygon mesh connectivity that delivers the best connectivity compression rates for polygon meshes reported so far. Our coder is an extension of the vertex-based coder for triangle mesh connectivity by Touma and Gotsman [GI '98]. We code polygonal connectivity as a sequence of face and vertex degrees and exploit the correlation between them for mutual predictive compression. Because low-degree vertices are likely to be surrounded by high-degree faces and vice versa, we predict vertex degrees based on neighboring face degrees and face degrees based on neighboring vertex degrees.
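The duality intuition can be made concrete via the regular-tiling relation 1/p + 1/q = 1/2: a vertex surrounded by faces of average degree p is predicted to have degree about 2p/(p-2), so triangles suggest valence 6, quads valence 4, and hexagons valence 3. A sketch of such a predictor follows; the rounding and the degenerate guard are our choices, and the paper's coder is considerably more refined.

```python
def predict_vertex_degree(face_degrees):
    """Predict a vertex's degree from the degrees of its incident faces,
    using the duality of regular tilings (1/p + 1/q = 1/2)."""
    p = sum(face_degrees) / len(face_degrees)   # average face degree
    if p <= 2.0:                                 # degenerate guard
        return 3
    return max(3, round(2.0 * p / (p - 2.0)))

def residual(actual_degree, face_degrees):
    """Signed prediction error -- the small quantity an entropy coder
    would actually encode."""
    return actual_degree - predict_vertex_degree(face_degrees)
```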
Efficient Bounded Adaptive Tessellation of Displacement Maps BIBAPDFGI Online Paper 171-180
  Kevin Moule; Michael D. McCool
Displacement mapping is a technique for applying fine geometric detail to a simpler base surface. The displacement is often specified as a scalar function which makes it relatively easy to increase visual complexity without the difficulties inherent in more general modeling techniques. We would like to use displacement mapping in real-time applications. Ideally, a graphics accelerator should create a polygonal tessellation of the displaced surface on the fly to avoid storage and host bandwidth overheads.
   We present an online, adaptive, crack-free tessellation scheme for real-time displacement mapping that uses only local information for each triangle to perform a view-dependent tessellation. The tessellation works in homogeneous coordinates and avoids re-transformation of displaced points, making it suitable for high-performance hardware implementation. The use of interval analysis produces meshes with good error bounds that converge quickly to the true surface.
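The role of interval analysis in the refinement test can be sketched as follows: bound the displacement over a parametric patch by its min/max, and split while the bound on projected error exceeds a tolerance. This is an illustrative stand-in only; the region scan, the screen-space scale factor, and the tolerance are assumptions (a real implementation would use a precomputed min/max pyramid and the paper's homogeneous-coordinate machinery).

```python
import numpy as np

def needs_split(disp_map, u0, u1, v0, v1, screen_scale, tol=0.5):
    """Interval-style refinement test: a conservative min/max bound on
    the displacement over [u0,u1]x[v0,v1], scaled to screen space,
    decides whether the patch must be subdivided further."""
    h, w = disp_map.shape
    region = disp_map[int(v0 * h):max(int(v1 * h), int(v0 * h) + 1),
                      int(u0 * w):max(int(u1 * w), int(u0 * w) + 1)]
    interval = region.max() - region.min()   # width of the value interval
    return interval * screen_scale > tol
```

Because the bound is conservative, a patch is never accepted while its true displaced surface could still deviate by more than the tolerance, which is what makes the resulting tessellation crack-free to within the stated error.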
Automatic Generation of Subdivision Surface Head Models from Point Cloud Data BIBAPDFPDFGI Online Paper 181-188
  Won-Ki Jeong; Kolja Kähler; Jörg Haber; Hans-Peter Seidel
An automatic procedure is presented to generate a multiresolution head model from sampled surface data. A generic control mesh serves as the starting point for a fitting algorithm that approximates the points in an unstructured set of surface samples, e.g. a point cloud obtained directly from range scans of an individual. A hierarchical representation of the model is generated by repeated refinement using subdivision rules and measuring displacements to the input data. Key features of our method are the fully automated construction process, the ability to deal with noisy and incomplete input data, and no requirement for further processing of the scan data after registering the range images into a single point cloud.

Reflectance and Lighting

A BRDF Database Employing the Beard-Maxwell Reflection Model BIBAPDFGI Online Paper 189-200
  Harold Westlund; Gary Meyer
The Beard-Maxwell reflection model is presented as a new local reflection model for use in realistic image synthesis. The model is important because there is a public domain database of surface reflection parameters, the Nonconventional Exploitation Factors Data System (NEFDS), that utilizes a modified form of the Beard-Maxwell model. Additional surface reflection parameters for the database can be determined because a measurement protocol, using existing radiometric instruments, has been specified. The Beard-Maxwell model is also of historical significance because it predates many computer graphics reflection models and because it includes several features that are incorporated into existing local reflection models. The NEFDS is described and a special shader is developed for use with it. The shader makes use of the alias method for determining random variates from discrete probability distributions. Realistic images are synthesized from the existing database and from samples that were characterized using the measurement protocol.
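The alias method mentioned above is a standard technique (Walker's method): O(n) table construction, then O(1) sampling from any discrete distribution. A generic sketch, independent of the NEFDS data itself:

```python
import random

def build_alias_table(probs):
    """Walker's alias method.  Each slot i holds a biased coin prob[i]
    and a fallback outcome alias[i]; together they reproduce the input
    distribution exactly."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        # The large entry donates mass to fill slot s up to 1.
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:   # leftovers are exactly 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias, rng=random):
    """Draw one outcome: pick a slot uniformly, then flip its coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```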
Coherent Bump Map Recovery from a Single Texture Image BIBAPDFTGATGAGI Online Paper 201-208
  Jean-Michel Dischler; Karl Maritaud; Djamchid Ghazanfarpour
In order to texture surfaces realistically with texture images (e.g. photos), it is important to consider the underlying relief. Here, a method is proposed to recover a coherent bump map from a single texture image. Different visual zones are first identified using segmentation and classification. Then, by linearly separating the relief into a noise-like small-scale component and a smooth "shape-related" large-scale component, we can automatically deduce the bump map as well as an "unshaded" color map of the texture. The major advantage of our approach, compared to sophisticated measurement techniques based on multiple photos or specific devices, is its practical simplicity and broad accessibility, while still allowing us to obtain good-quality rendering results very easily via basic bump mapping or displacement mapping.
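The linear separation into a smooth large-scale component and a noise-like small-scale residual can be sketched with a simple low-pass split. The box filter and kernel size are our stand-ins; the paper's separation, and its subsequent removal of shading from the color map, are more involved.

```python
import numpy as np

def separate_relief(gray, kernel=9):
    """Split a grayscale texture into a smooth large-scale component
    (box-blurred here) and a small-scale residual usable as a bump map.
    By construction large + small reconstructs the input exactly."""
    pad = kernel // 2
    padded = np.pad(gray, pad, mode='edge')
    large = np.zeros_like(gray, dtype=float)
    for dy in range(kernel):          # accumulate the box filter
        for dx in range(kernel):
            large += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    large /= kernel * kernel
    small = gray - large              # small-scale bump component
    return large, small
```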
Interactive Lighting Models and Pre-Integration for Volume Rendering on PC Graphics Accelerators BIBAGI Online PaperPDF 209-218
  Michael Meissner; Stefan Guthe; Wolfgang Strasser
Shading and classification are among the most powerful and important techniques used in volume rendering. Unfortunately, for hardware-accelerated volume rendering based on OpenGL, direct classification was previously only supported on SGI platforms, and shading could only be approximated inaccurately, resulting in artifacts that are most visible as darkening.
   In this paper, we present a novel approach for accurate shading of complex lighting models using multi-texturing, dependent textures (e.g. cube maps), and register combiners. Additionally, we present how different material properties can be integrated as a per voxel property to allow for more realistic image synthesis. Furthermore, we present a new technique circumventing the shading artifacts of previous approaches by pre-integrating an interpolation weight. Finally, we discuss how texture compression can be integrated to reduce the memory bandwidth required for relatively large volumes.
Single Sample Soft Shadows Using Depth Maps BIBAPDFGI Online Paper 219-228
  Stefan Brabec; Hans-Peter Seidel
In this paper we propose a new method for rendering soft shadows at interactive frame rates. Although the algorithm only uses information obtained from a single light source sample, it is capable of producing subjectively realistic penumbra regions. We do not claim that the proposed method is physically correct but rather that it is aesthetically correct. Since the algorithm operates on sampled representations of the scene, the shadow computation does not directly depend on the scene complexity. Having only a single depth and object ID map representing the pixels seen by the light source, we can approximate penumbrae by searching the neighborhood of pixels warped from the camera view for relevant blocker information. We explain the basic technique in detail, showing how simple observations can yield satisfying results. We also address sampling issues relevant to the quality of the computed shadows, as well as speed-up techniques that are able to bring the performance up to interactive frame rates.
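The neighbourhood search over the light-source depth map can be caricatured as follows. Note that this sketch reduces to percentage-closer-filtering-style averaging over a square window; the paper's method additionally uses an object ID map and pixels warped from the camera view, which are omitted here.

```python
import numpy as np

def soft_shadow_factor(depth_map, x, y, receiver_depth, radius=2, bias=1e-3):
    """Estimate a penumbra value for one receiver point by searching the
    neighbourhood of its shadow-map pixel for blockers.  Returns 1.0 for
    fully lit, 0.0 for umbra, and a fraction inside the penumbra."""
    h, w = depth_map.shape
    blocked = total = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                total += 1
                # A map sample nearer to the light than the receiver blocks it.
                if depth_map[yy, xx] < receiver_depth - bias:
                    blocked += 1
    return 1.0 - blocked / total
```

Because the factor depends only on the sampled depth map, its cost is independent of scene complexity, matching the abstract's claim for image-space shadow methods.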