
Proceedings of the 2004 Conference on Graphics Interface

Fullname: Proceedings of the 2004 Conference on Graphics Interface
Editors: Wolfgang Heidrich; Ravin Balakrishnan
Location: London, Ontario, Canada
Dates: 2004-May-17 to 2004-May-19
Publisher: Canadian Information Processing Society
Standard No: ISBN 1-56881-227-2
  1. Input
  2. Rendering
  3. Next Generation Interfaces
  4. Perception, Awareness, Collaboration, and Information Management
  5. Displays
  6. Hardware
  7. Sampling
  8. Layout and Visualization
  9. Textures and Materials

Input

Writing with a joystick: a comparison of date stamp, selection keyboard, and EdgeWrite (pp. 1-8)
  Jacob O. Wobbrock; Brad A. Myers; Htet Htet Aung
A joystick text entry method for game controllers and mobile phones would be valuable, since these devices often have joysticks but no conventional keyboards. But prevalent joystick text entry methods are slow because they are selection-based. EdgeWrite, a new joystick text entry method, is not based on selection but on gestures from a unistroke alphabet. Our experiment shows that this new method is faster, leaves fewer errors, and is more satisfying than date stamp and selection keyboard (two prevalent selection-based methods) for novices after minimal practice. For more practiced users, our results show that EdgeWrite is at least 1.5 times faster than selection keyboard, and 2.4 times faster than date stamp.
Object pointing: a complement to bitmap pointing in GUIs (pp. 9-16)
  Yves Guiard; Renaud Blanch; Michel Beaudouin-Lafon
Pointing has been conceptualized and implemented so far as the act of selecting pixels in bitmap displays. We show that the current technique, which we call bitmap pointing (BMP), is often sub-optimal as it requires continuous information from the mouse while the system often just needs the discrete specification of objects. The paper introduces object pointing (OP), a novel interaction technique based on a special screen cursor that skips empty spaces, thus drastically reducing the waste of input information. We report data from 1D and 2D Fitts' law experiments showing that OP outperforms BMP and that the performance facilitation increases with the task's index of difficulty. We discuss the implementation of OP in current interfaces.
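
The "index of difficulty" referenced above is the standard Shannon formulation used throughout the Fitts' law literature; a minimal sketch follows (the example distance and width values are illustrative, not taken from the paper):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits.

    'distance' is the movement amplitude to the target and 'width' is the
    target size along the axis of motion, both in the same units.
    """
    return math.log2(distance / width + 1.0)

# Object pointing skips over empty pixels, which effectively enlarges the
# targets and thus lowers the index of difficulty of the same selection.
bitmap_id = index_of_difficulty(512, 16)  # a 512-px reach to a 16-px target
```

The finding that object pointing's advantage grows with the index of difficulty means the benefit is largest exactly for small, distant targets.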
Toolglasses, marking menus, and hotkeys: a comparison of one and two-handed command selection techniques (pp. 17-24)
  Daniel L. Odell; Richard C. Davis; Andrew Smith; Paul K. Wright
This paper introduces a new input technique, bimanual marking menus, and compares its performance with five other techniques: static toolbars, hotkeys, grouped hotkeys, marking menus, and toolglasses. The study builds on previous work by setting the comparison in a commonly encountered task, shape drawing. In this context, grouped hotkeys and bimanual marking menus were found to be the fastest. Subjectively, the most preferred input method was bimanual marking menus. Toolglass performance was unexpectedly slow, which hints at the importance of low-level toolglass implementation choices.
The effects of feedback on targeting with multiple moving targets (pp. 25-32)
  David Mould; Carl Gutwin
A number of task settings involve selection of objects from dynamic visual environments with multiple moving targets. Target selection is difficult in these settings because objects move, because there are a number of distracter objects for any targeting action, and because objects can occlude the target. Target feedback has been suggested as a way to assist targeting in visual environments. We carried out an experiment to test the effects of visual target feedback. We found that targeting does become more difficult as the number and speed of objects increases, and that feedback can improve error rates. When feedback was provided on all objects in the space, performance improved significantly over no feedback. Target-only feedback, however, was not significantly better than no feedback. This is a valuable result because all-object feedback is in most cases the only implementation option -- since it is usually not possible to pre-determine the user's target among the set of objects.

Rendering

Object representation using 1D displacement mapping (pp. 33-40)
  Yi Xu; Yee-Hong Yang
In this paper, we propose a new method for rendering image-based objects. In a preprocessing stage, an object is decomposed into a set of 1D displacement textures, which are reconstructed from images with depth maps. During rendering, each 1D displacement texture is drawn at its appropriate position in 3D space using the hardware-accelerated displacement mapping approach. Our method can represent and render complex objects with accurate appearance from arbitrary viewpoints, and it renders complex objects at high frame rates using commodity graphics hardware.
A hybrid hardware-accelerated algorithm for high quality rendering of visual hulls (pp. 41-48)
  Ming Li; Marcus Magnor; Hans-Peter Seidel
In this paper, a novel hybrid algorithm is presented for the fast construction and high-quality rendering of visual hulls. We combine the strengths of two complementary hardware-accelerated approaches: direct constructive solid geometry (CSG) rendering and texture mapping-based visual cone trimming. The former approach completely eliminates the aliasing artifacts inherent in the latter, whereas the rapid speed of the latter approach compensates for the performance deficiency of the former. Additionally, a new view-dependent texture mapping method is proposed. This method makes efficient use of graphics hardware to perform per-fragment blending weight computation, which yields better rendering quality. Our rendering algorithm is integrated in a distributed system that is capable of acquiring synchronized video streams and rendering visual hulls in real time or at interactive frame rates from up to eight reference views.
Blueprints: illustrating architecture and technical parts using hardware-accelerated non-photorealistic rendering (pp. 49-56)
  Marc Nienhaus; Jürgen Döllner
Outlining and enhancing visible and occluded features in drafts of architecture and technical parts are essential techniques to visualize complex aggregated objects and to illustrate position, layout, and relations of their components.
   In this paper, we present blueprints, a novel non-photorealistic hardware-accelerated rendering technique that outlines visible and non-visible perceptually important edges of 3D objects. Our technique is based on the edge map algorithm and the depth peeling technique to extract these edges from arbitrary 3D scene geometry in depth-sorted order. After edge maps have been generated, they are composed in image space using depth sprites, which allow us to combine blueprints with further 3D scene contents. We introduce depth masking to dynamically adapt the number of rendering passes for highlighting and illustrating features of particular importance and their relation to the entire assembly. Finally, we give an example of blueprints that visualize and illustrate ancient architecture in the scope of cultural heritage.
Transfer functions on a logarithmic scale for volume rendering (pp. 57-63)
  Simeon Potts; Torsten Möller
Manual opacity transfer function editing for volume rendering can be a difficult and counter-intuitive process. This paper proposes a logarithmically scaled editor, and argues that such a scale relates the height of the transfer function almost directly to the rendered intensity of a region of a particular density in the volume, resulting in much simpler manual transfer function editing.
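
The idea can be illustrated with a log-interpolated mapping from editor handle height to opacity (an assumption for illustration; the paper's exact editor mapping may differ):

```python
def log_scaled_opacity(h, alpha_min=1e-4, alpha_max=1.0):
    """Map an editor handle height h in [0, 1] to opacity on a log scale.

    Equal vertical steps of the handle multiply the opacity by a constant
    factor, so low-opacity regions get the same editing resolution as
    high-opacity ones. (Illustrative sketch, not the paper's exact mapping.)
    """
    if not 0.0 <= h <= 1.0:
        raise ValueError("handle height must lie in [0, 1]")
    return alpha_min * (alpha_max / alpha_min) ** h
```

With a linear scale, the entire useful low-opacity range would be crammed into the bottom few pixels of the editor; the log scale spreads it evenly.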

Next Generation Interfaces

Exploring gradient-based face navigation interfaces (pp. 65-72)
  Tzu-Pei Grace Chen; Sidney Fels
We have created a gradient-based face navigation interface that allows users to explore a large face space based on an eigenface technique. This approach to synthesizing faces contrasts with more typical techniques that form composite faces by blending facial features. We compare three ways of moving through the face space, using two types of sliders and a face-wheel, adapted from familiar color-space interfaces. However, unlike primary colors, eigenface dimensions have no meaningful text labels, necessitating the use of faces themselves as labels for the navigation axes. Results suggest that users can navigate with face-labelled axes: they find slider interfaces best suited to locating the neighborhood of a target face, while the face-wheel is better for refinement once inside the neighborhood.
Towards the next generation of tabletop gaming experiences (pp. 73-80)
  Carsten Magerkurth; Maral Memisoglu; Timo Engelke; Norbert Streitz
In this paper we present a novel hardware and software platform (STARS) to realize computer augmented tabletop games that unify the strengths of traditional board games and computer games. STARS game applications preserve the social situation of traditional board games and provide a tangible interface with physical playing pieces to facilitate natural interaction. The virtual game components offer exciting new opportunities for game design and provide richer gaming experiences impossible to realize with traditional media.
   This paper describes STARS in terms of the hardware setup and the software platform used to develop and play STARS games. The interaction design within STARS is discussed and sample games are presented with regard to their contributions to enhancing user experience. Finally, real-world experiences with the platform are reported.
Haptic interaction with fluid media (pp. 81-88)
  William Baxter; Ming C. Lin
We present a method for integrating force feedback with interactive fluid simulation. We use the method described to generate haptic display of an incompressible Navier-Stokes fluid simulation. The force feedback calculation is based on the equations of fluid motion, and enables us to generate forces as well as torques for use with haptic devices capable of delivering torque. In addition, we adapt our fluid-haptic feedback method for use in a painting application that is based on fluid simulation, enabling the artist to feel the paint. Finally we describe a force filtering technique to reduce the artifacts that result from using 60Hz simulation data to drive the 1KHz haptic servo loop, a situation which often arises in practice.
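
The 60 Hz-to-1 kHz mismatch mentioned above can be bridged by interpolating the simulation samples up to the servo rate and low-pass filtering the result. The sketch below assumes linear interpolation with a one-pole filter; the paper's actual filter design is not reproduced here:

```python
def upsample_forces(samples, sim_hz=60, haptic_hz=1000, smoothing=0.1):
    """Resample a low-rate force signal for a high-rate haptic servo loop.

    Linearly interpolates the simulation-rate samples up to the haptic
    rate, then applies a one-pole low-pass filter to soften the slope
    discontinuities that are felt as artifacts. (Illustrative only.)
    """
    out = []
    prev = samples[0]
    n_sub = haptic_hz // sim_hz  # haptic ticks per simulation step
    for a, b in zip(samples, samples[1:]):
        for k in range(n_sub):
            t = k / n_sub
            target = a + (b - a) * t                   # linear interpolation
            prev = prev + smoothing * (target - prev)  # one-pole low-pass
            out.append(prev)
    return out
```

Feeding raw 60 Hz steps straight to the 1 kHz loop would produce a staircase force signal, which the hand perceives as buzzing; smoothing trades a little lag for continuity.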

Perception, Awareness, Collaboration, and Information Management

Remote collaboration using Augmented Reality Videoconferencing (pp. 89-96)
  Istvan Barakonyi; Tamer Fahmy; Dieter Schmalstieg
This paper describes an Augmented Reality (AR) videoconferencing system, a novel remote collaboration tool combining a desktop-based AR system and a videoconference module. The novelty of our system lies in superimposing AR applications on a live video background that shows the conference parties' real environment, merging the natural face-to-face communication of videoconferencing with AR's capacity for interaction with distributed virtual objects through tangible physical artifacts. The simplicity of the system makes it affordable for everyday use. We explain our system design, based on concurrent video streaming, optical tracking, and 3D application sharing, and provide experimental evidence that it yields superior quality compared to pure video streaming with successive optical tracking from the compressed streams. We demonstrate the system's collaborative features with a volume rendering application that allows users to display and examine volumetric data simultaneously and to highlight or explore slices of the volume by manipulating an optical marker as a cutting-plane interaction device.
View direction, surface orientation and texture orientation for perception of surface shape (pp. 97-106)
  Graeme Sweet; Colin Ware
Textures are commonly used to enhance the representation of shape in non-photorealistic rendering applications such as medical drawings. Textures with elongated linear elements appear to be superior to random textures in that they can, by the way they conform to the surface, reveal the surface shape. We observe that the shape-following hachure marks commonly used in cartography and copper-plate illustration are locally similar to the lines generated by intersecting a set of parallel planes with a surface. We use this as a basis for investigating the relationships between view direction, texture orientation, and surface orientation in affording surface shape perception. We report two experiments using parallel-plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above.
ARIS: an interface for application relocation in an interactive space (pp. 107-116)
  Jacob T. Biehl; Brian P. Bailey
An interactive space, by letting users manage information across PDAs, laptops, graphics tablets, and large screens, could dramatically improve how they share information in collaborative work. To support this, we iteratively designed ARIS, a window manager for interactive spaces. We discuss its implementation and share lessons learned from user evaluations about how to design a more effective window manager for an interactive space and how to better evaluate low-fidelity prototypes in such a space. Our work can enable richer collaborations among users of an interactive space.
Is a picture worth a thousand words?: an evaluation of information awareness displays (pp. 117-126)
  Christopher Plaue; Todd Miller; John Stasko
What makes a peripheral or ambient display more effective at presenting awareness information than another? Presently, little is known in this regard and techniques for evaluating these types of displays are just beginning to be developed. In this article, we focus on one aspect of a peripheral display's effectiveness -- its ability to communicate information at a glance. We conducted an evaluation of the InfoCanvas, a peripheral display that conveys awareness information graphically as a form of information art, by assessing how well people recall information when it is presented for a brief period of time. We compare performance of the InfoCanvas to two other electronic information displays, a Web portal style and a text-based display, when each display was viewed for a short period of time. We found that participants noted and recalled significantly more information when presented by the InfoCanvas than by either of the other displays despite having to learn the additional graphical representations employed by the InfoCanvas.

Displays

Revisiting display space management: understanding current practice to inform next-generation design (pp. 127-134)
  Dugald Ralph Hutchings; John Stasko
Most modern computer systems allow the user to control the space allocated to interfaces through a window system. While much of the understanding of how people interact with windows may be regarded as well-known, there are very few reports of documented window management practices. Recent work on larger display spaces indicates that multiple monitor use is becoming more commonplace, and that users are experiencing a variety of usability issues with their window systems. The lack of understanding of how people generally interact with windows implies that future design and evaluation of window managers may not address emerging user needs and display systems. Thus we present a study of people using a variety of window managers and display configurations to illustrate manager- and display-independent space management issues. We illustrate several issues with space management, and each issue includes discussion of the implications of both evaluations and design directions for future window managers. We also present a classification of users' space management styles and relationships to window system types.
An evaluation of techniques for controlling focus+context screens (pp. 135-144)
  Mark J. Flider; Brian P. Bailey
We evaluated four techniques for controlling focus+context screens. We compared an egocentric versus an exocentric view, crossed with whether the display on the focus screen moves in the same (paper mapping) or the opposite (scroll mapping) direction as the input force. Our results show that (i) the view factor had little effect, (ii) users almost always allocated attention to the context screen when controlling the display, (iii) scroll mappings enabled a user to perform tasks faster, commit fewer errors, and be more satisfied with the system compared to paper mappings, and (iv) a user can better control focus+context screens when the frame of reference either does move or is perceived to move in the direction of input force. We discuss these results and recommend how to enable a user to better control focus+context screens.
Interacting with big interfaces on small screens: a comparison of fisheye, zoom, and panning techniques (pp. 145-152)
  Carl Gutwin; Chris Fedak
Mobile devices with small screens are becoming more common, and will soon be powerful enough to run desktop software. However, the large interfaces of desktop applications do not fit on the small screens. Although there are ways to redesign a UI to fit a smaller area, there are many cases where the only solution is to navigate the large UI with the small screen. The best way to do this, however, is not known. We compared three techniques for using large interfaces on small screens: a panning system similar to what is in current use, a two-level zoom system, and a fisheye view. We tested the techniques with three realistic tasks. We found that people were able to carry out a web navigation task significantly faster with the fisheye view, that the two-level zoom was significantly better for a monitoring task, and that people were slowest with the panning system.

Hardware

Hardware accelerated per-pixel displacement mapping (pp. 153-158)
  Johannes Hirche; Alexander Ehlert; Stefan Guthe; Michael Doggett
In this paper we present an algorithm capable of rendering a displacement mapped triangle mesh interactively on the latest GPUs. The algorithm uses only pixel shaders and does not rely on adaptively adding geometry. All sampling of the displacement map takes place in the pixel shader, where bi- or trilinear filtering can be applied to it; and because the calculations are done per pixel in the shader, the algorithm has automatic level-of-detail control. The triangles of the base mesh are extruded along their respective normal directions, and the resulting prisms are rendered by casting rays inside them and intersecting the rays with the displaced surface. Two different implementations are discussed in detail.
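
The core intersection step (casting a ray through a prism until it meets the displaced surface) can be sketched on the CPU as a fixed-step march against a height function. This is a simplification of the per-pixel shader logic, with all names hypothetical:

```python
def ray_march_heightfield(height, origin, direction, steps=64):
    """March a ray against a displacement (height) map.

    'height' is a function (x, y) -> h; the ray starts above the surface
    and we step until the ray point falls below the sampled height,
    returning the hit position, or None on a miss. A simplified CPU
    sketch of the per-pixel intersection performed inside each extruded
    prism; a real shader would also refine the hit between steps.
    """
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(steps):
        if z <= height(x, y):
            return (x, y, z)  # ray entered the displaced surface
        x, y, z = x + dx, y + dy, z + dz
    return None
```

The fixed step count is what gives the shader version its predictable per-pixel cost.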
Radiosity on graphics hardware (pp. 161-168)
  Greg Coombe; Mark J. Harris; Anselmo Lastra
Radiosity is a widely used technique for global illumination. Typically the computation is performed offline and the result is viewed interactively. We present a technique for computing radiosity, including an adaptive subdivision of the model, using graphics hardware. Since our goal is to run at interactive rates, we exploit the computational power and programmability of modern graphics hardware. Using our system on current hardware, we have been able to compute and display a radiosity solution for a 10,000 element scene in less than one second.
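
The computation being mapped to the GPU is the classic radiosity gathering iteration, B = E + ρFB. A plain CPU version for a fixed set of patches (the paper additionally subdivides the model adaptively):

```python
def radiosity_gather(emission, reflectance, form_factors, iterations=50):
    """Solve the radiosity system B = E + rho * F * B by Jacobi iteration.

    'form_factors[i][j]' is the fraction of energy leaving patch i that
    arrives at patch j; 'emission' and 'reflectance' are per-patch. Each
    sweep gathers incoming radiosity at every patch, which is the step
    the paper evaluates on graphics hardware.
    """
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b
```

Since reflectances are below one, the iteration is a contraction and converges quickly, which is why an interactive-rate solver is feasible at all.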
Compressed multisampling for efficient hardware edge antialiasing (pp. 169-176)
  Philippe Beaudoin; Pierre Poulin
Today's hardware graphics accelerators incorporate techniques to antialias edges and minimize geometry-related sampling artifacts. Two such techniques, brute force supersampling and multisampling, increase the sampling rate by rasterizing the triangles in a larger antialiasing buffer that is then filtered down to the size of the framebuffer. The sampling rate is proportional to the number of subsamples in the antialiasing buffer and, when no compression is used, to the memory it occupies. In turn, a larger antialiasing buffer implies an increase in bandwidth, one of the limiting resources for today's applications. In this paper we propose a mechanism to compress the antialiasing buffer and limit the bandwidth requirements while maintaining higher sampling rates. The usual framebuffer-related functions of OpenGL are supported: alpha blending, stenciling, color operations, and color masking. The technique is scalable, allowing for user-specified maximal and minimal sampling rates. The compression scheme includes a mechanism to degrade quality gracefully when too much information would be required. A lower bound on the quality of the resulting image is also available, since the sampling rate will never be less than the user-specified minimal rate. The compression scheme is simple enough to be incorporated into standard hardware graphics accelerators. Software simulations show that, for a given bandwidth, our technique offers improved visual results over multisampling schemes.

Sampling

Decoupling BRDFs from surface mesostructures (pp. 177-182)
  Jan Kautz; Mirko Sattler; Ralf Sarlette; Reinhard Klein; Hans-Peter Seidel
We present a technique for the easy acquisition of realistic materials and mesostructures, without acquiring the actual BRDF. The method uses the observation that under certain circumstances the mesostructure of a surface can be acquired independently of the underlying BRDF.
   The acquired data can be used directly for rendering with little preprocessing. Rendering is possible using an offline renderer but also using graphics hardware, where it achieves real-time frame rates. Compelling results are achieved for a wide variety of materials.
Segmenting motion capture data into distinct behaviors (pp. 185-194)
  Jernej Barbič; Alla Safonova; Jia-Yu Pan; Christos Faloutsos; Jessica K. Hodgins; Nancy S. Pollard
Much of the motion capture data used in animations, commercials, and video games is carefully segmented into distinct motions either at the time of capture or by hand after the capture session. As we move toward collecting more and longer motion sequences, however, automatic segmentation techniques will become important for processing the results in a reasonable time frame.
   We have found that straightforward, easy to implement segmentation techniques can be very effective for segmenting motion sequences into distinct behaviors. In this paper, we present three approaches for automatic segmentation. The first two approaches are online, meaning that the algorithm traverses the motion from beginning to end, creating the segmentation as it proceeds. The first assigns a cut when the intrinsic dimensionality of a local model of the motion suddenly increases. The second places a cut when the distribution of poses is observed to change. The third approach is a batch process and segments the sequence where consecutive frames belong to different elements of a Gaussian mixture model. We assess these three methods on fourteen motion sequences and compare the performance of the automatic methods to that of transitions selected manually.
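
The first approach (cut when local dimensionality jumps) can be caricatured without a full PCA by counting the coordinates that are actually varying within a sliding window. This is a crude stand-in for the paper's intrinsic-dimensionality estimate, which counts the principal components needed to retain a fixed fraction of the variance:

```python
def segment_by_dimensionality(frames, window=30, threshold=1e-3):
    """Place cuts where the local dimensionality of the motion jumps.

    'frames' is a list of pose vectors. We count coordinates whose
    variance within each non-overlapping window exceeds a threshold and
    emit a cut whenever that count increases from one window to the
    next. (Simplified proxy for the paper's PCA-based criterion.)
    """
    def active_dims(chunk):
        n, d = len(chunk), len(chunk[0])
        count = 0
        for j in range(d):
            col = [f[j] for f in chunk]
            mean = sum(col) / n
            var = sum((v - mean) ** 2 for v in col) / n
            if var > threshold:
                count += 1
        return count

    cuts = []
    prev = None
    for start in range(0, len(frames) - window + 1, window):
        dim = active_dims(frames[start:start + window])
        if prev is not None and dim > prev:
            cuts.append(start)
        prev = dim
    return cuts
```

The intuition carries over: walking occupies a low-dimensional subspace of joint angles, and a transition to a new behavior recruits new dimensions.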
Image-space silhouettes for unprocessed models (pp. 195-202)
  Michael Ashikhmin
A set of image-space techniques for visualizing silhouettes of 3D models is presented. We assume that very little information is available about the data and rely only on simple counting arguments. This also allows us to automatically handle surface boundaries. While based on previous work, the presented technique is generally simpler and provides some extra convenience compared to the few existing methods which work under similar conditions. We also introduce a method for visualizing all silhouettes of a model in a single view which in many cases allows faster and better understanding of underlying geometric structure.

Layout and Visualization

Interactive image-based exploded view diagrams (pp. 203-212)
  Wilmot Li; Maneesh Agrawala; David Salesin
We present a system for creating interactive exploded view diagrams using 2D images as input. This image-based approach enables us to directly support arbitrary rendering styles, eliminates the need for building 3D models, and allows us to leverage the abundance of existing static diagrams of complex objects. We have developed a set of semi-automatic authoring tools for quickly creating layered diagrams that allow the user to specify how the parts of an object expand, collapse, and occlude one another. We also present a viewing system that lets users dynamically filter the information presented in the diagram by directly expanding and collapsing the exploded view and searching for individual parts. Our results demonstrate that a simple 2.5D diagram representation is powerful enough to enable a useful set of interactions and that, with the right authoring tools, effective interactive diagrams in this format can be created from existing static illustrations with a small amount of effort.
A comparison of fisheye lenses for interactive layout tasks (pp. 213-220)
  Carl Gutwin; Chris Fedak
Interactive fisheye views allow users to edit data and manipulate objects through the distortion lens. Although several varieties of fisheye lens exist, little is known about how the different types fare for different interactive tasks. In this paper, we investigate one kind of interaction -- layout of graphical objects -- that can be problematic in fisheyes. Layout involves judgments of distance, alignment, and angle, all of which can be adversely affected by the distortion of a fisheye. We compared performance on layout tasks with three kinds of fisheye: a full-screen pyramid lens, a constrained hemispherical lens, and a constrained flat-topped hemisphere. We found that accuracy was significantly better with the constrained lenses compared to the full-screen lens, and also that the simple hemisphere was better at higher levels of distortion than the flat-topped version. The study shows that although there is a cost to doing layout through distortion, it is feasible, particularly with constrained lenses. In addition, our findings provide initial empirical evidence of the differences between competing fisheye varieties.
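
The distortion underlying such lenses is a radial magnification function. The sketch below uses the classic g(r) = (d + 1)r / (dr + 1) profile as a stand-in; the exact lens profiles compared in the paper are not reproduced here:

```python
import math

def fisheye_displace(px, py, cx, cy, radius, d=3.0):
    """Displace a point under a constrained radial fisheye lens.

    Points within 'radius' of the focus (cx, cy) are pushed outward by
    g(r) = (d + 1) r / (d r + 1) on the normalized radius r, magnifying
    the centre; points outside the lens are untouched, as with the
    constrained lenses in the study. 'd' is the distortion level.
    (Illustrative profile, not the paper's exact lenses.)
    """
    dx, dy = px - cx, py - cy
    dist = math.hypot(dx, dy)
    if dist >= radius or dist == 0.0:
        return (px, py)
    r = dist / radius                   # normalized radius in (0, 1)
    g = (d + 1.0) * r / (d * r + 1.0)   # magnified radius; g(1) = 1
    scale = g / r
    return (cx + dx * scale, cy + dy * scale)
```

Because g(1) = 1, the displacement falls smoothly to zero at the lens boundary, so the constrained lens never disturbs objects outside it.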
Improving menu placement strategies for pen input (pp. 221-230)
  Mark S. Hancock; Kellogg S. Booth
We investigate menu selection in circular and rectangular pop-up menus using stylus-driven direct input on horizontal and vertical display surfaces. An experiment measured performance in a target acquisition task in three different conditions: direct input on a horizontal display surface, direct input on a vertical display and indirect input to a vertical display. The third condition allows comparison of direct and indirect techniques commonly used for vertical displays. The results of the study show that both left-handed and right-handed users demonstrate a consistent, but mirrored pattern of selection times that is corroborated by qualitative measures of user preference. We describe a menu placement strategy for a tabletop display that detects the handedness of the user and displays rectangular pop-up menus. This placement is based on the results of our study.
Map morphing: making sense of incongruent maps (pp. 231-238)
  Derek F. Reilly; Kori M. Inkpen
Map morphing is an interactive visualization technique that provides a user-controlled, animated translation from one map to another. Traditionally, overlay mechanisms are used to present layers of information over a single projection. Map morphing provides a way to relate maps with significant spatial and schematic differences. This paper presents the morphing technique and the results of a comparative evaluation of map morphing against standard ways of presenting related maps. Our results demonstrate that map morphing provides additional information that can be used to effectively relate maps. In particular, significantly more tasks were completed correctly using the morphing interface than either a windowed or an inset interface.

Textures and Materials

Interactive virtual materials (pp. 239-246)
  Matthias Müller; Markus Gross
In this paper we present a fast and robust approach for simulating elasto-plastic materials and fracture in real time. Our method extends the warped stiffness finite element approach for linear elasticity and combines it with a strain-state-based plasticity model. The internal principal stress components provided by the finite element computation are used to determine fracture locations and orientations. We also present a method to consistently animate and fracture a detailed surface mesh along with the underlying volumetric tetrahedral mesh. This multi-resolution strategy produces, at interactive rates, realistic animations of a wide spectrum of materials that until now have typically been simulated off-line.
Perspective accurate splatting (pp. 247-254)
  Matthias Zwicker; Jussi Räsänen; Mario Botsch; Carsten Dachsbacher; Mark Pauly
We present a novel algorithm for accurate, high quality point rendering, which is based on the formulation of splatting using homogeneous coordinates. In contrast to previous methods, this leads to perspective correct splat shapes, avoiding artifacts such as holes caused by the affine approximation of the perspective projection. Further, our algorithm implements the EWA resampling filter, hence providing high image quality with anisotropic texture filtering. We also present an extension of our rendering primitive that facilitates the display of sharp edges and corners. Finally, we describe an efficient implementation of the entire point rendering pipeline using vertex and fragment programs of current GPUs.
Dihedral Escherization (pp. 255-262)
  Craig S. Kaplan; David H. Salesin
"Escherization" [9] is a process that finds an Escher-like tiling of the plane from tiles that resemble a user-supplied goal shape. We show how the original Escherization algorithm can be adapted to the dihedral case, producing tilings with two distinct shapes. We use a form of the adapted algorithm to create drawings in the style of Escher's print Sky and Water. Finally, we develop an Escherization algorithm for the very different case of Penrose's aperiodic tilings.
A hybrid physical/device-space approach for spatio-temporally coherent interactive texture advection on curved surfaces (pp. 263-270)
  Daniel Weiskopf; Thomas Ertl
We propose a novel approach for a dense texture-based visualization of vector fields on curved surfaces. Our texture advection mechanism relies on a Lagrangian particle tracing that is simultaneously computed in the physical space of the object and in the device space of the image plane. This approach retains the benefits of previous image-space techniques, such as output sensitivity, independence from surface parameterization or mesh connectivity, and support for dynamic surfaces. At the same time, frame-to-frame coherence is achieved even when the camera position is changed, and potential inflow issues at silhouette lines are overcome. Noise input for texture advection is modeled as a solid 3D texture and constant spatial noise frequency on the image plane is achieved in a memory-efficient way by appropriately scaling the noise in physical space. For the final rendering, we propose color schemes to effectively combine the visualization of surface shape and flow. Hybrid physical/device-space texture advection can be efficiently implemented on GPUs and therefore supports interactive vector field visualization. Finally, we show some examples for typical applications in scientific visualization.
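
Lagrangian particle tracing of this kind is typically implemented as backward (semi-Lagrangian) advection: each output cell looks up where its particle came from one time step earlier. A minimal grid version with nearest-neighbour lookup (the GPU version filters bilinearly and runs simultaneously in physical and device space):

```python
def advect_texture(tex, velocity, dt):
    """One step of backward (semi-Lagrangian) texture advection on a grid.

    'tex' is a 2D list of values; 'velocity(i, j)' returns (vx, vy) in
    cells per unit time, vx along the j axis and vy along the i axis.
    Each cell traces backwards along the flow and fetches the texture
    value at the (clamped, nearest-neighbour) departure point.
    """
    h, w = len(tex), len(tex[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vx, vy = velocity(i, j)
            # departure point: trace backwards along the flow
            si = min(max(int(round(i - vy * dt)), 0), h - 1)
            sj = min(max(int(round(j - vx * dt)), 0), w - 1)
            out[i][j] = tex[si][sj]
    return out
```

Backward tracing is unconditionally stable, which is what makes dense, interactive advection practical; the repeated resampling blurs the noise texture, so implementations periodically inject fresh noise.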