
Proceedings of the 2006 Conference on Graphics Interface

Fullname: Proceedings of the 2006 Conference on Graphics Interface
Editors: Carl Gutwin; Stephen Mann
Location: Quebec City, Canada
Dates: 2006-Jun-07 to 2006-Jun-09
Publisher: Canadian Information Processing Society
Standard No: ISBN 1-56881-308-2
  1. Finger and hand input
  2. Animation
  3. Interaction and performance
  4. Geometric modelling
  5. Displays
  6. Gesture and interaction
  7. Lighting
  8. GPU rendering
  9. Web and design

Finger and hand input

Multi-finger cursor techniques (pp. 1-7)
  Tomer Moscovich; John F. Hughes
The mouse cursor acts as a digital proxy for a finger on graphical displays. Our hands, however, have ten fingers and many degrees of freedom that we use to interact with the world. We posit that by creating graphical cursors that reflect more of the hand's physical properties, we can allow for richer and more fluid interaction. We demonstrate this idea with three new cursors that are controlled by the user's fingers using a multi-point touchpad. The first two techniques allow for simultaneous control of several properties of graphical objects, while the third technique makes several enhancements to object selection.
symTone: two-handed manipulation of tone reproduction curves (pp. 9-16)
  Céline Latulipe; Ian Bell; Charles L. A. Clarke; Craig S. Kaplan
We present symTone, a dual-mouse, symmetric image manipulation application. symTone includes a symmetric method for manipulating a tone reproduction curve using two standard USB mice. The symTone technique is an important contribution because the two mice manipulate a geometric object as a tool to improve the underlying digital image: a spatial object (the curve) is used to manipulate non-spatial data (the image tones). Our empirical evaluation of the technique shows that symmetric interaction can be effective for manipulating non-spatial data. This novel technique offers a significant improvement in ease of use and is a precursor to more advanced symmetric tone-mapping applications.
Concurrent bimanual stylus interaction: a study of non-preferred hand mode manipulation (pp. 17-24)
  Edward Lank; Jaime Ruiz; William Cowan
Pen/stylus input systems are constrained by the limited input capacity of the electronic stylus. Stylus modes, which allow multiple interpretations of the same input, lift capacity limits, but confront the user with possible cognitive and motor costs associated with switching modes. This paper examines the costs of bimanual mode switching, in which the non-preferred hand performs actions that change modes while the preferred hand executes gestures that provide input. We examine three variants for controlling the mode of a stylus gesture: pre-gesture mediation, post-gesture mediation, and mediation that occurs concurrently with stylus gesturing. The results show that concurrent mode-switching is faster than the alternatives, and, in one trial, marginally outperforms the control condition, un-moded drawing. These results demonstrate an instance in which suitably designed mode-switching imposes minimal cost on the user. The implications of this result for the design of stylus input systems are highlighted.
TNT: improved rotation and translation on digital tables (pp. 25-32)
  Jun Liu; David Pinelle; Samer Sallam; Sriram Subramanian; Carl Gutwin
Digital tabletop systems allow users to work on computational objects in a flexible and natural setting. Since users can easily move to different positions around a table, systems must allow people to orient artifacts to their current position. However, it is only recently that rotation and translation techniques have been specifically designed for tabletops, and existing techniques still do not feel as simple and efficient as their real-world counterparts. To address this problem, we studied the ways that people move and reorient sheets of paper on real-world tabletops. We found that in almost all cases, rotation and translation are carried out simultaneously, and that an open-palm hand position was the most common way to carry out the motion. Based on our observations, we designed a new set of reorientation techniques that more closely parallel real-world motions. The new techniques, collectively called TNT, use three-degree-of-freedom (3DOF) input to allow simultaneous rotation and translation. A user study showed that all three variants of TNT were faster than a recent technique called RNT; in addition, participants strongly preferred TNT.

Animation

Style components (pp. 33-39)
  Ari Shapiro; Yong Cao; Petros Faloutsos
We propose a novel method for interactive editing of motion data based on motion decomposition. Our method employs Independent Component Analysis (ICA) to separate motion data into visually meaningful components called style components. The user then interactively identifies suitable style components and manipulates them based on a proposed set of operations. In particular, the user can transfer style components from one motion to another in order to create new motions that retain desirable aspects of the style and expressiveness of the original motion. For example, a clumsy walking motion can be decomposed so as to separate the clumsy nature of the motion from the underlying walking pattern. The clumsy style component can then be applied to a running motion, which will then yield a clumsy-looking running motion. Our approach is simple, efficient and intuitive since the components are themselves motion data. We demonstrate that the proposed method can serve as an effective tool for interactive motion analysis and editing.
Realistic and interactive simulation of rivers (pp. 41-48)
  Peter Kipfer; Rüdiger Westermann
In this paper we present interactive techniques for physics-based simulation and realistic rendering of rivers using Smoothed Particle Hydrodynamics. We describe the design and implementation of a grid-less data structure to efficiently determine particles in close proximity and to resolve particle collisions. Based on this data structure, we present an efficient method to extract and display the fluid free surface from the elongated particle structures generated in particle-based fluid simulation. The proposed method is far faster than the Marching Cubes approach, and it constructs an explicit surface representation that is well suited for rendering. The surface extraction can be implemented on the GPU and takes only a fraction of the simulation time step. It is thus amenable to real-time scenarios like computer games and virtual reality environments.
Particle-based immiscible fluid-fluid collision (pp. 49-55)
  Hai Mao; Yee-Hong Yang
In this paper, we propose a new particle-based fluid-fluid collision model for immiscible fluid animation. Our model consists of two components, namely, collision detection and collision response. A useful modeling feature is that our model not only prevents immiscible fluids from mixing with each other, but also allows one fluid to run through or wrap around another fluid. The model is very flexible and can work with many existing particle-based fluid models. The animation results presented show that the proposed model can produce a variety of fluid animations.
Spherical billboards and their application to rendering explosions (pp. 57-63)
  Tamás Umenhoffer; László Szirmay-Kalos; Gábor Szijártó
This paper proposes an improved billboard rendering method that splats particles as aligned quadrilaterals similarly to previous techniques, but takes into account the spherical geometry of the particles during fragment processing. The new method eliminates the billboard clipping and popping artifacts of previous techniques, which occur when the participating medium contains objects or when the camera flies into the volume. This paper also discusses how to use spherical billboards to render highly detailed explosions made of fire and smoke at high frame rates.

Interaction and performance

Faster cascading menu selections with enlarged activation areas (pp. 65-71)
  Andy Cockburn; Andrew Gin
Cascading menus are used in almost all graphical user interfaces. Most current cascade widgets implement an explicit delay between the cursor entering/leaving a parent cascade menu item and posting/unposting the associated menu. The delay allows users to make small steering errors while dragging across items, and it allows optimal diagonal paths from parent to cascade items. However, the delay slows the pace of interaction for users who wait for the delay to expire, and it demands jerky discrete movements for experts who wish to pre-empt the delay by clicking. This paper describes Enlarged activation area MenUs (EMUs), which have two features: first, they increase the area of the parent menu associated with each cascade; second, they eliminate the posting and unposting delay. An evaluation shows that EMUs allow cascade items to be selected up to 29% faster than traditional menus, without harming top-level item selection times. They also have a positive smoothing effect on menu selections, allowing continuous sweeping selections in contrast to discrete movements that are punctuated with clicks.
Performance measures of game controllers in a three-dimensional environment (pp. 73-79)
  Chris Klochek; I. Scott MacKenzie
Little work exists on the testing and evaluation of computer-game related input devices. This paper presents five new performance metrics and utilizes two tasks from the literature to quantify differences between input devices in constrained three-dimensional environments, similar to "first-person"-genre games. The metrics are Mean Speed Variance, Mean Acceleration Variance, Percent View Moving, Target Leading Analysis, and Mean Time-to-Reacquire. All measures are continuous, as they evaluate movement during a trial. The tasks involved tracking a moving target for several seconds, with and without target acceleration. An evaluation comparing an Xbox gamepad and a standard PC mouse demonstrated the ability of the metrics to help reveal and explain performance differences between the devices.
Human on-line response to visual and motor target expansion (pp. 81-87)
  Andy Cockburn; Philip Brock
The components of graphical user interfaces can be made to dynamically expand as the cursor approaches, providing visually appealing effects. Expansion can be implemented in a variety of ways: in some cases the targets expand visually while maintaining a constant, smaller motor-space for selection; in others, both the visual and motor-spaces of the objects are enlarged. Previous research by McGuffin & Balakrishnan [15], confirmed by Zhai et al. [19], has shown that enlarged motor-space expansion improves acquisition performance. It remains unclear, however, what proportion of the performance improvement is due to the enlarged motor-space, and what proportion to the confirmation of the over-target state provided by visual expansion. We report on two experiments which indicate that, for small targets, visual expansion in unaltered motor-space results in performance gains similar to those of enlarged motor-spaces. These experiments are based on tasks where users are unable to anticipate the behaviour of the targets. Implications for commercial use of visual expansion in unaltered motor-space are discussed.

Geometric modelling

Early-split coding of triangle mesh connectivity (pp. 89-97)
  Martin Isenburg; Jack Snoeyink
The two main schemes for coding triangle mesh connectivity traverse a mesh with similar region-growing operations. Rossignac's Edgebreaker uses triangle labels to encode the traversal whereas the coder of Touma and Gotsman uses vertex degrees. Although both schemes are guided by the same spiraling spanning tree, they process triangles in a different order, making it difficult to understand their similarities and to explain their varying compression success.
   We describe a coding scheme that can operate like a label-based coder similar to Edgebreaker or like a degree-based coder similar to the TG coder. In either mode our coder processes vertices and triangles in the same order by performing the so-called "split operations" earlier than previous schemes. The main insights offered by this unified view are (a) that compression rates depend mainly on the choice of decoding strategy and less on whether labels or degrees are used and (b) how to do degree coding without storing "split" offsets. Furthermore we describe a new heuristic that allows the TG coder's bit-rates to drop below the vertex degree entropy.
Compression of time varying isosurfaces (pp. 99-105)
  Ilya Eckstein; Mathieu Desbrun; C.-C. Jay Kuo
Compressing sequences of complex time-varying surfaces as generated by medical instrumentation or complex physical simulations can be extremely challenging: repeated topology changes during the surface evolution render most previous techniques for compression of time-varying surfaces inefficient or impractical. To provide a viable solution, we propose a new approach based upon an existing isosurface compression technique designed for static surfaces. We exploit temporal coherence of the data by adopting the paradigm of block-based motion prediction developed in video coding and extending it with local surface registration. The resulting prediction errors across frames are treated as a static isosurface and encoded progressively using an adaptive octree-based scheme. We also exploit local spatiotemporal patterns through context-based arithmetic coding. Fine-grain geometric residuals are encoded separately with user-specified precision. The other design choices made to handle large datasets are detailed.
Surfacing by numbers (pp. 107-113)
  Steve Zelinka; Michael Garland
We present a novel technique for surface modelling by example called surfacing by numbers. Our system allows easy detail reuse from existing 3D models or images. The user selects a source region and a target region, and the system transfers detail from the source to the target. The source may be elsewhere on the target surface, on another surface altogether, or even part of an image. As transfer is formulated as synthesis with a novel surface-based adaptation of graph cuts, the source and target regions need not match in size or shape, and details can be geometric, textural or even user-defined in nature.
   A major contribution of our work is our fast, graph cut-based interactive surface segmentation algorithm. Unlike approaches based on scissoring, the user loosely strokes within the body of each desired region, and the system computes optimal boundaries between regions via minimum-cost graph cut. Thus, less precision is required, the amount of interaction is unrelated to the complexity of the boundary, and users do not need to search for a view of the model in which a cut can be made.
Streaming compression of tetrahedral volume meshes (pp. 115-121)
  Martin Isenburg; Peter Lindstrom; Stefan Gumhold; Jonathan Shewchuk
Geometry processing algorithms have traditionally assumed that the input data is entirely in main memory and available for random access. This assumption does not scale to large data sets, as exhausting the physical memory typically leads to IO-inefficient thrashing. Recent works advocate processing geometry in a "streaming" manner, where computation and output begin as soon as possible. Streaming is suitable for tasks that require only local neighbor information and batch process an entire data set.
   We describe a streaming compression scheme for tetrahedral volume meshes that encodes vertices and tetrahedra in the order they are written. To keep the memory footprint low, the compressor is informed when vertices are referenced for the last time (i.e. are finalized). The compression achieved depends on how coherent the input order is and how many tetrahedra are buffered for local reordering. For reasonably coherent orderings and a buffer of 10,000 tetrahedra, we achieve compression rates that are only 25 to 40 percent above the state-of-the-art, while requiring drastically less memory resources and less than half the processing time.

Displays

Evaluation of viewport size and curvature of large, high-resolution displays (pp. 123-130)
  Lauren Shupp; Robert Ball; Beth Yost; John Booker; Chris North
Tiling multiple monitors to increase the amount of screen space has become an area of great interest to researchers. While previous research has shown user performance benefits when tiling multiple monitors, little research has analyzed whether much larger high-resolution displays result in better user performance. We compared user performance time, accuracy, and mental workload on multi-scale geospatial search, route tracing, and comparison tasks across one, twelve (4x3), and twenty-four (8x3) tiled monitor configurations. We also compared user performance time in conditions that uniformly curve the twelve- and twenty-four-monitor displays. Results show that curving the displays decreases user performance time, and we observed less strenuous physical navigation in the curved conditions. Depending on the task, the larger viewport sizes also improve performance time, and user frustration is significantly lower with the larger displays than with one monitor.
The importance of accurate VR head registration on skilled motor performance (pp. 131-137)
  David W. Sprague; Barry A. Po; Kellogg S. Booth
Many virtual reality (VR) researchers consider exact head registration (HR) and an exact multi-sensory alignment between real world and virtual objects to be a critical factor for effective motor performance in VR. Calibration procedures, however, can be error prone, time consuming and sometimes impractical to perform. To better understand the relationship between head registration and fine motor performance, we conducted a series of reciprocal tapping tasks under four conditions: real world tapping, VR with correct HR, VR with mildly perturbed HR, and VR with highly perturbed HR. As might be expected, VR performance was worse than real world performance. There was no effect of HR perturbation on motor performance in the tapping tasks. We believe that sensorimotor adaptation enabled subjects to perform equally well in the three VR conditions despite the incorrect head registration in two of the conditions. This suggests that exact head registration may not be as critically important as previously thought, and that extensive per-user calibration procedures may not be necessary for some VR tasks.
Increased display size and resolution improve task performance in Information-Rich Virtual Environments (pp. 139-146)
  Tao Ni; Doug A. Bowman; Jian Chen
Physically large, high-resolution displays have been widely applied in various fields. There is a lack of research, however, that demonstrates empirically how users benefit from the increased size and resolution afforded by emerging technologies. We designed a controlled experiment to evaluate the individual and combined effects of display size and resolution on task performance in an Information-Rich Virtual Environment (IRVE). We also explored how a wayfinding aid would facilitate spatial information acquisition and mental map construction when users worked with various displays. We found that users were most effective at performing IRVE search and comparison tasks on large high-resolution displays. In addition, users working with large displays became less reliant on wayfinding aids to form spatial knowledge. We discuss the impact of these results on the design and presentation of IRVEs, the choice of displays for particular applications, and future work to extend our findings.

Gesture and interaction

Phrasing techniques for multi-stroke selection gestures (pp. 147-154)
  Ken Hinckley; François Guimbretière; Maneesh Agrawala; Georg Apitz; Nicholas Chen
Pen gesture interfaces have difficulty supporting arbitrary multiple-stroke selections because lifting the pen introduces ambiguity as to whether the next stroke should add to the existing selection, or begin a new one. We explore and evaluate techniques that use a non-preferred-hand button or touchpad to phrase together one or more independent pen strokes into a unitary multi-stroke gesture. We then illustrate how such phrasing techniques can support multiple-stroke selection gestures with tapping, crossing, lassoing, disjoint selection, circles of exclusion, selection decorations, and implicit grouping operations. These capabilities extend the expressiveness of pen gesture interfaces and suggest new directions for multiple-stroke pen input techniques.
Fluid inking: augmenting the medium of free-form inking with gestures (pp. 155-162)
  Robert Zeleznik; Timothy Miller
We present Fluid Inking, a generally applicable approach to augmenting the fluid medium of free-form inking with gestural commands. Our approach is characterized by four design criteria, including: 1) pen-based hardware impartiality: all interactions can be performed with a button-free stylus, the minimal input hardware requirement for inking, and the least common denominator device for pen-based systems ranging from PDAs to whiteboards; 2) performability: gestures use short sequences of simple and familiar inking interactions that require minimal targeting; 3) extensibility: gestures are a regular pattern of optional shortcuts for commands in an arbitrarily scalable menu system; and 4) discoverability: gesture shortcuts (analogous to modifier keys) are displayed in the interactive menu and are suggested with dynamic feedback during inking. This paper presents the Fluid Inking techniques in the unified context of a prototype notetaking application and emphasizes how post-fix terminal punctuation and prefix flicks can disambiguate gestures from regular inking. We also discuss how user feedback influenced the Fluid Inking design.
Superflick: a natural and efficient technique for long-distance object placement on digital tables (pp. 163-170)
  Adrian Reetz; Carl Gutwin; Tadeusz Stach; Miguel Nacenta; Sriram Subramanian
Moving objects past arm's reach is a common action on both real-world and digital tabletops. In the real world, the most common way to accomplish this task is by throwing or sliding the object across the table. Sliding is natural, easy to do, and fast; however, in digital tabletops, few existing techniques for long-distance movement bear any resemblance to these real-world motions. We have designed and evaluated two tabletop interaction techniques that closely mimic the action of sliding an object across the table. Flick is an open-loop technique that is extremely fast. Superflick is based on Flick, but adds a correction step to improve accuracy for small targets. We carried out two user studies to compare these techniques to a fast and accurate proxy-based technique, the radar view. In the first study, we found that Flick is significantly faster than the radar for large targets, but is inaccurate for small targets. In the second study, we found no differences between Superflick and radar in either time or accuracy. Given the simplicity and learnability of flicking, our results suggest that throwing-based techniques have promise for improving the usability of digital tables.
HingeSlicer: interactive exploration of volume images using extended 3D slice plane widgets (pp. 171-178)
  Tim McInerney; Sara Broughton
We present a 3D interaction model for exploring volume image data by extending the capabilities of 3D slice plane widgets. Our model provides the ability to navigate through a volume image in a fast, intuitive manner, using object-relative user navigation. Employing a cut-fold-slide analogy, 3D slice plane widgets are rotated and translated relative to each other. The planes can be progressively cut to extend existing views and form staircase-like arrangements, minimizing occlusion and visual clutter problems that result from multiple, disconnected slice planes. Extending existing views also allows cutting actions to be easily "mended", providing users with the ability to return to a previous "good" view and explore again. A user makes cuts by drawing "hinge" lines on a slice plane widget, in any orientation, dividing the slice plane into two pieces. These pieces can fold (rotate) around the hinge line or slide (translate) with respect to each other, allowing the user to retain a better contextual understanding of the 3D spatial relationships between structures and of 3D structure shape.

Lighting

Image synthesis using adjoint photons (pp. 179-186)
  R. Keith Morley; Solomon Boulos; Jared Johnson; David Edwards; Peter Shirley; Michael Ashikhmin; Simon Premoze
The most straightforward image synthesis algorithm is to follow photon-like particles from luminaires through the environment. These particles scatter or are absorbed when they interact with a surface or a volume. They contribute to the image if and when they strike a sensor. Such an algorithm implicitly solves the light transport equation. Alternatively, adjoint photons can be traced from the sensor to the luminaires to produce the same image. This "adjoint photon" tracing algorithm is described, and its strengths and weaknesses are discussed, as well as details needed to make adjoint photon tracing practical.
Light animation with precomputed light paths on the GPU (pp. 187-194)
  László Szécsi; László Szirmay-Kalos; Mateu Sbert
This paper presents a real-time global illumination method for static scenes illuminated by arbitrary, dynamic light sources. The algorithm obtains the indirect illumination caused by the multiple scattering of the light from precomputed light paths. The indirect illumination due to the precomputed light paths is stored in texture maps. Texture based representations allow the GPU to render the scene with global illumination effects at high frame rates even when the camera or the lights move. The proposed method requires moderate preprocessing time and can also work well for small light sources that are close to the surface. The implemented version considers only diffuse reflections. The method scales up very well for complex scenes and storage space can be traded for high frequency details in the indirect illumination.

GPU rendering

Rendering geometry with relief textures (pp. 195-201)
  Lionel Baboud; Xavier Décoret
We propose to render geometry using an image-based representation. Geometric information is encoded by a texture with depth and rendered by rasterizing the bounding box geometry. For each resulting fragment, a shader computes the intersection of the corresponding ray with the geometry, using pre-computed information to accelerate the computation. Our method is almost always artifact-free, even when zoomed in or at grazing angles. We integrate our algorithm with reverse perspective projection to represent a larger class of shapes. The extra texture requirement is small and the rendering cost is output-sensitive, so our representation can be used to model many parts of a 3D scene.
Fast GPU ray tracing of dynamic meshes using geometry images (pp. 203-209)
  Nathan A. Carr; Jared Hoberock; Keenan Crane; John C. Hart
Using the GPU to accelerate ray tracing may seem like a natural choice due to the highly parallel nature of the problem. However, determining the most versatile GPU data structure for scene storage and traversal is a challenge. In this paper, we introduce a new method for quick intersection of triangular meshes on the GPU. The method uses a threaded bounding volume hierarchy built from a geometry image, which can be efficiently traversed and constructed entirely on the GPU. This acceleration scheme is highly competitive with other GPU ray tracing methods, while allowing for both dynamic geometry and an efficient level of detail scheme at no extra cost.
Implementing the render cache and the edge-and-point image on graphics hardware (pp. 211-217)
  Edgar Velázquez-Armendáriz; Eugene Lee; Kavita Bala; Bruce Walter
The render cache and the edge-and-point image (EPI) are alternative point-based rendering techniques that combine interactive performance with expensive, high quality shading for complex scenes. They use sparse sampling and intelligent reconstruction to enable fast framerates and to decouple shading from the display update.
   We present a hybrid CPU/GPU multi-pass system that accelerates these techniques by utilizing programmable graphics processing units (GPUs) to achieve better framerates while freeing the CPU for other uses such as high-quality shading (including global illumination). Because the render cache and EPI differ from the traditional graphics pipeline in interesting ways, we encountered several challenges in using the GPU effectively. We discuss our optimizations to achieve good performance, limitations with the current generation hardware, as well as possibilities for future improvements.
Cycle shading for the assessment and visualization of shape in one and two codimensions (pp. 219-226)
  Daniel Weiskopf; Helwig Hauser
In this paper we propose cycle shading and hatched cycle shading as new local shading techniques for shape assessment and visualization. Natural surface highlights are extended to not only appear in isolated parts of a surface, but to reappear throughout the surface in a regular and easy-to-control pattern. Thereby even small surface variations become visible, wherever they are located on the surface. We further extend (hatched) cycle shading to curves in 3D, i.e., to shapes of higher codimension. We demonstrate how (hatched) cycle shading improves 3D vector field visualization by showing higher-order discontinuities of streamlines, pathlines, or streaklines. Our visualization approach is generic, simple, efficient, and can readily be used where Phong illumination is applicable because information on curvature or mesh connectivity is not required. The effectiveness of cycle shading for the assessment of surface quality is demonstrated by a user study. Finally, this paper addresses issues of anti-aliasing, parameter control, applications, and efficient GPU implementations.

Web and design

Generating custom notification histories by tracking visual differences between web page visits (pp. 227-234)
  Saul Greenberg; Michael Boyle
We contribute a method that lets people create a visual history of custom notifications to track personally meaningful changes to web pages. Notifications are assembled as a collage of regions extracted from the fully rendered (bitmap) representation of the web pages. They are triggered when visual changes between successive visits are detected within regions. To use the system, a person specifies a notification by clipping personally interesting regions from the bitmap representation of a web page and reformatting them into a small collage. The person then specifies regions on the page that will be monitored and compared for visual differences over time. Based on this specification, the system periodically revisits the page in the background on behalf of the user and automatically generates a notification (the collage plus a title and timestamp) when differences are detected. Finally, the person views the generated notifications in several ways: as only the most recently changed version (to illustrate current state), or as an image history that can be individually browsed or played back as a continuous video stream.
The impact of task on the usage of web browser navigation mechanisms (pp. 235-242)
  Melanie Kellar; Carolyn Watters; Michael Shepherd
In this paper, we explore how factors such as task and individual differences influence the usage of different web browser navigation mechanisms (e.g., clicked links, bookmarks, auto-complete). We conducted a field study of 21 participants and logged detailed web browser usage. Participants were asked to categorize their web usage according to the following schema: Fact Finding, Information Gathering, Browsing, and Transactions. Using this data, we have identified three factors that play a role in the use of navigation mechanisms: task session, task type, and individual differences. These findings have implications for the future design of new and improved web navigation mechanisms.
A case-study of affect measurement tools for physical user interface design (pp. 243-250)
  Colin Swindells; Karon E. MacLean; Kellogg S. Booth; Michael Meitner
Designers of human-computer interfaces often overlook issues of affect. An example illustrating the importance of affective design is the frustration many of us feel when working with a poorly designed computing device. Redesigning such computing interfaces to induce more pleasant user emotional responses would improve the user's health and productivity. Almost no research has been conducted to explore affective responses in rendered haptic interfaces. In this paper, we describe results and analysis from two user studies as a starting point for future systematic evaluation and design of rendered physical controls. Specifically, we compare and contrast self-report and biometric measurement techniques for two common types of haptic interactions. First, we explore the tactility of real textures such as silk, putty, and acrylic. Second, we explore the kinesthetics of physical control renderings such as friction and inertia. We focus on evaluation methodology, on the premise that good affect evaluation and analysis cycles can be a useful element of the interface designer's tool palette.