
Proceedings of the 2005 Conference on Graphics Interface

Fullname: Proceedings of the 2005 Conference on Graphics Interface
Editors: Kori Inkpen; Michiel van de Panne
Location: Victoria, Canada
Dates: 2005-May-09 to 2005-May-11
Publisher: Canadian Information Processing Society
Standard No: ISSN 0713-5424; ISBN 1-56881-265-5; hcibib: GI05
Papers: 31
Pages: 256
  1. Two hands are better than one
  2. Interacting with walls and tables
  3. Animation
  4. Rendering
  5. Shadows
  6. Sensing interaction
  7. Privacy and security awareness
  8. Geometric modeling
  9. Hand/eye interaction
  10. Image-based editing and image-based animation
  11. Invited

Two hands are better than one

The haptic hand: providing user interface feedback with the non-dominant hand in virtual environments (pp. 1-8)
  Luv Kohli; Mary Whitton
We present a user interface for virtual environments that utilizes the non-dominant hand to provide haptic feedback to the dominant hand while it interacts with widgets on a virtual control panel. We believe this technique improves on existing prop-based methods of providing haptic feedback. To gauge the interface's effectiveness, we performed a usability study. We do not present a formal comparison with prior techniques here. The goal of this study was to determine the feasibility of using the non-dominant hand for haptic feedback, and to obtain subjective data about usability. The results demonstrated that the interface allowed users to perform precision tasks. These results have convinced us that this technique has potential and warrants further development.
TangiMap: a tangible interface for visualization of large documents on handheld computers (pp. 9-15)
  Martin Hachet; Joachim Pouderoux; Pascal Guitton; Jean-Christophe Gonzato
The applications for handheld computers have evolved from very simple schedulers or note editors to more complex applications where high-level interaction tasks are required. Despite this evolution, the input devices for interaction with handhelds are still limited to a few buttons and styluses associated with sensitive screens.
   In this paper we focus on the visualization of large documents (e.g. maps) that cannot be displayed in their entirety on the small-size screens. We present a new task-adapted and device-adapted interface called TangiMap.
   TangiMap is a three degrees of freedom camera-based interface where the user interacts by moving a tangible interface behind the handheld computer. TangiMap benefits from two-handed interaction, providing kinaesthetic feedback and a frame of reference.
   We undertook an experiment to compare TangiMap with a classical stylus interface for a two-dimensional target searching task. The results showed that TangiMap was faster and that the user preferences were largely in its favor.
When it gets more difficult, use both hands: exploring bimanual curve manipulation (pp. 17-24)
  Russell Owen; Gordon Kurtenbach; George Fitzmaurice; Thomas Baudel; Bill Buxton
In this paper we investigate the relationship between bimanual (two-handed) manipulation and the cognitive aspects of task integration, divided attention and epistemic action. We explore these relationships by means of an empirical study comparing a bimanual technique versus a unimanual (one-handed) technique for a curve matching task. The bimanual technique was designed on the principle of integrating the visual, conceptual and input device space domain of both hands. We provide evidence that the bimanual technique has better performance than the unimanual technique and, as the task becomes more cognitively demanding, the bimanual technique exhibits even greater performance benefits. We argue that the design principles and performance improvements are applicable to other task domains.

Interacting with walls and tables

Improving drag-and-drop on wall-size displays (pp. 25-32)
  Maxime Collomb; Mountaz Hascoet; Patrick Baudisch; Brian Lee
On wall-size displays with pen or touch input, users can have difficulties reaching display contents located too high, too low, or too far away. Drag-and-drop interactions can be further complicated by bezels separating individual display units. Researchers have proposed a variety of interaction techniques to address this issue, such as extending the user's reach (e.g., push-and-throw) and bringing potential targets to the user (drag-and-pop). In this paper, we introduce a new technique called push-and-pop that combines the strengths of push-and-throw and drag-and-pop. We present two user studies comparing six different techniques designed for extending drag-and-drop to wall-size displays. In both studies, participants were able to file icons on a wall-size display fastest when using the push-and-pop interface.
TractorBeam: seamless integration of local and remote pointing for tabletop displays (pp. 33-40)
  J. Karen Parker; Regan L. Mandryk; Kori M. Inkpen
This paper presents a novel interaction technique for tabletop computer displays. When using a direct input device such as a stylus, reaching objects on the far side of a table is difficult. While remote pointing has been investigated for large wall displays, there has been no similar research into reaching distant objects on tabletop displays. Augmenting a stylus to allow remote pointing may facilitate this process. We conducted two user studies to evaluate remote pointing on tabletop displays. Results from our work demonstrate that remote pointing is faster than stylus touch input for large targets, slower for small distant targets, and comparable in all other cases. In addition, when given a choice, people utilized the pointing interaction technique more often than stylus touch. Based on these results we developed the TractorBeam, a hybrid point-touch input technique that allows users to seamlessly reach distant objects on tabletop displays.
Exploring non-speech auditory feedback at an interactive multi-user tabletop (pp. 41-50)
  Mark S. Hancock; Chia Shen; Clifton Forlines; Kathy Ryall
We present two experiments on the use of non-speech audio at an interactive multi-touch, multi-user tabletop display. We first investigate the use of two categories of reactive auditory feedback: affirmative sounds that confirm user actions and negative sounds that indicate errors. Our results show that affirmative auditory feedback may improve one's awareness of group activity at the expense of one's awareness of his or her own activity. Negative auditory feedback may also improve group awareness, but simultaneously increase the perception of errors for both the group and the individual. In our second experiment, we compare two methods of associating sounds to individuals in a co-located environment. Specifically, we compare localized sound, where each user has his or her own speaker, to coded sound, where users share one speaker, but the waveforms of the sounds are varied so that a different sound is played for each user. Results of this experiment reinforce the presence of tension between group awareness and individual focus found in the first experiment. User feedback suggests that users are more easily able to identify who caused a sound when either localized or coded sound is used, but that they are also more able to focus on their individual work. Our experiments show that, in general, auditory feedback can be used in co-located collaborative applications to support either individual work or group awareness, but not both simultaneously, depending on how it is presented.

Animation

Controllable real-time locomotion using mobility maps (pp. 51-59)
  Madhusudhanan Srinivasan; Ronald A. Metoyer; Eric N. Mortensen
Graph-based approaches for sequencing motion capture data have produced some of the most realistic and controllable character motion to date. Most previous graph-based approaches have employed a run-time global search to find paths through the motion graph that meet user-defined constraints such as a desired locomotion path. Such searches do not scale well to large numbers of characters. In this paper, we describe a locomotion approach that benefits from the realism of graph-based approaches while maintaining basic user control and scaling well to large numbers of characters. Our approach is based on precomputing multiple least cost sequences from every state in a state-action graph. We store these precomputed sequences in a data structure called a mobility map and perform a local search of this map at run-time to generate motion sequences in real time that achieve user constraints in a natural manner. We demonstrate the quality of the motion through various example locomotion tasks including target tracking and collision avoidance. We demonstrate scalability by animating crowds of up to 150 rendered articulated walking characters at real-time rates.
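The precomputation at the heart of a mobility map can be illustrated as a backwards Dijkstra pass over a state-action graph: for every state, store the least-cost first action toward each goal so that run-time control is a local table lookup rather than a global search. This is a minimal sketch under assumed data layouts; the graph, action names, and costs below are invented for illustration, not the paper's implementation.

```python
import heapq

def build_mobility_map(graph, goals):
    """For every state, precompute the least-cost next action toward each goal.

    graph: {state: [(action, next_state, cost), ...]}  (hypothetical layout)
    goals: iterable of goal states.
    Returns {goal: {state: (best_action, total_cost)}}.
    """
    mobility = {}
    for goal in goals:
        # Build reversed edges so we can run Dijkstra backwards from the goal.
        reverse = {}
        for s, edges in graph.items():
            for action, t, cost in edges:
                reverse.setdefault(t, []).append((action, s, cost))
        dist = {goal: 0.0}
        best = {}
        heap = [(0.0, goal)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for action, s, cost in reverse.get(u, []):
                nd = d + cost
                if nd < dist.get(s, float("inf")):
                    dist[s] = nd
                    best[s] = (action, nd)  # take `action` from s toward goal
                    heapq.heappush(heap, (nd, s))
        mobility[goal] = best
    return mobility
```

At run time, a character in state `s` chasing goal `g` simply reads `mobility[g][s]`, which is what makes the approach scale to large crowds.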
Dynamic Animation and Control Environment (pp. 61-70)
  Ari Shapiro; Petros Faloutsos; Victor Ng-Thow-Hing
We introduce the Dynamic Animation and Control Environment (DANCE) as a publicly available simulation platform for research and teaching. DANCE is an open and extensible simulation framework and rapid prototyping environment for computer animation. The main focus of the DANCE platform is the development of physically-based controllers for articulated figures. In this paper we (a) present the architecture and potential applications of DANCE as a research tool, and (b) discuss lessons learned in developing a large framework for animation.

Rendering

A practical self-shadowing algorithm for interactive hair animation (pp. 71-78)
  Florence Bertails; Clement Menier; Marie-Paule Cani
This paper presents a new fast and accurate self-shadowing algorithm for animated hair. Our method is based on a 3D light-oriented density map, a novel structure that combines an optimized volumetric representation of hair with a light-oriented partition of space. Using this 3D map, accurate hair self-shadowing can be interactively processed (several frames per second for a full hairstyle) on a standard CPU. Beyond the fact that our application is independent of any graphics hardware (and thus portable), it can easily be parallelized for better performance. Our method is especially adapted to render animated hair since there is no geometry-based precomputation and since the density map can be used to optimize hair self-collisions. The approach has been validated on a dance motion sequence, for various hairstyles.
A computational approach to simulate subsurface light diffusion in arbitrarily shaped objects (pp. 79-86)
  Tom Haber; Tom Mertens; Philippe Bekaert; Frank Van Reeth
To faithfully display objects consisting of translucent materials such as milk, fruit, wax and marble, one needs to take into account subsurface scattering of light. Accurate renderings require expensive simulation of light transport. Alternatively, the widely-used fast dipole approximation [15] cannot deal with internal visibility issues, and has limited applicability (only homogeneous materials).
   We present a novel algorithm to plausibly reproduce subsurface scattering based on the diffusion approximation. This yields a relatively simple partial differential equation, which we propose to solve numerically using the multigrid method. The main difficulty in this approach consists of accurately representing interactions near the object's surface, for which we employ the embedded boundary discretization [5, 16]. Also, our method allows us to refine the simulation hierarchically where needed in order to optimize performance and memory usage. The resulting approach is capable of rapidly and accurately computing subsurface scattering in polygonal meshes for both homogeneous and heterogeneous materials. The amount of time spent computing subsurface scattering in a complex object is generally a few minutes.
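The diffusion approximation reduces light transport to a screened Poisson-type PDE. A toy 1-D finite-difference relaxation illustrates the kind of system being solved; this is plain Gauss-Seidel with invented constants, standing in for the paper's hierarchical multigrid solver with embedded boundaries.

```python
def relax_diffusion(q, D=1.0, sigma_a=0.5, h=0.1, iters=2000):
    """Gauss-Seidel relaxation of the 1-D screened diffusion equation
        -D * phi'' + sigma_a * phi = q
    with phi = 0 at both ends. q holds source values at interior nodes.
    A toy stand-in for the paper's multigrid method."""
    n = len(q)
    phi = [0.0] * (n + 2)  # interior nodes plus two boundary nodes
    a = D / (h * h)
    for _ in range(iters):
        for i in range(1, n + 1):
            # Solve the discretized equation for phi[i], holding neighbours fixed:
            # (2a + sigma_a) * phi[i] = q[i-1] + a * (phi[i-1] + phi[i+1])
            phi[i] = (q[i - 1] + a * (phi[i - 1] + phi[i + 1])) / (2 * a + sigma_a)
    return phi[1:-1]
```

A multigrid solver accelerates exactly this relaxation by correcting low-frequency error on coarser grids, which is where the paper's hierarchical refinement pays off.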
Interactive rendering of caustics using interpolated warped volumes (pp. 87-96)
  Manfred Ernst; Tomas Akenine-Moller; Henrik Wann Jensen
In this paper we present an improved technique for interactive rendering of caustics using programmable graphics hardware. Previous real-time methods have used simple prisms for the caustic volumes and a constant intensity approximation at the receiver. Our approach uses interpolated caustic volumes to render smooth high-quality caustics. We have derived a simple formula for evaluating the density of wave-fronts along a caustic ray, and we have developed a precise method for rendering caustic volumes bounded by bilinear patches. The new optimizations are well suited for programmable graphics hardware and our results demonstrate interactive rendering of caustics from refracting and reflecting surfaces as well as volume caustics. In contrast to previous work, our method renders high quality caustics generated by specular surfaces with far fewer polygons.
Reordering for cache conscious photon mapping (pp. 97-104)
  Joshua Steinhurst; Greg Coombe; Anselmo Lastra
Photon mapping is a global illumination algorithm for generating and visualizing a sparse representation of the incident radiance on surfaces. Photon mapping places an enormous burden on the memory hierarchy. A 512x512 image using the standard kd-tree data structure requires more than 196GB of raw bandwidth to access the photon map. This bandwidth is a major obstacle to our long term goal of designing hardware capable of real time photon mapping.
   This paper investigates two approaches for reducing the required bandwidth: 1) reordering the kNN searches; and 2) cache conscious data structures. Using a Hilbert curve reordering, we demonstrate an approximate lower bound of 15MB of bandwidth. This improvement of four orders of magnitude requires a prohibitive amount of intermediate storage. We then demonstrate two more cost-effective algorithms that reduce the bandwidth by one order of magnitude to 24GB with 1MB of storage. We explain why the choice of data structure cannot, by itself, achieve this reduction. Irradiance caching, a popular technique that reduces the number of required kNN searches, receives the same proportional benefit as the higher quality photon gathers.
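The reordering idea can be sketched with the classic bit-twiddling Hilbert index: sorting kNN query points by their position along the curve keeps spatially nearby queries adjacent in time, so they touch nearby parts of the photon kd-tree and hit the cache. The grid resolution and query layout below are assumptions for illustration, not the paper's setup.

```python
def hilbert_index(order, x, y):
    """Map a 2-D grid cell (x, y) to its 1-D index along a Hilbert curve
    covering a 2^order x 2^order grid (standard bit-twiddling formulation)."""
    d = 0
    s = 1 << (order - 1)
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate the quadrant so the sub-curve has the right orientation.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

def reorder_queries(points, order=8):
    """Sort kNN query points along a Hilbert curve so consecutive queries
    touch nearby regions of the photon map, improving cache locality."""
    return sorted(points, key=lambda p: hilbert_index(order, p[0], p[1]))
```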

Shadows

Soft shadows from extended light sources with penumbra deep shadow maps (pp. 105-112)
  Jean-Francois St-Amour; Eric Paquette; Pierre Poulin
This paper presents a new method of precomputing high-quality soft shadows that can be cast on a static scene as well as on dynamic objects added to that scene. The method efficiently merges the visibility computed from many shadow maps into a penumbra deep shadow map (PDSM) structure. The resulting structure effectively captures the changes of attenuation in each PDSM pixel, and therefore constitutes an accurate representation of light attenuation. By taking advantage of the visibility coherence, the method is able to store a compact representation of the visibility for every location within the field of view of the PDSM. Modern programmable graphics hardware technology is used by the method to cast real-time complex soft shadows.
Automatic generation of consistent shadows for augmented reality (pp. 113-120)
  Katrien Jacobs; Jean-Daniel Nahmias; Cameron Angus; Alex Reche; Celine Loscos; Anthony Steed
In the context of mixed reality, it is difficult to simulate shadow interaction between real and virtual objects when only an approximate geometry of the real scene and the light source is known. In this paper, we present a real-time rendering solution to simulate colour-consistent virtual shadows in a real scene. The rendering consists of a three-step mechanism: shadow detection, shadow protection and shadow generation. In the shadow detection step, the shadows due to real objects are automatically identified using the texture information and an initial estimate of the shadow region. In the next step, a protection mask is created to prevent further rendering in those shadow regions. Finally, the virtual shadows are generated using shadow volumes and a pre-defined scaling factor that adapts the intensity of the virtual shadows to the real shadow. The procedure detects and generates shadows in real time, consistent with those already present in the scene and offers an automatic and real-time solution for common illumination, suitable for augmented reality.

Sensing interaction

An empirical investigation of capture and access for software requirements activities (pp. 121-128)
  Heather Richter; Chris Miller; Gregory D. Abowd; Idris Hsi
Researchers have been exploring the ubiquitous capture and access of meetings for the past decade. Yet, few evaluations of these systems have demonstrated the benefits from using recorded meeting information. We are exploring the capture and access of Knowledge Acquisition sessions, discussions to understand the problems and requirements that feed systems development. In this paper, we evaluate the use of these recordings in creating a requirements document. We show that recordings of discussions will not be utilized without appropriate structure and indexing. Our study demonstrates how captured information can be used in such a task and the potential benefits that use may afford.
Case studies in the use of ROC curve analysis for sensor-based estimates in human computer interaction (pp. 129-136)
  James Fogarty; Ryan S. Baker; Scott E. Hudson
Applications that use sensor-based estimates face a fundamental tradeoff between true positives and false positives when examining the reliability of these estimates, one that is inadequately described by the straightforward notion of accuracy. To address this tradeoff, this paper examines the use of Receiver Operating Characteristic (ROC) curve analysis, a method that has a long history but is under-appreciated in the human computer interaction research community. We present the fundamentals of ROC analysis, the use of the A' statistic to compute the area under an ROC curve, and the equivalence of A' to the Wilcoxon statistic. We then present several case studies, framed in the context of our work on human interruptibility, demonstrating how ROC analysis can yield better results than analyses based on accuracy. These case studies compare sensor-based estimates with human performance, optimize a feature selection process for the area under the ROC curve, and examine end-user selection of a desirable tradeoff.
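The equivalence of A' to the Wilcoxon statistic gives a direct way to compute the area under an ROC curve without plotting it: A' is the probability that a randomly chosen positive example receives a higher sensor score than a randomly chosen negative one. A minimal sketch in the O(n·m) pairwise form (the paper does not prescribe a particular implementation):

```python
def a_prime(positives, negatives):
    """Area under the ROC curve via the Wilcoxon statistic: the probability
    that a random positive example scores higher than a random negative one,
    with ties counted as one half."""
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))
```

A' = 1.0 means the sensor's scores separate the classes perfectly, 0.5 means chance; unlike raw accuracy, it is independent of any particular decision threshold.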

Privacy and security awareness

Gathering evidence: use of visual security cues in web browsers (pp. 137-144)
  Tara Whalen; Kori M. Inkpen
Web browsers support secure online transactions, and provide visual feedback mechanisms to inform the user about security. These mechanisms have had little evaluation to determine how easily they are noticed and how effectively they are used. This paper describes a preliminary study conducted to determine which elements are noted, which are ignored, and how easily they are found. We collected eyetracker data to study users' attention to browser security, and gathered additional subjective data through questionnaires. Our results demonstrated that while the lock icon is commonly viewed, its interactive capability is essentially ignored. We also found that certificate information is rarely used, and that people stop looking for security information after they have signed into a site. These initial results provide insights into how browser security cues might be improved.
Using relationship to control disclosure in Awareness servers (pp. 145-152)
  Scott Davis; Carl Gutwin
Awareness servers provide information about a person to help observers determine whether they are available for contact. A tradeoff exists in these systems: more sources of information, and higher fidelity in those sources, can improve people's decisions, but each increase in information reduces privacy. In this paper, we look at whether the type of relationship between the observer and the person being observed can be used to manage this tradeoff. We conducted a survey that asked people how much information from different sources they would disclose to seven different relationship types. We found that in more than half of the cases, people would give different amounts of information to different relationships. We also found that the only relationship to consistently receive less information was the acquaintance -- essentially the person without a strong relationship at all. Our results suggest that awareness servers can be improved by allowing finer-grained control than what is currently available.

Geometric modeling

A pattern-based data structure for manipulating meshes with regular regions (pp. 153-160)
  Le-Jeng Shiue; Jorg Peters
Automatically generated or laser-scanned surfaces typically exhibit large clusters with a uniform pattern. To take advantage of the regularity within clusters and still be able to edit without decompression, we developed a two-level data structure that uses an enumeration by orbits and an individually adjustable stencil to flexibly describe connectivity. The structure is concise for storing mesh connectivity; efficient for random access, interactive editing, and recursive refinement; and it is flexible by supporting a large assortment of connectivity patterns and subdivision schemes.
Extraction and remeshing of ellipsoidal representations from mesh data (pp. 161-168)
  Patricio D. Simari; Karan Singh
Dense 3D polygon meshes are now a pervasive product of various modelling and scanning processes that need to be subsequently processed and structured appropriately for various applications. In this paper we address the restructuring of dense polygon meshes using their segmentation based on a number of ellipsoidal regions. We present a simple segmentation algorithm where connected components of a mesh are fit to ellipsoidal surface regions. The segmentation of a mesh into a small number of ellipsoidal elements makes for a compact geometric representation and facilitates efficient geometric queries and transformations. We also contrast and compare two polygon remeshing techniques based on the ellipsoidal surfaces and the segmentation boundaries.
Distance extrema for spline models using tangent cones (pp. 169-175)
  David E. Johnson; Elaine Cohen
We present a robust search for distance extrema from a point to a curve or a surface. The robustness comes from using geometric operations rather than numerical methods to find all local extrema. Tangent cones are used to search for regions where distance extrema conditions are satisfied and patch refinement hierarchically improves the search. Instead of preprocessing and storing a large hierarchy, elements are computed as needed and retained only if useful. However, for spatially coherent queries, this provides a significant speedup.
Islamic star patterns from polygons in contact (pp. 177-185)
  Craig S. Kaplan
We present a simple method for rendering Islamic star patterns based on Hankin's "polygons-in-contact" technique. The method builds star patterns from a tiling of the plane and a small number of intuitive parameters. We show how this method can be adapted to construct Islamic designs reminiscent of Huff's parquet deformations. Finally, we introduce a geometric transformation on tilings that expands the range of patterns accessible using our method. This transformation simplifies construction techniques given in previous work, and clarifies previously unexplained relationships between certain classes of star patterns.

Hand/eye interaction

Evaluation of an on-line adaptive gesture interface with command prediction (pp. 187-194)
  Xiang Cao; Ravin Balakrishnan
We present an evaluation of a hybrid gesture interface framework that combines on-line adaptive gesture recognition with a command predictor. Machine learning techniques enable on-line adaptation to differences in users' input patterns when making gestures, and exploit regularities in command sequences to improve recognition performance. A prototype using 2D single-stroke gestures was implemented with a minimally intrusive user interface for on-line re-training. Results of a controlled user experiment show that the hybrid adaptive system significantly improved overall gesture recognition performance, and reduced users' need to practice making the gestures before achieving good results.
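One plausible way to fuse a gesture recognizer with a command predictor is a naive-Bayes-style product of the recognizer's score and a smoothed bigram prior over the user's command history. This is a hypothetical fusion rule written for illustration; the paper's actual model and parameters may differ.

```python
def predict_command(recognizer_scores, bigram_counts, previous_command, alpha=1.0):
    """Combine per-gesture recognizer scores with a command predictor.

    recognizer_scores: {command: P(stroke | command)} from the recognizer.
    bigram_counts: {(prev_cmd, cmd): count} harvested from command history.
    Returns the command maximizing P(stroke|cmd) * P(cmd|prev), using
    add-alpha smoothing so unseen bigrams still get some prior mass.
    (Hypothetical fusion rule, not the paper's exact model.)
    """
    commands = list(recognizer_scores)
    total = sum(bigram_counts.get((previous_command, c), 0) for c in commands)
    best, best_score = None, -1.0
    for c in commands:
        prior = (bigram_counts.get((previous_command, c), 0) + alpha) / (
            total + alpha * len(commands)
        )
        score = recognizer_scores[c] * prior
        if score > best_score:
            best, best_score = c, score
    return best
```

The smoothing keeps the predictor from vetoing a gesture the recognizer is confident about just because the command sequence is novel, which mirrors the paper's goal of exploiting regularities without overriding the user.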
Moving objects with 2D input devices in CAD systems and Desktop Virtual Environments (pp. 195-202)
  Ji-Young Oh; Wolfgang Stuerzlinger
Part assembly and scene layout are basic tasks in 3D design in Desktop Virtual Environment (DVE) systems as well as Computer Aided Design (CAD) systems. 2D input devices such as a mouse or a stylus are still the most common input devices for such systems. With such devices, a notably difficult problem is to provide efficient and predictable 3D object motion based on the devices' 2D motion. This paper presents a new technique to move objects in CAD/DVE using 2D input devices.
   The technique presented in this paper utilizes the fact that people easily recognize the depth-order of shapes based on occlusions. In the presented technique, the object position follows the mouse cursor position, while the object slides on various surfaces in the scene. In contrast to existing techniques, the movement surface and the relative object position is determined using the whole area of overlap of the moving object with the static scene. The resulting object movement is visually smooth and predictable, while avoiding undesirable collisions. The proposed technique makes use of the framebuffer for efficiency and runs in real-time. Finally, the evaluation of the new technique with a user study shows that it compares very favorably to conventional techniques.
Efficient eye pointing with a fisheye lens (pp. 203-210)
  Michael Ashmore; Andrew T. Duchowski; Garth Shoemaker
This paper evaluates refinements to existing eye pointing techniques involving a fisheye lens. We use a fisheye lens and a video-based eye tracker to locally magnify the display at the point of the user's gaze. Our gaze-contingent fisheye facilitates eye pointing and selection of magnified (expanded) targets. Two novel interaction techniques are evaluated for managing the fisheye, both dependent on real-time analysis of the user's eye movements. Unlike previous attempts at gaze-contingent fisheye control, our key innovation is to hide the fisheye during visual search, and morph the fisheye into view as soon as the user completes a saccadic eye movement and has begun fixating a target. This style of interaction allows the user to maintain an overview of the desktop during search while selectively zooming in on the foveal region of interest during selection. Comparison of these interaction styles with ones where the fisheye is continuously slaved to the user's gaze (omnipresent) or is not used to affect target expansion (nonexistent) shows performance benefits in terms of speed and accuracy.
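Detecting that the user "has begun fixating a target" is commonly done with a dispersion-threshold test (I-DT): a window of gaze samples whose bounding box stays small is a fixation, and everything between fixations is saccadic movement. The sketch below uses invented pixel thresholds and is a common stand-in for this kind of real-time analysis, not necessarily the authors' implementation.

```python
def detect_fixation(gaze, dispersion_max=30.0, min_samples=6):
    """Return (start, end) sample indices of the first fixation in a gaze
    stream, using a dispersion-threshold test (I-DT): a window of at least
    `min_samples` points whose bounding box (width + height) stays under
    `dispersion_max` pixels. Returns None if no fixation is found."""
    i = 0
    while i + min_samples <= len(gaze):
        window = gaze[i:i + min_samples]
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= dispersion_max:
            # Grow the window while dispersion stays under the threshold.
            j = i + min_samples
            while j < len(gaze):
                xs.append(gaze[j][0])
                ys.append(gaze[j][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_max:
                    break
                j += 1
            return (i, j)  # fixation spans samples i .. j-1
        i += 1
    return None
```

In a gaze-contingent interface like the one described, the fisheye would stay hidden while `detect_fixation` returns None and morph into view once a fixation is reported.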
Using social geometry to manage interruptions and co-worker attention in office environments (pp. 211-218)
  Maria Danninger; Roel Vertegaal; Daniel P. Siewiorek; Aadil Mamuji
Social geometry is a novel technique for reasoning about the engagement of participants during group meetings on the basis of head orientation data provided by computer vision. This form of group context can be used by ubiquitous environments to route communications between users, or sense availability of users for interruption. We explored problems of distraction by co-workers in office cubicle farms, applying our method to the design of a cubicle that automatically regulates visual and auditory communications between users.

Image-based editing and image-based animation

Image-guided fracture (pp. 219-226)
  David Mould
We present an image filter that transforms an input line drawing into an image of a fractured surface, where the cracks echo the input drawing. The basis of our algorithm is the Voronoi diagram of a weighted graph, where the distance between nodes is path cost in the graph. Modifying the edge costs gives us control over the placement of region boundaries; we interpret region boundaries as cracks. The rendering of our crack maps into final images is accomplished either by image analogies or by modulation of an uncracked texture.
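The Voronoi diagram of a weighted graph described above can be computed with multi-source Dijkstra: every node is labelled with the seed it can reach most cheaply, and edges whose endpoints carry different labels form the region boundaries (the cracks). A small sketch over an invented path graph; the node layout and weights are illustrative only.

```python
import heapq

def graph_voronoi(neighbors, weights, seeds):
    """Voronoi diagram of a weighted graph via multi-source Dijkstra.

    neighbors: {node: [node, ...]}; weights: {(u, v): cost} (symmetric);
    seeds: list of seed nodes. Returns (label, boundary) where label maps
    each reachable node to its nearest seed and boundary lists the edges
    whose endpoints belong to different regions.
    """
    dist, label, heap = {}, {}, []
    for s in seeds:
        dist[s] = 0.0
        label[s] = s
        heapq.heappush(heap, (0.0, s, s))
    while heap:
        d, u, seed = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry
        for v in neighbors.get(u, []):
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                label[v] = seed
                heapq.heappush(heap, (nd, v, seed))
    boundary = [(u, v) for (u, v) in weights
                if u < v and label.get(u) != label.get(v)]
    return label, boundary
```

Raising the cost of edges along the input drawing pulls the region boundaries, and hence the cracks, toward the drawn strokes.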
Interactive material replacement in photographs (pp. 227-232)
  Steve Zelinka; Hui Fang; Michael Garland; John C. Hart
Material replacement has wide application throughout the entertainment industry, particularly for post-production make-up application or wardrobe adjustment. More generally, any low-cost mock-up object can be processed to have the appearance of expensive, high-quality materials. We demonstrate a new system that allows fast, intuitive material replacement in photographs. We extend recent work in object selection and fast texture synthesis, as well as develop a novel approach to shape-from-shading capable of handling objects with albedo changes. Each component of our system runs with interactive speed, allowing for easy experimentation and refinement of results.
Isoluminant color picking for non-photorealistic rendering (pp. 233-240)
  Tran Quan Luong; Ankush Seth; Allison Klein; Jason Lawrence
The physiology of human visual perception helps explain different uses for color and luminance in visual arts. When visual fields are isoluminant, they look the same to our luminance processing pathway, while potentially looking quite different to the color processing path. This creates a perceptual tension exploited by skilled artists. In this paper, we show how reproducing a target color using a set of isoluminant yet distinct colors can both improve existing NPR image filters and help create new ones. A straightforward, geometric technique for isoluminant color picking is presented, and then applied in an improved pointillist filter, a new Chuck Close inspired filter, and a novel type of image mosaic filter.
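The geometric intuition behind isoluminant color picking can be sketched directly: luminance is a dot product of RGB with a fixed weight vector, so moving within the plane orthogonal to that vector changes chromaticity while leaving luminance fixed. The Rec. 601 luma weights and step sizes below are one common choice made for illustration, not necessarily the paper's exact technique.

```python
LUMA = (0.299, 0.587, 0.114)  # Rec. 601 luma weights (one common choice)

def luminance(rgb):
    """Luma of an RGB triple with components in [0, 1]."""
    return sum(w * c for w, c in zip(LUMA, rgb))

def isoluminant_variants(rgb, steps=(-0.2, -0.1, 0.1, 0.2)):
    """Pick colors distinct from `rgb` but with identical luminance by moving
    in the isoluminant plane: trade red against blue along the direction
    (0.114, 0, -0.299), which is orthogonal to the luma weight vector.
    Out-of-gamut results are discarded. A toy version of geometric
    isoluminant color picking."""
    r, g, b = rgb
    out = []
    for t in steps:
        c = (r + 0.114 * t, g, b - 0.299 * t)
        if all(0.0 <= x <= 1.0 for x in c):
            out.append(c)
    return out
```

Because 0.299 * 0.114 - 0.114 * 0.299 = 0, every variant lands on the same isoluminant plane as the input color; larger steps (or a second in-plane direction involving green) give a more varied palette.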
Interactive vector fields for painterly rendering (pp. 241-247)
  Sven C. Olsen; Bruce A. Maxwell; Bruce Gooch
We present techniques for generating and manipulating vector fields for use in the creation of painterly images and animations. Our aim is to enable casual users to create results evocative of expressionistic art. Rather than defining stroke alignment fields globally, we divide input images into regions using a colorspace clustering algorithm. Users interactively assign characteristic brush stroke alignment fields and stroke rendering parameters to each region. By combining vortex dynamics and semi-Lagrangian fluid simulation we are able to create stable, easily controlled vector fields. In addition to fluid simulations, users can align strokes in a given region using more conventional field models such as smoothed gradient fields and optical flow, or hybrid fields that combine the desirable features of fluid simulations and smoothed gradient information.

Invited

Forty years of human-computer interaction and knowledge media design: twelve challenges to meet in fewer than the next forty years (pp. 249-250)
  Ronald M. Baecker
Inspired in part by a seminal article by JCR Licklider on "man-computer symbiosis" [3, see also 4, 5], a wonderful course entitled "Technological aids to human thought" taught by Anthony Oettinger that I took at Harvard early in 1966, and the vitality and excitement of MIT Project Mac, the AI Lab, and especially Lincoln Lab [2], I began research in interactive computing shortly after the September 1965 start of my Ph.D. work at M.I.T. Now, 40 years later, receiving this honour (with gratitude) allows me the indulgence to rant for at least 40 minutes, reflecting first on the miracles in processor speed, memory capacity, bandwidth, I/O technology, graphics algorithms, and human-computer interfaces that have transpired over this interval [see also 1], and then speaking at much greater length over things that remain undone.
   The latter topics will be organized into two categories, compelling research challenges (junior faculty without tenure and Ph.D. students searching for topics listen carefully :-)), and broader challenges for the fields of human-computer interaction and knowledge media design (senior faculty with tenure seeking to slay dragons listen even more carefully :-) :-)).
   I will briefly sketch and articulate the following six research challenges:
  • Collaboration technologies -- why are these tools still so hard to use?
  • Intelligent interfaces -- can AI finally aid humans instead of aiming to
       replace them, or why can the computer beat Kasparov but not connect me
       to the Net?
  • Design methodologies -- can we do less boasting about being user-centred and
       start doing better science?
  • Evaluation methodologies -- how can we gather design intelligence by mining
       rich potential sources of user experience data from the field?
  • Interfaces for seniors -- what can we do for seniors and what can they do
       for us?
  • Electronic memory aids -- is this a compelling area promising a major payoff
       for human productivity and morale?
   I will then rant for as long as possible on the following six broader issues:
  • Courses on computers and society and communication skills for computer
       science students -- if we don't insist that this be taught, and take the
       lead, who will?
  • Interfaces in context -- why do I teach knowledge media design and not user
       interface design?
  • HCI in computer science departments -- should we continue to "pretend" that
       we do computer science?
  • Open source and open access -- if the intellectual property and technology
       transfer system is broken, shouldn't we try to fix it?
  • Appropriate automation -- can it and will it ever stop?
  • Interfaces everywhere -- is change possible, and how can we make things
       better?
    Keywords: Human-computer interaction, knowledge media design, user interface design, design methodologies, evaluation methodologies, collaboration technologies, computer-supported cooperative work