
Proceedings of the 2002 Symposium on Eye Tracking Research & Applications

Fullname:Proceedings of the 2002 Symposium on Eye Tracking Research & Applications
Editors:Andrew T. Duchowski; Roel Vertegaal; John W. Senders
Location:New Orleans, Louisiana, USA
Dates:2002-Mar-25 to 2002-Mar-27
Publisher:ACM
Standard No:ISBN: 1-58113-467-3; ACM DL: Table of Contents; hcibib: ETRA02
Papers:20
Pages:156
Links:Conference Series Home Page
  1. Keynote address
  2. Principles & methodology
  3. Blink response, visual attention, and the www
  4. Panel discussion
  5. Systems & applications I
  6. Gaze-contingent displays
  7. Eye movement analysis & visual search
  8. Systems & applications II

Keynote address

Vision in natural and virtual environments BIBAKFull-Text 7-13
  Mary M. Hayhoe; Dana H. Ballard; Jochen Triesch; Hiroyuki Shinoda; Pilar Aivar; Brian Sullivan
Our knowledge of the way that the visual system operates in everyday behavior has, until recently, been very limited. This information is critical not only for understanding visual function, but also for understanding the consequences of various kinds of visual impairment, and for the development of interfaces between human and artificial systems. The development of eye trackers that can be mounted on the head now allows monitoring of gaze without restricting the observer's movements. Observations of natural behavior have demonstrated the highly task-specific and directed nature of fixation patterns, and reveal considerable regularity between observers. Eye, head, and hand coordination also reveals much greater flexibility and task-specificity than previously supposed. Experimental examination of the issues raised by observations of natural behavior requires the development of complex virtual environments that can be manipulated by the experimenter at critical points during task performance. Experiments where we monitored gaze in a simulated driving environment demonstrate that visibility of task relevant information depends critically on active search initiated by the observer according to an internally generated schedule, and this schedule depends on learnt regularities in the environment. In another virtual environment where observers copied toy models we showed that regularities in the spatial structure are used by observers to control eye movement targeting. Other experiments in a virtual environment with haptic feedback show that even simple visual properties like size are not continuously available or processed automatically by the visual system, but are dynamically acquired and discarded according to the momentary task demands.
Keywords: attention, saccadic targeting, virtual environments

Principles & methodology

Twenty years of eye typing: systems and design issues BIBAKFull-Text 15-22
  Päivi Majaranta; Kari-Jouko Räihä
Eye typing provides a means of communication for severely handicapped people, even those who are only capable of moving their eyes. This paper considers the features, functionality and methods used in the eye typing systems developed in the last twenty years. Primarily concerned with text production, the paper also addresses other communication-related issues, among them customization and voice output.
Keywords: Eye typing, alternative communication, eye tracking
Designing attentive interfaces BIBAKFull-Text 23-30
  Roel Vertegaal
In this paper, we propose a tentative framework for the classification of Attentive Interfaces, a new category of user interfaces. An Attentive Interface is a user interface that dynamically prioritizes the information it presents to its users, such that information processing resources of both user and system are optimally distributed across a set of tasks. The interface does this on the basis of knowledge -- consisting of a combination of measures and models -- of the past, present and future state of the user's attention, given the availability of system resources. We will show how the Attentive Interface provides a natural extension to the windowing paradigm found in Graphical User Interfaces. Our taxonomy of Attentive Interfaces allows us to identify classes of user interfaces that would benefit most from the ability to sense, model and optimize the user's attentive state. In particular, we show how systems that influence user workflow in concurrent task situations, such as those involved with management of multiparty communication, may benefit from such facilities.
Keywords: Attention, Attentive Interfaces, Eye Tracking, Nonverbal Computing, User Interfaces
Fixation maps: quantifying eye-movement traces BIBAKFull-Text 31-36
  David S. Wooding
The analysis of eye-movement traces (i.e. the patterns of fixations in a search) is a powerful but often neglected area of eye-movement research. This is largely because it requires a more complex analysis than simple parameters such as mean fixation duration, and as a result previous attempts have focused on qualitative appraisal of the form of an eye-movement trace. In this paper, we introduce the concept of the "fixation map". We discuss its application to the quantification of similarity of traces, and the degree of "coverage" by fixations of a visual stimulus. The use of fixation maps in the understanding and communication of large numbers of eye-movement traces is also examined.
Keywords: Eye-movements, analysis, fixation map, similarity, traces
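A fixation map of the kind Wooding describes is often computed as a Gaussian-weighted accumulation of fixations over the stimulus. The minimal Python/NumPy sketch below illustrates the general idea; the grid size, the Gaussian spread, and the (x, y, duration) fixation format are illustrative assumptions, not details from the paper.

   import numpy as np

   def fixation_map(fixations, width, height, sigma=30.0):
       """Accumulate fixations into a smoothed 2D map.
       fixations: iterable of (x, y, duration) tuples in pixels.
       sigma: Gaussian spread per fixation (an assumed parameter)."""
       ys, xs = np.mgrid[0:height, 0:width]
       fmap = np.zeros((height, width))
       for x, y, dur in fixations:
           fmap += dur * np.exp(-((xs - x)**2 + (ys - y)**2) / (2 * sigma**2))
       return fmap / fmap.max()   # normalize to [0, 1]

   # e.g. two fixations on a 640x480 stimulus
   m = fixation_map([(100, 120, 0.3), (400, 200, 0.5)], 640, 480)

Comparing two such maps (for instance by correlation) then gives one route to the trace-similarity and coverage measures the paper discusses.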

Blink response, visual attention, and the www

The act of task difficulty and eye-movement frequency for the 'Oculo-motor indices' BIBAKFull-Text 37-42
  Minoru Nakayama; Koji Takahashi; Yasutaka Shimizu
Oculo-motor activity reflects the viewer's ability to process visual information. This paper examines whether it is affected by two factors: first, task difficulty, and second, eye-movement frequency. Oculo-motor indices were defined as measurements of pupil size, blink and eye-movement. For the purposes of this study, two experiments were designed, based on previous sequential ocular tasks in which subjects were required to solve a series of mathematical problems and to orally report their calculations.
   The results of this experiment found that pupil size and blink rate increased in response to task difficulty in the oral calculation group. In contrast, both the saccade occurrence rate and saccade length were found to decrease with the increased difficulty of the task. These results suggest that oculo-motor indices respond to task difficulty. Secondly, eye-movement frequencies were elicited by the switching frequency of a visual target. Pupil size and saccade time were found to increase with this frequency; blink and gazing time, however, were found to decrease in response to it. There was a negative correlation between blinking and gazing time. Additionally, a correlation between blinking and saccade time appeared at the higher frequencies.
   These results indicate that the oculo-motor indices are affected by both task difficulty and eye-movement frequency. Furthermore, eye-movement frequency appears to play a different role than that of task difficulty.
Keywords: Blink, Eye-movement, Gaze, Pupil Size, Saccade
Visual attention to repeated internet images: testing the scanpath theory on the world wide web BIBAKFull-Text 43-49
  Sheree Josephson; Michael E. Holmes
The somewhat controversial and often-discussed theory of visual perception, that of scanpaths, was tested using Web pages as visual stimuli. In 1971, Noton and Stark defined "scanpaths" as repetitive sequences of fixations and saccades that occur upon re-exposure to a visual stimulus, facilitating recognition of that stimulus. Since Internet users are repeatedly exposed to certain visual displays of information, the Web is an ideal stimulus to test this theory. Eye-movement measures were recorded while subjects repeatedly viewed three different kinds of Internet pages -- a portal page, an advertising page and a news story page -- over the course of a week. Scanpaths were compared by using the string-edit methodology that measures resemblance between sequences. Findings show that on the World Wide Web, with somewhat complex visual digital images, some viewers' eye movements may follow a habitually preferred path -- a scanpath -- across the visual display. In addition, strong similarity among eye-path sequences of different viewers may indicate that other forces such as features of the Web site or memory are important.
Keywords: Eye movement, Internet imagery, World Wide Web, eye tracking, optimal matching analysis, scanpath, sequence comparison, string editing
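The string-edit comparison used here codes each fixation as a character naming the screen region it fell in, then scores resemblance between two viewings by edit distance. A minimal Python sketch of that generic technique (the region coding and the similarity normalization are illustrative assumptions, not the authors' exact procedure):

   def edit_distance(a, b):
       """Levenshtein distance between two scanpath strings, where each
       character names the screen region of one fixation."""
       prev = list(range(len(b) + 1))
       for i, ca in enumerate(a, 1):
           cur = [i]
           for j, cb in enumerate(b, 1):
               cur.append(min(prev[j] + 1,                  # deletion
                              cur[j - 1] + 1,               # insertion
                              prev[j - 1] + (ca != cb)))    # substitution
           prev = cur
       return prev[-1]

   # two viewings of the same page, regions coded as letters
   d = edit_distance("ABBCD", "ABCCDF")
   similarity = 1 - d / max(len("ABBCD"), len("ABCCDF"))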
Eye tracking in web search tasks: design implications BIBAKFull-Text 51-58
  Joseph H. Goldberg; Mark J. Stimson; Marion Lewenstein; Neil Scott; Anna M. Wichansky
An eye tracking study was conducted to evaluate specific design features for a prototype web portal application. This software serves independent web content through separate, rectangular, user-modifiable portlets on a web page. Each of seven participants navigated across multiple web pages while conducting six specific tasks, such as removing a link from a portlet. Specific experimental questions included (1) whether eye tracking-derived parameters were related to page sequence or user actions preceding page visits, (2) whether users were biased towards traveling vertically or horizontally while viewing a web page, and (3) whether specific sub-features of portlets were visited in any particular order. Participants required 2-15 screens, and from 7-360+ seconds, to complete each task. Based on analysis of screen sequences, there was little evidence that search became more directed as the screen sequence increased. Navigation among portlets, when at least two columns existed, was biased towards horizontal search (across columns) as opposed to vertical search (within a column). Within a portlet, the header bar was not reliably visited prior to the portlet's body, evidence that header bars are not reliably used as navigation cues. Initial design recommendations emphasized placing critical portlets at the left and top of the web portal area, and noted that related portlets need not appear in the same column. Further experimental replications are recommended to generalize these results to other applications.
Keywords: Eye Tracking, Software, Usability Evaluation, World Wide Web

Panel discussion

What do the eyes behold for human-computer interaction? BIBAFull-Text 59-60
  Roel Vertegaal
In recent years, there has been a resurgence of interest in the use of eye tracking systems for interactive purposes. However, it is easy to be fooled by the interactive power of eye tracking. When first encountering eye based interaction, most people are genuinely impressed with the almost magical window into the mind of the user that it seems to provide. There are two reasons why this belief may lead to subsequent disappointment. Firstly, although current eye tracking equipment is far superior to that used in the seventies and early eighties, it is by no means perfect. For example, there is still the tradeoff between the use of an obtrusive head-based system or a desk-based system with limited head movement. Such technical problems continue to limit the usefulness of eye tracking as a generic form of input. Secondly, there are real methodological problems regarding the interpretation of eye input for use in graphical user interfaces. One example, the "Midas Touch" problem, is observed in systems that use eye movements to directly control a mouse cursor. When does the system decide that a user is interested in a visual object? Systems that implement dwell time for this purpose run the risk of disallowing visual scanning behavior, requiring users to control their eye movements for the purposes of output, rather than input. However, difficulties in the interpretation of visual interest remain even when systems use another input modality for signaling intent. Another classic methodological problem is exemplified by the application of eye movement recording in usability studies. Although eye fixations provide some of the best measures of visual interest, they do not provide a measure of cognitive interest. It is one thing to determine whether a user has observed certain visual information, but quite another to determine whether this information has in fact been processed or understood. Some of our technological problems can and will be solved. However, we believe that our methodological issues point to a more fundamental problem: What is the nature of the input information conveyed by eye movements and to what interactive functions can this information provide added value?
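The dwell-time mechanism discussed above is easy to make concrete: a target is selected only after gaze has rested on it for a threshold time, which is precisely the compromise that produces the Midas Touch tension. A minimal Python sketch follows; the 500 ms threshold and the update interface are illustrative assumptions, not a scheme endorsed by the panel.

   class DwellSelector:
       """Select a gaze target only after an uninterrupted dwell."""
       def __init__(self, dwell_ms=500):   # threshold is an assumed value
           self.dwell_ms = dwell_ms
           self.target = None
           self.entered_at = None

       def update(self, target, now_ms):
           """Feed the currently gazed-at object each frame; returns the
           target once per completed dwell, otherwise None."""
           if target is not self.target:
               self.target, self.entered_at = target, now_ms
               return None
           if self.entered_at is not None and now_ms - self.entered_at >= self.dwell_ms:
               self.entered_at = None   # fire once, re-arm on next entry
               return target
           return None

Raising the threshold reduces accidental "Midas" selections but slows interaction, which is why dwell alone can force users to treat their eyes as an output device.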

Systems & applications I

On-road driver eye movement tracking using head-mounted devices BIBAKFull-Text 61-68
  M. Sodhi; B. Reimer; J. L. Cohen; E. Vastenburg; R. Kaars; S. Kirschenbaum
It is now evident from anecdotal evidence and preliminary research that distractions can hinder the task of operating a vehicle, and consequently reduce driver safety. However, with increasing wireless connectivity and the portability of office devices, the vehicle of the future is visualized as an extension of the static work place -- i.e. an office-on-the-move, with a phone, a fax machine and a computer all within the reach of the vehicle operator. For this research, a Head-mounted Eye-tracking Device (HED) is used to track the eye movements of a driver navigating a test route in an automobile while completing various driving tasks. Issues arising from the collection of eye-movement data during these driving tasks, as well as from the analysis of this data, are discussed. Methods for collecting video and scan-path data, together with their difficulties and limitations, are also reported.
Keywords: Camera calibration, Ergonomics, Perceptual reasoning, Tracking
A software-based eye tracking system for the study of air-traffic displays BIBAKFull-Text 69-76
  Jeffrey B. Mulligan
This paper describes a software-based system for offline tracking of eye and head movements using stored video images, designed for use in the study of air-traffic displays. These displays are typically dense with information; to address the research questions, we wish to be able to localize gaze within a single word within a line of text (a few minutes of arc), while at the same time allowing some freedom of movement to the subject. Accurate gaze tracking in the presence of head movements requires high precision head tracking, and this was accomplished by registration of images from a forward-looking scene camera with a narrow field of view.
Keywords: Head and eye tracking, air traffic displays, image registration, scan-path analysis
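High-precision registration of successive scene-camera frames, of the sort this system relies on for head tracking, is often done by phase correlation. The NumPy sketch below shows that generic technique for integer pixel shifts; it is not the authors' actual pipeline, which must also handle rotation and sub-pixel precision.

   import numpy as np

   def phase_correlation_shift(a, b):
       """Estimate the (dy, dx) translation between two equal-size
       grayscale frames from the phase of their cross-power spectrum."""
       A, B = np.fft.fft2(a), np.fft.fft2(b)
       R = A * np.conj(B)
       R /= np.abs(R) + 1e-12            # keep phase only
       corr = np.abs(np.fft.ifft2(R))
       dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
       if dy > a.shape[0] // 2: dy -= a.shape[0]   # wrap to signed shifts
       if dx > a.shape[1] // 2: dx -= a.shape[1]
       return dy, dx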
Eye gaze correction for videoconferencing BIBAKFull-Text 77-81
  Jason Jerald; Mike Daily
This paper describes a 2D videoconferencing system with eye gaze correction. Tracking the eyes and warping them appropriately each frame appears to create natural eye contact between users. The warp is determined by the geometry of the eyes and by the displacement of the camera relative to the remote user's image. We implement this system entirely in software, requiring no specialized hardware.
Keywords: Videoconferencing, eye contact, eye tracking, gaze correction, mutual gaze, nonverbal social interaction, telepresence
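The size of the correction such a system must apply follows from simple geometry: the angular error between looking at the remote person's eyes on screen and looking into the camera grows with the camera's displacement and shrinks with viewing distance. A small Python sketch of that relationship (the numbers are illustrative, not from the paper):

   import math

   def gaze_offset_deg(camera_offset_cm, viewing_distance_cm):
       """Angle between gaze at the on-screen eyes and gaze into the
       camera; the eye warp must compensate roughly this much."""
       return math.degrees(math.atan2(camera_offset_cm, viewing_distance_cm))

   # camera mounted 10 cm above the displayed eyes, viewer 60 cm away
   angle = gaze_offset_deg(10, 60)   # about 9.5 degrees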

Gaze-contingent displays

Real-time simulation of arbitrary visual fields BIBAKFull-Text 83-87
  Wilson S. Geisler; Jeffrey S. Perry
This report describes an algorithm and software for creating and displaying, in real time, arbitrary variable resolution displays, contingent on the direction of gaze. The software produces precise, artifact-free video at high frame rates in either 8-bit gray scale or 24-bit color. The software is demonstrated by simulating the visual fields of normal individuals and low-vision patients.
Keywords: eye disease, eye movements, foveated imaging, gaze contingent display, low vision, variable resolution image, visual fields
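The gaze-contingent variable-resolution idea can be illustrated by blending a sharp and a blurred copy of an image according to eccentricity from the gaze point. The rough NumPy/SciPy sketch below conveys the principle only; the falloff function is an arbitrary placeholder for Geisler and Perry's psychophysically calibrated visual-field model, and their system uses a multi-level pyramid to reach real-time rates.

   import numpy as np
   from scipy.ndimage import gaussian_filter

   def foveate(img, gaze_xy, half_res_ecc=100.0):
       """Blend sharp and blurred copies of a grayscale image so that
       resolution falls off with distance from gaze (illustrative)."""
       blurred = gaussian_filter(img.astype(float), sigma=8)
       ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
       ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
       w = 1.0 / (1.0 + ecc / half_res_ecc)   # 1 at gaze, falls toward 0
       return (w * img + (1 - w) * blurred).astype(img.dtype)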
Reduced saliency of peripheral targets in gaze-contingent multi-resolutional displays: blended versus sharp boundary windows BIBAKFull-Text 89-93
  Eyal M. Reingold; Lester C. Loschky
Gaze-contingent multi-resolutional displays (GCMRDs) have been proposed to solve the processing and bandwidth bottleneck in many single-user displays by dynamically placing a high-resolution window at the center of gaze, with lower resolution everywhere else. GCMRDs are also useful for investigating the perceptual processes involved in natural scene viewing. Several such studies suggest that potential saccade targets in degraded regions are less salient than those in the high-resolution window. Consistent with this, Reingold, Loschky, Stampe and Shen [2001b] found longer initial saccadic latencies to a salient peripheral target in conditions with a high-resolution window and degraded surround than in an all low-pass filtered no-window condition. Nevertheless, these results may have been due to parafoveal load caused by the saliency of the boundary between the high- and low-resolution areas. The current study extends Reingold et al. [2001b] by comparing both sharp- and blended-resolution boundary conditions with an all low-resolution no-window condition. The results replicate the previous findings [Reingold et al. 2001b] but indicate that the effect is unaltered by the type of window boundary (sharp or blended). This rules out the parafoveal load hypothesis, while further supporting the hypothesis that potential saccade targets in the degraded region are less salient than those in the high-resolution region.
Keywords: area of interest, bi-resolution displays, dual-resolution displays, eye movements, eyetracking, high-detail inset, multi-resolutional displays, peripheral degradation, peripheral vision, saliency, variable resolution displays, visual perception, visual search
Saccade contingent updating in virtual reality BIBAKFull-Text 95-102
  Jochen Triesch; Brian T. Sullivan; Mary M. Hayhoe; Dana H. Ballard
We are interested in saccade contingent scene updates where the visual information presented in a display is altered while a saccadic eye movement of an unconstrained, freely moving observer is in progress. Since saccades typically last only several tens of milliseconds depending on their size, this poses difficult constraints on the latency of detection. We have integrated two complementary eye trackers in a virtual reality helmet to simultaneously 1) detect saccade onsets with very low latency and 2) track the gaze with high precision albeit higher latency. In a series of experiments we demonstrate the system's capability of detecting saccade onsets with sufficiently low latency to make scene changes while a saccade is still progressing. While the method was developed to facilitate studies of human visual perception and attention, it may find interesting applications in human-computer interaction and computer graphics.
Keywords: change blindness, eye tracking, limbus tracking, saccade contingent updating, saccades, virtual reality
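Low-latency saccade onset detection of the kind described is, at its core, a running velocity-threshold test on incoming samples. A simplified Python sketch of that generic idea follows; the 130 deg/s threshold and 500 Hz rate are common illustrative values, not the parameters of this system.

   def saccade_onsets(samples, rate_hz=500, vel_thresh=130.0):
       """Yield indices where eye velocity first exceeds threshold.
       samples: gaze angles in degrees, one (x, y) pair per sample."""
       in_saccade = False
       for i in range(1, len(samples)):
           dx = samples[i][0] - samples[i - 1][0]
           dy = samples[i][1] - samples[i - 1][1]
           vel = (dx * dx + dy * dy) ** 0.5 * rate_hz   # deg/s
           if vel > vel_thresh and not in_saccade:
               in_saccade = True
               yield i
           elif vel <= vel_thresh:
               in_saccade = False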

Eye movement analysis & visual search

3D eye movement analysis for VR visual inspection training BIBAFull-Text 103-110
  Andrew T. Duchowski; Eric Medlin; Nathan Cournia; Anand Gramopadhye; Brian Melloy; Santosh Nair
This paper presents an improved 3D eye movement analysis algorithm for binocular eye tracking within Virtual Reality for visual inspection training. The user's gaze direction, head position and orientation are tracked to allow recording of the user's fixations within the environment. The paper summarizes methods for (1) integrating the eye tracker into a Virtual Reality framework, (2) calculating the user's 3D gaze vector, and (3) calibrating the software to estimate the user's inter-pupillary distance post-facto. New techniques are presented for eye movement analysis in 3D for improved signal noise suppression. The paper describes (1) the use of Finite Impulse Response (FIR) filters for eye movement analysis, (2) the utility of adaptive thresholding and fixation grouping, and (3) a heuristic method to recover lost eye movement data due to miscalibration. While the linear signal analysis approach is itself not new, its application to eye movement analysis in three dimensions advances traditional 2D approaches since it takes into account the 6 degrees of freedom of head movements and is resolution independent. Results indicate improved noise suppression over our previous signal analysis approach.
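The FIR filtering step the authors describe can be sketched as convolving the gaze-angle signal with a smoothing-differentiator kernel and thresholding the resulting velocity estimate. The kernel and threshold below are illustrative choices, not the paper's filters:

   import numpy as np

   def fir_velocity(gaze, kernel=(-1, -1, 0, 1, 1)):
       """Smoothed first derivative of a 1D gaze-angle signal via a
       short FIR differentiator (kernel is an assumed choice)."""
       return np.convolve(gaze, np.array(kernel[::-1], float), mode="same")

   def fixation_mask(gaze, vel_thresh=2.0):
       """True where the velocity estimate stays below threshold,
       i.e. candidate fixation samples (threshold assumed)."""
       return np.abs(fir_velocity(gaze)) < vel_thresh

The paper's contribution is doing this in 3D with adaptive thresholds and fixation grouping, but the one-dimensional version above shows the underlying signal-processing step.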
What attracts the eye to the location of missed and reported breast cancers? BIBAFull-Text 111-117
  Claudia Mello-Thoms; Calvin F. Nodine; Harold L. Kundel
The primary detector of breast cancer is the human eye, as it examines mammograms searching for signs of the disease. Nonetheless, it has been shown that 10-30% of all cancers in the breast are not reported by the radiologist, even though most of these are visible retrospectively. Studies of eye position have shown that the eye tends to dwell in the locations of both reported and unreported cancers, indicating that the problem is not faulty visual search but rather is primarily related to perceptual and decision-making mechanisms. In this paper we model the areas that attracted the radiologists' visual attention when reading mammograms and that yielded a decision by the radiologist, whether that decision was overt or covert. We contrast the characteristics of areas containing cancers that were reported with those of areas containing cancers that, albeit attracting attention, did not reach an internal conspicuity threshold to be reported.
Visual search: structure from noise BIBAKFull-Text 119-123
  Umesh Rajashekar; Lawrence K. Cormack; Alan C. Bovik
In this paper, we present two techniques to reveal image features that attract the eye during visual search: the discrimination image paradigm and principal component analysis. In preliminary experiments, we employed these techniques to identify the image features used to find simple targets embedded in 1/ƒ noise. Two main findings emerged. First, the loci of fixations were not random but were driven by local image features, even in very noisy displays. Second, subjects often searched for a component feature of a target rather than the target itself, even if the target was a simple geometric form; moreover, the particular relevant component varied from individual to individual. In addition, principal component analysis of the noise patches at the point of fixation reveals global image features used by the subject in the search task. Besides providing insight into the human visual system, these techniques have relevance for machine vision as well. The efficacy of a foveated machine vision system largely depends on its ability to actively select 'visually interesting' regions in its environment. The techniques presented in this paper provide valuable low-level criteria for executing human-like scanpaths in such machine vision systems.
Keywords: 1/ƒ noise, Discrimination Images, Eye movements, Principal Component Analysis, Visual Search
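The discrimination-image paradigm can be made concrete in a few lines: average the noise patches centered on fixations, subtract the average of randomly drawn patches, and whatever structure remains is what drew the eye. A NumPy sketch under assumed parameters (the patch half-width is illustrative):

   import numpy as np

   def discrimination_image(noise_img, fix_xy, rand_xy, half=16):
       """Mean fixated patch minus mean random patch from a noise
       stimulus; residual structure marks fixation-attracting features."""
       def mean_patch(points):
           patches = [noise_img[y - half:y + half, x - half:x + half]
                      for x, y in points
                      if half <= x < noise_img.shape[1] - half
                      and half <= y < noise_img.shape[0] - half]
           return np.mean(patches, axis=0)
       return mean_patch(fix_xy) - mean_patch(rand_xy)

Principal component analysis of the same fixated-patch matrix (e.g. via np.linalg.svd) would give the global features the authors mention.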

Systems & applications II

FreeGaze: a gaze tracking system for everyday gaze interaction BIBAKFull-Text 125-132
  Takehiko Ohno; Naoki Mukawa; Atsushi Yoshikawa
In this paper we introduce a novel gaze tracking system called FreeGaze, which is designed for everyday gaze interaction. Among the various possible applications of gaze tracking systems, Human-Computer Interaction (HCI) is one of the most promising fields. However, existing systems require complicated and burdensome calibration and are not robust to measurement variations. To solve these problems, we introduce a geometric eyeball model and sophisticated image processing. Unlike existing systems, ours needs only two points for each individual calibration. Once this personalization is finished, the system requires no further calibration before each measurement session. Evaluation tests show that the system is accurate and applicable to everyday use.
Keywords: FreeGaze, Gaze tracking system, gaze interaction, eyeball model
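A two-point personalization of the kind FreeGaze performs can be sketched, in its simplest linear form, as fitting a gain and offset per axis from the two calibration samples. This generic sketch is not FreeGaze's eyeball-model computation; the names and the linear form are assumptions for illustration.

   def two_point_calibration(raw, screen):
       """Fit per-axis gain/offset from exactly two calibration samples.
       raw: two (x, y) gaze estimates; screen: the two known targets.
       Returns a function mapping raw gaze to screen coordinates."""
       (r0x, r0y), (r1x, r1y) = raw
       (s0x, s0y), (s1x, s1y) = screen
       gx, gy = (s1x - s0x) / (r1x - r0x), (s1y - s0y) / (r1y - r0y)
       ox, oy = s0x - gx * r0x, s0y - gy * r0y
       return lambda x, y: (gx * x + ox, gy * y + oy)

   # calibrate on two known targets, then map any raw estimate
   to_screen = two_point_calibration([(0.2, 0.3), (0.8, 0.7)],
                                     [(100, 100), (900, 700)])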
Differences in the infrared bright pupil response of human eyes BIBAKFull-Text 133-138
  Karlene Nguyen; Cindy Wagner; David Koons; Myron Flickner
In this paper, we describe experiments conducted to explain observed differences in the bright pupil response of human eyes. Many people observe the bright pupil response as the red-eye effect in flash photography. However, there is significant variation in the magnitude of the bright pupil response across the population. Since many commercial gaze-tracking systems use the infrared bright pupil response for eye detection, a clear understanding of the magnitude and cause of this variation gives critical insight into the robustness of gaze tracking systems. This paper documents studies we have conducted to measure bright pupil differences using infrared light, and hypothesizes factors that lead to these differences.
Keywords: Gaze tracking, bright pupil response, eye tracking, red-eye effect, retro-reflective pupil response
Real-time eye detection and tracking under various light conditions BIBAKFull-Text 139-144
  Zhiwei Zhu; Kikuo Fujimura; Qiang Ji
Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued these methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. By combining the bright-pupil effect resulting from IR light with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via the use of a support vector machine and mean shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.
Keywords: Eye Tracking, Kalman Filter, Mean Shift, Support Vector Machine
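Bright-pupil eye detection of the kind both of the last two papers build on is commonly implemented by differencing a frame lit with on-axis IR (bright pupils) against one lit with off-axis IR (dark pupils), leaving the pupils as bright blobs. Below is a minimal OpenCV sketch of that generic step, with an assumed threshold and blob-size bound; it is not either paper's actual system.

   import cv2

   def pupil_candidates(bright_frame, dark_frame, min_area=10):
       """Candidate pupil centers from an on-axis/off-axis IR frame pair
       (grayscale). Threshold and area bound are assumed values."""
       diff = cv2.subtract(bright_frame, dark_frame)   # saturating subtract
       _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
       contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                      cv2.CHAIN_APPROX_SIMPLE)
       centers = []
       for c in contours:
           if cv2.contourArea(c) >= min_area:
               m = cv2.moments(c)
               centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
       return centers

Zhu et al.'s contribution is what happens when this cue fails: the SVM appearance model and mean shift tracker take over when the pupils are not bright enough to difference reliably.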