
Proceedings of the 2000 Symposium on Eye Tracking Research & Applications

Fullname: Proceedings of the 2000 Symposium on Eye Tracking Research & Applications
Editors: Andrew T. Duchowski
Location: Palm Beach Gardens, Florida
Dates: 2000-Nov-06 to 2000-Nov-08
Publisher: ACM
Standard No: ISBN 1-58113-280-8
Papers: 23
Pages: 147
Design issues of iDict: a gaze-assisted translation aid, pp. 9-14
  Aulikki Hyrskykari; Päivi Majaranta; Antti Aaltonen; Kari-Jouko Räihä
Eye-aware applications have long existed, but mostly for very special and restricted target populations. We have designed and are currently implementing an eye-aware application, called iDict, which is a general-purpose translation aid aimed at mass markets. iDict monitors the user's gaze path while s/he is reading text written in a foreign language. When the reader encounters difficulties, iDict steps in and provides assistance with the translation. To accomplish this, the system makes use of information obtained from reading research, a language model, and the user profile. This paper describes the idea of the iDict application, the design problems, and the key solutions for resolving these problems.
Keywords: eye tracking, gaze, input techniques, non-command interfaces, post-WIMP interfaces
Text input methods for eye trackers using off-screen targets, pp. 15-21
  Poika Isokoski
Text input with eye trackers can be implemented in many ways, such as with on-screen keyboards or context-sensitive menu-selection techniques. We propose the use of off-screen targets and various schemes for decoding target hit sequences into text. Off-screen targets help to avoid the Midas touch problem and conserve display area. However, the number and location of the off-screen targets are a major usability issue. We discuss the use of Morse code, our Minimal Device Independent Text Input Method (MDITIM), QuikWriting, and Cirrin-like target arrangements. Furthermore, we describe our experience with an experimental system that implements eye tracker controlled MDITIM for the Windows environment.
Keywords: Cirrin, MDITIM, Morse code, QuikWriting, eye tracker, off-screen targets, text input
Effective eye-gaze input into Windows, pp. 23-27
  Chris Lankford
The Eye-gaze Response Interface Computer Aid (ERICA) is a computer system developed at the University of Virginia that tracks eye movement. To allow true integration into the Windows environment, an effective methodology for performing the full range of mouse actions and for typing with the eye needed to be constructed. With the methods described in this paper, individuals can reliably perform all actions of the mouse and the keyboard with their eyes.
Keywords: disabled, eye-gaze, mouse clicking, typing, windows
Comparing interfaces based on what users watch and do, pp. 29-36
  Eric C. Crowe; N. Hari Narayanan
With the development of novel interfaces controlled through multiple modalities, new approaches are needed to analyze the process of interaction with such interfaces and evaluate them at a fine grain of detail. In order to evaluate the usability and usefulness of such interfaces, one needs tools to collect and analyze richly detailed data pertaining to both the process and outcomes of user interaction. Eye tracking is a technology that can provide detailed data on the allocation and shifts of users' visual attention across interface entities. Eye movement data, when combined with data from other input modalities (such as spoken commands, haptic actions with the keyboard and the mouse, etc.), results in just such a rich data set. However, integrating, analyzing and visualizing multimodal data on user interactions remains a difficult task. In this paper we report on a first step toward developing a suite of tools to facilitate this task. We designed and implemented an Eye Tracking Analysis System that generates combined gaze and action visualizations from eye movement data and interaction logs. This new visualization allows an experimenter to see the visual attention shifts of users interleaved with their actions on each screen of a multi-screen interface. A pilot experiment comparing two interfaces to an educational multimedia application -- a traditional interface and a speech-controlled one -- was carried out to test the utility of our tool.
Keywords: eye tracking, interaction log analysis, speech-controlled interface, visualization
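As a rough illustration of the kind of data fusion such a tool performs, the sketch below (not the authors' Eye Tracking Analysis System) interleaves fixation records with interaction-log events by timestamp; the record formats and field names are assumptions.

    # Sketch: interleave eye-movement fixations with interaction-log events
    # by timestamp, so attention shifts can be read alongside user actions.
    # Record formats are assumed for illustration only.

    def merge_gaze_and_actions(fixations, actions):
        """fixations: list of (t_start, x, y, duration);
        actions: list of (t, label), e.g. (0.55, 'click:Next').
        Returns a single time-ordered event stream."""
        stream = [(t, 'fixation', (x, y, dur)) for (t, x, y, dur) in fixations]
        stream += [(t, 'action', label) for (t, label) in actions]
        return sorted(stream, key=lambda e: e[0])

    if __name__ == '__main__':
        fixations = [(0.0, 120, 340, 0.21), (0.4, 510, 330, 0.33)]
        actions = [(0.55, 'click:Next'), (0.9, 'speech:"open map"')]
        for t, kind, payload in merge_gaze_and_actions(fixations, actions):
            print(f'{t:6.2f}s  {kind:8s}  {payload}')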
Extended tasks elicit complex eye movement patterns, pp. 37-43
  Jeff B. Pelz; Roxanne Canosa; Jason Babcock
Visual perception is an inherently complex task, yet the bulk of studies in the past were undertaken with subjects performing relatively simple tasks under reduced laboratory conditions. In the research reported here, we examined subjects' oculomotor performance as they performed two complex, extended tasks. In the first task, subjects built a model rocket from a kit. In the second task, a wearable eyetracker was used to monitor subjects as they walked to a restroom, washed their hands, and returned to the starting point. For the purposes of analysis, both tasks can be broken down into smaller sub-tasks that are performed in sequence. Differences in eye movement patterns and high-level strategies were observed in the model building and hand-washing tasks. Fixation durations recorded in the model building tasks were significantly shorter than those reported in simpler tasks. Performance in the hand-washing task revealed look-ahead eye movements made to objects well in advance of a subject's interaction with the object. Often occurring in the middle of another task, these look-ahead movements provide overlapping temporal information about the environment, offering a mechanism that may help produce our conscious visual experience.
Keywords: complex tasks, extended tasks, eyetracking, fixation duration, portable/wearable eyetracking, visual perception
The effects of a simulated cellular phone conversation on search for traffic signs in an elderly sample, pp. 45-50
  Charles T. Scialfa; Lisa McPhee; Geoffrey Ho
The effects of clutter and a simulated cellular telephone conversation on search for traffic signs were investigated using eye movement and reaction time measures. One half of an elderly sample searched for traffic signs while simultaneously listening to a story, followed by 15 "yes or no" questions. This simulated cellular phone conversation had detrimental effects on reaction time, fixation number, and fixation duration. The observed performance decrements may be an indication of the demands cellular telephones place on a driver's processing resources. In addition, these methods could be used to further investigate the safety implications of using a cellular telephone while driving.
Keywords: aging, cellular telephone use, conspicuity, driving performance, eye movements, traffic signs
Gazetracker: software designed to facilitate eye movement analysis, pp. 51-55
  Chris Lankford
The Eye-gaze Response Interface Computer Aid (ERICA) is a computer system developed at the University of Virginia that tracks eye movement. Originally developed as a means to allow individuals with disabilities to communicate, ERICA was then expanded to provide methods for experimenters to analyze eye movements. This paper describes an application called Gaze Tracker™ that facilitates the analysis of a test subject's eye movements and pupil response to visual stimuli, such as still images or dynamic software applications that the test subject interacts with (for example, Internet Explorer).
Keywords: eye-tracking, internet, usability, windows
An interactive model-based environment for eye-movement protocol analysis and visualization, pp. 57-63
  Dario D. Salvucci
This paper describes EyeTracer, an interactive environment for manipulating, viewing, and analyzing eye-movement protocols. EyeTracer augments the typical functionality of such systems by incorporating model-based tracing algorithms that interpret protocols with respect to the predictions of a cognitive process model. These algorithms provide robust strategy classification and fixation assignment that help to alleviate common difficulties with eye-movement data, such as equipment noise and individual variability. Using the tracing algorithms for analysis and visualization, EyeTracer facilitates both exploratory analysis for initial understanding of behavior and confirmatory analysis for model evaluation and refinement.
Keywords: eye movements, protocol analysis, tracing, visualization
Analysis of eye tracking movements using FIR median hybrid filters, pp. 65-69
  J. Gu; M. Meng; A. Cook; M. G. Faulkner
This paper presents an approach to using FIR Median Hybrid Filters for the analysis of eye tracking movements. The proposed filter can remove the eye blink artifact from the eye movement signal. The background of the project is described first: the overall goal is to put movement into ocular prostheses, which are currently static, so that the ocular implant will have the same natural movement as the real eye. The first step is to obtain the movement of the real eye. From a review of eye movement recording methods, the electro-oculogram (EOG) is used to determine the eye position. Because the eye blink artifact always corrupts the EOG signal, it must be filtered out for the purposes of our project. The FIR Median Hybrid Filter is studied in the paper and its properties are explored with examples. Finally, the filter is applied to real eye-blink-corrupted EOG signals. Examples are given of the analysis procedure for eye tracking of a randomly moving target. The method proves to be highly reliable.
Keywords: FIR median hybrid filter, electrooculogram, eye blink, eye movement
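A minimal numerical sketch of one common FIR median hybrid structure of the kind described (not the authors' exact filter); the window length and the synthetic EOG-like signal are assumptions.

    import numpy as np

    def fir_median_hybrid(x, k=5):
        """One common FIR median hybrid (FMH) structure: at each sample take
        the median of a backward moving average, the sample itself, and a
        forward moving average. Brief impulsive artifacts (e.g. blinks) are
        suppressed while step-like changes (saccades) are preserved.
        Window size k is an assumption."""
        x = np.asarray(x, dtype=float)
        y = x.copy()
        for n in range(k, len(x) - k):
            back = x[n - k:n].mean()          # FIR sub-filter over past samples
            fwd = x[n + 1:n + 1 + k].mean()   # FIR sub-filter over future samples
            y[n] = np.median([back, x[n], fwd])
        return y

    # Example: a step (saccade-like change) plus a brief blink-like impulse
    sig = np.concatenate([np.zeros(30), np.full(30, 10.0)])
    sig[44:46] += 40.0
    print(np.round(fir_median_hybrid(sig, k=5)[40:52], 1))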
Identifying fixations and saccades in eye-tracking protocols, pp. 71-78
  Dario D. Salvucci; Joseph H. Goldberg
The process of fixation identification -- separating and labeling fixations and saccades in eye-tracking protocols -- is an essential part of eye-movement data analysis and can have a dramatic impact on higher-level analyses. However, algorithms for performing fixation identification are often described informally and rarely compared in a meaningful way. In this paper we propose a taxonomy of fixation identification algorithms that classifies algorithms in terms of how they utilize spatial and temporal information in eye-tracking protocols. Using this taxonomy, we describe five algorithms that are representative of different classes in the taxonomy and are based on commonly employed techniques. We then evaluate and compare these algorithms with respect to a number of qualitative characteristics. The results of these comparisons offer interesting implications for the use of the various algorithms in future work.
Keywords: data analysis algorithms, eye tracking, fixation identification
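For illustration, here is a minimal sketch of a dispersion-threshold identification scheme of the kind covered by such a taxonomy; the thresholds, units, and sample format are assumptions, and this is not the paper's reference implementation.

    def idt_fixations(samples, dispersion_thresh=1.0, min_duration=0.1):
        """Dispersion-threshold identification sketch: grow a window while its
        spatial dispersion stays under a threshold; windows that also meet a
        minimum duration become fixations. samples: list of (t, x, y)."""
        def dispersion(win):
            xs = [x for _, x, _ in win]
            ys = [y for _, _, y in win]
            return (max(xs) - min(xs)) + (max(ys) - min(ys))

        fixations, i = [], 0
        while i < len(samples):
            # smallest window starting at i that spans min_duration
            j = i + 1
            while j < len(samples) and samples[j - 1][0] - samples[i][0] < min_duration:
                j += 1
            window = samples[i:j]
            if (window[-1][0] - window[0][0] >= min_duration
                    and dispersion(window) <= dispersion_thresh):
                # grow the window while dispersion stays under the threshold
                while j < len(samples) and dispersion(samples[i:j + 1]) <= dispersion_thresh:
                    j += 1
                window = samples[i:j]
                cx = sum(x for _, x, _ in window) / len(window)
                cy = sum(y for _, _, y in window) / len(window)
                fixations.append((window[0][0], window[-1][0], cx, cy))
                i = j
            else:
                i += 1
        return fixations

    # Example: 60 Hz samples forming one stable cluster around (10, 10)
    pts = [(n / 60.0, 10 + 0.1 * (n % 3), 10.0) for n in range(20)]
    print(idt_fixations(pts, dispersion_thresh=1.0, min_duration=0.1))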
Visual fixations and level of attentional processing, pp. 79-85
  Boris M. Velichkovsky; Sascha M. Dornhoefer; Sebastian Pannasch; Pieter J. A. Unema
"Saccade pickers" vs. "fixation pickers": the effect of eye tracking instrumentation on research BIBAFull-Text 87-88
  Keith S. Karn
Users of most video-based eye trackers apply proximity algorithms to identify fixations and assume that saccades are what happen in between. Most video-based eye trackers sample at 60 Hz, a rate which is too low to reliably find small saccades in an eye position record. We propose to call these slower eye trackers and their typical proximity analysis routines "fixation pickers."
   Systems such as dual-Purkinje-image (DPI) trackers, coil systems, and electro-oculography permit higher sampling rates, typically providing the 250 Hz or greater sampling frequency necessary to detect most saccades. Researchers using these types of eye trackers typically focus on identifying saccades using velocity based algorithms and assume that fixations are what happen in between these movements. We propose to call these faster eye trackers and their velocity-based analysis routines "saccade pickers."
   Given that fixation pickers and saccade pickers extract different things from an eye position record, it is no wonder that the two systems yield different results. This discrepancy has become a problem in eye movement research. A study of cognitive processing conducted with one eye tracking system is likely to give results which cannot be easily compared to a study conducted with another eye tracker. Imagine that two investigators are both interested in studying visual search. Both choose the number of saccades as one of their dependent variables to measure search performance. One investigator chooses a video-based fixation picker. The other investigator chooses a DPI-based saccade picker. Because the saccade picker is tuned to identify smaller saccades, the investigator with the DPI tracker reports more saccades and fixations during an equivalent visual search task compared to the investigator with the video-based tracker. Both investigations are likely to produce valid results, but results which are not comparable to the other investigator's.
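As a contrast to the dispersion-based approach sketched earlier, a minimal velocity-threshold "saccade picker" might look like the following; the threshold value, sampling rate, and units are assumptions.

    import numpy as np

    def pick_saccades(t, x, y, velocity_thresh=30.0):
        """Velocity-threshold 'saccade picker' sketch: flag samples whose
        point-to-point velocity exceeds a threshold (deg/s); contiguous runs
        of flagged samples are reported as saccades, and everything in
        between is treated as fixation."""
        t, x, y = map(np.asarray, (t, x, y))
        dt = np.diff(t)
        vel = np.hypot(np.diff(x), np.diff(y)) / dt   # deg/s between samples
        fast = vel > velocity_thresh
        saccades, start = [], None
        for i, f in enumerate(fast):
            if f and start is None:
                start = i
            elif not f and start is not None:
                saccades.append((t[start], t[i]))
                start = None
        if start is not None:
            saccades.append((t[start], t[-1]))
        return saccades

    # 250 Hz example: a fixation, a fast 5-sample jump, another fixation
    t = np.arange(85) / 250.0
    x = np.concatenate([np.zeros(40), np.linspace(0, 8, 5), np.full(40, 8.0)])
    y = np.zeros(85)
    print(pick_saccades(t, x, y))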
Binocular eye tracking in virtual reality for inspection training, pp. 89-96
  Andrew T. Duchowski; Vinay Shivashankaraiah; Tim Rawls; Anand K. Gramopadhye; Brian J. Melloy; Barbara Kanki
This paper describes the development of a binocular eye tracking Virtual Reality system for aircraft inspection training. The aesthetic appearance of the environment is driven by standard graphical techniques augmented by realistic texture maps of the physical environment. A "virtual flashlight" is provided to simulate a tool used by inspectors. The user's gaze direction, as well as head position and orientation, are tracked to allow recording of the user's gaze locations within the environment. These gaze locations, or scanpaths, are calculated as gaze/polygon intersections, enabling comparison of fixated points with stored locations of artificially generated defects located in the environment interior. Recorded scanpaths provide a means of comparison of the performance of experts to novices, thereby gauging the effects of training.
Keywords: eye tracking, virtual reality, visual inspection
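One standard way to compute the gaze/polygon intersections the abstract mentions is a ray-triangle test; the sketch below uses the Moller-Trumbore formulation on an assumed triangle from the environment geometry and is not the authors' code.

    import numpy as np

    def gaze_triangle_intersection(origin, direction, v0, v1, v2, eps=1e-9):
        """Moller-Trumbore ray/triangle test: returns the 3D point where the
        gaze ray (eye position + gaze direction) hits the triangle, or None.
        Logging one such point per gaze sample yields a 3D scanpath."""
        origin, direction = np.asarray(origin, float), np.asarray(direction, float)
        v0, v1, v2 = map(np.asarray, (v0, v1, v2))
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:
            return None                     # ray parallel to triangle plane
        inv = 1.0 / det
        s = origin - v0
        u = np.dot(s, p) * inv
        if u < 0 or u > 1:
            return None
        q = np.cross(s, e1)
        v = np.dot(direction, q) * inv
        if v < 0 or u + v > 1:
            return None
        t = np.dot(e2, q) * inv
        return origin + t * direction if t >= 0 else None

    # Example: gaze from the origin straight down -z onto a wall triangle at z = -2
    print(gaze_triangle_intersection([0, 0, 0], [0, 0, -1],
                                     [-1, -1, -2], [3, -1, -2], [-1, 3, -2]))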
User performance with gaze contingent multiresolutional displays, pp. 97-103
  Lester C. Loschky; George W. McConkie
One way to economize on bandwidth in single-user head-mounted displays is to put high-resolution information only where the user is currently looking. This paper summarizes results from a series of 6 studies investigating spatial, resolutional, and temporal parameters affecting perception and performance in such eye-contingent multi-resolutional displays. Based on the results of these studies, suggestions are made for the design of eye-contingent multi-resolutional displays.
Keywords: bandwidth, dual-resolution displays, eye movements, eyetracking, multiresolutional displays, peripheral vision, visual perception, wavelets
Evaluating variable resolution displays with visual search: task performance and eye movements, pp. 105-109
  Derrick Parkhurst; Eugenio Culurciello; Ernst Niebur
Gaze-contingent variable resolution display techniques allocate computational resources for image generation preferentially to the area around the center of gaze where visual sensitivity to detail is the greatest. Although these techniques are computationally efficient, their behavioral consequences with realistic tasks and materials are not well understood. The behavior of human observers performing visual search of natural scenes using gaze-contingent variable resolution displays is examined. A two-region display was used where a high-resolution region was centered on the instantaneous center of gaze, and the surrounding region was presented in a lower resolution. The radius of the central high-resolution region was varied from 1 to 15 degrees while the total amount of computational resources required to generate the visual display was kept constant. Measures of reaction time, accuracy, and fixation duration suggest that task performance is comparable to that seen for uniform resolution displays when the central region size is approximately 5 degrees.
Keywords: eye movements, variable resolution displays, virtual reality, visual search
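A crude offline approximation of the two-region display described above can be sketched as follows; the pixel radius, downsampling factor, and grayscale image whose dimensions divide evenly by the factor are assumptions, and a real gaze-contingent system would update this per gaze sample with low latency.

    import numpy as np

    def two_region_frame(image, gaze_xy, radius_px=64, lowres_factor=8):
        """Two-region gaze-contingent rendering sketch: keep full resolution
        inside a circle around the current gaze point and show a block-averaged
        (low resolution) version elsewhere."""
        h, w = image.shape[:2]
        f = lowres_factor
        # crude low-resolution background: block average, then nearest-neighbour upsample
        low = image[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
        background = np.kron(low, np.ones((f, f)))[:h, :w]
        ys, xs = np.mgrid[0:h, 0:w]
        inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius_px ** 2
        return np.where(inside, image[:h, :w], background)

    # Example: 480x640 random "scene", gaze at screen centre (320, 240)
    frame = two_region_frame(np.random.rand(480, 640), (320, 240))
    print(frame.shape)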
"GazeToTalk": a nonverbal interface with meta-communication facility (Poster Session) BIBAFull-Text 111
  Tetsuro Chino; Kazuhiro Fukui; Kaoru Suzuki
We propose a new human interface (HI) system named "GazeToTalk" that is implemented by vision based gaze detection, acoustic speech recognition (ASR), and animated human-like agent CG with facial expressions and gestures. The "GazeToTalk" system demonstrates that eye-tracking technologies can be utilized to improve HI effectively by working with other non-verbal messages such as facial expressions and gestures.
   Conventional voice interface systems have the following serious drawbacks: (1) they cannot distinguish between input voice and other noise, and (2) they cannot determine who is the intended hearer of each utterance. A "push-to-talk" mechanism can be used to ease these problems, but it spoils the advantages of voice interfaces (e.g., contact-free operation, suitability in hands-busy situations).
   In real human dialogues, besides exchanging content messages, people use non-verbal messages such as gaze, facial expressions and gestures to establish or maintain conversations, or recover from problems that arise in the conversation.
   The "GazeToTalk" system simulates this kind of "meta-communication" facility by utilizing vision based gaze detection, ASR, and human-like agent CG. When the user intends to input voice commands, he gazes on the agent on the display in order to request to talk, just as in daily human-human dialogues. This gaze is recognized by the gaze detection module and the agent shows a particular facial expression and gestures as a feedback to establish an "eye-contact." Then the system accepts or rejects speech input from the user depending on the state of the "eye-contact."
   This mechanism allows the "GazeToTalk" system to accept only intended voice input and to successfully ignore other voices and environmental noise, without forcing any extra operation on the user. We also demonstrate an extended mechanism to handle more flexible "eye contact" variations.
   The preliminary experiments suggest that in the context of meta-communication, nonverbal messages can be utilized to improve HI in terms of naturalness, friendliness and tactfulness.
Using eye tracking to investigate graphical elements for fully sighted and low vision users, p. 112
  Julie A. Jacko; Armando B. Barreto; Josey Y. M. Chu; Holly S. Bautsch; Gottlieb J. Marmet; Ingrid U. Scott; Robert H. Rosa, Jr.
Eye-movement-contingent release of speech and "SEE" multi-modalities eye tracking system, pp. 113-114
  Weimin Liu
A novel research method, the eye-movement-contingent release of speech method, was developed to examine the use and nature of the sound code in visual word recognition during reading. A new multi-modalities eye tracking system ("SEE") was developed to implement the eye-movement-contingent release of speech method. The SEE system provides the experimenter with synchronized orthogonal manipulations of visual and auditory signals, accurate measurement of oculomotor responses, post-hoc data replay facilities, and statistical analysis tools.
   A new method, eye-movement-contingent release of speech, was developed to study the use of sound codes during visual word recognition. This new method examines effects of spoken -- rather than visual/orthographic -- words on the concurrent recognition of visual words. Speech is presented while text is viewed, and presentation of the speech signal is coordinated with the viewing of a preselected visual target word. During sentence reading, the reader's eye positions are continuously sampled, and a real-time fixation detection algorithm presents the auditory signal when a certain pre-specified eye position criterion (called a boundary) is fulfilled. The auditory stimuli are thus presented contingent on the eye tracking data -- the heard stimuli change with the eye movements -- so that the location of the eye position during visual word perception can be related to the auditory presentation of speech.
   "SEE"-multi-modalities eye tracking system was developed to implement this new method. SEE system was mainly built on a fifth-generation Dual-Purkinje SRI eye tracker but also provides general API supports for use with other eye trackers (i.e. ICAN and SM systems). SEE software toolkit was installed on an IBM compatible personal computer, which was connected with the eye tracker through an Analogue-to-Digital converter.
   The SEE software toolkit is a 32-bit Windows application running under Windows 95/98/NT/2000. It consists of two running phases: a real-time data collection phase and a post-hoc analysis phase. The data collection phase includes modules for calibration, real-time eye data processing, and raw data recording. The real-time data processing module has sub-modules that filter noise from the raw data, detect fixations/saccades, and manually/dynamically recalibrate to correct drift over time. The post-hoc analysis phase provides replay and statistical analysis. The SEE software toolkit also provides visual and audio stimulus presentation.
   Calibration. During calibration, the reader is instructed to fixate a consecutively displayed series of 5-9 target points on the screen whose coordinates are known; the raw data from the eye tracker are collected for each point over a period of time and are used to calculate the parameters of a mapping transformation equation. After successful calibration, the raw digital stream is continuously mapped to X/Y coordinates on the display.
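A minimal sketch of how such a mapping can be fitted from the calibration targets; the second-order polynomial form and the least-squares fit are assumptions rather than the SEE toolkit's actual equation.

    import numpy as np

    def fit_calibration(raw_xy, screen_xy):
        """Fit a mapping from raw tracker output to screen coordinates using
        the known calibration targets. A second-order polynomial surface
        fitted by least squares is one common choice (an assumption)."""
        raw_xy, screen_xy = np.asarray(raw_xy, float), np.asarray(screen_xy, float)
        rx, ry = raw_xy[:, 0], raw_xy[:, 1]
        # design matrix: 1, x, y, xy, x^2, y^2
        A = np.column_stack([np.ones_like(rx), rx, ry, rx * ry, rx ** 2, ry ** 2])
        coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)   # shape (6, 2)
        return coeffs

    def map_to_screen(coeffs, raw_x, raw_y):
        feats = np.array([1.0, raw_x, raw_y, raw_x * raw_y, raw_x ** 2, raw_y ** 2])
        return feats @ coeffs    # -> (screen_x, screen_y)

    # Example: 9-point grid; synthetic raw readings are a scaled/offset version of it
    screen = [(x, y) for y in (100, 384, 668) for x in (160, 512, 864)]
    raw = [(0.8 * x + 30, 0.9 * y - 10) for x, y in screen]
    c = fit_calibration(raw, screen)
    print(np.round(map_to_screen(c, 0.8 * 512 + 30, 0.9 * 384 - 10)))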
   Eye-movement-contingent control. Movements of the eyes control the release of visual and acoustic information. Visual and/or sound boundaries are pre-defined in the sentence for contingent control. When a reader's eye moves across a release boundary, a visual and/or speech signal is presented. For example, visual and sound boundaries may be defined at the spaces before 'yellow' and 'stripe' respectively, with * indicating the fixation location and ( ) indicating the spoken duration.
What eye-movements tell us about ratios and spatial proportions, p. 115
  Catherine Sophian; Martha E. Crosby
Complex computer interfaces often provide quantitative information which the user must integrate. For example, a scroll bar gives information about position in a document. The user can adjust the scroll bar to maintain an appropriate ratio of the portion of the document displayed relative to the document's position. Users are often given icons to select; does an icon's proportionality influence their choice? The task of identifying two shapes that are geometrically similar, though different in size, is a proportional one, but also one in which spatial processing may be important. Our research examines ways in which spatial presentations of data can facilitate the process of integrating quantitative information.
   Mathematically a proportion is defined as a four-term relation among quantities: A is to B as C is to D. In essence, the task of proportional reasoning is to compare the relation between one pair of terms (or quantities) with the relation between the other pair. One of the simplest examples of proportional reasoning occurs in the identification of shapes. Even young children can tell whether two shapes are equally proportioned, in terms of the ratios of their heights to their widths for instance -- even if they are very different in overall size.
   In effect, in this task they are identifying a proportional equivalence between the ratio relating the width and height of the smaller shape and the ratio relating the width and height of the larger one. Although psychological tests of proportional reasoning typically control for perceptual factors, as our shape comparison example illustrates there is nothing in principle that makes a perceptually based judgment of proportionality mathematically unlike other proportionality judgments. What is important is that non-relational bases for task performance be controlled. Thus, for instance, matching the fatness of two shapes that are the same height might not involve a consideration of ratios, since comparisons based on the horizontal dimension alone would be sufficient to determine whether the shapes were the same or different. Differences in overall magnitude are thus an essential element in any assessment of proportional judgments.
   In order to learn more about the visual processes underlying the apprehension of spatial-configurational ratios, we performed an experiment which recorded adults' eye movements during the performance of a shape comparison task. We found that observers do not need to fixate on both shapes in order to make a proportional comparison between them, suggesting that the perception of height: width ratios does not depend on foveal vision. At the same time, it is clear that scanning patterns were affected by characteristics of the scenes. The variation in looking at left vs. right sides of the scenes as a function of whether the correct (fatter) stimulus was on the left or the right is particularly intriguing. It appears that the smaller stimulus could be processed with relatively few fixations, but when it was the correct choice viewers verified its fatness with more extended viewing before responding. An interesting implication of this finding is that visual objects can be too big to be processed with optimal efficiency. Whereas it is often assumed that bigger is better, at least when what viewers need to discern is the overall shape of an object a large size may actually impede rapid apprehension by requiring more extensive scanning. Providing adequate spacing between visual elements may therefore be a more effective means of facilitating processing than enlarging the sizes of those objects (beyond some point). These findings support the idea that, when overall size is not too great, the proportional relations that characterize the shape of an object can be rapidly apprehended, often without directly fixating it. This supports the conjecture with which we began, that spatial configurations can be a powerful way of presenting information about the relations between two quantitative measures.
Technical aspects in the recording of scanpath eye movements, p. 116
  Daniela Zambarbieri; Stefano Ramat; Carlo Robino
The two dimensional representation of the eye movement performed by a subject while exploring a scene is commonly called "scanpath." By examining both the spatial and temporal evolution of the scanpath on the scene presented to the subject we are able to quantify, in a completely objective way, where the subject is looking and for how long his gaze remains on a specific area. The recording of scanpaths during the exploration of a computer display can represent a powerful tool in the context of usability testing.
   In order to study the scanpath of a user interacting with a computer display we need a system able to record the vertical and horizontal components of eye movement with respect to the screen surface. Several systems are currently available on the market that are based on different eye movement recording techniques and each one is characterized by both advantages and drawbacks.
   We have used and compared two such eye-tracking systems that exploit video camera based recording and track the center of the pupil and of the corneal reflections elicited by infrared light illumination. One system consists of a headset supporting two dichroic mirrors and two high-speed CCD cameras for binocular recording. Two sets of infrared light emitting diodes illuminate each eye for the generation of corneal reflections. The headset is heavy and must be fastened very tightly to avoid slipping since any displacement relative to the head can introduce an error in the reconstruction of gaze position on the display.
   The other system is composed of a video camera mounted below the computer screen and exploits the Pupil-Center/Corneal-Reflection method to determine gaze direction. A small infrared light emitting diode illuminates the eye and generates a bright reflection on the cornea. The major advantage of the system is that it operates without any sort of contact with the subject thus allowing even very long acquisition sessions without causing discomfort to the subject. Moreover, since the measure of gaze position is performed with respect to the computer display, small movements of the head do not introduce errors.
   In order to be able to examine a subject's behavior while interacting with a computer system in a most natural way, we developed a software program that allows for the recording of both the gaze and mouse actions of a user while freely navigating the World Wide Web, any hypertext document, and possibly while using any graphical user interface.
   Data analysis is performed offline by reproducing the vertical and horizontal components of eye movement on each page that the subject explored during the recording. Zones of interest can be defined on each page in an interactive way. The software can then compute the number of accesses to each zone, the fixation time for each access (i.e., the time spent by the subject looking inside the zone), and the sequence of exploration (i.e., which zone was observed first, which second, and so on).
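A minimal sketch of the offline zone analysis just described -- access counts, per-access dwell time, and order of first visits -- where the fixation record and zone formats are assumptions, not the authors' software.

    def zone_statistics(fixations, zones):
        """Zone-of-interest analysis sketch. fixations: time-ordered list of
        (x, y, duration); zones: dict name -> (x0, y0, x1, y1). Returns the
        dwell time of each access per zone and the order of first visits."""
        def zone_of(x, y):
            for name, (x0, y0, x1, y1) in zones.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    return name
            return None

        accesses = {name: [] for name in zones}     # dwell times, one per access
        first_visit_order = []
        current, dwell = None, 0.0
        for x, y, dur in fixations:
            z = zone_of(x, y)
            if z != current:                        # entering a new zone (or leaving all)
                if current is not None:
                    accesses[current].append(dwell)
                current, dwell = z, 0.0
                if z is not None and z not in first_visit_order:
                    first_visit_order.append(z)
            if z is not None:
                dwell += dur
        if current is not None:
            accesses[current].append(dwell)
        return accesses, first_visit_order

    # Example: two zones on a page, three fixations
    zones = {'menu': (0, 0, 200, 600), 'content': (200, 0, 1024, 600)}
    fix = [(50, 100, 0.3), (400, 200, 0.5), (60, 110, 0.2)]
    print(zone_statistics(fix, zones))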
Hand eye coordination patterns in target selection, pp. 117-122
  Barton A. Smith; Janet Ho; Wendy Ark; Shumin Zhai
In this paper, we describe the use of eye gaze tracking and trajectory analysis in testing the performance of input devices for cursor control in Graphical User Interfaces (GUIs). By closely studying the behavior of test subjects performing pointing tasks, we can gain a more detailed understanding of the device design factors that may influence the overall performance with these devices. Our results show there are many patterns of hand-eye coordination at the computer interface which differ from patterns found in direct hand pointing at physical targets (Byrne, Anderson, Douglass, & Matessa, 1999).
Keywords: eye tracking, hand eye coordination, motor control, mouse, pointing, pointing stick, target selection, touchpad
Pupillary responses to emotionally provocative stimuli, pp. 123-129
  Timo Partala; Maria Jokiniemi; Veikko Surakka
In two experiments, this paper investigated pupillary responses to emotionally provocative sound stimuli. In experiment one, 30 subjects' pupillary responses were measured while listening to 10 negatively and 10 positively arousing sounds, and 10 emotionally neutral sounds. In addition, the subjects rated their subjective experiences of these stimuli. The results showed that the pupil size was significantly larger after highly arousing positive and negative stimuli than after neutral stimuli with medium arousal.
   In experiment two, the contents of the stimuli were more controlled than in experiment one. 22 subjects' pupillary responses were measured while listening to four negatively and four positively arousing sounds, and four emotionally neutral sounds. The results showed that the pupil size was significantly larger during negative highly arousing stimuli than during moderately arousing positive stimuli. The pupil size was also significantly larger after highly arousing negative stimuli than after moderately arousing neutral and positive stimuli.
   The results of the two experiments suggest that pupil size discriminates during and after different kinds of emotional stimuli. Thus, the measurement of pupil size variation may be a potentially useful computer input signal, for example, for affective computing.
Keywords: affective computing, autonomic nervous system, human emotions, pupil size
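Comparisons like these rest on a baseline-relative pupil measure; a minimal sketch follows, where the window lengths, sampling rate, and synthetic trace are assumptions rather than the authors' analysis.

    import numpy as np

    def pupil_response(pupil, t, stim_onset, baseline=1.0, window=2.0):
        """Baseline-relative pupil response sketch: mean pupil size in the
        post-stimulus window minus the mean in the pre-stimulus baseline.
        Window lengths are in seconds."""
        pupil, t = np.asarray(pupil, float), np.asarray(t, float)
        base = pupil[(t >= stim_onset - baseline) & (t < stim_onset)].mean()
        post = pupil[(t >= stim_onset) & (t < stim_onset + window)].mean()
        return post - base

    # Example: 60 Hz pupil trace with a dilation after a hypothetical arousing sound at t = 5 s
    t = np.arange(0, 10, 1 / 60)
    pupil = 4.0 + 0.4 * (t >= 5.0) * (1 - np.exp(-(t - 5.0)))
    print(round(pupil_response(pupil, t, stim_onset=5.0), 3))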
The response of eye-movement and pupil size to audio instruction while viewing a moving target, pp. 131-138
  Koji Takahashi; Minoru Nakayama; Yasutaka Shimizu
Eye movement reflects a viewer's visual information processing. This study examines whether eye movement responds to the viewer's cognitive load. It is already known that pupil size and blinks can be used as indicators of mental workload. Saccades are rapid eye movements that turn the fovea toward a target of interest. Accordingly, saccades were extracted to observe the viewing process.
   The ocular-following task was conducted together with an audio-response task. The moving target was controlled at visual angles of 3, 5, and 10 degrees. The audio-response task required an oral response. Experimental results showed that pupil size and blink rate increased with visual angle and with the audio-response task. Both increased most when the subject gave an incorrect response in the audio-response task.
   Eye movement was also affected by these factors. Saccadic movement time increased with visual angle and had a negative correlation with blink time. This relationship was observed at larger visual angles, but despite this negative correlation, saccadic movement time increased with incorrect responses. Furthermore, saccade length increased with visual angle and decreased with incorrect responses.
   Saccades can be divided into miniature saccades appearing within a gaze area and larger saccades appearing between gazes. With incorrect responses, saccades made while following targets decreased and saccades made within gazed-at targets increased. This suggests that saccade occurrence changes not only while following targets but also within gazed-at targets. The results of these experiments provide evidence that eye movement can be an index of mental workload.
Keywords: blink, eye-movement, gaze, pupil size, saccadic eye-movement