| ExoInterfaces: novel exoskeleton haptic interfaces for virtual reality, augmented sport and rehabilitation | | BIBAK | Full-Text | 1 | |
| Dzmitry Tsetserukou; Katsunari Sato; Susumu Tachi | |||
| We developed novel haptic interfaces, FlexTorque and FlexTensor, that enable
realistic physical interaction with real and virtual environments. The idea
behind FlexTorque is to reproduce human muscle structure, which allows us to
perform dexterous manipulation and interact safely with the environment in
daily life. FlexTorque suggests new possibilities for highly realistic, very
natural physical interaction in virtual environments. There are no
restrictions on arm movement, and it is not necessary to hold a physical
object during interaction with objects in virtual reality. Because the system
can generate strong forces, even though it is lightweight, easily wearable,
and intuitive, users experience a new level of realism as they interact with
virtual environments. Keywords: augmented games, augmented sport, exoskeleton, force feedback, game
controller, haptic display, haptic interface, rehabilitation, virtual reality | |||
| PossessedHand: a hand gesture manipulation system using electrical stimuli | | BIBAK | Full-Text | 2 | |
| Emi Tamaki; Takashi Miyaki; Jun Rekimoto | |||
| Acquiring knowledge about the timing and speed of hand gestures is important
to learn physical skills, such as playing musical instruments, performing arts,
and making handicrafts. However, it is difficult to use devices that
dynamically and mechanically control a user's hand for learning because such
devices are very large, and hence, are unsuitable for daily use. In addition,
since glove-type devices interfere with actions such as playing musical
instruments, performing arts, and making handicrafts, users tend to avoid
wearing these devices. To solve these problems, we propose PossessedHand, a
device with a forearm belt, for controlling a user's hand by applying
electrical stimulus to the muscles around the forearm of the user. The
dimensions of PossessedHand are 10 x 7.0 x 8.0 cm, and the device is portable
and suited for daily use. The electrical stimuli are generated by an electronic
pulse generator and transmitted from 14 electrode pads. Our experiments
confirmed that PossessedHand can control the motion of 16 joints in the hand.
We propose an application of this device to help a beginner learn how to play
musical instruments such as the piano and koto. Keywords: electrical stimuli, hand gesture, interaction device, output device,
wearable | |||
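A purely illustrative sketch of how a host program might drive such a device: map target joints to electrode pads on the forearm belt and send pulse commands to the pulse generator. The serial protocol, pad mapping, and parameter values below are hypothetical, not the authors' actual interface.

```python
# Hypothetical host-side driver for a PossessedHand-style stimulator.
import serial  # pyserial

JOINT_TO_PAD = {"index_MP": 3, "middle_PIP": 7}  # hypothetical pad mapping

def flex_joint(port, joint, intensity, duration_ms=200):
    """Ask the pulse generator to stimulate the pad mapped to one joint."""
    pad = JOINT_TO_PAD[joint]
    cmd = f"P{pad:02d} I{intensity:03d} D{duration_ms:04d}\n"  # assumed protocol
    port.write(cmd.encode("ascii"))

# usage (hypothetical device path):
# with serial.Serial("/dev/ttyUSB0", 115200) as port:
#     flex_joint(port, "index_MP", intensity=40)
```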
| A GMM based 2-stage architecture for multi-subject emotion recognition using physiological responses | | BIBA | Full-Text | 3 | |
| Gu Yuan; Tan Su Lim; Wong Kai Juan; Ho Moon-Ho Ringo; Qu Li | |||
| There is a trend these days to add emotional characteristics as new features into human-computer interaction to equip machines with more intelligence when communicating with humans. Besides traditional audio-visual techniques, physiological signals provide a promising alternative for automatic emotion recognition. Ever since Dr. Picard and colleagues brought forward the initial concept of physiological-signal-based emotion recognition, various studies have been reported following the same system structure. In this paper, we implemented a novel 2-stage architecture for the emotion recognition system in order to improve performance in a multi-subject context, which is a more realistic, practical implementation. Instead of directly classifying data from all the mixed subjects, a preliminary step transforms a traditional subject-independent case into several subject-dependent cases by assigning each incoming sample to an existing subject model using a Gaussian Mixture Model (GMM). For simultaneous classification of four affective states, the correct classification rate (CCR) shows a significant improvement from 80.7% to over 90%, which supports the feasibility of the system. | |||
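A minimal sketch of the two-stage scheme described above: stage 1 assigns an incoming sample to the most likely subject via per-subject GMMs; stage 2 then applies that subject's own emotion classifier. Model choices (scikit-learn, SVC for stage 2, component counts) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def train(per_subject_data):
    """per_subject_data: {subject_id: (features, emotion_labels)}"""
    subject_gmms, emotion_clfs = {}, {}
    for sid, (X, y) in per_subject_data.items():
        gmm = GaussianMixture(n_components=4, covariance_type="diag",
                              random_state=0).fit(X)
        subject_gmms[sid] = gmm              # stage 1: subject model
        emotion_clfs[sid] = SVC().fit(X, y)  # stage 2: per-subject classifier
    return subject_gmms, emotion_clfs

def classify(x, subject_gmms, emotion_clfs):
    x = np.atleast_2d(x)
    # Stage 1: pick the subject model with the highest log-likelihood.
    sid = max(subject_gmms, key=lambda s: subject_gmms[s].score(x))
    # Stage 2: subject-dependent emotion classification.
    return emotion_clfs[sid].predict(x)[0]
```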
| Gaze-directed ubiquitous interaction using a Brain-Computer Interface | | BIBAK | Full-Text | 4 | |
| Dieter Schmalstieg; Alexander Bornik; Gernot Müller-Putz; Gert Pfurtscheller | |||
In this paper, we present a first proof-of-concept for using a mobile
Brain-Computer Interface (BCI) coupled to a wearable computer as an ambient
input device for a ubiquitous computing service. BCI devices, such as
electroencephalogram (EEG) based BCI, can be used as a novel form of
human-computer interaction device. A user can log into a nearby computer
terminal by looking at its screen. This feature is enabled by detecting a
user's gaze through the analysis of the brain's response to visually evoked
patterns. We present the experimental setup and discuss opportunities and
limitations of the technique. Keywords: authentication, biometrics, brain computer interface, electroencephalogram,
gaze tracking, object selection | |||
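The abstract does not spell out the detection method; one common realization of "detecting gaze from the brain's response to visually evoked patterns" is SSVEP, where each terminal's screen flickers at a distinct frequency and occipital EEG shows elevated power at the frequency of the screen being looked at. The sketch below assumes that approach; frequencies and thresholds are illustrative.

```python
import numpy as np
from scipy.signal import welch

TERMINAL_FREQS = {"terminal_A": 8.0, "terminal_B": 13.0}  # Hz, assumed

def detect_gazed_terminal(eeg, fs=256, threshold=2.0):
    """eeg: 1-D occipital EEG segment; returns a terminal id or None."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    baseline = np.median(psd)
    best, best_ratio = None, threshold
    for terminal, f in TERMINAL_FREQS.items():
        # Power at the stimulus frequency relative to the spectrum's median.
        ratio = psd[np.argmin(np.abs(freqs - f))] / baseline
        if ratio > best_ratio:
            best, best_ratio = terminal, ratio
    return best
```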
| Relevance of EEG input signals in the augmented human reader | | BIBAK | Full-Text | 5 | |
| Inês Oliveira; Ovidiu Grigore; Nuno Guimarães; Luís Duarte | |||
| This paper studies the discrimination of electroencephalographic (EEG)
signals based on their capacity to identify silent attentive visual reading
activities versus non-reading states.
The use of physiological signals is growing in the design of interactive systems due to their relevance in improving the coupling between user states and application behavior. Reading is pervasive in visual user interfaces. In previous work, we integrated EEG signals into prototypical applications designed to analyze reading tasks. This work searches for the signals that are most relevant for reading detection. More specifically, this study determines which features, input signals, and frequency bands are most significant for discriminating between reading and non-reading classes. This optimization is critical for an efficient, real-time implementation of EEG processing software components, a basic requirement for future applications. We use probabilistic similarity metrics, independent of the classification algorithm. All analyses are performed after determining the power spectral density of the delta, theta, alpha, beta and gamma rhythms. The results about the relevance of the input signals are validated against functional neuroscience knowledge. The experiments were performed in a conventional HCI lab, with non-clinical EEG equipment and setup; this was an explicit and voluntary condition. We anticipate that future mobile and wireless EEG capture devices will allow this work to be generalized to common applications. Keywords: EEG processing and classification, HCI, feature relevance measurement,
reading detection, similarity metrics | |||
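A minimal sketch of the analysis pipeline described above: compute per-band power with Welch's method, then rank each channel/band feature by a classifier-independent separability score between "reading" and "non-reading" samples. The symmetrised KL divergence between univariate Gaussians used here is an assumed stand-in for the paper's probabilistic similarity metrics.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=256):
    """eeg: (channels, samples) -> (channels, bands) mean band power."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    return np.stack([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
                     for lo, hi in BANDS.values()], axis=-1)

def separability(reading_feats, nonreading_feats):
    """Per-feature symmetrised KL divergence between class Gaussians."""
    m1, v1 = reading_feats.mean(0), reading_feats.var(0) + 1e-12
    m2, v2 = nonreading_feats.mean(0), nonreading_feats.var(0) + 1e-12
    return 0.5 * (v1 / v2 + v2 / v1
                  + (m1 - m2) ** 2 * (1 / v1 + 1 / v2)) - 1
```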
| Brain Computer Interfaces for inclusion | | BIBAK | Full-Text | 6 | |
| P. J. McCullagh; M. P. Ware; G. Lightbody | |||
| In this paper, we describe an intelligent graphical user interface (IGUI)
and a User Application Interface (UAI) tailored to Brain Computer Interface
(BCI) interaction, designed for people with severe communication needs. The
IGUI has three components: a two-way interface for communication with BCI2000
concerning user events and event handling; an interface to user applications
concerning the passing of user commands and associated device identifiers, and
the receiving of notification of device status; and an interface to an
Extensible Markup Language (XML) file containing menu content definitions. The
interface has achieved control of domotic applications. The architecture
however permits control of more complex 'smart' environments and could be
extended further for entertainment by interacting with media devices. Using
components of the electroencephalogram (EEG) to mediate expression is also
technically possible, but is much more speculative, and without proven
efficacy. The IGUI-BCI approach described could potentially find wider use in
the augmentation of the general population, to provide alternative computer
interaction, an additional control channel and experimental leisure activities. Keywords: brain computer interfaces, domotic control, entertainment, user interface | |||
| Emotion detection using noisy EEG data | | BIBAK | Full-Text | 7 | |
| Mina Mikhail; Khaled El-Ayat; Rana El Kaliouby; James Coan; John J. B. Allen | |||
| Emotion is an important aspect in the interaction between humans. It is
fundamental to human experience and rational decision-making. There is great
interest in detecting emotions automatically. A number of techniques have been
employed for this purpose using channels such as voice and facial expressions.
However, these channels are not very accurate because they can be affected by
users' intentions. Other techniques use physiological signals along with
electroencephalography (EEG) for emotion detection. However, these approaches
are not very practical for real time applications because they either ask the
participants to reduce any motion and facial muscle movement or reject EEG data
contaminated with artifacts. In this paper, we propose an approach that
analyzes highly contaminated EEG data produced from a new emotion elicitation
technique. We also use a feature selection mechanism to extract features that
are relevant to the emotion detection task based on neuroscience findings. We
reached an average accuracy of 51% for joy, 53% for anger, 58% for fear
and 61% for sadness. Keywords: affective computing, brain signals, feature extraction, support vector
machines | |||
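A minimal sketch of the classification stage implied by the "feature extraction" and "support vector machines" keywords: feature selection followed by an SVM. The univariate selector below is an assumed stand-in for the paper's neuroscience-based selection mechanism.

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_emotion_classifier(k_features=20):
    """Pipeline: scale EEG features, keep the k most relevant, classify."""
    return make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=k_features),
                         SVC(kernel="rbf"))

# usage: clf = build_emotion_classifier(); clf.fit(X_train, y_train)
```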
| World's first wearable humanoid robot that augments our emotions | | BIBAK | Full-Text | 8 | |
| Dzmitry Tsetserukou; Alena Neviarouskaya | |||
| In this paper we propose a conceptually novel approach to reinforcing
(intensifying) one's own feelings and reproducing (simulating) the emotions
felt by the partner during online communication through a wearable humanoid
robot. The core component, the Affect Analysis Model, automatically recognizes
nine emotions from text. The detected emotion is then conveyed by innovative
haptic devices integrated into the robot. The implemented system can
considerably enhance the
emotionally immersive experience of real-time messaging. Users can not only
exchange messages but also emotionally and physically feel the presence of the
communication partner (e.g., family member, friend, or beloved person). Keywords: 3D world, affective user interfaces, haptic communication, haptic display,
instant messaging, online communication, tactile display, wearable humanoid
robot | |||
| KIBITZER: a wearable system for eye-gaze-based mobile urban exploration | | BIBAK | Full-Text | 9 | |
| Matthias Baldauf; Peter Fröhlich; Siegfried Hutter | |||
| Due to the vast amount of available georeferenced information, novel
techniques to interact with such content more intuitively and efficiently are
increasingly required. In this paper, we introduce KIBITZER, a lightweight
wearable system that enables the browsing of urban surroundings for annotated
digital information. KIBITZER exploits its user's eye-gaze as a natural
indicator of attention to identify objects-of-interest and offers speech- and
non-speech
auditory feedback. Thus, it provides the user with a 6th sense for digital
georeferenced information. We present a description of our system's
architecture and the interaction technique and outline experiences from first
functional trials. Keywords: eye-gaze, mobile spatial interaction, wearable computing | |||
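A minimal sketch of the gaze-based selection step such a system needs: combine the user's position and gaze bearing into a ray and pick the annotated object closest to that ray within an angular tolerance. The flat-earth geometry and tolerance value are simplifying assumptions, not KIBITZER's actual implementation.

```python
import math

def bearing_to(user, obj):
    """Approximate bearing (degrees) from user to object, both (lat, lon)."""
    dlat = obj[0] - user[0]
    dlon = (obj[1] - user[1]) * math.cos(math.radians(user[0]))
    return math.degrees(math.atan2(dlon, dlat)) % 360

def object_of_interest(user_pos, gaze_bearing, objects, tolerance=5.0):
    """objects: {name: (lat, lon)}; returns the best match or None."""
    best, best_err = None, tolerance
    for name, pos in objects.items():
        # Smallest signed angular difference between gaze and object bearing.
        err = abs((bearing_to(user_pos, pos) - gaze_bearing + 180) % 360 - 180)
        if err < best_err:
            best, best_err = name, err
    return best
```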
| Airwriting recognition using wearable motion sensors | | BIBA | Full-Text | 10 | |
| Christoph Amma; Dirk Gehrig; Tanja Schultz | |||
| In this work we present a wearable input device which enables the user to enter text into a computer. The text is written into the air via character gestures, as if on an imaginary blackboard. To allow hands-free operation, we designed and implemented a data glove equipped with three gyroscopes and three accelerometers to measure hand motion. Data is sent wirelessly to the computer via Bluetooth. We use HMMs for character recognition and concatenated character models for word recognition. As features, we use normalized raw sensor signals. Experiments on single-character and word recognition are performed to evaluate the end-to-end system. On a character database with 10 writers, we achieve an average writer-dependent character recognition rate of 94.8% and a writer-independent character recognition rate of 81.9%. Based on a small vocabulary of 652 words, we achieve a single-writer word recognition rate of 97.5%, a performance we deem sufficient for many applications. The final system is integrated into an online word recognition demonstration system to showcase its applicability. | |||
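A hedged sketch of the recognition back-end: one Gaussian HMM per character, trained on normalized 6-D inertial frames (3 gyro + 3 accel), with classification by maximum log-likelihood. The library choice (hmmlearn) and model topology are illustrative; the paper's exact configuration is not given in the abstract.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_character_models(samples_per_char, n_states=8):
    """samples_per_char: {char: [sequence (T_i, 6) of normalized signals]}"""
    models = {}
    for char, seqs in samples_per_char.items():
        X = np.concatenate(seqs)           # hmmlearn wants stacked sequences
        lengths = [len(s) for s in seqs]   # ...plus their individual lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag")
        models[char] = m.fit(X, lengths)
    return models

def recognize(seq, models):
    """Return the character whose HMM scores the sequence highest."""
    return max(models, key=lambda c: models[c].score(seq))
```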
| Augmenting the driver's view with realtime safety-related information | | BIBAK | Full-Text | 11 | |
| Peter Fröhlich; Raimund Schatz; Peter Leitner; Matthias Baldauf; Stephan Mantler | |||
| In the last couple of years, in-vehicle information systems have advanced in
terms of design and technical sophistication. This trend manifests itself in
the current evolution of navigation devices towards advanced 3D visualizations
as well as real-time telematics services. We present important constituents for
the design space of realistic visualizations in the car and introduce
realization potentials in advanced vehicle-to-infrastructure application
scenarios. To evaluate this design space, we conducted a driving simulator
study, in which the in-car HMI was systematically manipulated with regard to
its representation of the outside world. The results show that in the context
of safety-related applications, realistic views provide higher perceived safety
than with traditional visualization styles, despite their higher visual
complexity. We also found that the more complex the safety recommendation the
HMI has to communicate, the more drivers perceive a realistic visualization as
a valuable support. In a comparative inquiry after the experiment, we found
that egocentric and bird's eye perspectives are preferred to top-down
perspectives for in-car safety information systems. Keywords: realistic visualization, telematics, user studies | |||
| An experimental augmented reality platform for assisted maritime navigation | | BIBAK | Full-Text | 12 | |
| Olivier Hugues; Jean-Marc Cieutat; Pascal Guitton | |||
| This paper deals with integrating a vision system, combining an efficient
thermal camera and a classical one, into maritime navigation software based on
a virtual environment (VE). We then present an exploratory field of augmented
reality (AR) in situations of mobility and the different applications linked
to work at sea enabled by adding this functionality. This work was carried out
thanks to
a CIFRE agreement within the company MaxSea Int. Keywords: augmented reality, combining exteroceptive data, human factor, image
processing, mixed environment | |||
| Skier-ski system model and development of a computer simulation aiming to improve skier's performance and ski | | BIBAK | Full-Text | 13 | |
| François Roux; Gilles Dietrich; Aude-Clémence Doix | |||
| Background. Based on personal experience of ski teaching, ski training and
ski competing, we have noticed that some gaps exist between classical models
describing body-techniques and actual motor acts made by performing athletes.
The evolution of new parabolic-shaped skis with new mechanical and geometric
characteristics increases these differences even more. Generally, scientific
research focuses on situations where skiers are separated from their skis.
Also, many specialized magazines, handbooks and papers print articles with
similar epistemology. In this paper, we describe the development of a
three-dimensional analysis to model the skier-ski system. We subsequently
used the model to propose an evaluation template to coaches that includes eight
techniques and three observable consequences in order to make objective
evaluations of their athletes' body-techniques. Once the system is modeled, we
can develop a computer simulation in the form of a jumping jack, respecting
degrees of freedom of the model. We can manipulate the movement of each body
segment or the characteristics of the ski gear to detect performance
variations. The purpose of this project is to elaborate assumptions to improve
performance and propose experimental protocols to coaches to enable them to
evaluate performance. This computer simulation also applies to board and
wheeled sports.
Methods. Eleven elite alpine skiers participated. Video cameras were used to observe motor acts in alpine skiers in two tasks: slalom and giant slalom turns. Kinematic data were input into the 3D Vision software. Two on-board balances were used to measure the six components of ski-boots→skis torques. All data sources were then synchronized.
Findings. We found correlations between force and torque measurements, the progression of the center of pressure, and the eight body-techniques. Based on these results, we created a technological model of the skier-ski system. We then created a reading template and a model for coaching young alpine skiers in clubs as well as World Cup alpine skiers, and we obtained results demonstrating the usefulness of our research.
Interpretation. These results suggest that it is now possible to create a three-dimensional simulator of an alpine skier. This tool is able to compare competitors' body-techniques to detect the best-performing ones. Additionally, it is potentially helpful for considering and evaluating new techniques and ski characteristics. Keywords: computer simulation, elite skiing, skier-ski system, techniques reading
template | |||
| T.A.C: augmented reality system for collaborative tele-assistance in the field of maintenance through internet | | BIBAK | Full-Text | 14 | |
| Sébastien Bottecchia; Jean-Marc Cieutat; Jean-Pierre Jessel | |||
| In this paper we shall present the T.A.C.
(Télé-Assistance-Collaborative) system whose aim is to combine
remote collaboration and industrial maintenance. T.A.C. enables the copresence
of parties within the framework of a supervised maintenance task to be remotely
"simulated" thanks to augmented reality (AR) and audio-video communication. To
support such cooperation, we propose a simple way of interacting through our
O.A.P. paradigm and AR goggles specially developed for the occasion. The
handling of 3D items to reproduce gestures and an additional knowledge
management tool (e-portfolio, feedback, etc.) also enables this solution to
satisfy the new needs of industry. Keywords: TeleAssistance, augmented reality, cognitive psychology, collaboration,
computer vision | |||
| Designing and evaluating advanced interactive experiences to increase visitor's stimulation in a museum | | BIBAK | Full-Text | 15 | |
| Bénédicte Schmitt; Cedric Bach; Emmanuel Dubois; Francis Duranthon | |||
| In this paper, we describe the design and a pilot study of two Mixed
Interactive Systems (MIS), interactive systems combining digital and physical
artifacts. These MIS aim at stimulating visitors of a Museum of Natural History
about a complex phenomenon. This phenomenon is the pond eutrophication that is
a breakdown of a dynamical equilibrium caused by human activities: this
breakdown results in a pond unfit for life. This paper discusses the
differences between these two MIS prototypes, the design process that lead to
their implementation and the dimensions used to evaluate these prototypes: user
experience (UX), usability of the MIS and the users' understanding of the
eutrophication phenomenon. Keywords: advanced interactive experience, co-design, eutrophication, mixed
interactive systems, museology | |||
| Partial matching of garment panel shapes with dynamic sketching design | | BIBA | Full-Text | 16 | |
| Shuang Liang; Rong-Hua Li; George Baciu; Eddie C. L. Chan; Dejun Zheng | |||
| In the past decade, the fashion industry and textile manufacturing have begun to reapply enhanced intelligent CAD process technologies. In this paper, we propose a partial panel matching system to facilitate the typical garment design process. The system provides recommendations to the designer during the panel design process and performs partial matching of the garment panel shapes. There are three main parts in our partial matching system. First, we make use of a Bézier-based sketch regularization to pre-process the panel sketch data. Second, we propose a set of bi-segment panel shape descriptors to describe and enrich the local features of the shape for partial matching. Finally, based on our previous work, we add an interactive sketching input environment to design garments. Experimental results show the effectiveness and efficiency of the proposed system. | |||
| Fur interface with bristling effect induced by vibration | | BIBAK | Full-Text | 17 | |
| Masahiro Furukawa; Yuji Uema; Maki Sugimoto; Masahiko Inami | |||
| Wearable computing technology is one of the methods that can augment the
information processing ability of humans. However, in this area, a soft surface
is often necessary to maximize the comfort and practicality of such wearable
devices. Thus in this paper, we propose a soft surface material, with an
organic bristling effect achieved through mechanical vibration, as a new user
interface. We have used fur in order to exhibit the visually rich
transformation induced by the bristling effect while also achieving the full
tactile experience and benefits of soft materials. Our method needs only a
layer of fur and simple vibration motors. The hairs of the fur instantly
bristle under purely horizontal mechanical vibration, provided by a
simple vibration motor embedded below the fur material. This technology has
significant potential as garment textiles or to be utilized as a general soft
user interface. Keywords: computational clothing, computational fashion, pet robot, physical computer
interfaces, soft user interface, visual and haptic design | |||
| Evaluating cross-sensory perception of superimposing virtual color onto real drink: toward realization of pseudo-gustatory displays | | BIBAK | Full-Text | 18 | |
| Takuji Narumi; Munehiko Sato; Tomohiro Tanikawa; Michitaka Hirose | |||
| In this research, we aim to realize a gustatory display that enhances our
experience of enjoying food. However, generating a sense of taste is very
difficult because the human gustatory system is quite complicated and is not
yet fully understood. This is so because gustatory sensation is based on
chemical signals whereas visual and auditory sensations are based on physical
signals. In addition, the brain perceives flavor by combining the senses of
gustation, smell, sight, warmth, memory, etc. The aim of our research is to
apply the complexity of the gustatory system in order to realize a
pseudo-gustatory display that presents flavors by means of visual feedback.
This paper reports on a prototype of such a display that enables us to
experience various tastes from a drink without changing its chemical
composition, through the superimposition of virtual color. The fundamental
thrust of our experiment
is to evaluate the influence of cross-sensory effects by superimposing virtual
color onto actual drinks and recording the responses of subjects who drink
them. On the basis of experimental results, we concluded that visual feedback
sufficiently affects our perception of flavor to justify the construction of
pseudo-gustatory displays. Keywords: cross-sensory perception, gustatory display, pseudo-gustation | |||
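A minimal sketch of the visual side of such a pseudo-gustatory display: shift the hue of the drink region in a video frame while keeping brightness, so the drink appears to take on a different flavor color. The precomputed region mask is a simplifying assumption; the actual system's tracking/projection pipeline is not reproduced here.

```python
import cv2
import numpy as np

def recolor_drink(frame_bgr, drink_mask, target_hue):
    """target_hue: OpenCV hue in [0, 179]; drink_mask: uint8, 255 inside drink."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Replace hue only where the drink is; keep the rest of the frame intact.
    hsv[..., 0] = np.where(drink_mask == 255, target_hue, hsv[..., 0])
    # Boost saturation inside the drink so the virtual color reads clearly.
    sat = hsv[..., 1].astype(np.int16)
    hsv[..., 1] = np.where(drink_mask == 255,
                           np.clip(sat + 60, 0, 255), sat).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```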
| The Reading Glove: designing interactions for object-based tangible storytelling | | BIBAK | Full-Text | 19 | |
| Joshua Tanenbaum; Karen Tanenbaum; Alissa Antle | |||
| In this paper we describe a prototype Tangible User Interface (TUI) for
interactive storytelling that explores the semantic properties of tangible
interactions using the fictional notion of psychometry as inspiration. We
propose an extension of Heidegger's notions of "ready-to-hand" and
"present-at-hand", which allows them to be applied to the narrative and
semantic aspects of an interaction. The Reading Glove allows interactors to
extract narrative "memories" from a collection of ten objects using natural
grasping and holding behaviors via a wearable interface. These memories are
presented in the form of recorded audio narration. We discuss the design
process and present some early results from an informal pilot study intended to
refine these design techniques for future tangible interactive narratives. Keywords: interactive narrative, object stories, tangible user interfaces, wearable
computing | |||
| Control of augmented reality information volume by glabellar fader | | BIBAK | Full-Text | 20 | |
| Hiromi Nakamura; Homei Miyashita | |||
| In this paper, we propose a device for controlling the volume of augmented
reality information by glabellar movement. Our purpose is to avoid increasing
the total amount of information perceived as "Real Space + Augmented Reality"
through intuitive and seamless control. To this end, we focus on the movement
of the glabella (the area between the eyebrows) when the user stares at
objects as a trigger for information presentation. The system detects the
movement of the eyebrows from the amount of light reflected by a
photo-reflector, and controls the information volume or the transparency of
objects in augmented reality space. Keywords: glabellar, information volume, photo reflector | |||
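A minimal sketch of the fader logic described above: normalize the photo-reflector reading between calibrated "relaxed" and "furrowed" levels and use it as the opacity of the AR annotation layer. The calibration constants and smoothing factor are illustrative assumptions.

```python
RELAXED, FURROWED = 820, 310   # raw sensor readings, assumed calibration

class GlabellarFader:
    def __init__(self, alpha=0.2):
        self.alpha = alpha     # exponential smoothing factor
        self.level = 0.0

    def update(self, raw):
        """raw: photo-reflector reading -> AR layer opacity in [0, 1]."""
        t = (RELAXED - raw) / (RELAXED - FURROWED)  # furrowing -> higher t
        t = min(max(t, 0.0), 1.0)
        self.level += self.alpha * (t - self.level)  # smooth out jitter
        return self.level
```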
| Towards mobile/wearable device electrosmog reduction through careful network selection | | BIBAK | Full-Text | 21 | |
| Jean-Marc Seigneur; Xavier Titi; Tewfiq El Maliki | |||
| There is some concern regarding the effect of smartphones and other
wearable devices that use wireless communication and are worn very close to
the user's body. In this paper, we propose a new network switching
selection model and its algorithms that minimize the non-ionizing radiation of
these devices during use. We validate the model and its algorithms with a
proof-of-concept implementation on the Android platform. Keywords: electrosmog, wireless hand-over | |||
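A minimal sketch of radiation-aware network selection in the spirit of the paper: among the interfaces able to carry the current traffic, prefer the one with the lowest estimated exposure (transmit power weighted by proximity to the body). The exposure figures are illustrative assumptions, not measurements from the paper.

```python
def select_interface(interfaces, required_kbps):
    """interfaces: [{'name', 'kbps', 'tx_mw', 'body_factor'}, ...]"""
    usable = [i for i in interfaces if i["kbps"] >= required_kbps]
    if not usable:
        return None
    # Minimize estimated exposure among interfaces meeting the demand.
    return min(usable, key=lambda i: i["tx_mw"] * i["body_factor"])["name"]

# usage with assumed figures: low-power WiFi beats cellular when both
# satisfy the bandwidth requirement.
print(select_interface(
    [{"name": "3G",   "kbps": 384,  "tx_mw": 250, "body_factor": 1.0},
     {"name": "WiFi", "kbps": 5000, "tx_mw": 32,  "body_factor": 1.0}],
    required_kbps=128))  # -> WiFi
```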
| Bouncing Star project: design and development of augmented sports application using a ball including electronic and wireless modules | | BIBAK | Full-Text | 22 | |
| Osamu Izuta; Toshiki Sato; Sachiko Kodama; Hideki Koike | |||
| In our project, we created a new ball, "Bouncing Star" (Hane-Boshi in
Japanese), containing electronic devices. We also created an augmented sports
system using Bouncing Star and a computer program to support an interface
between the digital and the physical world. This program is able to recognize
the ball's state of motion (static, rolling, thrown, bouncing, etc.) by
analyzing data received through a wireless module. The program also tracks the
ball's position through image recognition techniques. Using this system, we
developed augmented sports applications which integrate real-time dynamic
computer graphics and responsive sounds synchronized with the ball's
characteristics of motion. Our project's goal is to establish a new dynamic
form of entertainment which can be realized through the combination of the ball
and digital technologies. Keywords: augmented sports, ball interface, computer-supported cooperative play, image
recognition, interactive surface, sensing technology, wireless module | |||
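A hedged sketch of the motion-state recognition step: classify the ball's state from the magnitude of the on-board accelerometer signal received over the wireless link. The features and thresholds are illustrative; the actual program's logic is not published in the abstract.

```python
import numpy as np

def ball_state(accel_window, g=9.8):
    """accel_window: (N, 3) recent accelerometer samples in m/s^2."""
    mag = np.linalg.norm(accel_window, axis=1)
    if mag.max() > 4 * g:
        return "bouncing"           # sharp impact spike
    if mag.mean() < 0.5:
        return "thrown"             # near free-fall while in flight
    if np.abs(mag - g).mean() < 0.5:
        return "static"             # only gravity, little variation
    return "rolling"                # sustained fluctuation around gravity
```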
| On-line document registering and retrieving system for AR annotation overlay | | BIBAK | Full-Text | 23 | |
| Hideaki Uchiyama; Julien Pilet; Hideo Saito | |||
| We propose a system that registers and retrieves text documents to annotate
them on-line. The user registers a text document captured from a nearly top
view and adds virtual annotations. When the user thereafter captures the
document again, the system retrieves and displays the appropriate annotations,
in real-time and at the correct location. Registering and deleting documents is
done by user interaction. Our approach relies on LLAH, a hashing based method
for document image retrieval. At the on-line registering stage, our system
extracts keypoints from the input image and stores their descriptors computed
from their neighbors. After registration, our system can quickly find the
stored document corresponding to an input view by matching keypoints. From the
matches, our system estimates the geometrical relationship between the camera
and the document for accurately overlaying the annotations. In the
experimental results, we show that our system achieves on-line and real-time
performance. Keywords: LLAH, pose estimation, augmented reality, document retrieval, feature
matching | |||
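A simplified sketch of LLAH-style (Locally Likely Arrangement Hashing) keypoint description: for each keypoint, take its nearest neighbors and hash discretized ratios of triangle areas, which are stable under the geometric distortion of a tilted page. The neighborhood size and quantization below are illustrative simplifications of the full LLAH scheme.

```python
import itertools

def area(p, q, r):
    """Triangle area from 2-D points."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (q[1] - p[1]) * (r[0] - p[0])) / 2.0

def llah_descriptor(neighbors, levels=8):
    """neighbors: list of (x, y) of the n nearest keypoints, n >= 4."""
    ratios = []
    for a, b, c, d in itertools.combinations(range(len(neighbors)), 4):
        r = area(neighbors[a], neighbors[b], neighbors[c]) / \
            (area(neighbors[a], neighbors[b], neighbors[d]) + 1e-9)
        ratios.append(min(int(r), levels - 1))  # quantize each area ratio
    return hash(tuple(ratios))                   # index into a hash table
```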
| Augmenting human memory using personal lifelogs | | BIBAK | Full-Text | 24 | |
| Yi Chen; Gareth J. F. Jones | |||
| Memory is a key human facility to support life activities, including social
interactions, life management and problem solving. Unfortunately, our memory is
not perfect. Normal individuals will have occasional memory problems which can
be frustrating, while those with memory impairments can often experience a
greatly reduced quality of life. Augmenting memory has the potential to make
normal individuals more effective, and those with significant memory problems
to have a higher general quality of life. Current technologies are now making
it possible to automatically capture and store daily life experiences over an
extended period, potentially even over a lifetime. This type of data
collection, often referred to as a personal life log (PLL), can include data
such as continuously captured pictures or videos from a first person
perspective, scanned copies of archival material such as books, electronic
documents read or created, and emails and SMS messages sent and received, along
with context data of time of capture and access and location via GPS sensors.
PLLs offer the potential for memory augmentation. Existing work on PLLs has
focused on the technologies of data capture and retrieval, but little work has
been done to explore how these captured data and retrieval techniques can be
applied to actual use by normal people in supporting their memory. In this
paper, we explore normal people's needs for augmenting human memory, based on
the psychology literature on the mechanisms of memory problems, and
discuss the possible functions that PLLs can provide to support these memory
augmentation needs. Based on this, we also suggest guidelines for data
capture, retrieval needs, and computer-based interface design. Finally, we
introduce our work-in-progress prototype PLL search system in the iCLIPS
project to give an example of augmenting human memory with PLLs and
computer-based
interfaces. Keywords: augmented human memory, context-aware retrieval, lifelogs, personal
information archives | |||
| Aided eyes: eye activity sensing for daily life | | BIBAK | Full-Text | 25 | |
| Yoshio Ishiguro; Adiyan Mujibiya; Takashi Miyaki; Jun Rekimoto | |||
| Our eyes collect a considerable amount of information when we use them to
look at objects. In particular, eye movement allows us to gaze at an object and
shows our level of interest in the object. In this research, we propose a
method that involves real-time measurement of eye movement for human memory
enhancement; the method employs gaze-indexed images captured using a video
camera that is attached to the user's glasses. We present a prototype system
with an infrared-based corneal limbus tracking method. Although the existing
eye tracker systems track eye movement with high accuracy, they are not
suitable for daily use because the mobility of these systems is incompatible
with a high sampling rate. Our prototype has small phototransistors, infrared
LEDs, and a video camera, which make it possible to attach the entire system to
the glasses. Additionally, the accuracy of this method is compensated for by
combining image processing methods and contextual information, such as eye
direction, for information extraction. We develop an information extraction
system with real-time object recognition in the user's visual attention area by
using the prototype of an eye tracker and a head-mounted camera. We apply this
system to (1) fast object recognition by using a SURF descriptor that is
limited to the gaze area and (2) descriptor matching of a past-images database.
Face recognition using Haar-like object features and text logging using
OCR technology are also implemented. The combination of a low-resolution camera
and a high-resolution, wide-angle camera is studied for high daily usability.
The possibility of gaze-guided computer vision is discussed in this paper, as
is the topic of communication by the phototransistor in the eye tracker and
the development of a sensor system that has a high transparency. Keywords: eye tracking, gaze information, information extracting for lifelog, lifelog
computing | |||
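A hedged sketch of step (1) above: restrict feature extraction to a window around the gaze point before matching against a past-image database. The paper uses a SURF descriptor; ORB is substituted here because SURF is patented and lives in OpenCV's contrib module. The window size and matching criterion are illustrative.

```python
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_at_gaze(frame_gray, gaze_xy, db_descriptors, half=80):
    """gaze_xy: pixel coords (int); db_descriptors: {image_id: ORB array}."""
    x, y = gaze_xy
    # Only the region around the gaze point is processed, as in the paper.
    roi = frame_gray[max(0, y - half):y + half, max(0, x - half):x + half]
    _, desc = orb.detectAndCompute(roi, None)
    if desc is None:
        return None
    # Return the database image with the most cross-checked matches.
    return max(db_descriptors,
               key=lambda k: len(matcher.match(desc, db_descriptors[k])))
```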