[1]
Recognizing Depression from Twitter Activity
Understanding Health through Online Behavior
/
Tsugawa, Sho
/
Kikuchi, Yusuke
/
Kishino, Fumio
/
Nakajima, Kosuke
/
Itoh, Yuichi
/
Ohsaki, Hiroyuki
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.3187-3196
© Copyright 2015 ACM
Summary: In this paper, we extensively evaluate the effectiveness of using a user's
social media activities for estimating the degree of depression. As ground truth
data, we use the results of a web-based questionnaire that measures the degree
of depression of Twitter users. We extract several features from the activity
histories of Twitter users and, by leveraging these features, construct models
for estimating the presence of active depression. Through experiments, we show
that (1) features obtained from user activities can predict the depression of
users with an accuracy of 69%, (2) topics of tweets estimated with a topic
model are useful features, and (3) approximately two months of observation
data are necessary for recognizing depression; longer observation periods do
not improve the accuracy of estimation for current depression and sometimes
worsen it.
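
The abstract outlines a concrete pipeline: activity features plus topic-model
features, fed to a binary classifier. The following is a minimal sketch of that
kind of pipeline, not the authors' implementation; the feature names, the use
of scikit-learn, and all parameter values are illustrative assumptions.

# Hedged sketch: tweet topics from a topic model plus activity statistics,
# combined as features for a binary depression classifier.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def build_features(users_tweets, activity_stats, n_topics=10):
    """users_tweets: one concatenated tweet string per user.
    activity_stats: (n_users, k) array, e.g. tweets/day, mention ratio."""
    counts = CountVectorizer(max_features=5000).fit_transform(users_tweets)
    topics = LatentDirichletAllocation(n_components=n_topics,
                                       random_state=0).fit_transform(counts)
    return np.hstack([topics, activity_stats])

# labels: 1 = depressed per the questionnaire-based ground truth, else 0
# X = build_features(users_tweets, activity_stats)
# print(cross_val_score(LogisticRegression(max_iter=1000), X, labels).mean())
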
[2]
FuSA2 touch display: a furry and scalable multi-touch display
Hardware
/
Nakajima, Kosuke
/
Itoh, Yuichi
/
Tsukitani, Takayuki
/
Fujita, Kazuyuki
/
Takashima, Kazuki
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2011 ACM International Conference on Interactive
Tabletops and Surfaces
2011-11-13
p.35-44
© Copyright 2011 ACM
Summary: We propose a furry and scalable multi-touch display called the "FuSA2 Touch
Display." The furry tactile sensation of its surface affords various
interactions such as stroking or clawing. The system uses bundles of plastic
optical fibers to realize the furry texture, shows visual feedback by
projection, and detects multi-touch input with a diffused illumination
technique. We exploit the optical properties of the plastic fibers to
integrate the input and output systems into a configuration simple enough
that the display becomes scalable. We implemented a 24-inch display,
evaluated its visual feedback and touch detection, and found that the
display encourages users to interact with it in a variety of ways.
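
Diffused illumination sensing of the kind the abstract names typically reduces
to blob detection in an infrared camera image. The following is a minimal
sketch of that standard step, not the authors' tracker; the thresholds and
OpenCV usage are illustrative assumptions.

# Hedged sketch of diffused-illumination touch detection: touches appear as
# bright blobs in the IR camera image behind the fiber bundle.
import cv2

def detect_touches(ir_frame, min_area=30, thresh=200):
    """ir_frame: grayscale image from the IR camera; returns blob centroids."""
    blur = cv2.GaussianBlur(ir_frame, (5, 5), 0)
    _, binary = cv2.threshold(blur, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            m = cv2.moments(c)
            touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return touches
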
[3]
FuSA2 touch display
DEMO
/
Nakajima, Kosuke
/
Itoh, Yuichi
/
Tsukitani, Takayuki
/
Fujita, Kazuyuki
/
Takashima, Kazuki
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2011 ACM International Conference on Interactive
Tabletops and Surfaces
2011-11-13
p.D5
© Copyright 2011 ACM
[4]
Ambient Suite: enhancing communication among multiple participants
Collaborative entertainment
/
Fujita, Kazuyuki
/
Itoh, Yuichi
/
Ohsaki, Hiroyuki
/
Ono, Naoaki
/
Kagawa, Keiichiro
/
Takashima, Kazuki
/
Tsugawa, Sho
/
Nakajima, Kosuke
/
Hayashi, Yusuke
/
Kishino, Fumio
Proceedings of the 2011 International Conference on Advances in Computer
Entertainment Technology
2011-11-08
p.25
© Copyright 2011 ACM
Summary: We propose a room-shaped information environment called Ambient Suite that
enhances communication among multiple participants. In Ambient Suite, the room
itself works both as a set of sensors that estimate the conversation states of
participants and as displays that present information to stimulate
conversation. Nonverbal cues such as utterances, positions, and gestures are
measured to sense participant states. The participants are surrounded by
displays so that various types of information can be presented based on their
states. Although the system is adaptable to a wide range of situations where
groups talk with each other, our implementation assumed standing-party
situations as a typical case. Using this implementation, we experimentally
evaluated input and output performance and whether the system can actually
stimulate conversation. The results showed that our system measured sensor
data to recognize conversational states, presented information, and adequately
encouraged participant conversations.
[5]
Anchored navigation: coupling panning operation with zooming and tilting
based on the anchor point on a map
Navigation
/
Fujita, Kazuyuki
/
Itoh, Yuichi
/
Takashima, Kazuki
/
Kitamura, Yoshifumi
/
Tsukitani, Takayuki
/
Kishino, Fumio
Proceedings of the 2010 Conference on Graphics Interface
2010-05-31
p.233-240
© Copyright 2010 Authors
Summary: We propose two novel map navigation techniques, called Anchored Zoom (AZ)
and Anchored Zoom and Tilt (AZT). In these techniques, the zooming and tilting
of a virtual camera are automatically coupled with the user's panning
displacements so that a user-specified anchor point always remains in the
viewport. This allows users to manipulate the viewport without mode-switching
among pan, zoom, and tilt while maintaining a sense of distance and direction
from the anchor point.
We conducted an experiment to evaluate AZ and AZT against Pan & Zoom (PZ)
[17] and Speed-dependent Automatic Zooming (SDAZ) [10] in off-screen target
acquisition tasks and spatial recognition tests. Results showed that the
proposed techniques reduced the time to reach off-screen objects more than
the competing techniques while preserving users' sense of distance and
direction as well as PZ did.
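
The core coupling the abstract describes (zoom rises with pan distance so the
anchor stays visible) can be written down compactly. The following is a minimal
sketch under that reading; the proportional rule and all parameters are
illustrative assumptions, not the paper's exact coupling function.

# Hedged sketch: raise the zoom-out factor just enough to keep the anchor
# inside the viewport as the camera center pans away from it.
def anchored_zoom(camera_xy, anchor_xy, viewport_px=(1280, 800),
                  base_scale=1.0, margin=0.9):
    dx = abs(camera_xy[0] - anchor_xy[0])
    dy = abs(camera_xy[1] - anchor_xy[1])
    # world-units-per-pixel needed so the anchor stays within the margin
    need = max(dx / (margin * viewport_px[0] / 2),
               dy / (margin * viewport_px[1] / 2),
               base_scale)
    return need  # pass to the renderer as the current zoom-out factor
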
[6]
Tearable: haptic display that presents a sense of tearing real paper
Devices
/
Maekawa, Takuya
/
Itoh, Yuichi
/
Takamoto, Keisuke
/
Tamada, Kiyotaka
/
Maeda, Takashi
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2009 ACM Symposium on Virtual Reality Software and
Technology
2009-11-18
p.27-30
Keywords: haptic display
© Copyright 2009 ACM
Summary: We propose a novel interface called Tearable that allows users to
continuously experience the real sense of tearing paper. To provide this
sensation, we measured and analyzed the vibrations produced by tearing a piece
of real paper. Based on these data, we used hook-and-loop fasteners and a DC
motor to reproduce the sense of tearing. We compared the force produced by
Tearable with that of a piece of real paper and confirmed its reproducibility
and usability. In addition, we evaluated Tearable with questionnaires after
user experiences.
[7]
Funbrella: recording and replaying vibrations through an umbrella axis
Full papers: Sensing and sensation
/
Fujita, Kazuyuki
/
Itoh, Yuichi
/
Yoshida, Ai
/
Ozaki, Maya
/
Kikukawa, Tetsuya
/
Fukazawa, Ryo
/
Takashima, Kazuki
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2009 International Conference on Advances in Computer
Entertainment Technology
2009-10-29
p.66-71
© Copyright 2009 ACM
Summary: We propose an umbrella-like device called Funbrella that entertains people
with many types of rain, treating the umbrella as a user interface that
connects humans and rain. People generally experience rain through sound,
sight, and sometimes smell; our system instead focuses on the vibration
perceived through an umbrella's handle so that people can feel the rain. We
implemented a vibration record-and-replay mechanism with an extremely simple
structure based on a dynamic microphone and a dynamic speaker, whose
structures are almost identical. With this mechanism, Funbrella records the
vibrations caused by raindrops and replays them. We implemented three
applications: Crazy Rain, Tele-rain, and Minibrella. A questionnaire study of
the Crazy Rain application shows that Funbrella amuses people regardless of
age or gender because it accurately reproduces the feel of rain.
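
Since the microphone/speaker pair is essentially audio hardware, the
record-and-replay loop can be sketched with an ordinary audio library. This is
a minimal sketch of the idea only; the sounddevice library, the sample rate,
and the gain are illustrative assumptions, not the authors' hardware driver.

# Hedged sketch: capture raindrop vibrations like an audio signal, then
# replay them through the speaker mounted on the umbrella axis.
import sounddevice as sd

RATE = 8000  # vibration bandwidth is low; 8 kHz is an assumed ample rate

def record_rain(seconds=10.0):
    vib = sd.rec(int(seconds * RATE), samplerate=RATE, channels=1)
    sd.wait()  # block until the umbrella-axis vibration is captured
    return vib

def replay_rain(vib, gain=1.0):
    sd.play(gain * vib, samplerate=RATE)  # drive the speaker on the axis
    sd.wait()
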
[8]
Multi-modal Interface in Multi-Display Environment for Multi-users
Multimodal User Interfaces
/
Kitamura, Yoshifumi
/
Sakurai, Satoshi
/
Yamaguchi, Tokuo
/
Fukazawa, Ryo
/
Itoh, Yuichi
/
Kishino, Fumio
HCI International 2009: 13th International Conference on Human-Computer
Interaction, Part II: Novel Interaction Methods and Techniques
2009-07-19
v.2
p.66-74
Keywords: 3D user interfaces; CSCW; graphical user interfaces; perspective correction
Copyright © 2009 Springer-Verlag
Summary: Multi-display environments (MDEs) are becoming increasingly common. By
introducing multi-modal interaction techniques such as gaze, body and hand
position, and gestures, we established a sophisticated and intuitive interface
for MDEs in which the displays are stitched together seamlessly and
dynamically according to the users' viewpoints. Each user can interact with
the multiple displays as if sitting in front of an ordinary desktop GUI
environment.
[9]
MADO interface: a window like a tangible user interface to look into the
virtual world
Tangible and embedded interaction -- in the lab and in the wild
/
Maekawa, Takuya
/
Itoh, Yuichi
/
Kawai, Norifumi
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 3rd International Conference on Tangible and Embedded
Interaction
2009-02-18
p.175-180
Keywords: 3D modeling, MADO interface, bi-directional interface, mixed reality,
real-time interaction, tangible user interface
© Copyright 2009 ACM
Summary: "MADO Interface" is a tangible user interface consisting of a compact
touch-screen display and physical blocks. "MADO" means "window" in Japanese,
and MADO Interface is utilized as the real window into the virtual world. Users
construct a physical object by simply combining electrical blocks. Then, by
connecting MADO Interface to the physical object, they can watch the virtual
model corresponding to the physical block configuration (shape, color, etc.)
The size and the viewpoint of the virtual model seen by the user depend on the
position of MADO Interface, maintaining the consistency between the physical
and virtual worlds. In addition, users can interact with the virtual model by
touching the display on MADO Interface. These features enable users to explore
the virtual world intuitively and powerfully.
[10]
Extracting camera-control requirements and camera movement generation in a
3D virtual environment
Technical track: Virtual environment
/
Hamazaki, Hirofumi
/
Kitaoka, Shinya
/
Ozaki, Maya
/
Kitamura, Yoshifumi
/
Lindeman, Robert W.
/
Kishino, Fumio
Proceedings of the 2008 International Conference on Advances in Computer
Entertainment Technology
2008-12-03
p.126-129
© Copyright 2008 ACM
Summary: This paper proposes a new method for generating smooth, collision-free
camera movement in a three-dimensional virtual environment. It generates a set
of cells by cell decomposition with a loose octree such that the cells do not
intersect the polygons of the environment. The method defines a camera
movement space (also known as a configuration space) as this set of cells in
the virtual environment. To generate collision-free camera movement, the
method represents paths as a graph structure based on the adjacency of the
cells and makes the camera move along the graph. Furthermore, by combining a
potential function that yields a force aiming the camera at the subject with
a penalty function that yields a force restraining the camera to the graph as
it moves, we generate smooth camera movement that captures the subject while
avoiding obstacles. Several results in static and dynamic environments are
presented and discussed.
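
The force model in the abstract combines two terms: an attractive force toward
the subject and a penalty force back toward the graph. The following is a
minimal sketch of that combination; the linear potentials and the gains are
illustrative assumptions, not the paper's exact functions.

# Hedged sketch: per-step camera force = aiming potential + graph penalty.
import numpy as np

def camera_force(cam, subject, nearest_graph_pt, k_aim=1.0, k_penalty=4.0):
    aim = subject - cam               # pulls the camera toward the subject
    penalty = nearest_graph_pt - cam  # grows as the camera leaves the graph
    return k_aim * aim + k_penalty * penalty

# One integration step of the smooth, collision-free motion:
# cam += dt * camera_force(cam, subject, nearest_point_on_graph(cam))
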
[11]
Video agent: interactive autonomous agents generated from real-world
creatures
Interaction techniques
/
Kitamura, Yoshifumi
/
Rong, Rong
/
Hirano, Yoshinori
/
Asai, Kazuhiro
/
Kishino, Fumio
Proceedings of the 2008 ACM Symposium on Virtual Reality Software and
Technology
2008-10-27
p.30-38
Keywords: characters, computer animation, fuzzy logic, image processing, interactive
multimedia content, video database
© Copyright 2008 ACM
Summary: We present a novel approach to interactive multimedia content creation that
establishes an interactive environment in cyberspace in which users interact
with autonomous agents generated from video images of real-world creatures.
Each agent has autonomy, personality traits, and behaviors that reflect the
results of various interactions, determined by an emotional model based on
fuzzy logic. After an agent's behavior is determined, the sequence of video
images that best matches the determined behavior is retrieved from a database
that stores a variety of video sequences of the real creature's behaviors.
The retrieved images are displayed in succession so that the agent behaves
continuously and responsively. In addition, an explicit sketch-based method
can directly initiate a reactive behavior of the agent without involving the
emotional process. This paper describes the algorithms that establish such an
interactive system. First, an image processing algorithm for generating the
video database is described. Then the process of behavior generation using
the emotional model and sketch-based instruction is introduced. Finally, two
application examples are demonstrated: video agents generated from humans and
from goldfish.
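
The retrieval step the abstract describes (pick the stored clip that best
matches the determined behavior) can be sketched as a nearest-neighbor lookup.
This is a minimal sketch under that reading; the behavior-vector encoding and
the Euclidean distance are illustrative assumptions, and the fuzzy-logic
emotional model itself is not reproduced here.

# Hedged sketch: choose the clip whose annotated behavior vector is closest
# to the behavior the emotional model requested.
import numpy as np

def pick_clip(desired_behavior, clip_features, clip_ids):
    """clip_features: (n_clips, d) array of annotated behavior vectors."""
    d = np.linalg.norm(clip_features - desired_behavior, axis=1)
    return clip_ids[int(np.argmin(d))]
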
[12]
Interactive Multimedia Contents in the IllusionHole
/
Yamaguchi, Tokuo
/
Asai, Kazuhiro
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2008 International Conference on Entertainment Computing
2008-09-25
p.116-121
Keywords: 3D user interface; entertainment computing; tabletop display; interactive;
CSCW; game; stereoscopic display
© Copyright 2008 IFIP
Summary: This paper proposes a system of interactive multimedia content that allows
multiple users to participate face-to-face, sharing the same time and space.
It provides an interactive environment where multiple users can see and
manipulate stereoscopic animation with individual sound. Two example
applications are implemented: one uses location-based content design and the
other user-based content design. Both effectively use the unique feature of
the IllusionHole, a location-sensitive display device that provides
stereoscopic images to multiple users around a table.
[13]
Acquisition of Off-Screen Object by Predictive Jumping
Novel Interaction Technique
/
Takashima, Kazuki
/
Subramanian, Sriram
/
Tsukitani, Takayuki
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2008 Asia Pacific Conference on Computer Human
Interaction
2008-07-06
p.301-310
© Copyright 2008 Springer-Verlag
Summary: We propose predictive jumping (PJ), a fast and efficient technique that
helps users navigate to off-screen targets. The technique is inspired by the
Delphian Desktop [1] and the off-screen visualization technique Halo [2].
Halos displayed at the edge of the viewport help users estimate off-screen
target distance and encourage them to make a single fluid mouse movement
toward the target. Halfway through the user's motion, the system predicts the
intended target and quickly moves the cursor toward that predicted off-screen
location. In a pilot study we examined users' ability to select off-screen
targets with predictive models based on their pointing kinematics when
pointing off-screen with Halo, and we established a linear relationship
between peak velocity and target distance for PJ. We then conducted a
controlled experiment to evaluate PJ against other Halo-based techniques,
Hop [8] and Pan with Halo. The results of the study highlight the
effectiveness of PJ.
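
The prediction step follows directly from the linear peak-velocity relation
the abstract reports: once the peak is detected, extrapolate the endpoint
along the movement direction and jump the cursor there. The following is a
minimal sketch; the coefficient values are illustrative assumptions, not the
paper's fitted model.

# Hedged sketch of the jump: distance = A + B * v_peak along the movement
# direction, per the linear relation established in the pilot study.
import numpy as np

A, B = 50.0, 0.35  # fitted offline from pilot data (assumed values)

def predict_endpoint(start_xy, peak_xy, v_peak):
    direction = peak_xy - start_xy
    direction = direction / np.linalg.norm(direction)
    distance = A + B * v_peak               # linear peak-velocity model
    return start_xy + distance * direction  # jump the cursor here
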
[14]
Effects of Avatar's Blinking Animation on Person Impressions
Faces and Web
/
Takashima, Kazuki
/
Omori, Yasuko
/
Yoshimoto, Yoshiharu
/
Itoh, Yuichi
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2008 Conference on Graphics Interface
2008-05-28
p.169-176
© Copyright 2008 Canadian Information Processing Society
Summary: Blinking is one of the most important cues for forming impressions of a
person. We focus on the eye blink rate of avatars and investigate its effect
on viewers' subjective impressions through two experiments. The stimulus
avatars included realistic humans (male and female), cartoon-style humans
(male and female), animals, and unidentified life forms, each presented as a
20-second animation at blink rates of 9, 12, 18, 24, and 36 blinks/min.
Subjects rated their impressions of the presented stimulus avatars on a
seven-point semantic differential scale. The results showed a significant
effect of the avatar's blinking on viewer impressions, and the effect was
larger for the human-style avatars than for the others. The results also lead
to several implications and guidelines for the design of avatar
representation: blink animation at 18 blinks/min with a human-style avatar
produces the friendliest impression, while higher blink rates (36 blinks/min)
give inactive impressions and lower blink rates (9 blinks/min) give
intelligent impressions. From these results, guidelines are derived for
managing an avatar's attractiveness by changing its blink rate.
[15]
Strategic Tabletop Negotiations
User and Usability Studies
/
Yamaguchi, Tokuo
/
Subramanian, Sriram
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of IFIP INTERACT'07: Human-Computer Interaction
2007-09-10
v.2
p.169-182
Keywords: Face-to-Face Collaboration; Digital Tabletops; Strategic Negotiations;
Collaborative Tables; Single Display Groupware
© Copyright 2007 IFIP
Summary: Strategic negotiations on digital tabletop displays are not well
understood. Little has been reported in the literature on how users strategize
when group priorities and individual priorities conflict and must be balanced
for successful collaboration. We conducted an observational study of three
digital tabletop systems and a real-world setup to investigate similarities
and differences between real-world and digital tabletop strategic
collaborations. Our results show that in the real world, strategic negotiation
involves three phases: identifying the right timing, using epistemic actions
to consider a task plan, and evaluating the value of the negotiation. We
repeated the real-world experiments with different digital tabletops and found
several differences in the way users initiate and perform strategic
negotiations.
[16]
Object deformation and force feedback for virtual chopsticks
Spatial tracking, haptics & hardware
/
Kitamura, Yoshifumi
/
Douko, Ken'ichi
/
Kitayama, Makoto
/
Kishino, Fumio
Proceedings of the 2005 ACM Symposium on Virtual Reality Software and
Technology
2005-11-07
p.211-219
Keywords: FEM, deformation, force feedback, object manipulation, virtual chopsticks,
virtual environment
© Copyright 2005 ACM
Summary: This paper proposes a virtual chopsticks system that combines force
feedback with object deformation computed by the finite element method (FEM).
The force feedback model treats the chopsticks as a lever, following correct
chopstick handling, and applies force to the index and middle fingers. Object
deformation is obtained in real time by calculating the inverse of the
stiffness matrix beforehand. We performed experiments comparing the hardness
of virtual objects and found that a recognition rate of almost 100% can be
achieved between virtual objects whose logarithmic difference in hardness is
0.4 or more, while lower recognition rates are obtained when the difference
in hardness is smaller.
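
The real-time trick named in the abstract is that for linear FEM the stiffness
matrix K is constant, so the displacements u for contact forces f follow from
u = K^{-1} f with the expensive work done once, offline. The following is a
minimal sketch of that idea; using a Cholesky factorization in place of an
explicit inverse is an assumption (K is symmetric positive definite for a
constrained linear-elastic model).

# Hedged sketch: factorize K once before interaction, then each frame is
# only a cheap triangular solve.
from scipy.linalg import cho_factor, cho_solve

def precompute(K):
    return cho_factor(K)  # done once per object, before interaction

def deform(K_factored, f):
    return cho_solve(K_factored, f)  # per-frame displacement u = K^{-1} f
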
[17]
Predictive interaction using the delphian desktop
Mouse taming
/
Asano, Takeshi
/
Sharlin, Ehud
/
Kitamura, Yoshifumi
/
Takashima, Kazuki
/
Kishino, Fumio
Proceedings of the 2005 ACM Symposium on User Interface Software and
Technology
2005-10-23
p.133-141
© Copyright 2005 ACM
Summary: This paper details the design and evaluation of the Delphian Desktop, a
mechanism for online spatial prediction of cursor movements in a
Windows-Icons-Menus-Pointers (WIMP) environment. Interaction with WIMP-based
interfaces often becomes a spatially challenging task when the physical
interaction mediators are the common mouse and a high-resolution, physically
large display screen. These spatial challenges are especially evident in
overly crowded Windows desktops. The Delphian Desktop integrates simple yet
effective predictive spatial tracking and selection paradigms into ordinary
WIMP environments in order to simplify and ease pointing tasks. Predictions
are calculated by tracking cursor movements and estimating spatial intentions
using a computationally inexpensive online algorithm based on estimating the
movement direction and peak velocity. In testing, the Delphian Desktop
effectively shortened pointing time to faraway icons and reduced the overall
physical distance the mouse (and user hand) had to mechanically traverse.
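
The online part of the algorithm reduces to watching the cursor's speed
profile, flagging the peak when the speed turns down, and reading the movement
direction from recent samples. The following is a minimal sketch of that loop;
the window size and the simple fall-off test are illustrative assumptions, not
the paper's detector.

# Hedged sketch: detect the velocity peak online and estimate direction
# from the cursor trace so far.
import numpy as np

def detect_peak(samples, dt):
    """samples: list of (x, y) cursor positions at interval dt.
    Returns (v_peak, unit_direction) once the speed profile turns down."""
    p = np.asarray(samples, dtype=float)
    v = np.linalg.norm(np.diff(p, axis=0), axis=1) / dt
    if len(v) >= 3 and v[-1] < v[-2] >= v[-3]:  # speed just peaked
        direction = p[-1] - p[0]
        return v[-2], direction / np.linalg.norm(direction)
    return None
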
[18]
A Display Table for Strategic Collaboration Preserving Private and Public
Information
Seamful/Seamless Interface
/
Kitamura, Yoshifumi
/
Osawa, Wataru
/
Yamaguchi, Tokuo
/
Takemura, Haruo
/
Kishino, Fumio
Proceedings of the 2005 International Conference on Entertainment Computing
2005-09-19
p.167-179
© Copyright 2005 IFIP
Summary: We propose a new display table that allows multiple users to interact with
both private and public information on a shared display in a face-to-face,
co-located setting. With this table, users can create, manage, and share
information intuitively, strategically, and cooperatively by naturally moving
around the display. Users can seamlessly control private and public
information spaces according to their spatial location and motion. This
enables users to dynamically choose negotiation partners, create cooperative
relationships, and strategically control the information they share and
conceal. We see the proposed system as especially suited for strategic
cooperative tasks in which participants collaborate while attempting to
increase individual benefits, such as trading-floor and auction scenarios.
[19]
A Computerized Interactive Toy: TSU.MI.KI
Posters and Demonstration
/
Itoh, Yuichi
/
Yamaguchi, Tokuo
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2005 International Conference on Entertainment Computing
2005-09-19
p.507-510
© Copyright 2005 IFIP
Summary: Young children often build various structures with wooden blocks,
structures that are then used for pretend play, subtly improving children's
creativity and imagination. Based on a traditional Japanese wooden block toy,
Tsumiki, we propose a novel interactive toy for children named "TSU.MI.KI"
that maintains the physical qualities of wooden blocks while enhancing them
with computation. "TSU.MI.KI" consists of a set of computerized blocks
equipped with several input/output devices. Children can tangibly interact
with a virtual scenario by manipulating and constructing structures from the
physical blocks, and by using the input and output devices integrated into
the blocks.
[20]
Agents from Reality
Posters and Demonstration
/
Asai, Kazuhiro
/
Hattori, Atsushi
/
Yamashita, Katsuya
/
Nishimoto, Takashi
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2005 International Conference on Entertainment Computing
2005-09-19
p.519-522
© Copyright 2005 IFIP
Summary: A fish tank is established in cyberspace, modeled on the real world, in
which autonomous fish agents generated from images captured in the real world
swim. The behavior of each fish is determined by an emotional model that
reflects its personality in response to encountered events and user
interactions.
[21]
The soul of ActiveCube: implementing a flexible, multimodal,
three-dimensional spatial tangible interface
/
Watanabe, Ryoichi
/
Itoh, Yuichi
/
Asai, Masatsugu
/
Kitamura, Yoshifumi
/
Kishino, Fumio
/
Kikuchi, Hideo
Proceedings of the 2004 International Conference on Advances in Computer
Entertainment Technology
2004-09-02
p.173-180
© Copyright 2004 ACM
Summary: ActiveCube is a novel user interface that allows intuitive interaction
with computers. ActiveCube lets users construct and interact with
three-dimensional (3D) environments using physical cubes equipped with
input/output devices. Spatial, temporal, and functional consistency is always
maintained between the physical object and its corresponding representation
in the computer. In this paper we detail the design and implementation of our
system. We describe the method used to realize flexible 3D modeling by
controlling the recognition signals of each face of each cube. We also
explain how we integrated additional multimodal interaction options through a
number of sophisticated I/O devices and the inclusion of a second
microprocessor in each cube. We argue that ActiveCube, with its real-time
multimodal and spatial capabilities, is ready to enable a large range of
interactive entertainment applications that were impossible to realize
before.
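
The shape-recognition idea the abstract describes (each face signals which
neighbor it touches, and the host rebuilds the 3D structure) can be sketched
as graph reconstruction over grid positions. This is a minimal sketch under
simplifying assumptions: the report format is invented, and cube orientation,
which the real system must track, is ignored.

# Hedged sketch: rebuild cube positions from per-face connection reports.
FACE_OFFSETS = {  # face id -> unit offset of the neighboring cube
    0: (1, 0, 0), 1: (-1, 0, 0), 2: (0, 1, 0),
    3: (0, -1, 0), 4: (0, 0, 1), 5: (0, 0, -1),
}

def rebuild_model(connections, base_id=0):
    """connections: iterable of (cube_id, face_id, neighbor_id) tuples."""
    pos = {base_id: (0, 0, 0)}
    pending = list(connections)
    while pending:
        progressed = False
        for cube, face, nb in list(pending):
            if cube in pos and nb not in pos:
                ox, oy, oz = FACE_OFFSETS[face]
                x, y, z = pos[cube]
                pos[nb] = (x + ox, y + oy, z + oz)
                pending.remove((cube, face, nb))
                progressed = True
        if not progressed:
            break  # remaining reports reference cubes not yet placed
    return pos  # cube_id -> grid position of the physical structure
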
[22]
Steering Law in an Environment of Spatially Coupled Style with Matters of
Pointer Size and Trajectory Width
Full Papers
/
Naito, Satoshi
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2004 Asia Pacific Conference on Computer Human
Interaction
2004-06-29
p.305-316
© Copyright 2004 Springer-Verlag
Summary: The steering law is an excellent performance model for trajectory-based
tasks in GUIs. However, since the original law was proposed, it has been
examined only in graphical environments of spatially decoupled style.
Moreover, the pointer has been limited to a small size, and the trajectory
width has also been limited to a certain range. To address these limitations,
in this paper we discuss extending the original steering law to a wider range
of environments. We validate the steering law in an environment of spatially
coupled style, exploring three conditions of pointer and trajectory: a
sufficiently small pointer and a trajectory of certain width; a pointer of
certain size and a narrow trajectory; and a pointer of certain size and a
trajectory of certain width. The experimental results show that the steering
law is valid in an environment of spatially coupled style.
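
For reference, the original steering law the paper extends models the movement
time T to steer through a path C whose width varies as W(s):

    T = a + b \int_C \frac{ds}{W(s)}

where a and b are empirically fitted constants. How a pointer of nonnegligible
size enters this relation (for example, through a reduced effective width) is
precisely what the spatially coupled experiments probe; the formula above is
the standard Accot-Zhai form, not the paper's extended model.
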
[23]
Hands-on learning of computer programming in introductory stage using a
model railway layout
Late breaking posters
/
Noma, Haruo
/
Tetsutani, Nobuji
/
Sasamoto, Hirokazu
/
Itoh, Yuichi
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems
2004-04-24
v.2
p.1546
© Copyright 2004 ACM
[24]
Interactive modeling of trees by using growth simulation
Interaction techniques
/
Onishi, Katsuhiko
/
Hasuike, Shoichi
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of the 2003 ACM Symposium on Virtual Reality Software and
Technology
2003-10-01
p.66-72
Keywords: L-system, data structure, intuitive interaction, multimodal interface, tree
models, virtual reality
© Copyright 2003 ACM
Summary: We propose a real-time interactive system that enables users to generate,
manipulate, and edit the shape model of a tree based on growth simulation by
directly indicating its global and spatial information. For this purpose,
three-dimensional (3D) spatial information is introduced into the well-known
L-system as an attribute of the growth simulation. Moreover, we propose an
efficient data structure for L-strings in order to speed up the process.
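
At the core of such a growth simulation is L-system string rewriting: every
growth step rewrites all symbols in parallel according to production rules.
The following is a minimal sketch of that standard mechanism; the rule shown
is a classic branching example, not the paper's spatially attributed
production set.

# Hedged sketch: parallel L-system rewriting, one pass per growth step.
RULES = {"F": "F[+F]F[-F]F"}  # classic bracketed branching rule

def grow(axiom, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)  # rewrite in parallel
    return s  # interpret with a 3D turtle to get branch geometry

# grow("F", 2) -> the L-string an interpreter turns into a tree skeleton
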
[25]
Manipulation of Viewpoints in 3D Environment Using Interlocked Motion of
Coordinate Pairs
2: 3D input device
/
Fukatsu, Shinji
/
Kitamura, Yoshifumi
/
Kishino, Fumio
Proceedings of IFIP INTERACT'03: Human-Computer Interaction
2003-09-01
p.327
© Copyright 2003 IFIP