| User-interface design, culture, and the future | | BIBAK | Full-Text | 15-27 | |
| Aaron Marcus | |||
| This paper introduces the cultural anthropologist Hofstede's culture
dimensions and considers how they might affect user-interface designs. Examples
from the Web illustrate the cultural dimensions. User-interface designers have
identified basic components of user interfaces. An initial mapping of culture
dimensions to user-interface components seeks to help user-interface designers
cope with global product and service development. Ultimately, tools might
emerge to facilitate tuning designs per culture. Keywords: appearance, culture, design, globalization, interaction, localization,
mental models, metaphors, user interface | |||
| Reflections on symmetry | | BIBAK | Full-Text | 28-33 | |
| Harold Thimbleby | |||
| Symmetry is routinely used in visual design, but in fact is not just a
visual concept. This paper explores how deeper symmetries in user interface
implementations can be 'reflected' in the design of the user interface, making
it easier to use. This deeper application of symmetry for user interface
design is related to affordance, and therefore makes that concept
constructively applicable. Recommendations for programming better user
interfaces are suggested.
"Symmetry, as wide or as narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order, beauty, and perfection." Hermann Weyl [16] Keywords: affordance, object orientation, statechart, symmetry, user interface design | |||
| Supporting the end users' views | | BIBAK | Full-Text | 34-42 | |
| David F. Redmiles | |||
| End users of software have the right to systems that are both useful and
usable, a property termed usability in the software and human-computer
interaction communities. Unfortunately, it is not obvious what methods or
techniques developers of software should adopt in order to achieve good
usability in a product. There is a confounding number of questions. How can
different points of view among end users be incorporated into a software
development process? What does it mean to treat software developers as end
users, namely of software tools? How do the limitations of software practice,
such as minimizing time to release, affect what information can be collected
and used to make usability decisions? This paper presents a variety of
possibilities for supporting all the end users' views in a software development
activity. Both tools and methods are suggested, roughly organized according to
the different activities in software development. Moreover, end users are
defined to be a variety of stakeholders in a software development project,
including not only the end users of a product but also developers who
are end users of software tools. Keywords: activity theory, cognitive theory, design, design environments, event
monitoring, human-computer interaction, knowledge-based systems, organizational
memory, social theory, software engineering, usability engineering | |||
| Artistically conveying peripheral information with the InfoCanvas | | BIBAK | Full-Text | 43-50 | |
| Todd Miller; John Stasko | |||
| The Internet and World Wide Web have made a tremendous amount of information
available to people today. Taking advantage of and managing this information,
however, is becoming increasingly challenging due to its volume and the variety
of sources available. We attempt to reduce this overload with the InfoCanvas,
an ambient display of a personalized, information-driven, visual collage.
Through a web-based interface, people identify information of interest,
associate a pictorial representation with it, and place the representation on a
virtual canvas. The end result is an information collage, displayed on a
secondary monitor or net appliance, that allows people to keep tabs on
information in a calm, unobtrusive manner. This paper presents details on how a
person can create and manage information with the InfoCanvas, and how we
provide such capabilities. Keywords: ambient display, information awareness, peripheral information display,
visualization | |||
| A framework for designing fisheye views to support multiple semantic contexts | | BIBAK | Full-Text | 51-58 | |
| Paul Janecek; Pearl Pu | |||
| In this paper we discuss the design and use of fisheye view techniques to
explore semantic relationships in information. Traditional fisheye and "focus +
context" techniques dynamically modify the visual rendering of data in response
to the changing interest of the user. "Interesting" information is shown in
more detail or visually emphasized, while less relevant information is shown in
less detail, de-emphasized, or filtered. These techniques are effective for
navigating through large sets of information in a constrained display, and for
discovering hidden relationships in a particular representation. An open area
of research with these techniques, however, is how to redefine interest as a
user's tasks and information needs change.
We are developing a framework for implementing fisheye views to support multiple semantic contexts. The framework is based on two components: Degree Of Interest functions, and visual emphasis algorithms to change the representation of information with respect to interest. The framework supports different contexts through the aggregation of multiple weighted distance metrics in the calculation of interest. Using this framework, we have developed a user-configurable interface for browsing tabular data that visually emphasizes objects with respect to different semantic contexts. Keywords: emphasis algorithms, focus + context techniques, information visualization,
semantic fisheye views | |||
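The degree-of-interest calculation described in the entry above can be illustrated with a short sketch. The following Python fragment is a generic illustration, not the authors' implementation: the metric names, weights, and emphasis thresholds are hypothetical; interest is computed as an a-priori importance minus a weighted aggregate of distance metrics and then mapped to a visual emphasis level.

```python
# Illustrative sketch of a semantic fisheye degree-of-interest (DOI) calculation.
# Not the paper's code: metric names, weights, and thresholds are hypothetical.

def doi(obj, focus, metrics, weights, api=None):
    """Interest = a-priori importance minus a weighted sum of distance metrics."""
    a_priori = api(obj) if api else 0.0
    distance = sum(w * metric(focus, obj) for metric, w in zip(metrics, weights))
    return a_priori - distance

def emphasis(interest, hi=-1.0, lo=-4.0):
    """Map interest onto a visual emphasis level (amount of detail shown)."""
    if interest >= hi:
        return "full detail"
    elif interest >= lo:
        return "reduced detail"
    return "filtered"

# Example: two hypothetical distance metrics over tabular data.
def column_distance(focus, obj):
    return abs(focus["col"] - obj["col"])

def semantic_distance(focus, obj):
    return 0.0 if focus["category"] == obj["category"] else 3.0

focus_cell = {"col": 2, "category": "price"}
cell = {"col": 5, "category": "price"}
score = doi(cell, focus_cell,
            metrics=[column_distance, semantic_distance],
            weights=[0.5, 1.0])
print(score, emphasis(score))
```

Different semantic contexts would correspond to different weight vectors over the same set of distance metrics.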
| Zooming, multiple windows, and visual working memory | | BIBAK | Full-Text | 59-68 | |
| Matthew Plumlee; Colin Ware | |||
| Zooming and multiple windows are two techniques designed to address the
focus-in-context problem. We present a theoretical model of performance that
captures the relative benefits of these techniques when used by humans for
completing a task involving comparisons between widely separated groups of
objects. The crux of the model is its cognitive component: the strength of
multiple windows comes in the way they aid visual working memory. The task to
which we apply our model is multiscale comparison, in which a user begins with
a known visual pattern and searches for an identical or similar pattern among
distracters. The model predicts that zooming should be better for navigating
between a few distant locations when demands on visual memory are low, but that
multiple windows are more efficient when demands on visual memory are higher,
or there are several distant locations that must be investigated. To evaluate
our model we conducted an experiment in which users performed a multiscale
comparison task using both zooming and multiple-window interfaces. The results
confirm the general predictions of our model. Keywords: focus-in-context, interaction design, multiple windows, multiscale,
multiscale comparison, visual working memory, zooming | |||
| What's happening?: promoting community awareness through opportunistic, peripheral interfaces | | BIBAK | Full-Text | 69-74 | |
| Qiang Alex Zhao; John T. Stasko | |||
| Maintaining an awareness of information about one's own community and its
members is viewed as being important, but is becoming more challenging today as
people are overwhelmed by so many different forms of information. This paper
describes the "What's Happening" suite of tools for helping convey relevant and
interesting community information to people in a manner that is minimally
distracting and disruptive, with little or no user set-up and interaction. The
tools are more lightweight than e-mail and Usenet news, and opportunistic in
providing information to people when they are not deeply focused on some other
task. Keywords: CSCW, community awareness, informal communication, multimedia, opportunistic
interfaces, peripheral interfaces | |||
| A user-tracing architecture for modeling interaction with the world wide web | | BIBAK | Full-Text | 75-83 | |
| Peter Pirolli; Wai-Tat Fu; Robert Reeder; Stuart K. Card | |||
| We have developed a methodology for studying and analyzing the psychology of
users performing ecologically valid WWW tasks. A user trace is a record of all
significant states and events in the user-WWW interaction based on eye tracking
data, application-level logs, and think-aloud protocols. A user-tracing
architecture has been implemented for developing simulation models of user-WWW
interaction and for comparing a simulation model (SNIF-ACT) against user-trace
data. The user-tracing architecture compares each action of the SNIF-ACT
simulation directly against observed user actions. The model and architecture
have been used to successfully match detailed user trace data from four users
working on two tasks each. Keywords: ACT-R, SNIF-ACT, information foraging, user models, user tracing, world wide
web | |||
| A visual interface to a music database | | BIBAK | Full-Text | 85-88 | |
| Robert St. Amant; James E. Blair; Patrick Barry; Yinon Bentor; Christopher G. Healey | |||
| This paper describes a system for exploring and selecting entries from a
music database through a visualization interface. The system is designed for
deployment in situations in which the user's attention is a tightly limited
resource. The system combines research topics in intelligent user interfaces,
visualization techniques, and cognitive modeling. Informal evaluation of the
system has given us useful insights into the design tradeoffs that developers
may face when building visual interfaces for off-the-desktop applications. Keywords: cognitive modeling, driving, visualization | |||
| Managing layout constraints in a platform for customized multimedia content packaging | | BIBAK | Full-Text | 89-93 | |
| Alexander Kröner; Patrick Brandmeier; Thomas Rist | |||
| A promising approach to customize the delivery of multimedia content is
based on methods for compiling content packages from repositories of existing
media assets, such as text paragraphs and images. Since the authors of media
assets may have specified layout preferences for their assets without knowing
in which packages these assets will eventually occur, or the personal
layout preferences of all potential customers, layout conflicts, such as
incompatible style attributes, are inevitable when packages are compiled on
the fly by an automated system. In this contribution we present a
constraint-based approach for resolving layout conflicts in automatically
compiled content packages. Depending on the number and nature of the layout
constraints to be considered, many eligible layout styles may exist. In fact,
the exploration of a style solution space creates a problem of its own.
Therefore, we are also investigating clustering and visualization techniques to
assist a system administrator in the exploration of the solution space. The
work has been conducted in the context of the EU-funded project IMAGEN, which
aims at the development of an integrated set of tools for the customized
publication and distribution of multimedia content. Keywords: WWW interfaces, adaptive interfaces, hyper- and multimedia | |||
| What did they do? understanding clickstreams with the WebQuilt visualization system | | BIBAK | Full-Text | 94-102 | |
| Sarah J. Waterson; Jason I. Hong; Tim Sohn; James A. Landay; Jeffrey Heer; Tara Matthews | |||
| This paper describes the visual analysis tool WebQuilt, a web usability
logging and visualization system that helps web design teams record and analyze
usability tests. The logging portion of WebQuilt unobtrusively gathers
clickstream data as users complete specified tasks. This data is then
aggregated and presented as an interactive graph, where nodes of the graph are
images of the web pages visited, and arrows are the transitions between pages.
To aid analysis of the gathered usability test data, the WebQuilt visualization
provides filtering capabilities and semantic zooming, allowing the designer to
understand the test results at the gestalt view of the entire graph, and then
drill down to sub-paths and single pages. The visualization highlights
important usability issues, such as pages where users spent a lot of time,
pages where users got off track during the task, navigation patterns, and exit
pages, all within the context of a specific task. WebQuilt is designed to
conduct remote usability testing on a variety of Internet-enabled devices and
provide a way to identify potential usability problems when the tester cannot
be present to observe and record user actions. Keywords: log file analysis, remote usability evaluation, semantic zooming, usability
evaluation, web visualization | |||
| Op-Glyph: a tool for exploring op art representation of height and vector field data | | BIBAK | Full-Text | 103-107 | |
| Francis T. Marchese | |||
| We report our experiences with the application of the optical art techniques of
Victor Vasarely and Bridget Riley to visualization of height field and vector
field data. The bold use of color and simple form in Op Art engages the
preattentive processing ability of the human visual system, facilitating a
nearly instantaneous perception of image properties without the need for
extended scrutiny of component parts. A software system called Op-Glyph was
constructed to illustrate the Op Art method for data visualization, providing a
user with extensive control over a visual representation's primitives,
including shape, size, and color. Initial results suggest that this glyph-based
approach to data visualization may be a viable alternative or complement to
more complex representation schemes, particularly in situations where there are
limited processing or graphical capabilities, such as with PDAs. Keywords: glyph, information visualization, non-photorealistic rendering, optical art | |||
| Matrix: concept animation and algorithm simulation system | | BIBAK | Full-Text | 109-114 | |
| Ari Korhonen; Lauri Malmi | |||
| Data structures and algorithms include abstract concepts and processes,
which people often find difficult to understand. Examples of these are complex
data types and procedural encoding of algorithms. Software visualization can
significantly help in solving the problem.
In this paper we describe the platform-independent Matrix system, which combines algorithm animation with algorithm simulation, where the user interacts directly with data structures through a graphical user interface. The simulation process created by the user can be stored and played back in terms of algorithm animation. In addition, existing library routines can be used for creating illustrations of advanced abstract data types, or for animating and simulating the user's own algorithms. Moreover, Matrix provides an extensive set of visual concepts for algorithm animation. These concepts include visualizations for primitive types, arrays, lists, trees, and graphs. This set can be extended further by using arbitrarily nested visualizations. Keywords: algorithm animation, algorithm simulation, software visualization | |||
| Direct manipulation of pictorial items within web sites: a drag & drop approach to on-line interaction | | BIBAK | Full-Text | 115-118 | |
| Mauro Mosconi; Marco Porta; Federico Zanetti | |||
| We present here a working prototype Web site where a new approach to
on-line interaction has been implemented and tested. Our intention is to
improve the usability of Web sites (particularly e-commerce ones) by letting
users directly interact with pictorial representations of objects (e.g. items
on sale) in a way that resembles their behavior in real-world stores. Our
discussion focuses both on the manifold usability benefits of the approach and
on practical implementation issues. Keywords: Shelves & Cart®, direct manipulation, e-commerce, multi-frame drag &
drop, usability | |||
| An environment for user interface softbots | | BIBAK | Full-Text | 119-122 | |
| Robert St. Amant; Ajay Dudani | |||
| A user interface softbot is a software agent that controls an interactive
system through the graphical user interface, relying on visual information from
the system rather than an application programming interface or access to source
code. Interface softbots have acted as autonomous agents in applications such
as drawing and data recording, and the core vision processing algorithms have
been incorporated into cognitive models for simple problem-solving tasks.
Building interface softbots is still a time-consuming task, unfortunately,
requiring experience with complex program components as well as the details of
the visual interface. We have developed a prototype development environment
that facilitates the development of interface softbots, streamlining the
programming process and making it more accessible to new developers. Keywords: agents, interface softbots, programming environments | |||
| A web-based annotation tool supporting e-learning | | BIBAK | Full-Text | 123-128 | |
| F. Bonifazi; S. Levialdi; P. Rizzo; R. Trinchese | |||
| A typical user, when learning, annotates text, figures and other contents,
so as to better highlight, memorize, and retrieve relevant information. A few
annotation programs exist but either change the contents of the document, or do
not support distance learning through the web. We report work-in-progress on a
user-centered annotation tool (UCAT) which allows students to annotate any
document belonging to the authorware of a course, following their personal
styles (using different icons, colors and signed versions). We have chosen
Amaya as the working environment since, belonging to the
World Wide Web Consortium (W3C), it complies with the semantic web
specifications on document formats, like RDF. An example of the deployment of
UCAT will be shown in the paper. Keywords: annotation, e-learning, www interfaces | |||
| Interactive visual tools for spatial multicriteria decision making | | BIBAK | Full-Text | 129-132 | |
| Gennady L. Andrienko; Natalia V. Andrienko | |||
| Spatial decision making is a complex cognitive process which requires
appropriate support by interactive maps and other computer graphics. We develop
tools to facilitate multicriteria evaluation of options by individuals as well
as tools for analysis of results of voting in group decision making. The
spatial distribution of options is represented on an interactive map, in
combination with an analysis of the multidimensional attribute characteristics
of decision options in statistical graphics. Keywords: interactive tools, multicriteria optimization, spatial decision support | |||
| Multimodal interface within a simulator of service robotic applications | | BIBAK | Full-Text | 133-137 | |
| Enzo Mumolo; Massimiliano Nolich; Gianni Vercelli | |||
| The use of multimodal interfaces for human-robot communication is an open
issue, with a real market in service robotic applications. This paper describes
a service robotic simulator which uses a multimodal interface (based on
audio-video and speech-prosodic communication) to demonstrate the usefulness of
integrating many "modes" of interaction. A typical service robotic architecture
is multi-layered: cognitive, deliberative and reactive layers are the terms
most often used in the literature. The design of each level requires an accurate
testing methodology; system development is thus accelerated and simplified
by using a simulator. The paradigm we consider consists of robots and
human operators continuously linked to a supervising centre. The operator's
natural mode of communication is dialogue-based: the system we propose
therefore uses a prosody-based visuo-dialogical interface with the robot by
means of keywords and prosody extracted from spoken commands. The simulator is
based on this paradigm and allows interaction by means of conversational and
visual interfaces. The former is composed of speech recognition, prosody
detection, keyword extraction and text-to-speech conversion; the latter of a visual interface
and VRML 3D visualization. Keywords: multimodal interface, prosody, robotic simulation, service robotics, vocal
dialogue | |||
| OZONE: a zoomable interface for navigating ontology information | | BIBAK | Full-Text | 139-143 | |
| Bongwon Suh; Benjamin B. Bederson | |||
| We present OZONE (Zoomable Ontology Navigator) for searching and browsing
ontological information. OZONE visualizes query conditions and provides
interactive, guided browsing for DAML (DARPA Agent Markup Language) ontologies
on the Web. To visually represent objects in DAML, we define a visual model for
its classes, properties and relationships between them. Properties can be
expanded into classes for query refinement. The visual query can be formulated
incrementally as users explore class and property structures interactively.
Zoomable interface techniques are employed for effective navigation and
usability. Keywords: DAML, browsing, jazz, ontology, www, zoomable user interface (ZUI) | |||
| Extending the metaphor GIS query language and environment to 3D domains | | BIBAK | Full-Text | 144-147 | |
| G. Tortora; L. Paolino; M. Sebillo; G. Vitiello; F. Pittarello | |||
| The aim of our research is to provide GIS users with a visual environment
where they can formulate spatial queries which implicitly capture the double
nature of geographical data. In particular, in this paper we propose an
extension to the MGISQL visual environment, where users may pose 3D queries
about those phenomena where the third dimension is a relevant feature for data
retrieval. The interaction between users and the visual environment is
performed by manipulating 3D geometaphors. The underlying algebra for spatial
operators is enriched accordingly. Visual queries are composed in a 3D
environment, called the Sensitive Cube, characterized by the 3D geometaphors,
visualized as 'floating objects'.
A prototype of the 3D MGISQL visual environment has been realized, which allows users to query an archaeological-geographical database, whose experimental data refer to a site located around the city of Salerno. Keywords: 3D manipulation, archaeological databases, geographical information systems,
visual environment, visual query languages | |||
| A method for the perceptual optimization of complex visualizations | | BIBAK | Full-Text | 148-155 | |
| Donald House; Colin Ware | |||
| A common problem in visualization applications is the display of one surface
overlying another. Unfortunately, it is extremely difficult to do this clearly
and effectively. Stereoscopic viewing can help, but in order for us to be able
to see both surfaces simultaneously, they must be textured, and the top surface
must be made partially transparent. There is also abundant evidence that all
textures are not equal in helping to reveal surface shape, but there are no
general guidelines describing the best set of textures to be used in this way.
What makes the problem difficult to perceptually optimize is that there are a
great many variables involved. Both foreground and background textures must be
specified in terms of their component colors, texture element shapes,
distributions, and sizes. Also to be specified is the degree of transparency
for the foreground texture components. Here we report on a novel approach to
creating perceptually optimal solutions to complex visualization problems and
we apply it to the overlapping surface problem as a test case. Our approach is
a three-stage process. In the first stage we create a parameterized method for
specifying a foreground and background pair of textures. In the second stage a
genetic algorithm is applied to a population of texture pairs using subject
judgments as a selection criterion. Over many trials effective texture pairs
evolve. The third stage involves characterizing and generalizing the examples
of effective textures. We detail this process and present some early results. Keywords: genetic algorithms, layered surface visualization, stereoscopic viewing | |||
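As a rough illustration of the second stage described in the entry above (a genetic algorithm whose selection criterion is human judgment), the sketch below evolves texture-pair parameter vectors. The parameter encoding, population size, mutation rate, and the judge() placeholder standing in for the human subject are assumptions for illustration, not details taken from the paper.

```python
# Sketch of interactive evolution of texture-pair parameters.
# Hypothetical encoding: each genome is a flat vector of texture parameters
# (colors, element sizes, transparency) in [0, 1]. judge() stands in for the
# human subject who picks the better-looking texture pair.
import random

GENOME_LEN = 12      # assumed number of texture parameters
POP_SIZE = 16
MUTATION_STD = 0.05

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome):
    return [min(1.0, max(0.0, g + random.gauss(0.0, MUTATION_STD))) for g in genome]

def judge(a, b):
    """Placeholder for the human judgment; here simply a random preference."""
    return a if random.random() < 0.5 else b

def evolve(generations=10):
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        survivors = []
        random.shuffle(population)
        for a, b in zip(population[::2], population[1::2]):
            survivors.append(judge(a, b))          # tournament decided by the viewer
        offspring = [mutate(crossover(random.choice(survivors),
                                      random.choice(survivors)))
                     for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring
    return population

print(len(evolve()))
```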
| Drawing graphs with non-uniform vertices | | BIBAK | Full-Text | 157-166 | |
| David Harel; Yehuda Koren | |||
| The vertices of most graphs that appear in real applications are
non-uniform. They can be circles, ellipses, rectangles, or other geometric
elements of varying shapes and sizes. Unfortunately, current force directed
methods for laying out graphs are suitable mostly for graphs whose vertices are
zero-sized and dimensionless points. It turns out that naively extending these
methods to handle non-uniform vertices results in serious deficiencies in terms
of output quality and performance. In this paper we try to remedy this
situation by identifying the special characteristics and problematics of such
graphs and offering several algorithms for tackling them. The algorithms can be
viewed as carefully constructed extensions of force-directed methods, and their
output quality and performance are similar. Keywords: force directed optimization, graph drawing, vertex overlaps, visualization | |||
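A minimal sketch of the general idea behind the entry above, not the paper's specific algorithms: in a force-directed step, the repulsion magnitude can be driven by the gap between node rectangles rather than by the distance between dimensionless centers, so larger vertices push their neighbours further apart. The node sizes, constants, and step size here are illustrative assumptions.

```python
# Sketch: repulsion whose magnitude depends on the boundary gap between
# rectangular vertices; direction still follows the center offset.
import math

def rect_gap(a, b):
    """Approximate gap between two axis-aligned rectangles (0 if they overlap)."""
    dx = max(abs(a["x"] - b["x"]) - (a["w"] + b["w"]) / 2, 0.0)
    dy = max(abs(a["y"] - b["y"]) - (a["h"] + b["h"]) / 2, 0.0)
    return math.hypot(dx, dy)

def repulsive_step(nodes, k=100.0, min_gap=1.0):
    for a in nodes:
        fx = fy = 0.0
        for b in nodes:
            if a is b:
                continue
            gap = max(rect_gap(a, b), min_gap)   # clamp to avoid division by zero
            dx, dy = a["x"] - b["x"], a["y"] - b["y"]
            dist = math.hypot(dx, dy) or min_gap
            force = k / (gap * gap)              # big or overlapping nodes push harder
            fx += force * dx / dist
            fy += force * dy / dist
        a["x"] += 0.01 * fx
        a["y"] += 0.01 * fy

nodes = [{"x": 0, "y": 0, "w": 40, "h": 20},
         {"x": 10, "y": 5, "w": 10, "h": 10}]
repulsive_step(nodes)
print(nodes)
```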
| by chance: enhancing interaction with large data sets through statistical sampling | | BIBAK | Full-Text | 167-176 | |
| Alan Dix; Geoff Ellis | |||
| The use of random algorithms in many areas of computer science has enabled
the solution of otherwise intractable problems. In this paper we propose that
random sampling can make the visualisation of large datasets both more
computationally efficient and more perceptually effective. We review the
explicit uses of randomness and the related deterministic techniques in the
visualisation literature. We then discuss how sampling can augment existing
systems. Furthermore, we demonstrate a novel 2D zooming interface -- the Astral
Telescope Visualiser, a visualisation suggested and enabled by sampling. We
conclude by considering some general usability and technical issues raised by
sampling-based visualisation. Keywords: astral telescope visualiser, random sampling, sampling from databases, very
large data sets, visualisation | |||
| Supporting co-evolution of users and systems by the recognition of interaction patterns | | BIBAK | Full-Text | 177-186 | |
| Stefano Arondi; Pietro Baroni; Daniela Fogli; Piero Mussio | |||
| This paper presents an approach to support the designer of Visual
Interactive Systems (VISs) in adapting a VIS to the evolution of its users.
This process is called co-evolution of users and systems. The approach is based
on the identification of the patterns of interaction between the user and an
interactive system and on their use for the evolution of the system to
facilitate novel usages introduced by the user. The approach is focused on WIMP
systems and is based on the recently introduced PCL (Pictorial Computing
Laboratory) model of interaction, within which we provide a novel definition of
interaction pattern. The proposal assumes that the VIS is observed by an
external system called SIC (Supporting Interaction Co-evolution), which is in
charge of recording the interactions between the user and the VIS and of
analyzing the relevant interaction patterns. In particular, SIC exploits a
UML-based statechart specification of the VIS in order to associate observed
user activities with the states of the interactive process. This information
provides a useful basis for a variety of pattern recognition techniques. Two
techniques called usual state and recurrent sequence recognition are
illustrated and the results of a first experiment are discussed. Keywords: co-evolution, system observation, visual interface design, visual sentence | |||
| Group-based interface for content-based image retrieval | | BIBAK | Full-Text | 187-194 | |
| Munehiro Nakazato; Ljubomir Manola; Thomas S. Huang | |||
| In Content-based Image Retrieval (CBIR) systems, the Query-by-Example (QBE)
approach is commonly used. However, because of inevitable "semantic gaps"
between visual features and the user's concepts, trial-and-error query is
essential for successful retrieval. Unfortunately, traditional user interfaces
are not suitable for trying different combinations of query examples. This is
because in these systems, query specification and result display are done on
the same workspace. Once the user removes an image from the query examples, the
image may disappear from the user interface. In addition, it is difficult to
combine the result of different queries.
In this paper, we propose a new interface for content-based image retrieval. In our system, the users can interactively compare different combinations of query examples by dragging and grouping images on the workspace (Query-by-Group). Because the query results are displayed on another pane, the user can quickly review the results. Combining different queries is also easy. Furthermore, the concept of "image groups" is also applied to annotating and organizing a large number of images. Because the gestural operations of our system are similar to file operations in modern window-based operating systems, users can easily learn to use the system. Keywords: content-based image retrieval, digital photography, image database,
information retrieval | |||
| Real-time human motion analysis for human-machine interface | | BIBAK | Full-Text | 195-202 | |
| Rin-ichiro Taniguchi; Satoshi Yonemoto; Daisaku Arita | |||
| This paper presents real-time human motion analysis for human-machine
interface. In general, a 'smart' man-machine interface requires a real-time
human motion capturing system without special devices or markers. Although
vision-based human motion capturing systems do not use such special devices and
markers, they are essentially unstable and can only acquire partial information
because of self-occlusion. When we analyze full-body motion, the problem
becomes even more severe. Therefore, we have to introduce a robust pose estimation
strategy to deal with relatively poor results of image analysis. To solve this
problem, we have developed a method to estimate full-body human postures, where
an initial estimate is acquired by real-time inverse kinematics and, based on
this estimate, a more accurate estimate is sought by referring to the
processed image. The key point is that our system can estimate full-body human
postures from limited perceptual cues such as the positions of the head, hands and
feet, which can be stably acquired by silhouette contour analysis. Keywords: human motion analysis, multiview image analysis, real-time vision,
vision-based interaction | |||
| Which interaction technique works when?: floating palettes, marking menus and toolglasses support different task strategies | | BIBAK | Full-Text | 203-208 | |
| Wendy E. Mackay | |||
| We conducted an experiment that compared three post-WIMP interaction
techniques: floating palettes, marking menus and toolglasses, in a real-world
Coloured Petri-Net editor, CPN2000. We created six situations in which users
performed identical sets of actions with equally-complex nets, but with
different cognitive contexts. We found significant differences in performance
and preferences across interaction techniques. When a user is in a "copy"
context, floating palettes are more efficient. If the user is problem solving,
toolglasses or marking menus are preferred. No single interaction technique is
clearly superior: each has strengths in different contexts. Since a single
application must support different kinds of cognitive tasks, interaction
designers should consider integrating multiple interaction techniques, rather
than selecting only one. Keywords: cognitive context, coloured petri nets, floating palettes, interaction
techniques, marking menus, toolglasses | |||
| Patterns of eye gaze during rapid serial visual presentation | | BIBAK | Full-Text | 209-217 | |
| Oscar de Bruijn; Robert Spence | |||
| The technique of Rapid Serial Visual Presentation (RSVP), comparable with
the riffling of a book's pages to acquire an impression of its contents, has
considerable application potential, especially where display space is at a
premium. The design of RSVP applications, however, is not straightforward in
view of the many, and often conflicting, design decisions that must be taken.
Specifically, it is suspected that many of these decisions will impact on the
ability of users to effectively perceive the displayed content as far as
carrying out a task is concerned. This paper presents an exploratory study in
which we investigated the impact of a number of design decisions on users' eye
movements. Four RSVP modes were implemented that represent alternative design
decisions. Two of these modes were modeled after existing e-commerce
applications, and two have been the subject of our ongoing research for some
time. For each RSVP mode, a set of images was presented to two participants who
were required to respond to the appearance of a pre-viewed target image. In the
course of these presentations we recorded the participants' eye movements in
order to elicit information concerning potential perceptual difficulties. We
propose a novel graphical characterization of RSVP modes suited to correlating
them with recorded eye gaze patterns, offer an interpretation
of the experimental data, and provide a motivation for further research into
RSVP. Keywords: dynamic visual interfaces, rapid serial visual presentation, space-time
trade-off, visual information browsing, visual interface design | |||
| Two-handed drawing on augmented desk system | | BIBAK | Full-Text | 219-222 | |
| Xinlei Chen; Hideki Koike; Yasuto Nakanishi; Kenji Oka; Yoichi Sato | |||
| This paper describes a two-handed drawing tool developed on our augmented
desk system. Using our real-time finger tracking method, a user can draw and
manipulate objects interactively by his/her own finger/hand. Based on the
previous work on two-handed interaction, different roles are assigned to each
hand. The right hand is used to draw and to manipulate objects. Using gesture
recognition, primitive objects can be drawn by users' handwriting. On the other
hand, the left hand is used to manipulate menus and to assist the right hand.
By closing all left hand fingers, users can initiate the appearance of
structural radial menus around their left hands, and can select appropriate
items by using a left hand finger. The left hand is also used to assist in the
performance of drawing tasks, e.g., specifying the center of a circle or
top-left corner of a rectangle, or specifying the object to be copied. Keywords: augmented reality, computer vision, direct manipulation, finger/hand
recognition, gesture recognition, perceptive user interface, two-handed
interaction | |||
| Using 3D to visualise medical data | | BIBAK | Full-Text | 223-226 | |
| Monica Tavanti | |||
| Retrieving and managing medical information is a major problem for users,
due to the massive and heterogeneous production of available data. One way of
handling the complexity of this special information context could be to find
alternative ways of structuring medical data. In this paper a database
containing medical information about diseases is presented. The database
implements a new approach, using a clustered organization of the data, that
tries to provide consistency and homogeneity in the information so as to
effectively support physicians and experts in the medical field. The database
also implements a temporal and dynamic description of diseases, called the
"temporal disease simulator," which can easily be visualised through a friendly
three-dimensional interface. The paper summarises the specifications of the
database and illustrates the design process of the interface for the "temporal
disease simulator," describing its main features. Keywords: 3D models, cognitive artefacts, information visualisation, interface,
medical information | |||
| Virtual locomotion system for human-scale virtual environments | | BIBAK | Full-Text | 227-230 | |
| Laroussi Bouguila; Masahiro Ishii; Makoto Sato | |||
| This paper presents a new virtual locomotion interface based on
step-in-place action and a smart-turntable system. The interface provides a
turntable as a walking platform; users stand at its center, facing a large
screen, and perform life-like walking actions that steer their navigation
through the virtual environment. Steering actions are tracked seamlessly,
without attachments to the body, through a set of pressure sensors embedded
within the turntable and a computer vision system. Stepping in place is treated
as a gesture indicating the intention to move forward. Rotation about the
body's vertical axis is treated as a gesture changing the walking direction.
However, as large screens are usually limited in size and do not allow a
surrounding projection, a large turning action may leave users without visual
feedback, which considerably hampers the effectiveness of the walking
experience. To avoid this and keep users provided with sufficient visual
feedback at all times, the turntable passively and smoothly rotates in the
direction opposite to the user's turning. The rotation speed and acceleration
of the turntable are optimized so that users stay well balanced and easily
withstand the passive rotation. The interface is shown to be easy and simple to
use in virtual environments equipped with a large screen. Keywords: human-scale, surrounding projection, turntable, virtual environment, virtual
locomotion | |||
| Degree-of-interest trees: a component of an attention-reactive user interface | | BIBAK | Full-Text | 231-245 | |
| Stuart K. Card; David Nation | |||
| This paper proposes Degree-of-Interest trees. These trees use
degree-of-interest calculations and focus+context visualization methods,
together with bounding constraints, to fit within pre-established bounds. The
method is an instance of an emerging "attention-reactive" user interface whose
components are designed to snap together in bounded spaces. Keywords: DOI trees, attention-reactive user interfaces, degree-of-interest trees,
fisheye displays, focus+context, hierarchical display, information
visualization, tree | |||
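The degree-of-interest calculation behind the entry above is commonly formulated, following Furnas, as an a-priori importance term minus the distance from the current focus. The small sketch below shows that general form for a tree; the depth-based importance and the tree-distance function are illustrative assumptions rather than the paper's exact definitions.

```python
# Generic Furnas-style DOI for tree nodes: DOI(x) = API(x) - D(x, focus).
# The concrete API and distance definitions below are illustrative assumptions.

def path_to_root(node, parent):
    path = [node]
    while parent.get(node) is not None:
        node = parent[node]
        path.append(node)
    return path

def tree_distance(a, b, parent):
    pa, pb = path_to_root(a, parent), path_to_root(b, parent)
    common = len(set(pa) & set(pb))
    return (len(pa) - common) + (len(pb) - common)

def doi(node, focus, parent):
    api = -(len(path_to_root(node, parent)) - 1)   # shallower nodes matter more
    return api - tree_distance(node, focus, parent)

# Tiny example tree: root -> a, b; a -> a1.
parent = {"root": None, "a": "root", "b": "root", "a1": "a"}
for n in parent:
    print(n, doi(n, focus="a1", parent=parent))
```

Nodes whose DOI falls below a threshold can then be elided or compressed so the tree fits the pre-established bounds.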
| A visual data mining environment: metaqueries and association rules | | BIBAK | Full-Text | 247-250 | |
| Stephen Kimani; Tiziana Catarci; Giuseppe Santucci | |||
| There is a need for an overall framework that can support the entire
knowledge discovery process. Of special interest is the role of visualization
in such a framework. This paper focuses on the exploitation of various visual
strategies with a view to discovering knowledge through metarules and
association rules. Keywords: association rules, metaqueries, visual interaction, visualization | |||
| SQLi: towards an interface description language for relational databases | | BIBAK | Full-Text | 251-256 | |
| Hasan M. Jamil; Rong Zhou | |||
| The advent of the Internet and the increase in non-standard use of databases
demand support for interface design primitives for ad hoc and user-friendly
interface design. In this paper, we propose an interface description
sub-language, called the SQLi, as a non-intrusive extension of SQL to
facilitate ad hoc interface generation and data display with user preferred
overriding capabilities. In support of our proposed extension, and as a
demonstration of its functional capabilities, we introduce our system, the
SQL/IDL form generator, which can be used as an add-on to any relational system
as a query front end. We discuss the salient features of the tool and the
SQLi extension through an illustrative example and argue that our
proposed extension is adaptive enough to accommodate future changes and needs. Keywords: GUI, ad hoc interface management, declarative interface design | |||
| Visualization techniques for circular tabletop interfaces | | BIBAK | Full-Text | 257-265 | |
| Frédéric Vernier; Neal Lesh; Chia Shen | |||
| This paper presents visualization and layout schemes developed for a novel
circular user interface designed for a round, tabletop display. Since all the
displayed items are in a polar coordinate system, many interface and
visualization schemes must be revisited to account for this new layout of UI
elements. We discuss the direct implications of such a circular interface on
document orientation. We describe two types of fisheye deformation of the
circular layout and explain how to use them in a multi-person collaborative
interface. These two schemes provide a general layout framework for circular
interfaces. We have also designed a new visualization technique derived from
the particularities of the circular layout we have highlighted. In this
technique the user controls the layout of the elements of a hierarchical tree.
Our approach is to provide the user with rich interaction possibilities to easily
and quickly produce a layout comparable to the hyperbolic view developed at
Xerox PARC. The visualization work presented in this paper is part of our
ongoing Personal Digital Historian (PDH) research project. The overall goal of
PDH is to investigate ways to effectively and intuitively organize, navigate,
browse, present and visualize digital data in an interactive multi-person
conversational setting. Keywords: circular interface, collaborative interface, fisheye view, tabletop, tree
visualization | |||
| Scope: providing awareness of multiple notifications at a glance | | BIBAK | Full-Text | 267-281 | |
| Maarten van Dantzich; Daniel Robbins; Eric Horvitz; Mary Czerwinski | |||
| We describe the design and functionality of the Scope, a glanceable
notification summarizer. The Scope is an information visualization designed to
unify notifications and minimize distractions. It allows users to remain aware
of notifications from multiple sources of information, including e-mail,
instant messaging, information alerts, and appointments. The design employs a
circular radar-like screen divided into sectors that group different kinds of
notifications. The more urgent a notification is, the more centrally it is
placed. Visual emphasis and annotation are used to reveal important properties
of notifications. Several natural gestures allow users to zoom in on particular
regions and to selectively drill down on items. We present key aspects of the
Scope design, review the results of an initial user study, and describe the
motivation and outcome of an iteration on the visual design. Keywords: alerting and notification systems, awareness, information visualization,
interruptions, notifications, peripheral displays | |||
| Mixing icons, geometric shapes and temporal axis to propose a visual tool for querying spatio-temporal databases | | BIBAK | Full-Text | 282-289 | |
| Christine Bonhomme; Marie-Aude Aufaure | |||
| This paper presents Lvis, a visual query language for Geographic Information
Systems (GIS) and for spatio-temporal databases. Visual queries are specified
by means of a combination of icons. These icons are used to represent both
object types and operators. Geometric shapes are used to represent spatial
objects and the relations among them; balloons and a temporal axis are used to
represent temporal criteria. A visual approach has been chosen because it
offers numerous advantages for the representation of spatio-temporal queries.
Visual representations are in fact well-suited since they make it easy to
express the spatial nature of a query, and several research works dealing with
this issue have been proposed in the last ten years. Moreover, visual querying
is a friendly and simple querying mode, which is why it is well-adapted to
novice users. The paper introduces the spatio-temporal model of the language.
It gives some examples of queries to explain how geometric shapes, icons and
temporal axis are combined. Finally, it discusses the main issues tied to the
visual, psycho-cognitive and spatio-temporal considerations. Keywords: geographic information systems, metaphors, spatio-temporal data, visual
languages | |||
| Navigating Giga-Graphs | | BIBAK | Full-Text | 290-299 | |
| James Abello; Jeffrey Korn; Matthias Kreuseler | |||
| An effective way to process a graph that does not fit in RAM is to build a
hierarchical partition of its set of vertices. This hierarchy induces a
partition of the graph edge set. We use this partition to produce a macro view
of the graph. A screen embedding of this macro view is a Graph Sketch. We
describe the use of Rectangular FishEye Views to provide drill-down navigation
of graph sketches at different levels of detail, including the graph edge data.
A higher level of detail of a sketch focus area is obtained by distorting the
lower detail context. Alternative visual representations can be used at
different sketch hierarchy levels. We provide two sketch screen embeddings. One
is tree-map based and the other is obtained by a special sequence of graph edge
contractions. We demonstrate the application of our current Unix/Windows
prototype to telecommunication graphs with edge sets ranging from 100 million
to 1 billion edges (Giga-Graphs). To our knowledge this is the first time that
focus within context techniques have been used successfully for the navigation
of external memory graphs. Keywords: external memory algorithms, fisheye views, graph sketches, hierarchies,
massive data sets, visualization | |||
| Knowledge-supported graphical illustration of texts | | BIBAK | Full-Text | 300-307 | |
| K. Hartmann; S. Schlechtweg; R. Helbing; Th. Strothotte | |||
| We introduce a new method to automatically and dynamically illustrate
arbitrary texts from a predefined application domain. We demonstrate this
method with two experimental systems (Text Illustrator and Agi3le) which
are designed to illustrate anatomy textbooks.
Both systems exploit a symbolic representation of the content of structured geometric models. In addition, the approach taken by the Agi3le-system is based on an ontology providing a formal representation of important concepts within the application domain as well as a thesaurus containing alternative linguistic and visual realizations for entities within the formal domain representation. The presented method is text-driven, i.e., an automated analysis of the morphologic, syntactic and semantic structures of noun phrases reveals the key concepts of a text portion to be illustrated. The specific relevance of entities within the formal representation is determined by a spreading activation approach. This allows us to derive important parameters for a non-photorealistic rendering process: the selection of suitable geometric models, camera positions and presentation variables for individual geometric objects. Part-whole relations are considered to assign visual representations to elements of the formal domain representation. Presentation variables for objects in the 3D rendering are chosen to reflect the estimated relevance of their denotation. As a result, expressive non-photorealistic illustrations which are focussed on the key concepts of individually selected text passages are generated automatically. Finally, we present methods to integrate user interaction within both media, the text and the computer-generated illustration, in order to adjust the presentation to individual information seeking goals. Keywords: image-text coherence, non-photorealistic rendering, semantic networks,
spreading activation, text analysis, text illustration | |||
| New directions for the design of virtual reality interfaces to e-commerce sites | | BIBAK | Full-Text | 308-315 | |
| Luca Chittaro; Roberto Ranon | |||
| Virtual Reality (VR) interfaces to e-commerce sites have recently begun to
appear on the Internet, promising to make the e-shopping experience more
natural, attractive, and fun for customers. Unfortunately, switching to a
desktop VR design for an e-commerce site is not trivial and does not guarantee
at all that the interface will be effective. In this paper, we first briefly
discuss the potential advantages of these interfaces, stressing the need for a
better approach to their design. Then, we present the directions we are
following to build more usable and effective VR stores, i.e.: (i) reformulating
design guidelines from real-world stores in the VR context, (ii) exploiting VR
to create user empowerments that meet both customer and merchant needs, and
(iii) personalizing the VR store to better reflect customer's taste,
preferences, and interests. For each of the three directions, we illustrate and
discuss a detailed case study. Keywords: 3D interfaces, e-commerce, navigation aids, virtual reality | |||
| VIP: a visual approach to user authentication | | BIBAK | Full-Text | 316-323 | |
| Antonella De Angeli; Mike Coutts; Lynne Coventry; Graham I. Johnson; David Cameron; Martin H. Fischer | |||
| This paper addresses knowledge-based authentication systems in self-service
technology, presenting the design and evaluation of the Visual Identification
Protocol (VIP). The basic idea behind it is to use pictures instead of numbers
as a means for user authentication. Three different authentication systems
based on images and visual memory were designed and compared with the
traditional Personal Identification Number (PIN) approach in a longitudinal
study involving 61 users. The experiment addressed performance criteria and
subjective evaluation. The study and associated design exploration revealed
important knowledge about users, their attitudes towards and behaviour with
novel authentication approaches using images. VIP was found to provide a
promising and easy-to-use alternative to the PIN. The visual code is easier to
remember, preferred by users and potentially more secure than the numeric code.
Results also provided guidelines to help designers make the best use of the
natural power of visual memory in security solutions. Keywords: security, usability, user authentication, visual memory | |||
| The HuGS platform: a toolkit for interactive optimization | | BIBAK | Full-Text | 324-330 | |
| Gunnar W. Klau; Neal Lesh; Joe Marks; Michael Mitzenmacher; Guy T. Schafer | |||
| In this paper we develop a generalized approach to visualizing and
controlling an optimization process. Our framework, called Human-Guided Search,
actively involves people in the process of optimization. We provide simple and
general visual metaphors that allow users to focus and constrain the
exploration of the search space. We demonstrate that these metaphors apply to a
wide variety of problems and optimization algorithms. Our software toolkit
supports rapid development of human-guided search systems.
Our approach addresses many often-neglected aspects of optimization that are critical to providing people with practical solutions to their optimization problems. Users need to understand and trust the generated solutions in order to effectively implement, justify, and modify them. Furthermore, it is often impossible for users to specify, in advance, all appropriate constraints and selection criteria for their problem. Thus, automatic methods can only find solutions that are optimal with regard to an invariably over-simplified problem description. In contrast, human-in-the-loop optimization allows people to find and better understand solutions that reflect their knowledge of real-world constraints. Finally, interactive optimization leverages people's abilities in areas in which humans currently outperform computers, such as visual perception, learning from experience, and strategic assessment. Given a good visualization of the problem, people can employ these skills to direct a computer search into the more promising regions of the search space. The software we describe is written in Java and is available under a free research license for research or educational purposes. Keywords: human-computer interaction, optimization, search | |||
| Biological storytelling: a software tool for biological information organization based upon narrative structure | | BIBAK | Full-Text | 331-341 | |
| Allan Kuchinsky; Kathy Graham; David Moh; Annette Adler; Ketan Babaria; Michael L. Creech | |||
| The main task of molecular biologists seeking to understand the molecular
basis of disease is identifying and interpreting the relationships of genes,
proteins, and pathways in living organisms. While emerging technologies have
provided powerful analysis tools to this end, they have also produced an
explosion of data, which biologists need to make sense of. We have built
software tools to support the synthesis activities of molecular biologists, in
particular the activities of organizing, retrieving, using, sharing, and
reusing diverse biological information. A key aspect of our approach, based
upon the findings of user studies, is the use of narrative structure as a
conceptual framework for developing and representing the "story" of how genes,
proteins, and other molecules interact in biological processes. Biological
stories are represented both textually and graphically within a simple
conceptual model of items, collections, and stories. Keywords: annotation, bioinformatics, computer-supported cooperative work, information
visualization | |||
| Elucidate: employing information visualisation to aid pedagogy for students | | BIBAK | Full-Text | 343-344 | |
| Andrew Hunter; Christopher Exton | |||
| Understanding the intricacies behind concurrency within object-oriented
programming languages has always been a challenge for undergraduate students.
While the lecture is a relatively passive learning experience for the student,
the use of software visualisation offers the chance to examine the concepts
covered in the lecture in an interactive, visual environment. Students can add
further dimensions and greater depth to their understanding previously hindered
by the pedagogy of this passive environment. Elucidate makes use of the JDI
architecture in the Java language to create its own environment that allows
students to execute any program within it. Elucidate utilises several
information workspaces, each presenting a different perspective about the
information, thus facilitating a student's ability to employ it in a manner that
best allows them to construct their own understanding. Students are able to
navigate around multiple views, and through various levels of abstraction,
revealing the inner workings and sequence of events in what would otherwise be
a black-box program. Keywords: concurrency, tools, visualisation | |||
| Modeling biological reactivity: statecharts vs. Boolean logic | | BIBAK | Full-Text | 345-353 | |
| Naaman Kam; Irun R. Cohen; David Harel | |||
| Remarkable progress in various fields of biology is leading in the direction
of a complete map of the building blocks of biological systems. There is broad
agreement among researchers that 21st century biology will focus on attempting
to understand how component parts collaborate to create a whole. It is also
well agreed that this transition of biology from identifying the building
blocks (analysis) to integrating the parts into a whole (synthesis) should rely
on the language of mathematics. In a recent publication, we described the
results of a first attempt at confronting the above challenge using the visual
formalism of statecharts. We presented a detailed model for T cell activation
using statecharts within the general framework of object-oriented modeling. In
this work, we compare the statechart-based modeling approach to a Boolean
formalism presented by Thomas & D'Ari. This comparison was done by taking a
model for T cell activation and anergy, which was constructed by Kaufman et al.
using such a Boolean formalism, and translating it into the language of
statecharts. Comparing these two representations of the same phenomena allows
us to assess the advantages and disadvantages of each modeling approach. We
believe that the results of this work, together with the results of our
previous modeling work on T cell activation, should encourage the use of visual
formalisms such as statecharts for modeling complex biological systems.
A full version of this paper appeared in the proceedings of the Second International Conference on Systems Biology, Pasadena, CA, USA, 2001 [9]. Keywords: immunology, object oriented modeling, statecharts | |||
| Assessment of cost/benefit of interfaces evaluation techniques | | BIBAK | Full-Text | 355-356 | |
| Eliane Regina de Almeida Valiati; Marcelo Soares Pimenta | |||
| The literature presents a considerable number of techniques that may be used
in the process of interface evaluation. Each technique has its own features,
involves the employment of different resources, and yields distinct results
depending on the way it is conducted. The present article aims
at assessing, by means of a set of experiments, the costs and benefits found
specifically in the application of three techniques: heuristic evaluation, user
tests, and recommendation conformity inspection. Keywords: cost/benefit, evaluation techniques, usability | |||
| Making agents gaze naturally -- does it work? | | BIBAK | Full-Text | 357-358 | |
| Ivo van Es; Dirk Heylen; Betsy van Dijk; Anton Nijholt | |||
| We investigated the effects of varying eye gaze behavior of an embodied
conversational agent on the quality of human-agent dialogues. In an experiment
we compared three versions of an agent: one with gaze behavior that is
typically found to occur in human-human dialogues, one with gaze that is fixed
most of the time, and a third version with random gaze behavior. The versions
were found to yield significant differences in the efficiency of the dialogues
and in user satisfaction, among other measures. Keywords: conversational agents, gaze, non-verbal communication | |||
| A visual query language for large spatial databases | | BIBAK | Full-Text | 359-360 | |
| Andrew J. Morris; Alia I. Abdelmoty; Baher A. El-Geresy | |||
| In this paper a visual approach to querying in large spatial databases is
presented. A diagrammatic technique utilising a data flow metaphor is used to
express different kinds of spatial and non-spatial constraints. Basic filters
are designed to represent the various types of queries in such systems. Icons
for different types of spatial relations are used to denote the filters.
Different granularities of the relations are presented in a hierarchical
fashion when selecting the spatial constraints. Spatial joins and composite
spatial and non-spatial constraints are represented consistently in the
language. Keywords: spatial databases, visual query languages | |||
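As a rough illustration of the data-flow metaphor the abstract describes, and not the authors' query language, spatial and non-spatial constraints can be thought of as filter nodes chained over a stream of features; all types and names below are hypothetical.

```java
// Illustrative sketch only: filters as predicates chained in a data-flow style.
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class SpatialFilterSketch {

    // A hypothetical spatial feature with a name, a type, and a location.
    record Feature(String name, String type, double x, double y) {}

    // A non-spatial "filter" node in the data-flow metaphor.
    static Predicate<Feature> isType(String type) {
        return f -> f.type().equals(type);
    }

    // A coarse spatial relation; finer granularities (touches, overlaps,
    // contains, ...) could be offered hierarchically, as the abstract describes.
    static Predicate<Feature> near(double x, double y, double radius) {
        return f -> Math.hypot(f.x() - x, f.y() - y) <= radius;
    }

    public static void main(String[] args) {
        List<Feature> features = List.of(
                new Feature("Central Park", "park", 0.0, 0.0),
                new Feature("City Hospital", "hospital", 1.0, 1.0),
                new Feature("Airport", "airport", 50.0, 40.0));

        // Query: hospitals near the point (0, 0) -- two filters chained as in
        // a data-flow diagram.
        List<Feature> result = features.stream()
                .filter(isType("hospital").and(near(0.0, 0.0, 5.0)))
                .collect(Collectors.toList());

        result.forEach(f -> System.out.println(f.name())); // City Hospital
    }
}
```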
| Working together -- a VR based approach for cooperative digital design review | | BIBAK | Full-Text | 361-362 | |
| Christian Knöpfle | |||
| In this paper we will present an approach to supporting collaborative work
between several experts in a design review based on CAD data. The design review
is an important part of the design process of a new product, because its
objective is to ensure the quality of the final product. During a design review
several experts discuss the current state of the work, trying to locate errors
in the design, develop improvements or find solutions for unsolved problems. It
is evident that the quality of the achieved results of such a review is
directly linked to the quality of the collaboration between the involved
experts. Apart from social issues, efficient collaboration is heavily
influenced by technical aspects, e.g. which kind of media is used. Since purely
digital representations of the design are commonplace today, new metaphors
have to be developed which allow a group of people to work together
simultaneously and intuitively on the same data model.
In this paper we will introduce a virtual-reality-based approach to collaborative work in a design review scenario. Our work is based on a pen-and-paper paradigm, which allows people to sketch and draw their ideas, as well as annotations, on the digital model in the same way they would with a pen and a piece of paper. Furthermore, we will show how private spaces can be realized and how the metaphor can be integrated into an overall framework usable for digital design reviews. Keywords: VR, collaboration, design review, interaction, user interface | |||
| A visual interface for multi-person exploration of personal databases | | BIBAK | Full-Text | 363-364 | |
| Chia Shen; Frederic Vernier; Neal Lesh | |||
| The Personal Digital Historian (PDH) is an ongoing research project aimed at
allowing small, co-present groups of people to casually browse, embellish, and
explore large collections of their personal data, such as pictures, video, or
more business-related items such as spreadsheets or PowerPoint slides. In this
interactive poster, we demonstrate our initial prototype system which is
designed for a tabletop display. The interface allows people to organize their
images along the four questions essential to storytelling: who?, when?, where?,
and what? Users are provided with a wide variety of flexible interaction
methods, including region of interest query specification with in-place
freeform stroke input, image-based book marking, suggestion generation via
automatic query relaxation, and output summarization. With this interface, the
users can enjoy their conversation while having the photos at their fingertips,
rather than being distracted by the effort of formulating queries. Keywords: multi-person interactive visual interface, tabletop display | |||
| An integrated approach to database visualization | | BIBAK | Full-Text | 365-366 | |
| Dennis P. Groth; Edward L. Robertson | |||
| We present an architecture that enables information visualization activities
within a database environment. Our approach abstracts the process of
transforming data into visual form, which we call mapping. The implementation of the
mapping process is controlled by the end user through a Map, which can be used
to add order and scale to data. Keywords: architectures, database visualization, information interfaces and
presentation | |||
| Visual interaction design for tools to think with: interactive systems for designing linear information | | BIBAK | Full-Text | 367-371 | |
| Yasuhiro Yamamoto; Kumiyo Nakakoji; Atsushi Aoki | |||
| We have developed a series of tools that use spatial positioning of
objects as a means of externalization in designing linear information. They
include a tool for collage-style writing, a tool for notes summarization, a
tool for multimedia data analysis, and a tool for movie editing. With these
tools, linear information design is viewed as a concurrent process of framing
parts and determining the order of the parts. In designing these tools, visual
interaction design has been central to our project. Our design priority has
been to minimize the user's cognitive load in creating and modifying
parts in a space, and in manipulating them within the space. This paper first presents
a brief account of the philosophy underlying the system development, and then describes interaction
techniques used in the tools, such as how a user distinguishes objects
positioned in the space, how a user resizes the space by dragging objects
toward one of its edges, and how a user sees a trajectory of the objects the
user is moving in the space. Keywords: cognitive models, external representations, spatial positioning, the ART
(amplifying representational talkback) principle, visual interaction design | |||
| On evaluating information visualization techniques | | BIBAK | Full-Text | 373-374 | |
| Carla M. D. S. Freitas; Paulo R. G. Luzzardi; Ricardo A. Cava; Marco Winckler; Marcelo S. Pimenta; Luciana P. Nedel | |||
| User interface evaluation is usually carried out to detect design problems
in layout and interaction. One possible way to evaluate image quality in
computer graphics is visual inspection by experts. Information visualization
techniques are usually presented by showing their use in experimental situations,
accompanied by some kind of analysis. Nevertheless, few works have specifically
addressed the evaluation of such techniques. This work reports our results
towards the definition of criteria for evaluating information visualization
techniques, addressing the evaluation of visual representation and interaction
mechanisms as a first step. Keywords: evaluation criteria, information visualization techniques | |||
| Expressiveness of the data flow and data state models in visualization systems | | BIBA | Full-Text | 375-378 | |
| Ed H. Chi | |||
| Visualization can be viewed as a process that transforms raw data (value) into views. Two major categories of data process models have been proposed to model this visualization transformation process. This paper seeks to compare the Data Flow Model and the Data State Model. Specifically, it proves that, in terms of expressiveness, anything that can be represented using the Data Flow Model can also be represented using the Data State Model, and vice versa. | |||
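As a hedged illustration of the distinction the paper draws, and not code from the paper, the same trivial transformation from raw values to a textual "view" can be phrased in both styles; the intermediate-stage names below follow a common reading of the Data State Model and are assumptions of this sketch.

```java
// Illustrative sketch only: the same tiny pipeline phrased in a data-flow style
// (a chain of composed operators) and in a data-state style (named intermediate
// states with transformation operators between them).
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class PipelineSketch {

    // Data Flow flavour: operators are composed; data "flows" through them.
    static final Function<List<Double>, List<Double>> filterNonNegative =
            values -> values.stream().filter(v -> v >= 0).collect(Collectors.toList());
    static final Function<List<Double>, String> renderView =
            filtered -> "bar chart of " + filtered.size() + " values";
    static final Function<List<Double>, String> dataFlowPipeline =
            filterNonNegative.andThen(renderView);

    // Data State flavour: each intermediate state is named and held explicitly,
    // and operators transform one state into the next.
    static String dataStatePipeline(List<Double> rawValues) {
        List<Double> analyticalAbstraction = rawValues.stream()
                .filter(v -> v >= 0)
                .collect(Collectors.toList());                      // data transformation
        int visualizationAbstraction = analyticalAbstraction.size(); // visualization transformation
        return "bar chart of " + visualizationAbstraction + " values"; // mapping to a view
    }

    public static void main(String[] args) {
        List<Double> raw = List.of(3.0, -1.0, 2.5);
        System.out.println(dataFlowPipeline.apply(raw)); // bar chart of 2 values
        System.out.println(dataStatePipeline(raw));      // bar chart of 2 values
    }
}
```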