| Productivity as a metric for visual analytics: reflections on e-discovery | | BIBAK | Full-Text | 1 | |
| Sean M. McNee; Ben Arnette | |||
| Because visual analytics is not used in a vacuum, there are no cut-and-dried
metrics that can accurately evaluate visual analytic tools. These tools are
used inside of existing business processes, thus metrics to evaluate these
tools must measure the productivity of information workers on the data-centric
critical path of these business processes. In this paper, we argue for
process-centric visual analytic metrics grounded in the concept of information
worker productivity. We place our discussion in the context of legal
e-discovery, the business process within which Attenex operates and within
which we have demonstrated that visual analytic tools can increase productivity
dramatically. After discussing how productivity metrics for visual analytics
helped e-discovery, we argue that they can help any data-intensive
business process and discuss both how to create these metrics and how to apply
them successfully. Keywords: Attenex, e-discovery, information visualization, metrics, process,
productivity, visual analytics | |||
| Increasing the utility of quantitative empirical studies for meta-analysis | | BIBAK | Full-Text | 2 | |
| Heidi Lam; Tamara Munzner | |||
| Despite the long history and consistent use of quantitative empirical
methods to evaluate information visualization techniques and systems, our
understanding of interface use remains incomplete. While there are inherent
limitations to the method, such as the choice of task and data, we believe the
utility of study results can be enhanced if they are amenable to
meta-analysis. Based on our experience in extracting design guidelines from
existing quantitative studies, we recommend improvements to both study design
and reporting to promote meta-analysis: (1) Use comparable interfaces in terms
of visual elements, information content and amount displayed, levels of data
organization displayed, and interaction complexity; (2) Capture usage patterns
in addition to overall performance measurements to better identify design
tradeoffs; (3) Isolate and study interface factors instead of overall interface
performance; and (4) Report more study details, either within the publications,
or as supplementary materials. Keywords: information visualization evaluation, meta-analysis | |||
| Beyond time and error: a cognitive approach to the evaluation of graph drawings | | BIBAK | Full-Text | 3 | |
| Weidong Huang; Peter Eades; Seok-Hee Hong | |||
| Time and error are commonly used to measure the effectiveness of graph
drawings. However, such measures are limited in providing the more fundamental
knowledge that is useful for general visualization design. We therefore apply a
cognitive approach to evaluation, assessing graph drawings from a cognitive
perspective and measuring more than just time and error. Three user
studies are conducted to demonstrate the usefulness of this approach. Keywords: cognitive load, cognitive process, evaluation, eye tracking, graph drawing,
questionnaire, visualization, visualization efficiency | |||
| Understanding and characterizing insights: how do people gain insights using information visualization? | | BIBAK | Full-Text | 4 | |
| Ji Soo Yi; Youn-ah Kang; John T. Stasko; Julie A. Jacko | |||
| Even though "providing insight" has been considered one of the main purposes
of information visualization (InfoVis), we feel that insight remains a
poorly understood concept in this context. Inspired by research in
sensemaking, we realized the importance of the procedural aspects in
understanding insight. Thus, rather than asking "What is insight?" we instead
focus on "How do people gain insights?" In an effort to better understand and
characterize insight, we reviewed previous literature in InfoVis, seeking other
researchers' comments and views on this concept. We found that: 1) Insights are
often regarded as end results of using InfoVis and the procedures to gain
insight have been largely veiled; 2) Four largely distinctive processes of
gaining insight (Provide Overview, Adjust, Detect Pattern, and Match Mental
Model) have been discussed in the InfoVis literature; and 3) These different
processes provide some hints for understanding the procedures by which insight
can be gained from InfoVis. We hope that our findings help researchers and
practitioners evaluate InfoVis systems and technologies in a more
insight-oriented way. Keywords: categorization, evaluation, information visualization, insight, sensemaking | |||
| Internalization, qualitative methods, and evaluation | | BIBAK | Full-Text | 5 | |
| Sarah Faisal; Brock Craft; Paul Cairns; Ann Blandford | |||
| Information Visualization (InfoVis) is at least in part defined by a process
that occurs within the subjective internal experience of the users of
visualization tools. Hence, users' interaction with these tools is seen as an
'experience'. Relying on standard quantitative usability measures evaluates only
the interface; yet there is more to users' interaction with InfoVis tools than
the interface alone. Qualitative methods target users' subjective
experiences. In this paper we demonstrate the potential benefits of qualitative
methods, more specifically Grounded Theory, for generating a theoretical
understanding of users' InfoVis experiences by discussing the results of a
qualitative study we conducted to evaluate a visualization of the academic
literature domain, which we designed and built using a user-centered design
approach. The study identified categories that are essential to the InfoVis
experience. This paper
argues that these categories can be used as a foundation for building an
InfoVis theory of interaction. Keywords: evaluation, information visualization, qualitative methods, usability | |||
| Grounded evaluation of information visualizations | | BIBAK | Full-Text | 6 | |
| Petra Isenberg; Torre Zuk; Christopher Collins; Sheelagh Carpendale | |||
| We introduce grounded evaluation as a process that attempts to ensure that
the evaluation of an information visualization tool is situated within the
context of its intended use. We discuss the process and scope of grounded
evaluation in general, and then describe how qualitative inquiry may be a
beneficial approach as part of this process. We advocate for increased
attention to the field of qualitative inquiry early in the information
visualization development life cycle, as qualitative inquiry seeks a richer
understanding through a more holistic approach that considers the interplay
among factors that influence visualizations, their development, and their
use. We present three case studies in which we successfully used observational
techniques to inform our understanding of the visual analytics process in
groups, medical diagnostic reasoning, and visualization use among computational
linguists. Keywords: evaluation, information visualization | |||
| Qualitative analysis of visualization: a building design field study | | BIBAK | Full-Text | 7 | |
| Melanie Tory; Sheryl Staub-French | |||
| We conducted an ethnographic field study examining the ways in which
building design teams used visual representations of data to coordinate their
work. Here we describe our experience with this field study approach, including
both quantitative and qualitative analysis of field study data. Conducting a
field study enabled us to effectively examine the real work practices of a diverse
team of experts, which would have been nearly impossible in a laboratory study.
We also found that structured qualitative analysis methods provided deeper
insight into our results than our initial quantitative approach. Our experience
suggests that field studies and qualitative analysis could have substantial
benefit in visualization and could nicely complement existing quantitative
laboratory studies. Keywords: ethnographic field study, evaluation, long-term study, qualitative analysis,
visualization | |||
| Creating realistic, scenario-based synthetic data for test and evaluation of information analytics software | | BIBAK | Full-Text | 8 | |
| Mark A. Whiting; Jereme Haack; Carrie Varley | |||
| We describe the Threat Stream Generator, a method and a toolset for creating
realistic, synthetic test data for information analytics applications. Finding
or creating useful test data sets is difficult for a team focused on creating
solutions to information analysis problems. First, real data that might be
considered good for testing analytic applications may not be available or may
be classified. In the latter case, tool builders will not have the clearances
needed to use, or even see, the data. Second, analysts' time is scarce and
obtaining the needed characteristics of real data from them to create a test
data set is difficult. Finally, generating good test data is challenging.
Commercial data generators are focused on large database testing, not
information analytics tool testing. Our distinctive contribution is that we
embed known ground truth in a test data set, so that tool developers and others
will be able to determine the effectiveness of their software and how they are
progressing in their support for information analysts. Our automated methods
also significantly decrease data set development time. We review our approach
to scenario development, threat insertion strategies, data set development, and
data set evaluation. We also discuss our recent successes in using our data in
open analytic competitions. Keywords: data generator, evaluation, information visualization, visual analytics | |||
| Using multi-dimensional in-depth long-term case studies for information visualization evaluation | | BIBAK | Full-Text | 9 | |
| Eliane R. A. Valiati; Carla M. D. S. Freitas; Marcelo S. Pimenta | |||
| Information visualization is meant to support the analysis and comprehension
of (often large) datasets through techniques intended to show/enhance features,
patterns, clusters, and trends that are not always visible even when using a
graphical representation. During the development of information visualization techniques,
the designer has to take into account the users' tasks to choose the graphical
metaphor as well as the interactive methods to be provided. Testing and
evaluating the usability of information visualization techniques is still an
open research question, and methodologies based on real or experimental users often
yield significant results. To be comprehensive, however, experiments with users
must rely on a set of tasks that covers the situations a real user will face
when using the visualization tool. The present work reports and discusses the
results of three case studies conducted as Multi-dimensional In-depth Long-term
Case studies (MILCs), carried out to investigate MILC-based
usability evaluation methods for visualization tools. Keywords: information visualization, taxonomy of tasks, usability evaluation | |||
| The long-term evaluation of Fisherman in a partial-attention environment | | BIBAK | Full-Text | 10 | |
| Xiaobin Shen; Andrew Vande Moere; Peter Eades; Seokhee Hong | |||
| Ambient display is a specific subfield of information visualization that
requires only partial visual and cognitive attention from its users. Conducting
an evaluation while drawing partial user attention is a challenging problem.
Many standard information visualization evaluation methods, which assume full
attention, may not suit the evaluation of ambient displays.
Inspired by concepts in the social and behavioral sciences, we categorize the
evaluation of ambient displays into two methodologies: intrusive and
non-intrusive. The major difference between these two approaches is the level
of user involvement, as an intrusive evaluation requires higher user
involvement than a non-intrusive one. Based on our long-term (5 months)
non-intrusive evaluation of Fisherman presented in [16], this paper provides a
detailed discussion of the actual technical and experimental setup for
unobtrusively measuring user gaze over a long period using a face-tracking
camera and IR sensors. In addition, this paper also demonstrates a solution to
the ethical problem of using video cameras to collect data in a semi-public
place. Finally, a quantitative measure of "interest" is also discussed, with
three accompanying remarks. Keywords: ambient displays, human computer interaction, information visualization,
intrusive evaluation | |||