| AVI and the art system: interactive works at the Venice Biennale | | BIBAK | Full-Text | 3-6 | |
| Riccardo Rabagliati | |||
| Interactive works of art with a digital basis are still confined to
specialized events; very few of them are yet represented in major
contemporary art museums and international art exhibitions. This paper analyses
some of the works presented in the Venice Biennale art exhibition. Keywords: audio-visual installation, digital art, interactive experience | |||
| Distributed intelligence: extending the power of the unaided, individual human mind | | BIBA | Full-Text | 7-14 | |
| Gerhard Fischer | |||
| The history of the human race is one of increasing intellectual capability.
Since the time of our early ancestors, our brains have gotten no bigger;
nevertheless, there has been a steady accretion of new tools for intellectual
work (including advanced visual interfaces) and an increasing distribution of
complex activities among many minds. Despite this transcendence of human
cognition beyond what is "inside" a person's head, most studies and frameworks
on cognition have disregarded the social, physical, and artifactual
surroundings in which cognition and human activity take place.
Distributed intelligence provides an effective theoretical framework for understanding what humans can achieve and how artifacts and tools can be designed and evaluated to empower human beings and to change tasks. This paper presents and discusses the conceptual frameworks and systems that we have developed over the last decade to create effective socio-technical environments supporting distributed intelligence. | |||
| From mainframes to picture frames: charting the rapid evolution of visual interfaces | | BIBA | Full-Text | 15 | |
| Elizabeth Mynatt | |||
| The past four decades are witness to tremendous change in the technical capabilities, industry techniques, and popular expectations underlying visual interaction between people and computers. Nevertheless we are on the cusp of a more encompassing revolution driven by new expectations of increasingly personal computing experiences. In this talk I will illustrate this nascent relationship between people and computation driven by the emergence of people as equally, and interchangeably, consumers and producers of the computing experience. Although buoyed by achievements in ubiquitous and pervasive computing technologies, this revolution is fundamentally about how a blossoming literacy in computing technologies empowers new forms of communication, reflection and decision making. | |||
| Bubble radar: efficient pen-based interaction | | BIBAK | Full-Text | 19-26 | |
| Dzmitry Aliakseyeu; Miguel A. Nacenta; Sriram Subramanian; Carl Gutwin | |||
| The rapid increase in display sizes and resolutions has led to the
re-emergence of many pen-based interaction systems like tabletop and wall
display environments. Pointing in these environments is an important task, but
techniques have not exploited the manipulation of control and display
parameters to the extent seen in desktop environments. We have overcome these
limitations in the design of a new pen-based interaction technique -- Bubble Radar. Bubble
Radar allows users to reach both specific targets and empty space, and supports
dynamic switching between selecting and placing. The technique is based on
combining the benefits of a successful pen-based pointing technique, the Radar
View, with a successful desktop object pointing technique -- the Bubble Cursor.
We tested the new technique in a user study and found that it was significantly
faster than existing techniques, both for overall pointing and for targeting
specific objects. Keywords: interaction techniques, large-display systems, multi-display systems, object
pointing, reaching | |||
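The Bubble Cursor ingredient named above works by letting each target's effective activation area expand into the surrounding empty space, so the nearest target is always the one selected. A minimal sketch of that nearest-target rule (the target names and coordinates are invented for illustration):

```python
import math

def bubble_pick(cursor, targets):
    """Return the target nearest to the cursor position.

    With a bubble cursor, every pixel of empty space maps to the
    closest target, so clicking never misses."""
    return min(targets, key=lambda t: math.dist(cursor, t["pos"]))

targets = [
    {"name": "a", "pos": (10, 10)},
    {"name": "b", "pos": (100, 40)},
    {"name": "c", "pos": (60, 90)},
]
print(bubble_pick((90, 50), targets)["name"])  # -> b
```

Bubble Radar combines this rule with a radar (overview) view, so the same no-miss selection works across a large pen-operated display.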
| Evaluating the effects of fluid interface components on tabletop collaboration | | BIBAK | Full-Text | 27-34 | |
| Uta Hinrichs; Sheelagh Carpendale; Stacey D. Scott | |||
| Tabletop displays provide exciting opportunities to support individual and
collaborative activities such as planning, organizing, and storyboarding. It
has been previously suggested that a continuous flow of interface items can ease
information access and exploration on a tabletop workspace, yet this concept
has not been adequately studied. This paper presents an exploratory user study
of Interface Currents, a reconfigurable and mobile tabletop interface component
that offers a controllable flow for interface items placed on its surface. Our
study shows that Interface Currents supported information access and sharing on
a tabletop workspace. The study findings also demonstrate that mobility,
flexibility, and general adjustability of Interface Currents are important
factors in providing interface support for variations in task and group
interactions. Keywords: computer supported collaborative work, interface evaluation, tabletop
displays, visual interface design | |||
| Improving interfaces for managing applications in multiple-device environments | | BIBAK | Full-Text | 35-42 | |
| Jacob T. Biehl; Brian P. Bailey | |||
| Productive collaboration in a multiple-device environment (MDE) requires an
effective interface for efficiently managing applications among devices. Though
many interfaces exist, there is little empirical understanding of how they
affect collaboration. This paper reports results from a user study comparing
how well three classes of interfaces -- textual, map, and iconic -- support
application management during realistic, collaborative activities in an MDE.
From empirical results, observations, and an analysis of how users interacted
with each interface, we produced a set of design lessons for improving
management interfaces. The lessons were demonstrated within the iconic
interface, but they are just as applicable to other interfaces. This work
contributes further understanding of how to design effective management
interfaces for MDEs. Keywords: collaboration, iconic interface, multi-device environment | |||
| Mixed reality: a model of mixed interaction | | BIBAK | Full-Text | 43-50 | |
| Céline Coutrix; Laurence Nigay | |||
| Mixed reality systems seek to smoothly link the physical and data processing
(digital) environments. Although mixed reality systems are becoming more
prevalent, we still do not have a clear understanding of this interaction
paradigm. Addressing this problem, this article introduces a new interaction
model called Mixed Interaction model. It adopts a unified point of view on
mixed reality systems by considering the interaction modalities and forms of
multimodality that are involved for defining mixed environments. This article
presents the model and its foundations. We then study its unifying and
descriptive power by comparing it with existing classification schemes. We
finally focus on the generative and evaluative power of the Mixed Interaction
model by applying it to design and compare alternative interaction techniques
in the context of RAZZLE, a mobile mixed reality game for which the goal of the
mobile player is to collect digital jigsaw pieces localized in space. Keywords: augmented reality-virtuality, instrumental model, interaction modality,
interaction model, mixed reality, multimodality | |||
| Programming rich interactions using the hierarchical state machine toolkit | | BIBAK | Full-Text | 51-58 | |
| Renaud Blanch; Michel Beaudouin-Lafon | |||
| Structured graphics models such as Scalable Vector Graphics (SVG) enable
designers to create visually rich graphics for user interfaces. Unfortunately
current programming tools make it difficult to implement advanced interaction
techniques for these interfaces. This paper presents the Hierarchical State
Machine Toolkit (HsmTk), a toolkit targeting the development of rich
interactions. The key aspect of the toolkit is to consider interactions as
first-class objects and to specify them with hierarchical state machines. This
approach makes the resulting behaviors self-contained, easy to reuse and easy
to modify. Interactions can be attached to graphical elements without knowing
their detailed structure, supporting the parallel refinement of the graphics
and the interaction. Keywords: advanced interaction techniques, hierarchical state machines, post-WIMP
interaction, scalable vector graphics, software architecture, structured
graphics | |||
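The core idea, interactions specified as hierarchical state machines so that behaviors are self-contained and reusable, can be sketched in a few lines. The following toy press-drag-release machine is not HsmTk's actual API; the state and event names are invented:

```python
class DragHSM:
    """Toy hierarchical state machine for a press-drag-release
    interaction. 'pressed' and 'dragging' are substates of an implicit
    composite 'engaged' state, so the release behavior is written once
    and shared by both."""

    def __init__(self):
        self.state = "idle"
        self.log = []

    def handle(self, event):
        if self.state == "idle" and event == "press":
            self.state = "pressed"
        elif self.state == "pressed" and event == "move":
            self.state = "dragging"
        elif self.state in ("pressed", "dragging") and event == "release":
            # Shared exit of the composite superstate: one handler
            # covers both substates.
            self.log.append("commit" if self.state == "dragging" else "click")
            self.state = "idle"

hsm = DragHSM()
for e in ["press", "move", "move", "release"]:
    hsm.handle(e)
print(hsm.state, hsm.log)  # idle ['commit']
```

Because the machine owns its whole behavior, it can be attached to any graphical element without that element knowing the interaction's internal states, which is the reuse property the abstract emphasizes.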
| Splitting rules for graceful degradation of user interfaces | | BIBAK | Full-Text | 59-66 | |
| Murielle Florins; Francisco Montero Simarro; Jean Vanderdonckt; Benjamin Michotte | |||
| This paper presents a series of new algorithms for paginating interaction
spaces (i.e., windows, dialog boxes, web pages, etc.) based on a multi-layer
specification in a user interface description language. We first describe how
an interaction space can be split using information from the presentation layer
(Concrete User Interface). We then demonstrate how information from higher
levels of abstraction (Abstract User Interface, Task model) can be used to
produce a pagination that is more meaningful from the task's viewpoint than
other techniques. The pagination relies on a set of explicit splitting rules
that can be applied as the first step in a graceful degradation. These
splitting rules are implemented as an interface builder plug-in which
automatically generates code under the designer's control. Keywords: design, graceful degradation, multiplatform systems, pagination, splitting
rules | |||
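As a rough illustration of pagination by splitting rules, a greedy splitter that keeps logical groups (e.g. those identified at the task-model level) on the same page might look like this. The group names, sizes, and the greedy rule are assumptions for illustration, not the paper's actual algorithm:

```python
def paginate(groups, capacity):
    """Greedy page split that never breaks a logical group.

    groups: list of (group_name, size) pairs taken from a higher
    abstraction level; capacity: available space per page/screen."""
    pages, current, used = [], [], 0
    for name, size in groups:
        # Start a new page when the next whole group would overflow.
        if current and used + size > capacity:
            pages.append(current)
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        pages.append(current)
    return pages

print(paginate([("login", 3), ("address", 4), ("payment", 4)], 8))
# -> [['login', 'address'], ['payment']]
```

Splitting on group boundaries rather than raw widget counts is what makes the resulting pagination "meaningful from the task's viewpoint" in the sense the abstract describes.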
| A taxonomy of ambient information systems: four patterns of design | | BIBAK | Full-Text | 67-74 | |
| Zachary Pousman; John Stasko | |||
| Researchers have explored the design of ambient information systems across a
wide range of physical and screen-based media. This work has yielded rich
examples of design approaches to the problem of presenting information about a
user's world in a way that is not distracting, but is aesthetically pleasing,
and tangible to varying degrees. Despite these successes, accumulating
theoretical and craft knowledge has been stymied by the lack of a unified
vocabulary to describe these systems and a consequent lack of a framework for
understanding their design attributes. We argue that this area would
significantly benefit from consensus about the design space of ambient
information systems and the design attributes that define and distinguish
existing approaches. We present a definition of ambient information systems and
a taxonomy across four design dimensions: Information Capacity, Notification
Level, Representational Fidelity, and Aesthetic Emphasis. Our analysis has
uncovered four patterns of system design and points to unexplored regions of
the design space, which may motivate future work in the field. Keywords: ambient display, design guidelines, notification system, peripheral display,
taxonomy, ubiquitous computing | |||
| An approach to remote direct pointing using gray-code | | BIBAK | Full-Text | 75-78 | |
| Makio Ishihara; Yukio Ishihara | |||
| In this study, we apply gray-code to a remote direct pointing system.
Gray-code is a method for automatic projection calibration. Gray-code binary
patterns are projected to discover the locations of objects within the
projector's perspective. In addition to this main feature, gray-code is capable
of identifying a location within the projector's perspective from wherever the
gray-code binary patterns can be seen. We take advantage of this feature of gray-code to
build a remote direct pointing system. We build a prototype of the system that
helps remote users draw directly onto remote objects. In the prototype, users
see remote objects through cameras and draw on the objects simply by
positioning the pointer on the images from the cameras. This property helps
remote users get involved in remote environments. We describe the design of the
prototype and also show an example of the prototype in use. The remote pen
enables remote users to draw directly onto a remote desk or note. Keywords: augmented reality, gray-code, monoscopic displays, remote direct pointing,
user interface | |||
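The location-identification property the abstract relies on is that of Gray-code structured light: project ceil(log2(W)) binary patterns, where pattern k shows bit k of the Gray code of each projector column, and any viewpoint that can read off the bit sequence at a pixel recovers that pixel's projector column. A minimal sketch of the encoding and decoding (camera capture, which the prototype of course needs, is omitted):

```python
import math

def gray(n):
    """Binary-reflected Gray code of n; adjacent codes differ in one bit,
    which makes decoding robust at column boundaries."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the Gray code by cascading XORs."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def patterns(width):
    """Pattern k shows, at projector column x, bit k of gray(x)."""
    n_bits = math.ceil(math.log2(width))
    return [[(gray(x) >> k) & 1 for x in range(width)] for k in range(n_bits)]

def decode(bits_seen):
    """Recover a column index from the bits a camera pixel observed,
    least-significant pattern first."""
    g = sum(b << k for k, b in enumerate(bits_seen))
    return gray_to_binary(g)

pats = patterns(1024)            # 10 patterns suffice for 1024 columns
seen = [p[700] for p in pats]    # bits an observer at column 700 would see
print(len(pats), decode(seen))   # 10 700
```

Running the same scheme over rows as well yields full 2D calibration, which is how the projected patterns "discover the locations of objects within the projector's perspective."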
| Catenaccio: interactive information retrieval system through drawing | | BIBAK | Full-Text | 79-82 | |
| Hiroaki Tobita | |||
| The Catenaccio system integrates information retrieval with sketch
manipulations. The system is designed especially for pen-based computing and
allows users to retrieve information by simple pen manipulations such as
drawing a picture. When a user draws a circle and writes a keyword, information
nodes related to the keyword are collected automatically inside the circle. In
addition, the user can create a Venn diagram by repeatedly drawing circles and
keywords to form more complex queries. Thus, the user can retrieve information
both interactively and visually without complex manipulations. Moreover, the
sketch interaction is so simple that it is possible to combine it with other
types of data such as images and real-world information for information
retrieval. In this paper, we describe our Catenaccio system and how it can be
effectively applied. Keywords: information retrieval, interactive system, sketch manipulations, venn
diagram | |||
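The Venn-diagram query model reduces to set operations: an item falling in the region covered by several circles must match all of their keywords. A small sketch, with item names and tags invented for illustration (the real system also handles the drawing geometry and node layout):

```python
def region_query(circle_keywords, item_tags):
    """Return the items whose tags cover every drawn keyword, i.e. the
    items the Venn intersection region would collect."""
    return [name for name, tags in item_tags.items()
            if set(circle_keywords) <= tags]

items = {
    "paper1": {"pen", "tabletop"},
    "paper2": {"pen"},
    "paper3": {"tabletop", "vision"},
}
print(region_query(["pen", "tabletop"], items))  # -> ['paper1']
```

Each additional circle the user draws simply adds one keyword to the conjunction, which is why progressively sketched circles build up "more complex queries" without any query-language syntax.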
| Flow selection: a time-based selection and operation technique for sketching tools | | BIBAK | Full-Text | 83-86 | |
| Gabe Johnson; Mark D. Gross; Ellen Yi-Luen Do | |||
| Flow selection is a time-based modeless selection and operation technique
for freehand drawing and sketch tools. We offer flow selection as a modeless
technique to address the observation that modal selection requires too much
cognitive effort and causes breakdowns in creative flow. Flow selection
provides input to a new class of operations by assigning increasing, fractional
selection strengths to objects over time. We discuss the current prototype
system and possible applications for this novel technique for interacting with
sketches. Keywords: flow selection, mode, modeless interaction, pen, sketch, stylus, time-based
selection | |||
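The "increasing, fractional selection strengths" can be pictured with a toy strength function that grows with dwell time and falls off with distance from the press point. The actual function used by flow selection is not given in the abstract, so the formula below is purely illustrative:

```python
import math

def flow_strengths(points, press, dwell, radius=50.0, rate=0.5):
    """Toy model of time-based fractional selection: each point's
    strength grows with dwell time and falls off with distance from the
    press point, capped at 1.0 (fully selected)."""
    out = []
    for p in points:
        falloff = max(0.0, 1.0 - math.dist(p, press) / radius)
        out.append(min(1.0, rate * dwell * falloff))
    return out

pts = [(0, 0), (30, 0), (60, 0)]
print(flow_strengths(pts, (0, 0), 1.0))  # nearby points gain strength first
print(flow_strengths(pts, (0, 0), 4.0))  # holding longer raises all strengths
```

Operations can then act on each object in proportion to its current strength, which is how a single timed press replaces an explicit selection mode.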
| iLayer: MLD in an operating system interface | | BIBAK | Full-Text | 87-90 | |
| Linn Gustavsson Christiernin; Rickard Bäckman; Mikael Gidmark; Ann Persson | |||
| In this paper we try to solve the challenge of implementing Multi-Layered
Design (MLD) at a system level. A fully implemented prototype is presented
where an MLD interface is created in Mac OS X. Earlier studies on the MLD
concept have been performed on desktop applications or web-systems, but this
study is the first made on an operating system. To handle MLD in large
applications with diversified user groups and changing conditions over time, we
created an administrative tool where the layer structure can be manipulated and
the contents changed. Keywords: interface, multi-layered design, operating system, practical implementation | |||
| Improving scalability and awareness in iconic interfaces for multiple-device environments | | BIBAK | Full-Text | 91-94 | |
| Jacob T. Biehl; Brian P. Bailey | |||
| Iconic interfaces offer a promising interaction metaphor for effectively
managing applications in multi-device environments. However, current
implementations scale poorly for even a modest number of applications and do
not allow users to maintain adequate awareness of the workspace. To overcome
these limitations, we have designed new interaction techniques and prototyped
them within a new iconic interface. Our interface uses zooming and
animation-based interactions to improve scalability and uses application icons
and portal views with real-time updates to enhance awareness. Results from a
user study confirm the efficacy of these techniques. These techniques can be
used to improve the broader class of iconic and portal-based interfaces. Keywords: iconic interface, multi-device environment | |||
| Laser pointer interaction techniques using peripheral areas of screens | | BIBAK | Full-Text | 95-98 | |
| Buntarou Shizuki; Takaomi Hisamatsu; Shin Takahashi; Jiro Tanaka | |||
| This paper presents new interaction techniques that use a laser pointer to
directly manipulate applications displayed on a large screen. The techniques
are based on goal crossing, and the key is that the goals of crossing are the
four peripheral screen areas, which are extremely large. This makes it very
easy for users to execute commands, and the crossing-based interaction enables
users to execute fast and continuous commands. Keywords: computer-based presentation, goal crossing, interaction techniques, laser
pointers, pointing | |||
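Since the crossing targets here are the four peripheral bands of the screen, command detection reduces to noticing when the pointer trajectory enters a band. A sketch with an assumed band width; the mapping of bands to commands is not specified in the abstract:

```python
def entered_area(prev, curr, w, h, m=40):
    """Return which peripheral band (if any) the pointer entered on this
    move: 'left', 'right', 'top', 'bottom', or None. Band width m is an
    assumed value."""
    def band(p):
        x, y = p
        if x < m:
            return "left"
        if x > w - m:
            return "right"
        if y < m:
            return "top"
        if y > h - m:
            return "bottom"
        return None

    b_prev, b_curr = band(prev), band(curr)
    return b_curr if b_curr != b_prev else None

print(entered_area((400, 300), (780, 300), 800, 600))  # -> right
```

Because the bands span whole screen edges, the effective target width in the crossing direction is enormous, which is what makes the gesture tolerant of laser-pointer jitter.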
| On the visualization of large-sized ontologies | | BIBA | Full-Text | 99-102 | |
| Yannis Tzitzikas; Jean-Luc Hainaut | |||
| The visualization of ontologies and metadata is a challenging issue with several applications not only in the Semantic Web but also in Software Engineering, Database Design and Artificial Intelligence. This paper aims at identifying and analyzing the principal aspects of this problem, at surveying some of the work that has been done so far, and at proposing novel ideas that are worth further research and investigation. In particular, it describes the main factors that determine whether an ontology diagram layout is satisfying or not and focuses on the visualization requirements of large-sized ontology diagrams. | |||
| Table-centric interactive spaces for real-time collaboration | | BIBAK | Full-Text | 103-107 | |
| Daniel Wigdor; Chia Shen; Clifton Forlines; Ravin Balakrishnan | |||
| Tables have historically played a key role in many real-time collaborative
environments, often referred to as "war rooms". Today, these environments have
been transformed by computational technology into spaces with large vertical
displays surrounded by numerous desktop computers. However, despite significant
research activity in the area of tabletop computing, very little is known about
how to best integrate a digital tabletop into these multi-surface environments.
In this paper, we identify various design requirements for the implementation
of a system intended to support such an environment. We then present a set of
designs that demonstrate how an interactive tabletop can be used in a real-time
operations center to facilitate collaborative situation-assessment and
decision-making. Keywords: groupware, interactive spaces, real-time collaboration, tabletop interaction | |||
| Video editing based on object movement and camera motion | | BIBAK | Full-Text | 108-111 | |
| Yang Wang; Masahito Hirakawa | |||
| The advancement of computer technology makes video devices and equipment
powerful and inexpensive, and thereby the number of applications that can
effectively utilize digital videos is increasing.
| In this paper, the authors propose a new type of video editing, which is based on the movement of objects. A video shot is automatically edited so that the selected objects are placed and kept at the center of the frames to make the resultant video more attractive. Ideally, this is interpreted as applying pan, tilt, and/or zoom operations to a source video as a post-processing step in video editing. Implementation issues in realizing this facility are also presented in this paper. Keywords: camera motion, moving object tracking, multimedia computing, video editing | |||
| Fluid DTMouse: better mouse support for touch-based interactions | | BIBAK | Full-Text | 112-115 | |
| Alan Esenther; Kathy Ryall | |||
| Although computer mice have evolved physically (i.e., new form factors,
multiple buttons, scroll-wheels), their basic metaphor remains the same: a
single-point of interaction, with modifiers used to control the interaction.
Many of today's novel input devices, however, do not directly (or easily) map
to mouse interactions. For example, when using one's finger(s) or hand directly
on a touchable display surface, a simple touch movement could be interpreted as
either a mouse-over or a drag, depending on whether the left mouse button is
intended to be depressed at the time. But how does one convey the state of the
left mouse button with a single touch? And how does one fluidly switch between
states? The problem is confounded by the lack of precision input when using a
single finger as the mouse cursor, since a finger has a much larger "footprint"
than a single pixel cursor hotspot. In this paper we introduce our solution,
Fluid DTMouse, which has been used to improve the usability of touch tables
with legacy (mouse-based) applications. Our technique is applicable to any
direct-touch input device that can detect multiple points of contact. Our
solution solves problems of smoothly specifying and switching between modes,
addressing issues with the stability of the cursor, and facilitating precision
input. Keywords: mouse emulation, multi-touch, tabletop interfaces, visual interaction | |||
| Appropriating and assessing heuristics for mobile computing | | BIBAK | Full-Text | 119-126 | |
| Enrico Bertini; Silvia Gabrielli; Stephen Kimani | |||
| Mobile computing presents formidable challenges not only to the design of
applications but also to each and every phase of the systems lifecycle. In
particular, the HCI community is still struggling with the challenges that
mobile computing poses to evaluation. Expert-based evaluation techniques are
well known and they do enable a relatively quick and easy evaluation. Heuristic
evaluation, in particular, has been widely applied and investigated, most
likely due to its efficiency in detecting most usability flaws in return for a
rather limited investment of time and human resources in the evaluation.
However, the capacity of expert-based techniques to capture contextual factors
in mobile computing is a major concern. In this paper, we report an effort to
realize usability heuristics appropriate for mobile computing. The effort
intends to capture contextual requirements while still drawing from the
inexpensive and flexible nature of heuristic-based techniques. This work has
been carried out in the context of a research project task geared toward
developing a heuristic-based evaluation methodology for mobile computing. This
paper describes the methodology that we adopted toward realizing mobile
heuristics. It also reports a study that we carried out in order to assess the
relevance of the realized mobile heuristics by comparing their performance with
that of the standard/traditional usability heuristics. The study yielded
positive results in terms of the number of usability flaws identified and the
severity ranking assigned. Keywords: heuristic evaluation, mobile computing, usability heuristics | |||
| Mobility agents: guiding and tracking public transportation users | | BIBAK | Full-Text | 127-134 | |
| Alexander Repenning; Andri Ioannidou | |||
| Increasingly, public transportation systems are equipped with Global
Positioning Systems (GPS) connected to control centers through wireless
networks. Controllers use this infrastructure to schedule and optimize
operations and avoid organizational problems such as bunching. We have employed
this existing infrastructure to compute highly personalized information and
deliver it on PDAs and cell phones. In addition to guiding people using public
transportation by showing them which bus they should take to reach specific
destinations, we track their location to create spatial awareness for a
community of users. An application of this technology, called Mobility Agents,
has been created and tested for people with cognitive disabilities. About 7% of
the U.S. population has a form of cognitive disability. Cognitive disabilities
are limitations of the ability to perceive, recognize, understand, interpret,
and respond to information. The ability to use public transportation can
dramatically increase the independence of this population. The Mobility Agents
system provides multimodal prompts to a traveler on handheld devices helping
with the recognition of the "right" bus, for instance. At the same time, it
communicates to a caregiver the location of the traveler and trip status. This
article describes our findings at several levels. At a technical level, it
outlines pragmatic issues including display issues, GPS reliability and
networking latency arising from using handheld devices in the field. At a
cognitive level, we describe the need to customize information to address
different degrees and combinations of cognitive disabilities. At a user
interface level, we describe the use of different mission status interface
approaches ranging from 3D real-time visualizations to SMS and instant
messaging-based text interfaces. Keywords: agent-based architectures, ambient intelligence, geographic information
systems, location aware services, multimodal interfaces, ubiquitous computing,
wireless computing | |||
| Supporting end-user debugging: what do users want to know? | | BIBAK | Full-Text | 135-142 | |
| Cory Kissinger; Margaret Burnett; Simone Stumpf; Neeraja Subrahmaniyan; Laura Beckwith; Sherry Yang; Mary Beth Rosson | |||
| Although researchers have begun to explicitly support end-user programmers'
debugging by providing information to help them find bugs, there is little
research addressing the right content to communicate to these users. The
specific semantic content of these debugging communications matters because, if
the users are not actually seeking the information the system is providing,
they are not likely to attend to it. This paper reports a formative empirical
study that sheds light on what end users actually want to know in the course of
debugging a spreadsheet, given the availability of a set of interactive visual
testing and debugging features. Our results provide insights into end-user
debuggers' information gaps, and further suggest opportunities to improve
end-user debugging systems' support for the things end-user debuggers actually
want to know. Keywords: end-user debugging, end-user development, end-user programming, end-user
software engineering, online help | |||
| Supporting interaction and co-evolution of users and systems | | BIBAK | Full-Text | 143-150 | |
| Maria Francesca Costabile; Antonio Piccinno; Daniela Fogli; Andrea Marcante | |||
| Interactive systems supporting people's activities, even those designed for a
specific application domain, should be very flexible, i.e., they should be
easily adaptable to the specific needs of user communities. They should even
allow users to personalize the system to better fit their evolving needs.
This paper presents an original model of the interaction and co-evolution
processes occurring between humans and interactive systems and discusses an
approach to designing systems that support such processes. The approach is based
on the "artisan's workshop" metaphor and foresees the participatory design of
an interactive system as a network of workshops customized to different user
communities and connected to one another by communication paths. Such paths allow
end users and members of the design team to trigger and actuate the
co-evolution. The feasibility of the methodology is illustrated through a case
study in the medical domain. Keywords: co-evolution, interaction model, participatory design, usability | |||
| Annotation as a support to user interaction for content enhancement in digital libraries | | BIBAK | Full-Text | 151-154 | |
| Maristella Agosti; Nicola Ferro; Emanuele Panizzi; Rosa Trinchese | |||
| This work describes the interface design and interaction of a generic
annotation service for Digital Library Management Systems (DLMSs), called
Digital Library Annotation Service (DiLAS), that has been designed and is
currently undergoing development and user testing in the framework of the DELOS
European Network of Excellence. The objective of DiLAS is to design and develop
an architecture and a framework able to support and evaluate a generic
annotation service, i.e. a service that can be easily integrated into different DLMSs,
enhancing their User Interfaces (UIs) in order to offer Digital Library (DL)
users a set of uniform, user-tested (under certain required conditions), and
recognizable functionalities. Keywords: annotation, annotation service, digital library, digital library management
system, multimedia document, user interface | |||
| iFlip: a metaphor for in-vehicle information systems | | BIBAK | Full-Text | 155-158 | |
| Verena Broy; Frank Althoff; Gudrun Klinker | |||
| After the successful transfer of hierarchical menu-structures from the
computer domain to an automotive environment, it is time to discuss the
potential of 3D metaphors to meet the strong requirements for in-vehicle
information systems (IVIS). The idea is to increase learnability, efficiency
and joy of use of IVIS by providing a 3D interaction concept that is based on
cognitive capabilities of humans. We present a 3D interaction metaphor, iFlip,
which consists of displaying information on the reverse side of thin
interaction objects and a preview of current submenu states.
A comparison with a traditional list-based 2D menu for IVIS has shown that iFlip fulfills automotive requirements and can even enhance usability and likeability of IVIS. Keywords: 3D interaction, 3D metaphor, automotive infotainment, spatial memory,
vision-based UI | |||
| Interacting with piles of artifacts on digital tables | | BIBAK | Full-Text | 159-162 | |
| Dzmitry Aliakseyeu; Sriram Subramanian; Andrés Lucero; Carl Gutwin | |||
| Designers and architects regularly use piles to organize visual artifacts.
Recent efforts have now made it possible for users to create piles in digital
systems as well. However, there is still little understanding of how users
should interact with digital piles. In this paper we investigate this issue. We
first identify three tasks that must be supported by a digital pile --
navigation, reorganization, and repositioning. We then present three
interaction techniques -- called DragDeck, HoverDeck, and ExpandPile -- that meet
these requirements. The techniques allow users to easily browse the piles, and
also allow them to move elements between and within piles in an ad-hoc manner.
In a user study that compared the different interaction techniques, we found
that ExpandPile was significantly faster than the other techniques over all
tasks. There were differences, however, in individual tasks. We discuss the
benefits and limitations of the different techniques and identify several
situations where each of them could prove useful. Keywords: digital piles, interaction techniques, pen input, tabletop | |||
| Stylus based text input using expanding CIRRIN | | BIBAK | Full-Text | 163-166 | |
| Jared Cechanowicz; Steven Dawson; Matt Victor; Sriram Subramanian | |||
| CIRRIN [3] is a stylus based text input technique for mobile devices with a
touch sensitive display. In this paper we explore the benefit of expanding the
letters of CIRRIN to reduce the overall difficulty of selecting a letter. We
adapted the existing CIRRIN to expand the characters as the stylus approaches
them, creating a new text entry technique called expanding CIRRIN. In a small
user study we compared the standard CIRRIN and expanding CIRRIN for different
sentences. Our results indicate that expanding CIRRIN increases error rates and
text input times. We observed that expanding the letters often made the stylus
enter the CIRRIN ring adjacent to the intended letter, thereby increasing error
rates. We discuss the implications of these results, and possible applications
of expanding targets with other text input techniques such as the Metropolis
[7] soft keyboard. Keywords: CIRRIN, Fitts' law, expanding targets, stylus, text entry, touch sensitive
display | |||
| Syntax analysis for diagram editors: a constraint satisfaction problem | | BIBA | Full-Text | 167-170 | |
| Mark Minas | |||
| Visual language syntax can be specified by grammars or meta-models. Grammars are more complicated to build than meta-models, but allow for parsing of visual sentences which is necessary for building free-hand editors. Parsing has not yet been considered for meta-model-based specifications. Such visual editors support only structured editing so far. This paper shows that the syntax analysis problem ("parsing") for meta-model-based language specifications can be transformed into a constraint satisfaction problem and solved that way. This approach, therefore, allows for easy free-hand editing and, at the same time, easy meta-model-based language specifications. | |||
| VLMigrator: a tool for migrating legacy video lectures to multimedia learning objects | | BIBAK | Full-Text | 171-174 | |
| Andrea De Lucia; Rita Francese; Ignazio Passero; Genoveffa Tortora | |||
| In this paper we propose a tool, named VLMigrator, for interactively
restructuring a lecture and the associated PowerPoint presentation into one or
more multimedia Learning Objects. It also enables filling in the Learning Object
metadata by automatically extracting information from the PowerPoint
presentation. To easily perform these tasks, the VLMigrator interface exploits
continuous semantic zooming and visual contextualization of information. Keywords: E-learning, learning object, multimedia video lectures, reengineering,
semantic zooming | |||
| Design and evaluation of a shoulder-surfing resistant graphical password scheme | | BIBAK | Full-Text | 177-184 | |
| Susan Wiedenbeck; Jim Waters; Leonardo Sobrado; Jean-Camille Birget | |||
| When users input their passwords in a public place, they may be at risk of
attackers stealing their password. An attacker can capture a password by direct
observation or by recording the individual's authentication session. This is
referred to as shoulder-surfing, a known risk that is of special concern when
authenticating in public places. Until recently, the only defense against
shoulder-surfing has been vigilance on the part of the user. This paper reports
on the design and evaluation of a game-like graphical method of authentication
that is resistant to shoulder-surfing. The Convex Hull Click (CHC) scheme
allows a user to prove knowledge of the graphical password safely in an
insecure location because users never have to click directly on their password
images. Usability testing of the CHC scheme showed that novice users were able
to enter their graphical password accurately and to remember it over time.
However, the protection against shoulder-surfing comes at the price of longer
time to carry out the authentication. Keywords: authentication, convex hull click scheme, graphical passwords, password
security, shoulder-surfing, usable security | |||
| Evaluating the semantic memory of web interactions in the xMem project | | BIBAK | Full-Text | 185-192 | |
| Francesca Rizzo; Florian Daniel; Maristella Matera; Sharon Albertario; Anna Nibioli | |||
| As the amount of information on the World Wide Web continues to grow,
efficient hypertext navigation mechanisms are becoming crucial. Among them,
effective history mechanisms play an important role. We therefore decided to
provide a new method to access users' navigation histories, called xMem
(Extended Memory Navigation), building on semantic-based and associative
accesses, so as to imitate some of the features of human memory. Such a
memory may give users a better understanding of the context of their searches,
intermixing semantic aspects with the temporal dimension.
The paper presents the experimental study conducted on the xMem approach to revisiting the Web interaction history. Two controlled experiments were performed with the aim of evaluating the effectiveness of the xMem history mechanism with respect to traditional Web browser histories. The results of the first experiment show a clear advantage, in terms of the time needed to complete a retrieval task, for the subjects who used the xMem prototype. Accordingly, users found retrieving previously visited pages with xMem more satisfying than using Web interaction histories sorted by time alone. The results of the second experiment show the relevance, in the information retrieval process, of clusters and keywords semantically related to the context of the search. Keywords: experimental evaluation, human factors, hypertext navigation, information
retrieving, usability, web history mechanisms, web interaction history, world
wide web | |||
| An evaluation of depth perception on volumetric displays | | BIBAK | Full-Text | 193-200 | |
| Tovi Grossman; Ravin Balakrishnan | |||
| We present an experiment that compares volumetric displays to existing 3D
display techniques in three tasks that require users to perceive depth in 3D
scenes. Because they generate imagery in true 3D space, volumetric displays
allow viewers to use their natural physiological mechanisms for depth
perception, without requiring special hardware such as head trackers or shutter
glasses. However, it is unclear from the literature as to whether these
displays are actually better than the status-quo for enabling the perception of
3D scenes, thus motivating the present study. Our results show that volumetric
displays enable significantly better user performance in a simple depth
judgment task, and better performance in a collision judgment task, but in
their current form do not enhance user comprehension of more complex 3D scenes. Keywords: depth perception, evaluation, volumetric display | |||
| Exploring the design space for adaptive graphical user interfaces | | BIBAK | Full-Text | 201-208 | |
| Krzysztof Z. Gajos; Mary Czerwinski; Desney S. Tan; Daniel S. Weld | |||
| For decades, researchers have presented different adaptive user interfaces
and discussed the pros and cons of adaptation on task performance and
satisfaction. Little research, however, has been directed at isolating and
understanding those aspects of adaptive interfaces which make some of them
successful and others not. We have designed and implemented three adaptive
graphical interfaces and evaluated them in two experiments along with a
non-adaptive baseline. In this paper we synthesize our results with previous
work and discuss how different design choices and interactions affect the
success of adaptive graphical user interfaces. Keywords: adaptive interfaces, user study | |||
| Exploring visual feedback of change conflict in a distributed 3D environment | | BIBAK | Full-Text | 209-216 | |
| Mark S. Hancock; John David Miller; Saul Greenberg; Sheelagh Carpendale | |||
| Teams that are geographically distributed often share information both in
real-time and asynchronously. When such sharing is through groupware, change
conflicts can arise when people pursue parallel and competing actions on the
same information. This leads to problems in how the system and its users
maintain a consistent view of shared information across distance and time. We
explore change awareness of conflicts in a three-dimensional distributed shared
space. Our user study compares the use of visual feedback to an optimistic
concurrency control strategy for both synchronous and asynchronous distributed
groupware. Our feedback provides a means for synchronous users to recognize and
resolve real-time changes, and for asynchronous users to view and resolve
changes when switching from an offline to online mode of work. Results of our
study suggest that the visual feedback serves as a useful feedthrough mechanism
in the synchronous case, but that asynchronous users may be overwhelmed by the
quantity of changes if they come online after many changes have been made. Keywords: asynchronous, change conflict, distributed collaboration, divergence,
synchronous, visual feedback | |||
| An integrated task-based framework for the design and evaluation of visualizations to support preferential choice | | BIBAK | Full-Text | 217-224 | |
| Jeanette Bautista; Giuseppe Carenini | |||
| In previous work, we proposed ValueCharts, a set of visualizations and
interactive techniques to support the inspection of linear models of
preferences. We now identify the need to consider the decision process in its
entirety, and to redesign ValueCharts in order to support all phases of
preferential choice. In this paper, we present our task-based approach to the
redesign of ValueCharts grounded in recent findings from both Decision Analysis
and Information Visualization. We propose a set of domain-independent tasks for
the design and evaluation of interactive visualizations for preferential
choice. We use the resulting framework as a basis for an analytical evaluation
of ValueCharts and alternative approaches. We conclude with a detailed
discussion of the redesign of our system based on our analysis. Keywords: preferential choice, task analysis, visualization techniques | |||
| Investigating user tolerance for errors in vision-enabled gesture-based interactions | | BIBAK | Full-Text | 225-232 | |
| Maria Karam; m. c. schraefel | |||
| In this paper, we describe our investigation into user tolerance of
recognition errors during hand gesture interactions with visual displays. The
study is based on our proposed interaction model for investigating gesture
based interactions, focusing on three elements: Interaction context, system
performance and user goals. This Wizard of Oz experiment investigates how
recognition system accuracy rates and task characteristics in both desktop and
ubiquitous computing scenarios can influence user tolerance for gesture
interactions. Results suggest that interaction context is a greater influence
on user tolerance than system performance alone, where recognition error rates
can potentially reach 40% before users will abandon gestures and use an
alternate interaction mode in a ubiquitous computing scenario. Results also
suggest that in a desktop scenario, traditional input methods are more
appropriate than gestures. Keywords: Wizard of Oz, gestures, secondary tasks, semaphoric gestures, ubiquitous
computing | |||
| Usability of overview-supported zooming on small screens with regard to individual differences in spatial ability | | BIBAK | Full-Text | 233-240 | |
| Thorsten Büring; Jens Gerken; Harald Reiterer | |||
| While zoomable user interfaces can improve the usability of applications by
easing data access, a drawback is that some users tend to become lost after
they have zoomed in. Previous studies indicate that this effect could be
related to individual differences in spatial ability. To overcome such
orientation problems, many desktop applications feature an additional overview
window showing a miniature of the entire information space. Small devices,
however, have very limited screen real estate, and incorporating an overview
window often means reducing the size of the detail view considerably. Given this
context, we report the results of a user study in which 24 participants solved
search tasks by using two zoomable scatterplot applications on a PDA -- one of
the applications featured an overview, the other relied solely on the detail
view. In contrast to similar studies for desktop applications, there was no
significant difference in user preference between the interfaces. On the other
hand, participants solved search tasks faster without the overview. This
indicates that, on small screens, a larger detail view can outweigh the
benefits gained from an overview window. Individual differences in spatial
ability did not have a significant effect on task-completion times although
results suggest that participants with higher spatial ability were slowed down
by the overview more than low spatial-ability users. Keywords: overview plus detail, scatterplot, small screen, spatial ability, zoom | |||
| Allowing camera tilts for document navigation in the standard GUI: a discussion and an experiment | | BIBAK | Full-Text | 241-244 | |
| Yves Guiard; Olivier Chapuis; Yangzhou Du; Michel Beaudouin-Lafon | |||
| The current GUI is like a flight simulator whose camera points fixedly at a
right angle to the document, thus preventing users from looking ahead while
navigating. We argue that perspective viewing of usual planar documents can
help navigation. We analyze the scale implosion problem that arises with tilted
cameras and we report the data of a formal experiment on document navigation
with perspective views. Keywords: camera tilt, Fitts' law, multiscale document navigation, perspective viewing | |||
| Can spatial mnemonics accelerate the learning of text input chords? | | BIBAK | Full-Text | 245-249 | |
| Frode Eika Sandnes | |||
| This study addresses to what degree spatial mnemonics can be used to assist
users to memorise or infer a set of text input chords. Users mentally visualise
the appearance of each character as a 3x3 pixel grid. This grid is input as a
sequence of three chords using one, two or three fingers to construct each
chord. Experiments show that users are able to use the strategy after a few
minutes of instruction, and that some subjects enter text without help after
three hours of practice. Further, the experiments show that text can be input
at a mean rate of 5.9 words per minute (9.9 words per minute for the fastest
subject) after 3 hours of practice. On the downside, the approach suffers from
a relatively high error rate of about 10% as subjects often resort to trial and
error when recalling character patterns. Keywords: chording, limited visual feedback, miniature device, mobile text entry,
spatial mnemonics, visually impaired users | |||
| Globalisation vs. localisation in e-commerce: cultural-aware interaction design | | BIBAK | Full-Text | 250-253 | |
| Antonella De Angeli; Leantros Kyriakoullis | |||
| Online shopping is the product of consumer assessment of the technological
medium and the e-vendor. Previous research has identified a number of interface
features that are believed to be associated with trust building in e-commerce.
In this paper we address issues of cross-cultural validity of these 'trust
attributes' by comparing the relative importance given to them in two European
nations (UK and Cyprus) which are characterized by different cultural values
such as uncertainty avoidance (the way cultures deal with risk) and
individualism/collectivism (the relative importance given to groups vs.
individuals). A large-scale survey study suggested a strong cultural bias in
the evaluation of trust attributes. The implications of these findings for
interface design and localization are discussed. Keywords: culture, on-line shopping, trust | |||
| Hypothesis oriented cluster analysis in data mining by visualization | | BIBAK | Full-Text | 254-257 | |
| Ke-Bing Zhang; Mehmet A. Orgun; Kang Zhang; Yihao Zhang | |||
| Cluster analysis is an important technique that has been used in data
mining. However, cluster analysis provides only numerical feedback, making it
hard for users to interpret the results; moreover, most of the clustering
algorithms are not suitable for dealing with arbitrarily shaped data
distributions of datasets. While visualization techniques have been proven to
be effective in data mining, their use in cluster analysis is still a major
challenge, especially in data mining applications with high-dimensional and
huge datasets. This paper introduces a novel approach, Hypothesis Oriented
Verification and Validation by Visualization, named HOV{sup:3}, which projects
datasets based on given hypotheses by visualization in 2D space. Since the
HOV{sup:3} approach is more goal-oriented, it can assist the user in
discovering more precise cluster information from high-dimensional datasets
efficiently and effectively. Keywords: cluster analysis, high-dimensional data visualization, visual data mining | |||
| Implicit brushing and target snapping: data exploration and sense-making on large displays | | BIBAK | Full-Text | 258-261 | |
| Xiaohua Sun; Patrick Chiu; Jeffrey Huang; Maribeth Back; Wolf Polak | |||
| During grouping tasks for data exploration and sense-making, the criteria
are normally not well-defined. When users are bringing together data objects
thought to be similar in some way, implicit brushing continually detects
groups on the freeform workspace, analyzes the groups' text content or
metadata, and draws attention to related data by displaying visual hints and
animation. This provides helpful tips for further grouping, group meaning
refinement and structure discovery. The sense-making process is further
enhanced by retrieving relevant information from a database or network during
the brushing. Closely related to implicit brushing, target snapping provides a
useful means to move a data object to one of its related groups on a large
display. Natural dynamics and smooth animations also help to prevent
distractions and allow users to concentrate on the grouping and thinking tasks.
Two different prototype applications, note grouping for brainstorming and photo
browsing, demonstrate the general applicability of the technique. Keywords: grouping, information visualization, large displays, sense-making, visual
interfaces | |||
| Navigation by zooming in boxes: preliminary evaluation | | BIBA | Full-Text | 262-265 | |
| Tania Di Mascio; Ivano Salvatore; Laura Tarantino | |||
| Our intent is to validate the adoption of an enclosure-based visualization technique for hierarchical structures for the presentation of web sites, according to a paradigm that (1) abandons the concept of the web page collection and (2) replaces link-traversal based navigation with zoom-based navigation. In particular we re-visit the box-in-box technique (originally introduced to visualize objects in a knowledge base), based on recursive containment among labeled boxes, where panning and zooming operations allow users to move the visibility window over the structure. The paper presents the main features of "boxed" web sites, sketches system characteristics and architecture, and discusses the results of a preliminary evaluation study based on a comparison between a traditional version and a boxed version of the same site. | |||
| The plot, the clutter, the sampling and its lens: occlusion measures for automatic clutter reduction | | BIBAK | Full-Text | 266-269 | |
| Geoffrey Ellis; Alan Dix | |||
| Previous work has demonstrated the use of random sampling in visualising
large data sets and the practicality of a sampling lens in enabling
focus+context viewing. Autosampling was proposed as a mechanism to maintain
constant density within the lens without user intervention. However, this
requires rapid calculation of density or clutter. This paper defines clutter in
terms of the occlusion of plotted points and evaluates three possible occlusion
metrics that can be used with parallel coordinate plots. An empirical study
showed the relationship between these metrics was independent of location and
could be explained with a surprisingly simple probabilistic model. Keywords: clutter, density reduction, information visualisation, lens, occlusion,
overplotting, random sampling, sampling | |||
| Preserving the mental map in interactive graph interfaces | | BIBAK | Full-Text | 270-273 | |
| Manuel Freire; Pilar Rodríguez | |||
| Graphs provide good representations for many domains. Interactive
graph-based interfaces are desirable for browsing and editing data in these
domains. However, as graphs increase in size, interactive interfaces risk
information overload and low responsiveness. Focus+context approaches overcome
these problems by presenting abridged views of the graph. Users can then
navigate among views with a level-of-detail mechanism. If jumps from each view
to the next are easy to follow, users will gain a good mental map of the whole
graph; otherwise, they may become disoriented.
In this work, we identify three factors that affect mental map preservation during navigation of interactive focus+context graphs: the predictability of navigational actions, the degree of change from one view to the next, and the traceability of changes once they occur. Strategies for preserving user orientation are classified according to these factors, and new strategies developed for the CLOVER visualization environment are presented. Keywords: focus+context, graph visualization | |||
| Understanding the whethers, hows, and whys of divisible interfaces | | BIBAK | Full-Text | 274-277 | |
| Heather M. Hutchings; Jeffrey S. Pierce | |||
| Users are increasingly shifting from interacting with a single, personal
computer to interacting across multiple, heterogeneous devices. We present
results from a pair of studies investigating specifically how and why users
might divide an application's interface across devices in private,
semi-private, and public environments. Our results suggest that users are
interested in dividing interfaces in all of these environments. While the types
of divisions and reasons for dividing varied across users and environments,
common themes were that users divided interfaces to improve interaction, to
share information, and to balance usability and privacy. Based on our results,
we present implications for the design of divisible interfaces. Keywords: divisible interfaces, multi-device interfaces, paper prototyping | |||
| A tool to support usability inspection | | BIBAK | Full-Text | 278-281 | |
| Carmelo Ardito; Rosa Lanzilotti; Paolo Buono; Antonio Piccinno | |||
| SUIT (Systematic Usability Inspection Tool) is an Internet-based tool that
supports the evaluators during the usability inspection of software
applications. SUIT makes it possible to reach inspectors everywhere, guiding
them in their activities. Unlike other tools that have been proposed in the
literature, SUIT not only supports the activities of a single evaluator, but
also makes it possible to manage a team of evaluators who can peer review one
another's inspection work and merge their individual reports into a single
document on which they agree. Keywords: inspection, usability evaluation, web-based tool | |||
| CHAMBRE: integrating multimedia and virtual tools | | BIBAK | Full-Text | 285-292 | |
| Paolo Bottoni; Stefano Faralli; Anna Labella; Alessio Malizia; Claudio Scozzafava | |||
| Current research in interaction aims at defining new types of multimedia and
multimodal experience, at enriching everyday objects and environments with the
ability to capture user actions and intentions, and at integrating real and
virtual sources of information, typically exploiting the visual channel. These
forms of interaction usually require dedicated architectures, often relying on
different component models, and with rigid types of configuration. We present
an approach to the integration of real and virtual world sensors and effectors,
and of traditional multimedia environments within a single component-based
architecture. Environments in this architecture are defined as networks of
plugins, each equipped with computational, presentation and communication
capabilities. Examples of integrated environments producible with this
architecture are given. Keywords: multimedia-multimodal interaction, plugins, virtual sensors | |||
| CHEF: a user centered perspective for Cultural Heritage Enterprise Frameworks | | BIBAK | Full-Text | 293-301 | |
| Franca Garzotto; Luca Megale | |||
| An enterprise framework denotes a reusable, "semi-complete" application
skeleton that can be easily adapted to produce custom applications in a
specific business domain. CHEF is an enterprise framework for multi-device
hypermedia applications in cultural heritage. Its goal is to reduce the cost of
application development and to improve the quality of the final product.
Differently from existing frameworks, which are typically conceived as tools
for programmers, CHEF adopts an end-user development approach. It has been
built for and with "domain experts" (cultural heritage specialists). It
provides a set of user-friendly tools that hide the implementation complexity
and can be used, by domain experts with no technical know-how, to
design-by-reuse their hypermedia, to instantiate their designs with the proper
contents, and to deliver the final application on different platforms
(web-enabled desktop, PDA, CD-ROM). Keywords: cultural heritage, dynamic web generation, end-user development, enterprise
framework, hypermedia, multi-device application | |||
| History Unwired: mobile narrative in historic cities | | BIBAK | Full-Text | 302-305 | |
| Michael Epstein; Silvia Vergani | |||
| History Unwired (HU, see http://web.mit.edu/frontiers) is a multi-year
investigation of the narrative uses of mobile technology in historic cities. In
2004-2005 a team of researchers from MIT and University of Venice IUAV worked
with local artists, citizens, and academics to develop a walking tour through
one of Venice's more hidden neighborhoods, delivered over location-aware,
multimedia phones and PDAs. The tour was presented at the 2005 Biennale of
Contemporary Art and takes visitors around one of the lesser-traveled
neighborhoods of Venice: Castello. The tour was tested on over 200 users, over
half of whom filled out extensive surveys. In this paper we present the results
of these surveys, focusing on how different types of physical and
sociological spaces complemented the audio, video, interactive media and
positioning capabilities of the handhelds. First we provide some background
information on tourism and local culture in Venice. We then describe the
narrative and technical structure of the History Unwired walking tour. We then
go into the use of mobile media in closed, semi-open, and commercial spaces in
Castello. Keywords: augmented-reality, cultural technology, mixed-reality, mobile media, mobile
technology, pda walks, tourism | |||
| A semantic approach to build personalized interfaces in the cultural heritage domain | | BIBAK | Full-Text | 306-309 | |
| S. Valtolina; P. Mazzoleni; S. Franzoni; E. Bertino | |||
| In this paper we present a system we have built to disseminate cultural
heritage distributed across multiple museums. Our system addresses the
requirements of two categories of users: the end users that need to access
information according to their interests and interaction preferences, and the
domain experts and museum curators that need to develop thematic tours
providing end users with a better understanding of the single artefact or
collection. In our approach we make use of a semantic representation of the
given heritage domain in order to build multiple visual interfaces, called
"Virtual Wings" (VWs). Such interfaces allow users to navigate through data
available from digital archives and thematic tours and to create their own
personalized virtual visits. An interactive application integrating
personalized digital guides (using PDAs) and 360 panoramic images is the
example of VW presented. Keywords: interactive interfaces, interfaces for cultural heritage, visual interface
design, visual querying | |||
| Visual comparison and exploration of natural history collections | | BIBAK | Full-Text | 310-313 | |
| Martin Graham; Jessie Kennedy; Laura Downey | |||
| Natural history museum collections contain a wealth of specimen level data
that is now opening up for digital access. However, current interfaces to
access and manipulate this data are standard text-based query mechanisms,
giving no leeway for exploratory investigation of the collections. By adapting
previous work on multiple taxonomies we allow visual comparison of related
museum collections to discover areas of overlap, naming errors, and unique
sections of a collection, indicating areas of specialisation for individual
collections and the complementarities of the set formed by the collections as a
whole. Keywords: animation, multiple tree visualization, natural history collections,
taxonomy | |||
| MADCOW: a visual interface for annotating web pages | | BIBAK | Full-Text | 314-317 | |
| Paolo Bottoni; Stefano Levialdi; Anna Labella; Emanuele Panizzi; Rosa Trinchese; Laura Gigli | |||
| The use of the Web and the diffusion of knowledge management systems makes
it possible to base discussions upon a vast set of documents, many of which
also include links to multimedia material, such as images or videos. This
perspective could be exploited by allowing a team to collaborate by exchanging
and retrieving annotated multimedia documents (text, images, audio and video).
We designed and developed a digital annotation system, MADCOW, to assist users
in constructing, disseminating, and retrieving multimedia annotations of
documents, supporting collaborative activities to build a web of
decision-related documents. We made a strong effort in designing the user
interface and we tested it with 24 users. We describe a scenario in which
annotation plays a crucial role, where the object of the collaboration is a
politically and artistically important palace of Rome, for which the
availability of images and historical documentation is fundamental for making
informed decisions. We demonstrate the MADCOW interface and its use by the
restoration team. The annotations can be used to support teamwork as well as to
offer the public a reasoned integration of, and guide to, the available material. Keywords: annotation, multimedia, user interfaces | |||
| DentroTrento: a virtual walk across history | | BIBAK | Full-Text | 318-321 | |
| Giuseppe Conti; Stefano Piffer; Gabrio Girardi; Raffaele De Amicis; Giuliana Ucelli | |||
| This paper illustrates the results of the DentroTrento project which
promotes historical, artistic and cultural heritage in the area of Trentino
through the use of Virtual Reality technologies. The project's goal was to
implement a user-friendly system which could be used by visitors of an
archaeological site thus fostering a process of cultural enrichment. The
importance of the project, commissioned by the authority for Cultural Heritage,
partially resides in the peculiarity of the site's premises, located below a
square in Trento among the theatre's foundations. The interface developed
allows users speaking different languages to simultaneously share the
experience of a virtual tour across time. Keywords: cultural heritage, user interfaces, virtual reality | |||
| Playing music: an installation based on Xenakis' musical games | | BIBAK | Full-Text | 322-325 | |
| Marco Liuni; Davide Morelli | |||
| Iannis Xenakis' works Duel and Strategie are two music games: sounds play
the role of moves in a match where the players are the two orchestra
conductors. They decide which part of the score is to be played in answer to
the opposing conductor's choice, looking at a game matrix which contains the
values of every pair of moves.
Playing Music is an installation driven by software implementing the same logic as Duel. Each player makes his moves with simple physical actions, recognized by the software using a camera, and the score is projected on a screen so that the audience can easily understand the rules, after a few moves. Keywords: audio strategy, installation, mapping, music game, theory of games, xenakis | |||
| Authoring interfaces with combined use of graphics and voice for both stationary and mobile devices | | BIBAK | Full-Text | 329-335 | |
| Fabio Paternò; Federico Giammarino | |||
| Technological evolution is making multimodal technology available to the
mass market with increased reliability. However, developing multimodal
interfaces is still difficult and there is a lack of authoring tools for this
purpose, especially when multi-device environments are addressed. In this
paper, we present a method and a supporting tool for authoring user interfaces
with various ways to combine graphics and voice in multi-device environments.
The tool is based on the use of logical descriptions and provides designers and
developers with support to manage the underlying complexity, make and modify
design choices, and exploit the possibilities offered by multimodality. Keywords: authoring environments, graphical and vocal modalities, multimodal
interfaces, web, x+v | |||
| Enabling interaction with single user applications through speech and gestures on a multi-user tabletop | | BIBAK | Full-Text | 336-343 | |
| Edward Tse; Chia Shen; Saul Greenberg; Clifton Forlines | |||
| Co-located collaborators often work over physical tabletops with rich
geospatial information. Previous research shows that people use gestures and
speech as they interact with artefacts on the table and communicate with one
another. With the advent of large multi-touch surfaces, developers are now
applying this knowledge to create appropriate technical innovations in digital
table design. Yet they are limited by the difficulty of building a truly useful
collaborative application from the ground up. In this paper, we circumvent this
difficulty by: (a) building a multimodal speech and gesture engine around the
Diamond Touch multi-user surface, and (b) wrapping existing, widely-used
off-the-shelf single-user interactive spatial applications with a multimodal
interface created from this engine. Through case studies of two quite different
geospatial systems -- Google Earth and Warcraft III -- we show the new
functionalities, feasibility and limitations of leveraging such single-user
applications within a multi-user, multimodal tabletop. This research informs
the design of future multimodal tabletop applications that can exploit
single-user software conveniently available in the market. We also contribute
(1) a set of technical and behavioural affordances of multimodal interaction on
a tabletop, and (2) lessons learnt from the limitations of single user
applications. Keywords: computer supported cooperative work, multimodal speech and gesture
interfaces, tabletop interaction, visual-spatial displays | |||
| MAge-AniM: a system for visual modeling of embodied agent animations and their replay on mobile devices | | BIBAK | Full-Text | 344-351 | |
| Luca Chittaro; Fabio Buttussi; Daniele Nadalutti | |||
| Embodied agents are employed in several applications (e.g. computer-based
presentations, help systems, e-learning and training, sign language
communication for the deaf), but the process of developing them is still
complex (e.g., modeling animations is one of the most difficult and time-consuming
tasks). Moreover, although mobile devices have recently reached a performance
level that allows them to manage 3D graphics, most embodied agents run on
desktop computers only. The aim of our research is twofold: (i) proposing a
tool that allows novice users to approach the animation modeling process of 3D
anthropomorphic agents in a simple way, and (ii) proposing a 3D player to
display these animated agents on PDAs. Besides discussing in detail the
proposed system, the paper reports on its informal evaluation and two of its
applications: sign language animation for the deaf and mobile fitness training. Keywords: embodied agents, mobile devices, visual animation modeling | |||
| The prospects for unrestricted speech input for TV content search | | BIBAK | Full-Text | 352-359 | |
| Kent Wittenburg; Tom Lanning; Derek Schwenke; Hal Shubin; Anthony Vetro | |||
| The need for effective search for television content is growing as the
number of choices for TV viewing and/or recording explodes. In this paper we
describe a preliminary prototype of a multimodal Speech-In List-Out (SILO)
interface in which users' input is unrestricted by vocabulary or grammar. We
report on usability testing with a sample of six users. The prototype enables
search through video content metadata downloaded from an electronic program
guide (EPG) service. Our setup for testing included adding a microphone to a TV
remote control and running an application on a PC whose visual interface was
displayed on a TV. Keywords: electronic program guides, information retrieval, multi-modal interfaces,
speech interfaces, television interfaces | |||
| The ASPICE project: inclusive design for the motor disabled | | BIBAK | Full-Text | 360-363 | |
| F. Aloise; F. Cincotti; F. Babiloni; M. G. Marciani; D. Morelli; S. Paolucci; G. Oriolo; A. Cherubini; F. Sciarra; F. Mangiola; A. Melpignano; F. Davide; D. Mattia | |||
| The ASPICE project aims at the development of a system that allows
neuromotor-disabled persons to improve or recover their mobility (directly or
by emulation) and communication within the surrounding environment. The system
pivots around a software controller running on a personal computer, which
offers a proper interface to communicate through input interfaces matched with
the individual's residual abilities.
This system links to the concept of user-centered interface promoted by human-computer interaction researchers. Each person has a "singular disability", so the system must offer an adaptive interface customized to each user's abilities and requirements, which stem from contingent factors or simple preferences and depend on the user's life stage, task, and environment. The system is currently under clinical validation, which will provide assessment through patients' feedback as well as guidelines for customized system installation. Keywords: ambient intelligence, brain-computer interfaces, robotic navigation, severe
motor impairment, technologies for independent life | |||
| Improving access of elderly people to real environments: a semantic based approach | | BIBAK | Full-Text | 364-368 | |
| Fabio Pittarello; Alessandro De Faveri | |||
| Access to real environments is often conditioned by a number of issues,
including the skills of the user (e.g., as affected by aging, physical and
psychological deficiencies) and the complexity of the real environment
itself. This work proposes an approach for helping users with different skills,
including elderly people, to navigate through complex real scenes; this
approach is based on the semantic description of the objects and zones that
characterize the environment itself and takes advantage of an implementation
architecture based on web standards for generating navigational support. A case
study related to the creation of a guided tour through the indoor and outdoor
locations of the city of Venice, accessible through a multimodal web browser,
is presented. Keywords: XHTML + voice profile, elderly people, multimodality, navigation, semantic
3D environments | |||
| Oral messages improve visual search | | BIBAK | Full-Text | 369-372 | |
| Suzanne Kieffer; Noëlle Carbonell | |||
| Input multimodality combining speech and hand gestures has motivated
numerous usability studies. By contrast, issues relating to the design and
ergonomic evaluation of multimodal output messages combining speech with visual
modalities have not yet been addressed extensively.
The experimental study presented here addresses one of these issues. Its aim is to assess the actual efficiency and usability of oral system messages including some brief spatial information for helping users to locate objects on crowded displays rapidly and without effort. Target presentation mode, scene spatial structure and task difficulty were chosen as independent variables. Two conditions were defined: the visual target presentation mode (VP condition) and the multimodal target presentation mode (MP condition). Each participant carried out two blocks of visual search tasks (120 tasks per block, and one block per condition). Scene target presentation mode, scene structure and task difficulty were found to be significant factors. Multimodal target presentation mode proved to be more efficient than visual target presentation. In addition, participants expressed very positive judgments on multimodal target presentations, which were preferred to visual presentations by a majority of participants. Finally, the contribution of spatial messages to visual search speed and accuracy was influenced by scene spatial structure and task difficulty. First, messages improved search efficiency to a lesser extent for 2D array layouts than for some other symmetrical layouts, although the use of 2D arrays for displaying pictures currently prevails. Second, message usefulness increased with task difficulty. Most of these results are statistically significant. Keywords: experimental evaluation, multimodal system messages, speech and graphics,
usability study, visual search, visual target spotting | |||
| Ubiquitous graphics: combining hand-held and wall-size displays to interact with large images | | BIBAK | Full-Text | 373-377 | |
| Johan Sanneblad; Lars Erik Holmquist | |||
| Ubiquitous Graphics addresses the problem of interacting with very large
computer graphics images, for instance an online map or a large digitized
painting. It uses a combination of mobile and stationary displays to show both
overview and detail. The main image is displayed using a projector or other
large traditional display. To access details, the user holds a mobile device in
front of the stationary display. Using ultrasonic tracking the smaller display
is aligned with the overview, giving access to a corresponding portion of the
image in higher resolution. Alternatively the system provides "magic lens"
functionality that can show additional information. Users may add free-form
annotations and pre-defined graphical objects by interacting directly with the
mobile device. In a user study, subjects drew better descriptive maps with the
system than with an ordinary map application. The system is robust and was
demonstrated to several thousand people in a week-long public exhibit. Keywords: magic lenses, mobile computing, peephole displays, position aware displays,
ubiquitous computing | |||
| A comparison of static and moving presentation modes for image collections | | BIBAK | Full-Text | 381-388 | |
| Katy Cooper; Oscar de Bruijn; Robert Spence; Mark Witkowski | |||
| In both professional and personal contexts, a common activity is the search
for a target image among a collection of images. The presentation of that
collection to a user can assume a wide variety of forms, and it would help
interaction designers to be aware of the comparative properties of available
presentation modes. A property of major interest is the percentage of correct
identification of the presence or absence of the target image within the
collection; another is users' acceptance of a presentation mode. Several modes
of Rapid Serial Visual Presentation (RSVP) are compared for effectiveness in a
number of image identification tasks, and with regard to user acceptance and
stated preference.
Presentation modes have been classified as static or moving. For a selected representative group of three static and three moving modes, for three image presentation times and for three tasks of increasing complexity, we report experimental results which in most cases establish, with a high degree of statistical confidence, that -- over the range of independent variables investigated -- (a) static modes lead to greater identification success than moving modes; (b) static modes are strongly preferred to moving ones; (c) identification success generally increases with presentation time per image; (d) for mixed and tile modes, identification success is relatively insensitive to image presentation time; and (e) success rate decreases as task complexity increases except, notably, for slide-show and mixed modes. Evidence from eye-gaze records suggests that the eye-gaze strategy adopted by a subject exerts a very strong influence on both identification success and mode preference. Conclusions are drawn about guidance that can be offered to an interaction designer. Keywords: eye-gaze tracking, rapid serial visual presentation (RSVP), user preference | |||
| Contrasting portraits of email practices: visual approaches to reflection and analysis | | BIBAK | Full-Text | 389-395 | |
| Adam Perer; Marc A. Smith | |||
| Over time, many people accumulate extensive email repositories that contain
detailed information about their personal communication patterns and
relationships. We present three visualizations that capture hierarchical,
correlational, and temporal patterns present in users' email repositories.
These patterns are difficult to discover using traditional interfaces and are
valuable for navigation and reflection on social relationships and
communication history. We interviewed users with diverse email habits and found
that they were able to interpret these images and could find interesting
features that were not evident to them through their standard email interfaces.
The images also capture a wide range of variation in email practices. These
results suggest that information visualizations of personal communications have
value for end-users and analysts alike. Keywords: email, information visualization, personal communication | |||
| Improving list revisitation with ListMaps | | BIBAK | Full-Text | 396-403 | |
| Carl Gutwin; Andy Cockburn | |||
| Selecting items from lists is a common task in many applications.
Alphabetically-sorted listboxes are the most common interface widget used to
accomplish this selection, but, although general, they can be slow and
frustrating to use, particularly when the lists are long. In addition, when the
user regularly revisits a small set of items, listboxes provide little support
for increased performance through experience. To address these shortcomings, we
developed a new list selection device called a ListMap, which organizes list
items into a space-filling array of buttons. Items never move in a ListMap,
which allows people to make use of spatial memory to find common items more
quickly. We carried out a study to compare selection of font names from a set
of 220 fonts using both ListMaps and standard listboxes. We found that although
listboxes are faster for unknown items, revisitation leads to significant
performance gains for the ListMap. Keywords: list selection, listboxes, listmaps, revisitation | |||
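The spatial-stability idea behind ListMaps can be sketched in a few lines: lay the items out once into a fixed grid, so each item always occupies the same cell. The function and grid shape below are illustrative assumptions, a simplified reading of the design rather than the paper's implementation:

```python
def listmap_layout(items, cols):
    """Assign each item a fixed (row, col) cell in a space-filling grid.

    The layout is computed once from the full item set and never
    changes, so users can build spatial memory for frequently
    revisited items."""
    return {name: divmod(i, cols) for i, name in enumerate(sorted(items))}

fonts = ["Verdana", "Arial", "Times", "Courier", "Georgia", "Helvetica"]
layout = listmap_layout(fonts, cols=3)
```

With this item set, "Arial" is first alphabetically and lands in cell (0, 0) on every run, which is what makes revisitation fast.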
| Line graph explorer: scalable display of line graphs using Focus+Context | | BIBAK | Full-Text | 404-411 | |
| Robert Kincaid; Heidi Lam | |||
| Scientific measurements are often depicted as line graphs. State-of-the-art
high throughput systems in life sciences, telemetry and electronics measurement
rapidly generate hundreds to thousands of such graphs. Despite the increasing
volume and ubiquity of such data, few software systems provide efficient
interactive management, navigation and exploratory analysis of large line graph
collections. To address these issues, we have developed Line Graph Explorer
(LGE). LGE is a novel and visually scalable line graph management system that
supports facile navigation and interactive visual analysis. LGE provides a
compact overview of the entire collection by encoding the y-dimension of
individual line graphs with color instead of space, thus enabling the analyst
to see major common features and alignments of the data. Using Focus+Context
techniques, LGE provides interactions for viewing selected compressed graphs in
detail as standard line graphs without losing a sense of the general pattern
and major features of the collection. To further enhance visualization and
pattern discovery, LGE provides sorting and clustering of line graphs based on
similarity of selected graph features. Sequential sorting by associated line
graph metadata is also supported. We illustrate the features and use of LGE
with examples from meteorology and biology. Keywords: Focus+Context, line graph | |||
| Querying spatio-temporal databases using a visual environment | | BIBAK | Full-Text | 412-419 | |
| Valéria M. B. Cavalcanti; Ulrich Schiel; Cláudio de Souza Baptista | |||
| Visual Query Systems (VQS) are becoming a very attractive field of research,
especially for advanced database systems such as spatial and temporal ones.
However, most of the visual query proposals support either spatial or temporal
data but not both. This paper presents a new VQS which supports querying
spatio-temporal data. The main idea is to provide a web-based, user-friendly,
and visual environment for querying generic spatio-temporal databases.
End users therefore need not worry about either data schemas or query
language syntax. The proposed VQS is based on well-established standards such
as SQL and OpenGIS, and it is flexible enough to be used with many database
systems that support spatial data. Keywords: spatio-temporal databases, visual query databases, visual query interfaces | |||
| Task-at-hand interface for change detection in stock market data | | BIBA | Full-Text | 420-427 | |
| Carmen Sanz Merino; Mike Sips; Daniel A. Keim; Christian Panse; Robert Spence | |||
| Companies trading stocks need to store information on stock prices over specific time intervals, which results in very large databases. Large quantities of numerical data (thousands of records) are virtually impossible to understand quickly and require the use of a visual model, since that is the fastest way for the human brain to absorb such enormous collections of data. However, little work has been done on verifying which visualizations are best suited to represent these data sets. Such work is of crucial importance, since it enables us to identify useful visual models and, in addition, opens our minds to new research possibilities. This paper presents an empirical study of different visualizations that have been employed for stock market data, comparing the results obtained by all studied techniques in typical exploratory data analysis tasks. This work provides several research contributions to the design of advanced visual data exploration interfaces. | |||
| Tumble! Splat! helping users access and manipulate occluded content in 2D drawings | | BIBAK | Full-Text | 428-435 | |
| Gonzalo Ramos; George Robertson; Mary Czerwinski; Desney Tan; Patrick Baudisch; Ken Hinckley; Maneesh Agrawala | |||
| Accessing and manipulating occluded content in layered 2D drawings can be
difficult. This paper characterizes a design space of techniques that
facilitate access to occluded content. In addition, we introduce two new tools,
Tumbler and Splatter, which represent unexplored areas of the design space.
Finally, we present results of a study that contrasts these two tools against
the traditional scene index used in most drawing applications. Results show
that Splatter is comparable to and can be better than the scene index. Our
findings allow us to understand the inherent design tradeoffs, and to identify
areas for further improvement. Keywords: 2D drawing, interaction technique, layer management, occlusion | |||
| Animated visualization of time-varying 2D flows using error diffusion | | BIBAK | Full-Text | 436-439 | |
| Alejo Hausner | |||
| This paper presents a fast glyph-placement algorithm for visualization of
time-varying 2D flow. The method can be used to place many kinds of glyphs.
Here it is applied to two in particular: arrows in a hedgehog diagram and
streak lines. It works by overpopulating images with glyphs, and then
decimating them. The decimation phase uses error diffusion, but extends this
halftoning technique to solve the problem of coloring a collection of shapes
which do not lie on a raster grid. Because error diffusion is a greedy
algorithm, the method avoids iterative adjustments of glyph positions, and is
fast. When used to visualize static flow fields, the resulting images are free
of grid and clustering artifacts. It can be extended to visualize time-varying
flow fields by further modifying the error diffusion algorithm to maintain
coherence between frames in an animation. Keywords: error diffusion, flow visualization, time-varying flow | |||
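The overpopulate-then-decimate step can be illustrated with a one-dimensional error-diffusion pass: each keep/drop decision quantizes the desired glyph density, and the residual error is carried to the next candidate. This is a toy scanline version under stated assumptions; the paper's algorithm diffuses error between neighbouring shapes that do not lie on a raster grid:

```python
def decimate(candidates, density):
    """Greedily thin an overpopulated set of glyph positions.

    candidates: x-positions in scan order; density(x): desired fraction
    of glyphs to keep near x, in [0, 1].  Works like 1-D halftoning:
    'keep' plays the role of a black pixel, 'drop' of a white one."""
    kept, error = [], 0.0
    for x in candidates:
        want = density(x) + error
        if want >= 0.5:
            kept.append(x)
            error = want - 1.0   # kept one glyph: subtract a full unit
        else:
            error = want         # carry the deficit forward
    return kept
```

At a uniform density of 0.5 this keeps every other candidate, with no clustering, which mirrors the artifact-free spacing the greedy method aims for.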
| Browsing large collections of images through unconventional visualization techniques | | BIBAK | Full-Text | 440-444 | |
| Marco Porta | |||
| In this paper we describe some alternative methods intended for rapid and
effective browsing of large collections of images. Specifically, we address the
user who, not having a clear idea of what to search for, needs to explore the
entire image database to identify what he or she likes. The purpose of our
approaches is to find techniques characterized by good trade-offs between
browsing time and quality of the exploration. Keywords: image browsing, image database, image presentation | |||
| Browsing through 3D representations of unstructured picture collections: an empirical study | | BIBAK | Full-Text | 445-448 | |
| Olivier Christmann; Noëlle Carbonell | |||
| The paper presents a 3D interactive representation of fairly large picture
collections which facilitates browsing through unstructured sets of icons or
pictures. Implementation of this representation implies choosing between two
visualization strategies: users may either manipulate the view (OV) or be
immersed in it (IV). The paper first presents this representation, then
describes an empirical study (17 participants) aimed at assessing the utility
and usability of each view. Subjective judgements in questionnaires and
debriefings were varied: 7 participants preferred the IV view, 4 the OV one,
and 6 could not choose between the two. Visual acuity and visual exploration
strategies seem to have exerted a greater influence on participants'
preferences than task performance or feeling of immersion. Keywords: 3D visualization, immersive virtual reality, manipulation of 3D objects,
photograph viewers, picture browsing, usability studies | |||
| Euclidean representation of 3D electronic institutions: automatic generation | | BIBAK | Full-Text | 449-452 | |
| Anton Bogdanovych; Sara Drago | |||
| In this paper we present the 3D Electronic Institutions metaphor and show
how it can be used for the specification of highly secure Virtual Worlds and
how 3D Virtual Worlds can be automatically generated from this specification.
To achieve the generation task we propose an algorithm for automatic
transformation of the Performative Structure graph into a 3D Virtual World,
using the rectangular dualization technique. The nodes of the initial graph are
transformed into rooms, the connecting arcs between nodes determine which rooms
have to be placed next to each other and define the positions of the doors
connecting those rooms. The proposed algorithm is sufficiently general to be
used for transforming any planar graph into a 3D Virtual World. Keywords: 3D electronic institutions, rectangular dualization | |||
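The node-to-room, arc-to-door mapping can be sketched as follows. Rooms are simply placed along a line here to keep the sketch short; the paper's rectangular dualization instead computes a proper planar rectangular layout:

```python
def graph_to_floorplan(nodes, edges):
    """Turn a Performative-Structure-style graph into rooms and doors.

    Each node becomes a room; each arc becomes a door shared by the
    two rooms it connects, so connected scenes end up adjacent."""
    rooms = {n: {"x": i, "doors": []} for i, n in enumerate(nodes)}
    for a, b in edges:
        rooms[a]["doors"].append(b)
        rooms[b]["doors"].append(a)
    return rooms
```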
| Exploring augmented reality visualizations | | BIBAK | Full-Text | 453-456 | |
| Antti Aaltonen; Juha Lehikoinen | |||
| Augmented Reality (AR) enhances our perception of reality by overlaying a
digital creation on the real world. AR information is often considered visual and
most of it is associated with real-world objects. This poses several challenges
in, e.g., aligning the real and virtual worlds. Another essential -- yet not
widely studied -- aspect is a more theoretical treatment of AR visualization.
In this paper, we take the first steps towards understanding AR visualizations
in more detail. We study existing AR visualizations based on well-known
visualization techniques Zoom and Pan; Overview and Detail; and Focus+Context,
and use these techniques to characterize AR visualizations in general. We claim
that our approach covers the majority of visualization schemes applicable to
AR, and is a useful method for understanding the fundamentals of AR
visualizations in general. Keywords: augmented reality, focus+context, information visualization, overview and
detail, pan and zoom | |||
| Navigation in degree of interest trees | | BIBAK | Full-Text | 457-462 | |
| Raluca Budiu; Peter Pirolli; Michael Fleetwood | |||
| We present an experiment that compares how people perform search tasks in a
degree-of-interest browser and in a Windows-Explorer-like browser. Our results
show that, whereas users do attend to more information in the DOI browser, they
do not complete the task faster than in an Explorer-like browser. However, in
both types of browser, users are faster to complete high information scent
search tasks than low information scent tasks. We present an ACT-R
computational model of the search task in the DOI browser. The model describes
how a visual search strategy may combine with semantic aspects of processing,
as captured by information scent. We also describe a way of automatically
estimating information scent in an ontological hierarchy by querying a large
corpus (in our case, Google's corpus). Keywords: ACT-R, DOI trees, information scent, information visualization, user models,
user studies | |||
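Estimating information scent from corpus hit counts can be sketched as pointwise mutual information between link text and goal description. The `hits` helper, the corpus size parameter, and the exact formula are assumptions for illustration; the abstract only states that scent is estimated by querying a large corpus:

```python
import math

def information_scent(link_text, goal, hits, n):
    """Pointwise mutual information between link text and goal.

    hits(q) -> number of corpus documents matching query q; n -> corpus
    size.  Positive values mean the terms co-occur more often than
    chance would predict, i.e. the link 'smells' of the goal."""
    joint = hits(link_text + " " + goal) / n
    indep = (hits(link_text) / n) * (hits(goal) / n)
    return math.log(joint / indep)
```

For example, if "finance" and "stocks" co-occur in far more documents than independence would predict, the scent score comes out positive.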
| Putting pictures in context | | BIBAK | Full-Text | 463-466 | |
| Lars-Erik Janlert | |||
| Visual likeness as a way of linking image to referent will need to be complemented
by other methods as mobile computing raises demands for situation-sensitive
images. The issue of visual interface metaphors is reviewed, and the first
steps to make a structured inventory of situational linking of images are
taken, suggesting further exploration of metonymy and temporality. Keywords: interface metaphor, mobile computing, situational context, ubiquitous
computing, visual interface, visual linking | |||
| TAPRAV: a tool for exploring workload aligned to models of task execution | | BIBAK | Full-Text | 467-470 | |
| Brian P. Bailey; Chris W. Busbey | |||
| Existing analysis tools are not sufficient for exploring pupillary response,
as the data typically needs to be explored in relation to the corresponding
task's execution. To address this need, we have developed an interactive
visualization tool called TAPRAV. Key components include (i) a visualization of
the pupillary response aligned to the model of task execution, useful for
making sense of the overall data set; (ii) an interactive overview+detail
metaphor, enabling rapid inspection of details; (iii) synchronization with the
video of screen interaction, providing awareness of the state of the task; and
(iv) interaction supporting discovery driven analysis. Keywords: mental workload, pupil size, task models, visualization | |||
| View projection animation for occlusion reduction | | BIBAK | Full-Text | 471-475 | |
| Niklas Elmqvist; Philippas Tsigas | |||
| Inter-object occlusion is inherent to 3D environments and is one of the
challenges of using 3D instead of 2D computer graphics for information
visualization. In this paper, we examine this occlusion problem by building a
theoretical framework of its causes and components. As a result of this
analysis, we present an interaction technique for view projection animation
that reduces inter-object occlusion in 3D environments without modifying the
geometrical properties of the objects themselves. The technique provides smooth
on-demand animation between parallel and perspective projection modes as well
as online manipulation of view parameters, allowing the user to quickly and
easily adapt the view to avoid occlusion. A user study indicates that the
technique significantly improves object discovery over normal perspective
views. We have also implemented a prototype of the technique in the Blender 3D
modeller. Keywords: 3D visualization, occlusion reduction, view projection | |||
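The parallel-perspective transition can be sketched as a blend of the two projections' screen-space scale factors. A simple linear blend of scalar scales is assumed here for illustration; the paper animates the actual view projection:

```python
def project_xy(x, y, z, d, t):
    """Project a 3D point with a view blended between perspective
    (t = 0) and parallel (t = 1) projection.

    Perspective scales screen coordinates by d/z (distant objects
    shrink, so they hide behind near ones); parallel projection scales
    uniformly by 1.  Animating t from 0 to 1 removes that depth
    foreshortening and, with it, much of the inter-object occlusion."""
    s = (1.0 - t) * (d / z) + t * 1.0
    return (x * s, y * s)
```

A point at depth z = 2 with view distance d = 1 projects to half its lateral offset under perspective and to its full offset under parallel projection.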
| Visual editing of animated algorithms: the Leonardo Web builder | | BIBAK | Full-Text | 476-479 | |
| Vincenzo Bonifaci; Camil Demetrescu; Irene Finocchi; Luigi Laura | |||
| Leonardo Web is a collection of tools to animate algorithms. Animations can
be generated with a visual editor or directly as a trace of an algorithm's
execution. They can be visualized via a small Java player, available as an
applet or as a standalone application; the player supports bidirectional
continuous and step-by-step execution. Furthermore, the system allows exporting
the animations in several formats, including Macromedia Flash, Microsoft
PowerPoint and animated GIF. In this paper we discuss the design issues of one
component of the Leonardo Web visual editor, called the Builder, which
can be used to design an animation from scratch as well as to refine
batch-generated ones. Keywords: algorithms animation, visual interfaces | |||
| Visualization of hand gestures for pervasive computing environments | | BIBAK | Full-Text | 480-483 | |
| Sanna Kallio; Juha Kela; Jani Mäntyjärvi; Johan Plomp | |||
| A visualization method is proposed as an additional feature for
accelerometer-based gesture control. The motivation for visualizing
gesture control is justified and the related challenges are
presented. The gesture control is based on Hidden Markov Models. This paper
describes basic concepts of the gesture visualization and studies how well the
developed visualization method can animate hand movement performed during the
gesture control. The results indicate that visualization clearly provides
information about the performed gesture, and it could be utilized in providing
essential feedback and guidance to the user in future gesture control
applications. Keywords: accelerometers, gesture control, gesture recognition, gesture visualization,
human computer interaction, mobile devices, user feedback | |||
| Visualization of patient data at different temporal granularities on mobile devices | | BIBAK | Full-Text | 484-487 | |
| Luca Chittaro | |||
| The capability of accessing, analyzing and possibly updating patients'
medical records from anywhere through a mobile device in the hands of
clinicians and nurses is considered to be a particularly promising application.
Information Visualization has explored interactive visual formats to help users
in analyzing patient records, but they are meant for the desktop context. This
paper begins to explore the problem of visualizing patient record data with the
limited display and interaction capabilities of mobile devices, focusing on
common PDAs and temporal data. Keywords: information visualization, mobile devices, patient records | |||
| Visualizing a temporally-enhanced ontology | | BIBAK | Full-Text | 488-491 | |
| Katifori Akrivi; Vassilakis Costas; Lepouras Georgios; Daradimos Ilias; Halatsis Constantin | |||
| Most ontology development methodologies and tools for ontology management
deal with ontology snapshots, i.e. they model and manage only the most recent
version of ontologies, which is inadequate for contexts where the history of
the ontology is of interest, such as historical archives. This work presents a
model of entity and relationship timelines in the Protégé
tool, complemented with a visualization plug-in, which enables users to examine
entity evolution along the timeline. Keywords: entity timeline, human-computer interaction, temporally enhanced ontology,
visualization method | |||
| A wearable interface for visualizing coauthor networks toward building a sustainable research community | | BIBAK | Full-Text | 492-495 | |
| Susumu Kuriyama; Masao Ohira; Hiroshi Igaki; Ken-ichi Matsumoto | |||
| In this paper, we introduce SCACS, a Social Context-Aware Communication
System that facilitates face-to-face communications between old-timers and
newcomers in a research community. SCACS provides users with information on
coauthor relationships that collocutors have, in order to help users understand
collocutors' research backgrounds and their relation to the user's own. While the
system works to help newcomers gain a better understanding of the research community by
meeting old-timers -- those central to the community -- it also works to recruit
newcomers who might bring new ideas and research topics, in order to make the
community sustainable. One of the contributions of the paper is to show an
example of a fusion of social networking and ubiquitous computing technologies,
which have attracted a considerable amount of attention in the last few years.
In contrast to exploiting social interactions in the real world to enhance
experiences of social networking services in the virtual world, SCACS collects
information on social networks (e.g., coauthorship networks) from
virtual spaces (that is, databases), and then visualizes them to facilitate
face-to-face communications among people in physical environments through
wearable interfaces. Instead of providing users with complex social network
graphs, SCACS transforms network graphs into tree maps so that users are able
to better understand the community. Keywords: community support, icebreaker, legitimate peripheral participation, social
networking, ubiquitous computing | |||
| Coordinating views in the InfoVis toolkit | | BIBAK | Full-Text | 496-499 | |
| Raquel M. Pillat; Carla M. D. S. Freitas | |||
| Multidimensional data sets can be visualized in a variety of forms provided
by the many techniques described in the literature. Depending on their
specific tasks, users might need to analyze the same data set using different
representations. Moreover, they might need to interact with one view and have
the results reflected in the others. This paper presents a system that
provides multiple coordinated views of multidimensional data. It enables the
coordination of techniques provided by the InfoVis toolkit as well as of
visualizations we implemented using its basic resources. Users set which
visualizations they want to coordinate through a diagram representing the
different visualizations. We present this user-driven coordination scheme and
the extensions we made to InfoVis to allow coordinated views through
different interaction tools. Keywords: coordinated views, information visualization | |||
| Mavigator: a flexible graphical user interface to data sources | | BIBAK | Full-Text | 500-503 | |
| Mariusz Trzaska; Kazimierz Subieta | |||
| We present Mavigator, a prototype graphical user interface to databases.
The system is dedicated to naive users (computer non-professionals) and
allows them to retrieve information from any data source, including
object-oriented and XML-oriented databases. The system extends its core
functionality through the Active Extensions (AE) module, which assumes a
trade-off between the simplicity of user retrieval interfaces and the
complexity of output-formatting functions. In AE, the latter are written by a
programmer in a fully-fledged programming language (currently C#); thus the
retrieved data can be post-processed or presented in any conceivable visual
form. Another novel feature of Mavigator is the Virtual Schemas module, which
allows customization of a database schema, in particular changing names,
adding new associations, or hiding some classes. Keywords: GUI, HCI, baskets, database views, graphical query interface, information
browsing, information retrieval, navigation | |||
| NAVRNA: visualization -- exploration -- editing of RNA | | BIBAK | Full-Text | 504-507 | |
| Gilles Bailly; Laurence Nigay; David Auber | |||
| In this paper we describe NAVRNA, an interactive system that enables
biologists and bioinformatics researchers to visualize, explore and edit RNA
molecules. The key characteristics of NAVRNA are (1) to exploit multiple
display surfaces, (2) to enable manipulation of both the 2D view of RNA,
called the secondary structure, and the 3D view of RNA, called the tertiary
structure, while maintaining consistency between the two views, (3) to enable
co-located synchronous collaborative manipulation of RNA structures, and (4)
to provide two-handed interaction techniques for navigating and editing RNA
structures, in particular a two-handed technique for bending the structure. Keywords: RNA structure, distributed display environment, tabletop display, two-handed
interaction, visualization | |||
| System for enhanced exploration and querying | | BIBAK | Full-Text | 508-511 | |
| Markku Rontu; Ari Korhonen; Lauri Malmi | |||
| This paper introduces SEEQ -- a System for Enhanced Exploration and
Querying. It is a visual query system for databases that uses a diagrammatic
visualization for most user interaction. The database schema is displayed as
a graph of the data model, including classes, associations and attributes.
The user formulates a query by direct manipulation, as a graph of schema
objects, additional operators and constants. The output of the query is
visualized as a graph of instances and constants, or in some other format
appropriate to the data. SEEQ can operate on an arbitrary relational database
provided that its schema is available in XML format. Keywords: database, visual query system, visualization | |||
| A visual tool to support technical analysis of stock market data | | BIBAK | Full-Text | 512-515 | |
| Andre Suslik Spritzer; Carla M. D. S. Freitas | |||
| A stock market investor relies on two schools of analysis of market
behavior to determine a trading strategy: Technical Analysis and Fundamental
Analysis. Fundamental Analysis is based on the study of a company's
fundamental data and is directed more at long-term investments, taking no
account of small short-term price oscillations. Technical Analysis, on the
other hand, is mostly used for mid- and short-term investments and is based
on the study of past price behavior through statistical tools and price
history charts, under the hypothesis that prices form patterns and reflect
all the relevant information about a company and about the psychology of
other investors. Applying Technical Analysis requires a computational system
capable of displaying price history charts and providing tools to be used
with them. This paper presents a prototype of a portable, extensible and
easy-to-use tool for desktop/laptop and handheld computers that provides the
investor with techniques for visualizing stock market data. Classical
visualization techniques and tools, such as Line, Bar, Candlestick, and Point
and Figure charts, as well as extra tools such as candlestick pattern
recognition, are available as built-in functions, but new tools and
visualizations can easily be added. The software was built with the .NET and
.NET Compact frameworks and uses XML to store the data set. Keywords: information visualization, stock market, technical analysis | |||