| Designing interaction, not interfaces | | BIBAK | Full-Text | 15-22 | |
| Michel Beaudouin-Lafon | |||
| Although the power of personal computers has increased 1000-fold over the
past 20 years, user interfaces remain essentially the same. Innovations in HCI
research, particularly novel interaction techniques, are rarely incorporated
into products. In this paper I argue that the only way to significantly improve
user interfaces is to shift the research focus from designing interfaces to
designing interaction. This requires powerful interaction models, a better
understanding of both the sensory-motor details of interaction and a broader
view of interaction in the context of use. It also requires novel interaction
architectures that address reinterpretability, resilience and scalability. Keywords: design principles, instrumental interaction, interaction architecture,
interaction model, interaction paradigm, situated interaction | |||
| Stitching: pen gestures that span multiple displays | | BIBAK | Full-Text | 23-31 | |
| Ken Hinckley; Gonzalo Ramos; François Guimbretière; Patrick Baudisch; Marc Smith | |||
| Stitching is a new interaction technique that allows users to combine
pen-operated mobile devices with wireless networking by using pen gestures that
span multiple displays. To stitch, a user starts moving the pen on one screen,
crosses over the bezel, and finishes the stroke on the screen of a nearby
device. Properties of each portion of the pen stroke are observed by the
participating devices, synchronized via wireless network communication, and
recognized as a unitary act performed by one user, thus binding together the
devices. We identify the general requirements of stitching and describe a
prototype photo sharing application that uses stitching to allow users to copy
images from one tablet to another that is nearby, expand an image across
multiple screens, establish a persistent shared workspace, or use one tablet to
present images that a user selects from another tablet. We also discuss design
issues that arise from proxemics, that is, the sociological implications of
users collaborating in close quarters. Keywords: co-located collaboration, mobile devices, pen computing, proxemics,
spontaneous device sharing, synchronous gestures | |||
| Display space usage and window management operation comparisons between single monitor and multiple monitor users | | BIBAK | Full-Text | 32-39 | |
| Dugald Ralph Hutchings; Greg Smith; Brian Meyers; Mary Czerwinski; George Robertson | |||
| The continuing trend toward greater processing power, larger storage, and in
particular increased display surface through the use of multiple monitors supports
increased multi-tasking by the computer user. The concomitant increase in
desktop complexity has the potential to push the overhead of window management
to frustrating and counterproductive new levels. It is difficult to adequately
design for multiple monitor systems without understanding how multiple monitor
users differ from, or are similar to, single monitor users. Therefore, we
deployed a tool to a group of single monitor and multiple monitor users to log
window management activity. Analysis of the data collected from this tool
revealed that usage of interaction components may change with an increase in
number of monitors, and window visibility can be a useful measure of user
display space management activity, especially for multiple monitor users. The
results from this analysis begin to fill a gap in research about real-world
window management practices. Keywords: UI logs, automation, multiple monitors, space management, user interaction,
window management | |||
| A visual tool for tracing users' behavior in Virtual Environments | | BIBAK | Full-Text | 40-47 | |
| Luca Chittaro; Lucio Ieronutti | |||
| Although some guidelines (e.g., based on architectural principles) have been
proposed for designing Virtual Environments (VEs), several usability problems
can be identified only by studying the behavior of real users in VEs. This
paper proposes a tool, called VU-Flow, that is able to automatically record
usage data of VEs and then visualize it in formats that make it easy for the VE
designer to visually detect peculiar users' behaviors and thus better
understand the effects of her design choices. In particular, the visualizations
concern: i) the detailed paths followed by single users or groups of users in
the VE, ii) areas of maximum (or minimum) users' flow, iii) the parts of the
environment seen more (or less) by users, iv) detailed replay of users'
visits. We show examples of how these visualizations allow one to visually
detect useful information such as the interests of users, navigation problems, and
users' visiting styles. Although this paper describes how VU-Flow can be used in
the context of VEs, it is interesting to note that the tool can also be applied
to the study of users of location-aware mobile devices in physical
environments. Keywords: evaluation tools, users' flow, virtual environments, virtual reality | |||
| More than the sum of its members: challenges for group recommender systems | | BIBA | Full-Text | 48-54 | |
| Anthony Jameson | |||
| Systems that recommend items to a group of two or more users raise a number of challenging issues that are so far only partly understood. This paper identifies four of these issues and points out that they have been dealt with to only a limited extent in the group recommender systems that have been developed so far. The issues are especially important in settings where group members specify their preferences explicitly and where they are not able to engage in face-to-face interaction. We illustrate some of the solutions discussed with reference to the TRAVEL DECISION FORUM prototype. The issues concern (a) the design of suitable preference elicitation and aggregation methods, in particular nonmanipulable aggregation mechanisms; and (b) ways of making members aware of each other's preferences and motivational orientations, such as the use of animated representatives of group members. | |||
| MADCOW: a multimedia digital annotation system | | BIBAK | Full-Text | 55-62 | |
| Paolo Bottoni; Roberta Civica; Stefano Levialdi; Laura Orso; Emanuele Panizzi; Rosa Trinchese | |||
| Digital annotation of multimedia documents adds information to a document
(e.g. a web page) or parts of it (a multimedia object such as an image or a
video stream contained in the document). Digital annotations can be kept
private or shared among different users over the internet, allowing discussions
and cooperative work. We study the possibility of annotating multimedia
documents with objects that are themselves multimedia in nature. Annotations can
refer to whole documents or single portions thereof, as usual, but also to
multi-objects, i.e. groups of objects contained in a single document. We
designed and developed a new digital annotation system organized in a
client-server architecture, where the client is a plug-in for a standard web
browser and the servers are repositories of annotations to which different
clients can log in. Annotations can be retrieved and filtered, and one can
choose different annotation servers for a document. We present a
platform-independent design for such a system, and illustrate a specific
implementation for Microsoft Internet Explorer on the client side and
JSP/MySQL for the server side. Keywords: annotation, multimedia document, plug-ins | |||
| Integrating expanding annotations with a 3D explosion probe | | BIBAK | Full-Text | 63-70 | |
| Henry Sonnet; Sheelagh Carpendale; Thomas Strothotte | |||
| Understanding complex 3D virtual models can be difficult, especially when
the model has interior components not initially visible and ancillary text. We
describe new techniques for the interactive exploration of 3D models.
Specifically, in addition to traditional viewing operations, we present new
text extrusion techniques combined with techniques that create an interactive
explosion diagram. In our approach, scrollable text annotations that are
associated with the various parts of the model can be revealed dynamically,
either in part or in full, by moving the mouse cursor within annotation trigger
areas. Strong visual connections between model parts and the associated text
are included in order to aid comprehension. Furthermore, the model parts can be
separated, creating interactive explosion diagrams. Using a 3D probe, occluding
objects can be interactively moved apart and then returned to their initial
locations. Displayed annotations are kept readable despite model manipulations.
Hence, our techniques provide textual context within the spatial context of the
3D model. Keywords: 3D model exploration, expanding annotations, explosion diagram, interaction
design | |||
| Using games to investigate movement for graph comprehension | | BIBAK | Full-Text | 71-79 | |
| John Bovey; Florence Benoy; Peter Rodgers | |||
| We describe the results of empirical investigations that explore the
effectiveness of moving graph diagrams to improve the comprehension of their
structure. The investigations involved subjects playing a game that required
understanding the structure of a number of graphs. The use of a game as the
task was intended to motivate the exploration of the graph by the subjects. The
results show that movement can be beneficial when there is node-node or
node-edge occlusion in the graph diagram but can have a detrimental effect when
there is no occlusion, particularly if the diagram is small. We believe the
positive result should generalise to other graph exploration tasks, and that
graph movement is likely to be useful as an additional graph exploration tool. Keywords: diagram comprehension, graph drawing, graph movement | |||
| Usability of E-learning tools | | BIBAK | Full-Text | 80-84 | |
| C. Ardito; M. De Marsico; R. Lanzilotti; S. Levialdi; T. Roselli; V. Rossano; M. Tersigni | |||
| The new challenge for designers and HCI researchers is to develop software
tools for effective e-learning. Learner-Centered Design (LCD) provides
guidelines to make new learning domains accessible in an educationally
productive manner. A number of new issues have been raised because of the new
"vehicle" for education. Effective e-learning systems should include
sophisticated and advanced functions, yet their interface should hide their
complexity, providing an easy and flexible interaction suited to catch
students' interest. In particular, personalization and integration of learning
paths and communication media should be provided.
It is first necessary to dwell upon the difference between attributes for platforms (containers) and for educational modules provided by a platform (contents). In both cases, it is hard to go deeply into pedagogical issues of the provided knowledge content. This work is a first step towards identifying specific usability attributes for e-learning systems, capturing the peculiar features of this kind of application. We report on a preliminary user study involving a group of e-students, observed during their interaction with an e-learning system in a real situation. We then propose to adapt to the e-learning domain the so-called SUE (Systematic Usability Evaluation) inspection, providing evaluation patterns able to drive inspectors' activities in the evaluation of an e-learning tool. Keywords: e-learning, learner centered design, usability evaluation | |||
| Scalable Fabric: flexible task management | | BIBAK | Full-Text | 85-89 | |
| George Robertson; Eric Horvitz; Mary Czerwinski; Patrick Baudisch; Dugald Ralph Hutchings; Brian Meyers; Daniel Robbins; Greg Smith | |||
| Our studies have shown that as displays become larger, users leave more
windows open for easy multitasking. A larger number of windows, however, may
increase the time that users spend arranging and switching between tasks. We
present Scalable Fabric, a task management system designed to address problems
with the proliferation of open windows on the PC desktop. Scalable Fabric
couples window management with a flexible visual representation to provide a
focus-plus-context solution to desktop complexity. Users interact with windows
in a central focus region of the display in a normal manner, but when a user
moves a window into the periphery, it shrinks in size, getting smaller as it
nears the edge of the display. The window "minimize" action is redefined to
return the window to its preferred location in the periphery, allowing windows
to remain visible when not in use. Windows in the periphery may be grouped
together into named tasks, and task switching is accomplished with a single
mouse click. The spatial arrangement of tasks leverages human spatial memory to
make task switching easier. We review the evolution of Scalable Fabric over
three design iterations, including discussion of results from two user studies
that were performed to compare the experience with Scalable Fabric to that of
the Microsoft Windows XP TaskBar. Keywords: interaction, scaling, spatial memory, task management | |||
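The peripheral shrinking behaviour described in this abstract suggests a simple distance-based scale function. The sketch below is only an illustration of that idea under assumed names (`peripheral_scale`, a linear falloff, a `min_scale` floor); it is not the authors' implementation.

```python
def peripheral_scale(window_x, window_y, focus_rect, display_rect, min_scale=0.2):
    """Return a scale factor for a window dragged into the periphery.

    Inside the focus region the window keeps its normal size (scale 1.0);
    outside it shrinks linearly as it approaches the display edge.
    Rectangles are (left, top, right, bottom). Hypothetical sketch only.
    """
    fl, ft, fr, fb = focus_rect
    dl, dt, dr, db = display_rect

    # How far the window centre sits beyond the focus region, per axis.
    dx = max(fl - window_x, window_x - fr, 0)
    dy = max(ft - window_y, window_y - fb, 0)

    # Largest possible excursion from the focus border to the display edge.
    max_dx = max(fl - dl, dr - fr, 1)
    max_dy = max(ft - dt, db - fb, 1)

    # Normalised peripheral distance: 0 at the focus border, 1 at the edge.
    t = min(max(dx / max_dx, dy / max_dy), 1.0)
    return 1.0 - t * (1.0 - min_scale)


# Example: a window halfway between the focus border and the right screen edge.
print(peripheral_scale(1800, 500, (400, 0, 1500, 1200), (0, 0, 2100, 1200)))  # 0.6
```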
| Scaffolding visually cluttered web pages to facilitate accessibility | | BIBAK | Full-Text | 90-93 | |
| Alison Lee | |||
| Increasingly, rich and dynamic content and abundant links are making Web
pages visually cluttered and widening the accessibility divide for the disabled
and people with impairments. The adaptations approach of transforming Web pages
has enabled users with diverse abilities to access a Web page. However, the
challenge remains for these users to work with a Web page, particularly among
people with minimal Web experience and cognitive limitations. We propose that
scaffolding can allow users to learn certain skills that help them function
online with greater autonomy. In the case of visually cluttered Web pages,
several accessibility scaffoldings were created to enable users to learn where
core content begins, how text flows in a part of a Web page, and what the
overall structure of a Web page is. These scaffoldings expose the elements,
pathways, and organization of a Web page that enable users to interpret and
grasp the structure of a Web page. We present the concept of an accessibility
scaffolding, the designs of the scaffoldings for visually cluttered pages, and
user feedback from people who work with our target end-users. Keywords: GUI button interface, Web, accessibility, design space, dynamic diversity,
interaction design, page segmentation, recorder interface, scaffolding, speak
text | |||
| Scene-Driver: reusing broadcast animation content for engaging, narratively coherent games | | BIBAK | Full-Text | 94-97 | |
| Annika Wolff; Paul Mulholland; Zdenek Zdrahal | |||
| Scene-Driver is a software toolkit for the reuse of broadcast animation
content to provide new engaging experiences for children. It has been developed
and tested using content from the children's television series "Tiny Planets".
Scene-Driver can be used to produce variations on a domino-like game. When
playing, the child selects from a set of tiles that depict, for example,
characters from the series. The child manipulates the direction of a story in
the Tiny Planet world by their choice of tile. The successful selection of a
tile will result in a scene from the show being played. A scene is defined as a
section from an episode which has certain self-contained narrative elements
such as conflict introduction, conflict resolution or comedic event. A
scene-supervisor uses these descriptions to ensure that as well as having all
the properties prescribed by the child's choice of tile, the scenes are
presented in a coherent order according to certain plot and directorial
principles. Inter-scene continuity is provided in the form of transition scenes
which depict the departure and arrival of relevant characters between one scene
and the next. Preliminary evaluations have demonstrated the potential of
Scene-Driver to produce engaging and usable games based on broadcast content
for young children. Keywords: AI planning algorithms, animated interfaces, child directed interface,
interactive narrative, interface evaluation, visual interaction | |||
| On electronic annotation and its implementation | | BIBAK | Full-Text | 98-102 | |
| Daniela Fogli; Giuseppe Fresta; Piero Mussio | |||
| Electronic documents and electronic annotations evolve from and complement
traditional documents and annotations in recording, developing and making
available community knowledge. This paper discusses electronic annotation and
its importance as a tool for the two-way exchange of ideas among people pursuing a
common goal. The discussion is illustrated with an example from the Earth
science field. Keywords: characteristic structure, electronic annotation, electronic document,
virtual entity | |||
| Visualization of music performance as an aid to listener's comprehension | | BIBA | Full-Text | 103-106 | |
| Rumi Hiraga; Noriyuki Matsuda | |||
| We present a new method for visualizing musical expressions with a special focus on the three major elements of tempo change, dynamics change, and articulation. We have represented tempo change as a horizontal interval delimited by vertical lines, while dynamics change and articulation within the interval are represented by the height and width of a bar, respectively. Then we grouped local expressions into several groups by k-means clustering based on the values of the elements. The resulting groups represented the emotional expression in a performance that is controlled by the rhythmic and melodic structure, which in turn controls the gray scale of the graphical components. We ran a pilot experiment to test the effectiveness of our method using two matching tasks and a questionnaire. In the first task, we used the same section of music played in two different interpretations, while in the second task, two different sections of a performance were used. The results of the test seem to support the present approach, although there is still room for further improvement to reflect the subtleties in performance. | |||
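The grouping step mentioned above (k-means over per-interval tempo change, dynamics change and articulation values) can be sketched in a few lines. This is a generic illustration of that clustering step, assuming scikit-learn and invented feature values; it is not the authors' code, and the mapping from cluster to gray level is only hinted at in a comment.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-interval features: (tempo change, dynamics change, articulation).
features = np.array([
    [ 0.05, 0.2, 0.9],
    [ 0.07, 0.3, 0.8],
    [-0.10, 0.9, 0.4],
    [-0.12, 0.8, 0.5],
    [ 0.00, 0.5, 0.7],
])

# Group local expressions into a small number of clusters; each cluster could
# then be assigned a gray level for the corresponding graphical components.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
for interval, label in enumerate(kmeans.labels_):
    print(f"interval {interval}: expression group {label}")
```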
| The challenge of information visualization evaluation | | BIBAK | Full-Text | 109-116 | |
| Catherine Plaisant | |||
| As the field of information visualization matures, the tools and ideas
described in our research publications are reaching users. The reports of
usability studies and controlled experiments are helpful to understand the
potential and limitations of our tools, but we need to consider other
evaluation approaches that take into account the long exploratory nature of
users' tasks, the value of potential discoveries or the benefits of overall
awareness. We need better metrics and benchmark repositories to compare tools,
and we should also seek reports of successful adoption and demonstrated
utility. Keywords: adoption, evaluation, return on investment, technology transfer, usability,
usefulness, user studies, visualization | |||
| View size and pointing difficulty in multi-scale navigation | | BIBAK | Full-Text | 117-124 | |
| Yves Guiard; Michel Beaudouin-Lafon; Julien Bastin; Dennis Pasveer; Shumin Zhai | |||
Using a new taxonomy of pointing tasks which includes view pointing alongside
traditional cursor pointing, we introduce the concept of multi-scale pointing.
Analyzing the impact of view size, we demonstrate theoretically and
experimentally that (1) the time needed to reach a remotely located target in a
multi-scale interface still obeys Fitts' law and (2) the bandwidth of the
interaction (i.e., the inverse of Fitts' law slope) is proportional to view
size, a relationship bounded by an early ceiling effect. We discuss these
results with special reference to navigation in miniaturized and enlarged
interfaces. Keywords: Fitts' law, analysis methods, empirical methods, input and interaction
technologies, quantitative | |||
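For reference, the quantities this abstract appeals to can be written out explicitly. In the Shannon formulation of Fitts' law, movement time grows with the index of difficulty, and the reciprocal of the fitted slope is the bandwidth that the authors report as proportional to view size (up to a ceiling effect); the symbols below are the usual ones and are not taken from the paper itself.

```latex
% Fitts' law (Shannon formulation) and the bandwidth discussed in the abstract.
% MT: movement time, D: distance to the target, W: target width,
% a, b: empirically fitted constants.
\begin{align}
  MT &= a + b \,\log_2\!\left(\frac{D}{W} + 1\right), \\
  \text{bandwidth} &= \frac{1}{b} \quad \text{(bits per second)}.
\end{align}
```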
| Usability studies on a visualisation for parallel display and control of alternative scenarios | | BIBAK | Full-Text | 125-132 | |
| Aran Lunzer; Kasper Hornbæk | |||
| Many applications require comparison between alternative scenarios; most
support it poorly. A subjunctive interface supports comparison through its
facilities for parallel setup, viewing and control of scenarios. To evaluate
the usability and benefits of these facilities, we ran experiments in which
subjects used both a simple and a subjunctive interface to make comparisons in
a census data set. In the first experiment, subjects reported higher
satisfaction and lower workload with the subjunctive interface, and relied less
on interim marks on paper. Subjects also used fewer interface actions. However,
we found no reduction in task completion time, mainly because some subjects
encountered problems in using the facilities for setting up and controlling
scenarios. Based on a detailed analysis of subjects' actions we redesigned the
subjunctive interface to alleviate frequent problems, such as accidentally
adjusting only one scenario when the intention was to adjust them all. At the
end of a second, five-session experiment, users of this redesigned interface
completed tasks 27% more quickly than with the simple interface. Keywords: information visualisation, iterative design and evaluation, subjunctive
interfaces, usability study | |||
| Fishnet, a fisheye web browser with search term popouts: a comparative evaluation with overview and linear view | | BIBAK | Full-Text | 133-140 | |
| Patrick Baudisch; Bongshin Lee; Libby Hanna | |||
| Fishnet is a web browser that always displays web pages in their entirety,
independent of their size. Fishnet accomplishes this by using a fisheye view,
i.e. by showing a focus region at readable scale while spatially compressing
page content above and below that region. Fishnet offers search term
highlighting, and assures that those terms are readable by using "popouts".
This allows users to visually scan search results within the entire page
without scrolling.
The scope of this paper is twofold. First, we present fishnet as a novel way of viewing highlighted search results and we discuss its design space. Second, we present a user study that helps practitioners determine which visualization technique -- fisheye view, overview, or regular linear view -- to pick for which type of visual search scenario. Keywords: fisheye view, popouts, search terms, web browser | |||
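The fisheye layout described above (a readable focus band with content above and below compressed) can be approximated by a simple piecewise vertical mapping. The sketch below is an illustrative approximation under assumed parameter names; it is not Fishnet's rendering code, and it ignores popouts for highlighted search terms.

```python
def fisheye_y(y, focus_top, focus_bottom, compression=0.15):
    """Map a document y-coordinate to a display y-coordinate.

    Content inside [focus_top, focus_bottom] keeps its scale (the readable
    focus region); content above and below is squeezed by `compression` so
    the whole page fits on screen. Hypothetical sketch of the fisheye mapping.
    """
    if y < focus_top:
        return y * compression
    if y <= focus_bottom:
        return focus_top * compression + (y - focus_top)
    return (focus_top * compression
            + (focus_bottom - focus_top)
            + (y - focus_bottom) * compression)


# A 10000-pixel-tall page with a readable band between y=3000 and y=3800.
for y in (0, 3000, 3400, 3800, 10000):
    print(y, "->", round(fisheye_y(y, 3000, 3800)))
```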
| Image presentation in space and time: errors, preferences and eye-gaze activity | | BIBAK | Full-Text | 141-149 | |
| Bob Spence; Mark Witkowski; Catherine Fawcett; Brock Craft; Oscar de Bruijn | |||
| Rapid Serial Visual Presentation (RSVP) is a technique that allows images to
be presented sequentially in the time-domain, thereby offering an alternative
to the conventional concurrent display of images in the space domain. Such an
alternative offers potential advantages where display area is at a premium.
However, notwithstanding the flexibility to employ either or both domains for
presentation purposes, little is known about the alternatives suited to
specific tasks undertaken by a user. As a consequence there is a pressing need
to provide guidance for the interaction designer faced with these alternatives.
We investigated the task of identifying the presence or absence of a previously viewed image within a collection of images, a requirement of many real activities. In experiments with subjects, the collection of images was presented in three modes (1) 'slide show' RSVP mode; (2) concurrently and statically -- 'static mode'; and (3) a 'mixed' mode. Each mode employed the same display area and the same total presentation time, together regarded as primary resources available to the interaction designer. For each presentation mode, the outcome identified error profiles and subject preferences. Eye-gaze studies detected distinctive differences between the three presentation modes. Keywords: Rapid Serial Visual Presentation, eye-gaze tracking, space-time trade-off,
visual information browsing, visual interface design | |||
| ValueCharts: analyzing linear models expressing preferences and evaluations | | BIBAK | Full-Text | 150-157 | |
| Giuseppe Carenini; John Loyd | |||
| In this paper we propose ValueCharts, a set of visualizations and
interactive techniques intended to support decision-makers in inspecting linear
models of preferences and evaluation. Linear models are popular decision-making
tools for individuals, groups and organizations. In Decision Analysis, they
help the decision-maker analyze preferential choices under conflicting
objectives. In Economics and the Social Sciences, similar models are devised to
rank entities according to an evaluative index of interest. The fundamental
goal of building models expressing preferences and evaluations is to help the
decision-maker organize all the information relevant to a decision into a
structure that can be effectively analyzed. However, as models and their domain
of application grow in complexity, model analysis can become a very challenging
task. We claim that ValueCharts will make the inspection and application of
these models more natural and effective. We support our claim by showing how
ValueCharts effectively enable a set of basic tasks that we argue are at the
core of analyzing and understanding linear models of preferences and
evaluation. Keywords: decision analysis, evaluative index, information visualization, interactive
techniques, linear model, preference model | |||
| Exploring and examining assessment data via a matrix visualisation | | BIBAK | Full-Text | 158-162 | |
| Martin Graham; Jessie Kennedy | |||
| OPAL (Online PArtner Lens) is an application designed to match project
requirements with suitable teams and individuals, and as part of its matching
process features an evaluation mechanism designed to elicit measures of trust
between potential partners. We describe a matrix-style visualisation that
displays these hierarchically structured assessments between sets of OPAL users
to allow them to select potential partners. The main feature of the matrix
visualisation is the ability for users to assess the context of a specific
assessment as the visualisation not only reveals simple related statistics for
the two users concerned, but also overlays summaries of related assessor and
candidate evaluations as compact and ordered 'value bars' when the user
examines information in the matrix. This enables the user to better decide
whether a given assessment is in line with what would be expected from an
assessor's and candidate's history, or whether it indicates a specifically
localised interplay between the two users. Other features include a simple
focus+context effect that can reveal the tree-like structure and details of
assessments, and filtering assessments by their position in the matrix or by
particular assessment attributes. Keywords: directed graph, filter, focus+context, information visualisation, matrix | |||
| A graph-based interface to complex hypermedia structure visualization | | BIBAK | Full-Text | 163-166 | |
| Manuel Freire; Pilar Rodríguez | |||
| Complex hypermedia structures can be difficult to author and maintain,
especially when the usual hierarchic representation cannot capture important
relations. We propose a graph-based direct manipulation interface that uses
multiple focus+context techniques to avoid display clutter and information
overload. A semantic fisheye lens based on hierarchical clustering allows the
user to work on high-level abstracts of the structure. Navigation through the
resulting graph is animated in order to avoid loss of orientation, with a
force-directed algorithm in charge of generating successive layouts. Multiple
views can be generated over the same data, each with independent settings for
filtering, clustering and degree of zoom.
While these techniques are all well-known in the literature, it is their combination and application to the field of hypermedia authoring that constitutes a powerful tool for the development of next-generation hyperspaces. A generic framework, CLOVER, and two specific applications for existing hypermedia systems have been implemented. Keywords: focus+context, graph visualization, hypermedia | |||
| Focus dependent multi-level graph clustering | | BIBAK | Full-Text | 167-170 | |
| François Boutin; Mountaz Hascoët | |||
| In this paper we propose a structure-based clustering technique that
transforms a given graph into a specific double tree structure called
multi-level outline tree. Each meta-node of the tree -- that represents a
subset of nodes -- is itself hierarchically clustered. So, a meta-node is
considered as the root of a tree of included clusters.
The main originality of our approach is that it accounts for the user's focus in the clustering process to provide views from different perspectives. Multi-level outline trees are computed in linear time and are easy to explore. We think that our technique is well suited to investigating various graphs such as Web graphs or citation graphs. Keywords: graph clustering, graph drawing, multi-scale visualization | |||
| KNAVE II: the definition and implementation of an intelligent tool for visualization and exploration of time-oriented clinical data | | BIBAK | Full-Text | 171-174 | |
| Dina Goren-Bar; Yuval Shahar; Maya Galperin-Aizenberg; David Boaz; Gil Tahan | |||
| KNAVE-II is an intelligent interface to a distributed web-based architecture
that enables users (e.g., physicians) to query, visualize and explore clinical
time-oriented databases. Based on prior studies, we have defined a set of
requirements for provision of a service for interactive exploration of time
oriented clinical data. The main requirements include the visualization,
interactive exploration and explanation of both raw data and multiple levels of
concepts abstracted from these data; the exploration of clinical data at
different levels of temporal granularity along both absolute (calendar-based)
and relative (clinically meaningful) time-lines; the exploration and dynamic
visualization of the effects of simulated hypothetical modifications of raw
data on the derived concepts; and the provision of generic services (such as
statistics, documentation, fast search and retrieval of clinically significant
concepts, amongst others). KNAVE-II has been implemented and is currently being
evaluated by expert clinicians in several medical domains, such as oncology,
involving monitoring of chronic patients. Keywords: clinical systems, human-computer interaction, intelligent visualization,
knowledge-based systems, medical informatics, temporal abstraction, temporal
reasoning | |||
| Temporal Thumbnails: rapid visualization of time-based viewing data | | BIBAK | Full-Text | 175-178 | |
| Michael Tsang; Nigel Morris; Ravin Balakrishnan | |||
| We introduce the concept of the Temporal Thumbnail, used to quickly convey
information about the amount of time spent viewing specific areas of a virtual
3D model. Temporal Thumbnails allow for large amounts of time-based information
collected from model viewing sessions to be rapidly visualized by collapsing
the time dimension onto the space of the model, creating a characteristic
impression of the overall interaction. We describe three techniques that
implement the Temporal Thumbnail concept and present a study comparing these
techniques to more traditional video and storyboard representations. The
results suggest that Temporal Thumbnails have potential as an effective
technique for quickly analyzing large amounts of viewing data. Practical and
theoretical issues for visualization and representation are also discussed. Keywords: representation refinement, temporal thumbnail, viewing analysis,
visualization | |||
| CircleView: a new approach for visualizing time-related multidimensional data sets | | BIBAK | Full-Text | 179-182 | |
| Daniel A. Keim; Jörn Schneidewind; Mike Sips | |||
| This paper introduces a new approach for visualizing multidimensional
time-referenced data sets, called Circle View. The Circle View technique is a
combination of hierarchical visualization techniques, such as treemaps [6], and
circular layout techniques such as Pie Charts and Circle Segments [2]. The main
goal is to compare continuous data whose characteristics change over time, in
order to identify patterns, exceptions and similarities in the data.
To achieve this goal, Circle View provides an intuitive and easy-to-understand visualization interface that enables the user to acquire the needed information very quickly. This is an important feature for fast-changing visualizations driven by time-related data streams. Circle View supports the visualization of characteristics that change over time, allowing the user to observe changes in the data. Additionally, it provides user interaction and drill-down mechanisms, depending on user demands, for effective exploratory data analysis. There is also the capability of exploring correlations and exceptions in the data by using similarity and ordering algorithms. Keywords: advanced visual interface, continuous data streams, information
visualization, visual data mining | |||
| Interactive data summarization: an example application | | BIBAK | Full-Text | 183-187 | |
| Neal Lesh; Michael Mitzenmacher | |||
| Summarizing large multidimensional datasets is a challenging task, often
requiring extensive investigation by a user to identify overall trends and
important exceptions to them. While many visualization tools help a user
produce a single summary of the data at a time, they require the user to
explore the dataset manually. Our idea is to have the computer perform an
exhaustive search and inform the user about where further investigation is
warranted. Our algorithm takes a large, multidimensional dataset as input,
along with a specification of the user's goals, and produces a concise summary
that can be clearly visualized in bar graphs or line graphs. We demonstrate our
techniques in a sample prototype for summarizing information stored in
spreadsheet databases. Keywords: data mining, information visualization, interactive data exploration | |||
| Exploratory visualization using bracketing | | BIBAK | Full-Text | 188-192 | |
| Jonathan C. Roberts | |||
| There are many tools that provide the user with an abundance of sliders,
buttons and options to change; such tools are popular in exploratory
visualization. As the user changes the parameters so the display dynamically
updates and responds appropriately to changes made. These multiparameter
systems can be difficult to use, as the user is often unaware of the outcome of
any action before it occurs. Specifically it may be unclear whether to increase
or decrease a parameter value to get a desired result. Multiple view systems
can help, as the user can try out various scenarios and compare the results
side-by-side, although if unrestricted the user may be swamped by numerous and
often unnecessary views. In this paper we present the novel idea of
'bracketing', where a principal view is supported with two additional views
from slightly different parameterizations. The idea is inspired by exposure
bracketing in photography. This provides a middle ground: it offers a way to
see adjacent parameterizations, while allowing yet restraining multiple views.
Moreover, we demonstrate how bracketing can be exploited in many applications
and used in various ways (within parameter, visual and temporal domains). Keywords: bracketing, coordination, exploratory visualization, multiple views | |||
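The bracketing idea, a principal view flanked by two views at slightly perturbed parameter values, is easy to express as a small helper. The sketch below is a generic illustration under assumed names (a `render` callback and a relative `delta`); it is not the paper's implementation.

```python
def bracket(render, params, key, delta=0.1):
    """Render a principal view plus two 'bracketed' views.

    `render(params)` produces a view for a parameter dictionary; `key` names
    the parameter being bracketed and `delta` is the relative offset applied
    below and above the principal value. Returns (lower, principal, upper).
    Hypothetical sketch of parameter bracketing.
    """
    value = params[key]
    lower = dict(params, **{key: value * (1 - delta)})
    upper = dict(params, **{key: value * (1 + delta)})
    return render(lower), render(params), render(upper)


# Example with a trivial "renderer" that just reports the isovalue it was given.
views = bracket(lambda p: f"isosurface at {p['isovalue']:.2f}",
                {"isovalue": 0.50}, key="isovalue")
print(views)  # ('isosurface at 0.45', 'isosurface at 0.50', 'isosurface at 0.55')
```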
| SWAPit: a multiple views paradigm for exploring associations of texts and structured data | | BIBAK | Full-Text | 193-196 | |
| Andreas Becks; Christian Seeling | |||
| Visualization interfaces that offer multiple coordinated views on a
particular set of data items are useful for navigating and exploring complex
information spaces. In this paper we address the problem of mining text
information which is associated with structured data from relational data
sources. We present a multi-view paradigm that closely integrates the analysis
of unstructured text data with related structured data sets. Our concept brings
together views on text similarity, text categories, and associated relational
attributes for application fields like customer relationship management or
business intelligence. A prototype that exemplifies our multi-view framework is
presented. Keywords: document maps, multiple views, ontologies, relational data, text mining,
visual interaction | |||
| Quantum web fields and molecular meanderings: visualising web visitations | | BIBAK | Full-Text | 197-200 | |
| Geoff Ellis; Alan Dix | |||
| This paper describes two visualisation algorithms that give an impression of
current activity on a web site. Both focus on giving a sense of the trail of
individual visitors within the web space and showing their navigation paths.
Past web activity is used to produce a spatial mapping of pages, which results
in highly traversed page links lying close together in the 2D visualisation
space. Pages visited by typical individual visitors thus form intelligible
paths when plotted in the visualisation space. Both techniques attempt to
enhance user awareness and experience, but they differ in their balance between
utility and aesthetics. Keywords: awareness, self-organising map, web visualisation | |||
| Ambient intelligence: visualizing the future | | BIBAK | Full-Text | 203-208 | |
| Boris de Ruyter; Emile Aarts | |||
| As technologies in the areas of storage, connectivity and displays are
rapidly evolving and business development is pointing in the direction of the
experience economy, the vision of Ambient Intelligence places human needs at
the centre of technology development. Equipped with a special research
instrument called HomeLab, we implement and test scenarios of Ambient
Intelligence. As two examples of bringing real user experiences into the
digital home through display technology, we discuss research on creating the
feeling of immersion and the feeling of being connected. Results from this work
indicate that visual displays can indeed be used beyond simple information
rendering but can actually play an important role in creating user experiences. Keywords: Ambient Intelligence, HomeLab, user experiences | |||
| Multi-projectors and implicit interaction in persuasive public displays | | BIBAK | Full-Text | 209-217 | |
| Paul Dietz; Ramesh Raskar; Shane Booth; Jeroen van Baar; Kent Wittenburg; Brian Knep | |||
| Recent advances in computer video projection open up new possibilities for
real-time interactive, persuasive displays. Now a display can continuously
adapt to a viewer so as to maximize its effectiveness. However, by the very
nature of persuasion, these displays must be both immersive and subtle. We have
been working on technologies that support this application including
multi-projector and implicit interaction techniques. These technologies have
been used to create a series of interactive persuasive displays, which we
describe. Keywords: implicit interaction, multi-projectors, persuasive displays, projective
displays, proximity detection, proximity sensors, retail environments,
ubiquitous computing | |||
| Task-sensitive user interfaces: grounding information provision within the context of the user's activity | | BIBAK | Full-Text | 218-225 | |
| Nathalie Colineau; Andrew Lampert; Cécile Paris | |||
| In the context of innovative Airborne Early Warning and Control (AEW&C)
platform capabilities, we are building an environment that can support the
generation of information tailored to operators' tasks. The challenging issues
here are to improve the methods for managing information delivery to the
operators, and thus provide them with high-value information on their display
whilst avoiding noise and clutter. To this end, we enhance the operator's
graphical interface with information delivery mechanisms that support
the maintenance of situation awareness and improve efficiency. We do this by
proactively delivering task-relevant information. Keywords: discourse planning, information delivery, information visualization,
task-sensitive interface | |||
| Aligning information browsing and exploration methods with a spatial navigation aid for mobile city visitors | | BIBAK | Full-Text | 226-230 | |
| Thomas Rist; Stephan Baldes; Patrick Brandmeier | |||
| Navigation support concerning both physical space and information spaces
addresses fundamental information needs of mobile users in many
application scenarios including the classical shopping visit in the town
centre. Therefore it is a particular research objective in the mobile domain to
explore, showcase, and test the interplay of physical navigation with
navigation in an information space that, metaphorically speaking, superimposes
the physical space. We have developed a demonstrator that couples a spatial
navigation aid in the form of a 2D interactive map viewer with other
information services, such as an interactive web directory service that
provides information about shops and restaurants and their product palettes.
The research has raised a number of interesting questions, such as how to
align interactions performed in the navigation aid with meaningful actions in a
coupled twin application, and vice versa, how to reflect navigation in an
information space in the aligned spatial navigation aid. Keywords: visual interfaces for mobile users, visual navigation tools | |||
| ZoneZoom: map navigation for smartphones with recursive view segmentation | | BIBAK | Full-Text | 231-234 | |
| Daniel C. Robbins; Edward Cutrell; Raman Sarin; Eric Horvitz | |||
| ZoneZoom is an input technique that lets users traverse large information
spaces on smartphones. ZoneZoom segments a given view of an
information space into nine sub-segments, each of which is mapped to a key on
the number keypad of the smartphone. This segmentation can be hand-crafted by
the information space author or dynamically created at run-time. ZoneZoom
supports "spring-loaded" view shifting which allows users to easily "glance" at
nearby areas and then quickly return to their current view. Our ZoneZoom
technique lets users gain an overview and compare information from different
parts of a dataset. SmartPhlow is an optimized application for browsing a map
of local-area road traffic conditions. Keywords: SmartPhlow, ZoneZoom, hand-held devices, maps, mobile browsing, smartphones,
spatial cognition, visual interaction, visualization, zoomable user interfaces | |||
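The keypad-to-subview mapping described above amounts to recursively dividing the current view into a 3x3 grid, with digit k zooming into cell k. The sketch below illustrates that mapping under an assumed coordinate convention (keys 1-2-3 on the top row); it is not the ZoneZoom source and omits the spring-loaded glancing behaviour.

```python
def zone_for_key(view, key):
    """Return the sub-view of `view` selected by a smartphone keypad digit.

    `view` is (x, y, width, height); keys 1..9 map to a 3x3 grid of
    sub-views, with 1-2-3 on the top row and 7-8-9 on the bottom row.
    Calling the function again on the result zooms one level deeper.
    Hypothetical sketch of the recursive view segmentation.
    """
    if key not in range(1, 10):
        raise ValueError("keypad digits 1-9 only")
    x, y, w, h = view
    col = (key - 1) % 3
    row = (key - 1) // 3
    return (x + col * w / 3, y + row * h / 3, w / 3, h / 3)


# Zoom twice into the top-right zone of a 900x900 map view.
view = (0, 0, 900, 900)
view = zone_for_key(view, 3)   # (600.0, 0.0, 300.0, 300.0)
view = zone_for_key(view, 3)   # (800.0, 0.0, 100.0, 100.0)
print(view)
```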
| DeepDocument: use of a multi-layered display to provide context awareness in text editing | | BIBAK | Full-Text | 235-239 | |
| Masood Masoodian; Sam McKoy; Bill Rogers; David Ware | |||
| Word Processing software usually only displays paragraphs of text
immediately adjacent to the cursor position. Generally this is appropriate, for
example when composing a single paragraph. However, when reviewing or working
on the layout of a document it is necessary to establish awareness of current
text in the context of the document as a whole. This can be done by scrolling
or zooming, but when doing so, focus is easily lost and hard to regain.
We have developed a system called DeepDocument using a two-layered LCD display in which both focussed and document-wide views are presented simultaneously. The overview is shown on the rear display and the focussed view on the front, maintaining full screen size for each. The physical separation of the layers takes advantage of human depth perception capabilities to allow users to perceive the views independently without having to redirect their gaze. DeepDocument has been written as an extension to Microsoft Word™. Keywords: Deep Video™, Microsoft Word™, context awareness, multi-layered
display, text editing, word processing | |||
| Interacting with embodied agents in public environments | | BIBAK | Full-Text | 240-243 | |
| Addolorata Cavalluzzi; Berardina De Carolis; Sebastiano Pizzutilo; Giovanni Cozzolongo | |||
| In this paper, we present the first results of research aimed at
developing an intelligent agent able to interact with users in public spaces
through a touch screen or a personal device. The agent communication is adapted
to the situation at both content and presentation levels, by generating an
appropriate combination of verbal and non-verbal agent behaviours. Keywords: interface agents, personalization | |||
| The Input Configurator toolkit: towards high input adaptability in interactive applications | | BIBAK | Full-Text | 244-247 | |
| Pierre Dragicevic; Jean-Daniel Fekete | |||
| This article describes ICON (Input Configurator), an input management system
that enables interactive applications to achieve a high level of input
adaptability. We define input adaptability as the ability of an interactive
application to exploit alternative input devices effectively and offer users a
way of adapting input interaction to suit their needs. We describe several
examples of interaction techniques that are hard or impossible to implement
using regular GUI toolkits, implemented using ICON with little or no support
from the application. Keywords: input devices, interaction techniques, toolkits, visual programming | |||
| Robust object-identification from inaccurate recognition-based inputs | | BIBAK | Full-Text | 248-251 | |
| Qiaohui Zhang; Kentaro Go; Atsumi Imamiya; Xiaoyang Mao | |||
| Eyesight and speech are two channels that humans naturally use to
communicate with each other. However, existing eye tracking and speech
recognition techniques are still far from perfect. This work explored
how to integrate two (or more) error-prone sources of information on users'
selection of objects in a visual interface. The implemented system integrated a
commercial speech recognition system with gaze tracking in order to improve
recognition results. In addition, we employed a new measure of the rate of
mutual disambiguation for the multimodal system and conducted an experimental
evaluation. Keywords: eye tracking, multimodal architecture, mutual correction, mutual
disambiguation, recognition errors, speech input | |||
| Modelling internet based applications for designing multi-device adaptive interfaces | | BIBAK | Full-Text | 252-256 | |
| Enrico Bertini; Giuseppe Santucci | |||
The widespread diffusion of mobile devices in the consumer market has posed a number
of new issues in the design of internet applications and their user interfaces.
In particular, applications need to adapt their interaction modalities to
different portable devices. In this paper we address the problem of defining
models and techniques for designing internet based applications that
automatically adapt to different mobile devices. First, we define a formal
model that allows for specifying the interaction in a way that is abstract
enough to be decoupled from the presentation layer, which is to be adapted to
different contexts. The model is mainly based on the idea of describing the
user interaction in terms of elementary actions. Then, we provide a formal
device characterization showing how to effectively implement the AIUs in a
multidevice context. Keywords: adaptive interfaces, mobile devices, multiple interfaces | |||
| Augmented multi-user communication system | | BIBAK | Full-Text | 257-260 | |
| Vítezslav Beran | |||
| This paper presents improvements carried out to enhance the visual
interaction of computer users in existing communication systems. It includes
the usage of augmented reality techniques and the modification of a method for
user model reconstruction according to particular requirements of such
applications. The intended achievement is to prepare the background for further
development of multi-user interfaces, videoconferencing or collaborative
workspaces.
The aim of our research is to replace standard computer interface components with equipment used in augmented reality, and thus immerse the user in an augmented environment. Such an approach allows the user to position virtual objects in his or her workspace. One possible technique for precise evaluation of virtual object pose, widely used in augmented reality applications, is to employ special tracking markers. Traditionally, communication systems of the videoconference type represent a remote user with a sprite (plain, billboard-like) model. The lack of realistic appearance when the participant is displayed as a sprite model can be mitigated by artificial reconstruction. The method gains depth information from knowledge of human anatomy and is therefore able to create an artificial relief model of the remote user. Keywords: augmented reality, grid based triangulation, image analysis, mixed reality,
object reconstruction | |||
| Tangible interfaces in virtual environments for industrial design | | BIBAK | Full-Text | 261-264 | |
| Raffaele De Amicis; Giuseppe Conti; Michele Fiorentino | |||
| In the fields of industrial design and car manufacturing the creation of 3D
curves plays a fundamental role within the design process: it allows the
improvement of the visual appeal of artifacts, it enhances ergonomics and the
product's commercial competitiveness through product differentiation. When
flexibility and intuition are to be privileged, it is fundamental to achieve
natural, intuitive, mathematically correct creation and modification of
surfaces.
The scientific aim of this research is the development of an innovative metaphor for modeling 3D curves that maintains the natural expertise of the designer. The major contribution of this paper is the capability of the system to create and modify the curve naturally, without mathematical artifices, within the limits set by the use of Bezier curves. The proposed metaphor combines the benefits of two acknowledged techniques, referred to as Digital Tape Drawing and the Eraser Pen, which allow the real-time modification of the curve. The integrated adoption of tangible interfaces and innovative mathematical tools, combined with a semi-immersive environment and lightweight interaction devices, delivers intuitive curve creation for free-form modeling within the virtual scene. The paper describes the details of the algorithm developed and highlights its strengths during the styling phase. Keywords: 3D Curve Generation, Computer Aided Styling (CAS), virtual environments | |||
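Since the curves in this system are constrained to Bezier form, a point on such a curve can be evaluated with de Casteljau's algorithm, sketched below for an arbitrary control polygon. This is standard Bezier mathematics rather than the paper's tape-drawing or eraser-pen code.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1].

    `control_points` is a list of (x, y, z) tuples; the point is obtained by
    repeated linear interpolation of the control polygon (de Casteljau's
    algorithm), which is numerically stable and works for any degree.
    """
    points = [tuple(p) for p in control_points]
    while len(points) > 1:
        points = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(points, points[1:])
        ]
    return points[0]


# A cubic curve in 3D, evaluated at its midpoint.
cubic = [(0, 0, 0), (1, 2, 0), (3, 2, 1), (4, 0, 1)]
print(de_casteljau(cubic, 0.5))  # (2.0, 1.5, 0.5)
```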
| 3D location-pointing as a navigation aid in Virtual Environments | | BIBAK | Full-Text | 267-274 | |
| Luca Chittaro; Stefano Burigat | |||
| The navigation support provided by user interfaces of Virtual Environments
(VEs) is often inadequate and tends to be overly complex, especially in the
case of large-scale VEs. In this paper, we propose a novel navigation aid that
aims at allowing users to easily locate objects and places inside large-scale
VEs. The aid exploits 3D arrows to point towards the objects and places the
user is interested in. We illustrate and discuss the experimental evaluation we
carried out to assess the usefulness of the proposed solution, contrasting it
with more traditional 2D navigation aids. In particular, we compared subjects'
performance in 4 conditions which differ in the type of navigation aid
provided: three conditions employed, respectively, the proposed "3D arrows" aid, an
aid based on 2D arrows, and a 2D aid based on a radar metaphor; the fourth
condition was a control condition with no navigation aids available. Keywords: Virtual Environments, evaluation, navigation aids | |||
| Observing and adapting user behavior in navigational 3D interfaces | | BIBA | Full-Text | 275-282 | |
| Augusto Celentano; Fabio Pittarello | |||
| In a navigation-oriented interaction paradigm, such as desktop, mixed and
augmented virtual reality, recognizing the user's needs is a valuable
improvement, provided that the system is able to correctly anticipate the
user's actions. Methodologies for adapting both navigation and content allow the user
to interact with a customized version of the 3D world, lessening the cognitive
load needed for accomplishing tasks such as finding places and objects, and
acting on virtual devices.
This work discusses adaptivity of interaction in 3D environments, obtained through the coordinated use of three approaches: structured design of the interaction space, distinction between a base world layer and an interactive experience layer, and user monitoring in order to infer interaction patterns. Identification of such recurring patterns is used for anticipating users' actions in approaching places and objects of each experience class. An agent-based architecture is proposed, and a simple application related to consumer e-business is analyzed. | |||
| Towards the next generation of 3D content creation | | BIBAK | Full-Text | 283-289 | |
| Gerhard H. Bendels; Ferenc Kahlesz; Reinhard Klein | |||
| In this paper we present a novel integrated 3D editing environment that
combines recent advances in various fields of computer graphics, such as
shape modelling, video-based Human Computer Interaction, force feedback and VR
fine-manipulation techniques. This integration allows us to create a new
compelling form of 3D object creation and manipulation preserving the metaphors
designers, artists and painters have become accustomed to during their day-to-day
practice. Our system comprises a novel augmented reality workbench and enables
users to simultaneously perform natural fine pose determination of the edited
object with one hand and model or paint the object with the other hand. The
hardware setup features a non-intrusive, video-based hand tracking subsystem,
see-through glasses and a 3D 6-degree of freedom input device. The
possibilities delivered by our AR workbench enable us to implement traditional
and recent editing metaphors in an immersive and fully three-dimensional
environment, as well as to develop novel approaches to 3D object interaction. Keywords: AR, HCI, augmented reality, human computer interaction, mesh-editing | |||
| Designing affordances for the navigation of detail-on-demand hypervideo | | BIBAK | Full-Text | 290-297 | |
| Andreas Girgensohn; Lynn Wilcox; Frank Shipman; Sara Bly | |||
| We introduced detail-on-demand video as a simple type of hypervideo that
allows users to watch short video segments and to follow hyperlinks to see
additional detail. Such video lets users quickly access desired information
without having to view the entire contents linearly. A challenge for presenting
this type of video is to provide users with the appropriate affordances to
understand the hypervideo structure and to navigate it effectively. Another
challenge is to give authors tools that allow them to create good
detail-on-demand video. Guided by user feedback, we iterated designs for a
detail-on-demand video player. We also conducted two user studies to gain
insight into people's understanding of hypervideo and to improve the user
interface. We found that the interface design was tightly coupled to
understanding hypervideo structure and that different designs greatly affected
what parts of the video people accessed. The studies also suggested new
guidelines for hypervideo authoring. Keywords: hypervideo, iterative design, link navigation, user studies, video keyframes | |||
| Interactive interfaces of Treecube for browsing 3D multimedia data | | BIBAK | Full-Text | 298-302 | |
| Yoichi Tanaka; Yoshihiro Okada; Koichi Niijima | |||
| The authors of this paper have already proposed Treecube which is a
visualization tool for browsing 3D multimedia data. In this paper, the authors
also propose its interactive interfaces for efficiently browsing 3D multimedia
data. Treecube is regarded as a 3D extension of treemap, which is a
visualization tool for hierarchical information proposed by Ben Shneiderman et
al. in 1992. For treemap, there are several layout algorithms: slice-and-dice,
ordered treemap, strip treemap and so on. Furthermore, there is quantum
treemap, a quantized version of these treemap layout algorithms. The authors
implemented mainly three layout algorithms, i.e., slice-and-dice, ordered and
strip treecube, and also implemented their quantized versions. In practice,
sophisticated interfaces are necessary for efficiently browsing 3D
multimedia data. In this paper, the authors also propose such interfaces. The
authors implemented mainly five interface functionalities for the following
operations. (1) "Cutting plane" concept to solve the occlusion problem, i.e.,
nodes located before the plane are hidden to make it easy to see inside nodes.
(2) The control of node frames, i.e., their brightness and thickness, for
easily understanding the hierarchical structure of nodes. (3) Standard
operations for the translation and the rotation of an eye position, and for the
zoom in/out. (4) Particular operations for extracting the user's focus node
and for moving backward/forward when browsing such nodes. The authors also
implemented (5) a function to assign color information to any node properties
because color is the most important factor of the visual display properties. Keywords: 3D multimedia, IntelligentBox, Treecube, Treemap, information visualization,
multimedia browser | |||
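The slice-and-dice layout named in the entry above generalizes naturally from 2D rectangles to 3D boxes. The following is a minimal sketch of that idea only (not the paper's code; the node dictionary layout is an assumption): a bounding box is split recursively among a node's children in proportion to their weights, cycling through the x, y and z axes.

```python
def slice_and_dice(node, box, axis=0):
    """Recursively assign an axis-aligned 3D box to every node.

    node: dict with 'weight' (float) and 'children' (list of nodes)
    box:  (min_corner, max_corner), each a 3-tuple of floats
    axis: dimension to slice along; alternates x -> y -> z -> x ...
    """
    node['box'] = box
    children = node.get('children', [])
    if not children:
        return
    lo, hi = list(box[0]), list(box[1])
    total = sum(c['weight'] for c in children) or 1.0
    offset = lo[axis]
    length = hi[axis] - lo[axis]
    for child in children:
        share = length * child['weight'] / total   # proportional slice
        child_lo, child_hi = lo[:], hi[:]
        child_lo[axis] = offset
        child_hi[axis] = offset + share
        offset += share
        slice_and_dice(child, (tuple(child_lo), tuple(child_hi)),
                       (axis + 1) % 3)

root = {'weight': 3, 'children': [{'weight': 1, 'children': []},
                                  {'weight': 2, 'children': []}]}
slice_and_dice(root, ((0, 0, 0), (1, 1, 1)))
# the two children receive boxes of width 1/3 and 2/3 along the x axis
```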
| 3Book: a 3D electronic smart book | | BIBAK | Full-Text | 303-307 | |
| Stuart K. Card; Lichan Hong; Jock D. Mackinlay; Ed H. Chi | |||
| This paper describes the 3Book, a 3D interactive visualization of a codex
book as a component for various digital library and sensemaking systems. The
book is designed to hold large books and to support sensemaking operations by
readers. The book includes methods in which the automatic semantic analysis of
the book's content is used to dynamically tailor access. Keywords: 3D UI, 3D books, eBooks, electronic publishing, sensemaking, spreading
activation | |||
| Identification and validation of cognitive design principles for automated generation of assembly instructions | | BIBAK | Full-Text | 311-319 | |
| Julie Heiser; Doantam Phan; Maneesh Agrawala; Barbara Tversky; Pat Hanrahan | |||
| Designing effective instructions for everyday products is challenging. One
reason is that designers lack a set of design principles for producing visually
comprehensible and accessible instructions. We describe an approach for
identifying such design principles through experiments investigating the
production, preference, and comprehension of assembly instructions for
furniture. We instantiate these principles into an algorithm that automatically
generates assembly instructions. Finally, we perform a user study comparing our
computer-generated instructions to factory-provided and highly rated
hand-designed instructions. Our results indicate that the computer-generated
instructions informed by our cognitive design principles significantly reduce
assembly time by an average of 35% and errors by 50%. Details of the experimental
methodology and the implementation of the automated system are described. Keywords: assembly instructions, design principles, diagrams, spatial ability, visual
instructions | |||
| How users interact with biodiversity information using TaxonTree | | BIBAK | Full-Text | 320-327 | |
| Bongshin Lee; Cynthia Sims Parr; Dana Campbell; Benjamin B. Bederson | |||
| Biodiversity databases have recently become widely available to the public
and to other researchers. To retrieve information from these resources, users
must understand the underlying data schemas even though they often are not
content experts. Many other domains share this problem.
We developed an interface, TaxonTree, to visualize the taxonomic hierarchy of animal names. We applied integrated searching and browsing so that users need not have complete knowledge either of appropriate keywords or the organization of the data. Our qualitative user study of TaxonTree in an undergraduate course is the first to describe usage patterns in the biodiversity domain. We found that tree-based interaction and visualization aided users' understanding of the data. Most users approached biodiversity data by browsing, using common, general knowledge rather than the scientific keyword expertise necessary to search using traditional interfaces. Users with different levels of interest in the domain had different interaction preferences. Keywords: animation, biodiversity, browsing, hierarchy/tree visualization, information
retrieval, searching | |||
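A minimal sketch of the integrated searching and browsing idea described above, assuming a simple in-memory tree (the TaxonNode class and reveal_matches helper are illustrative names, not TaxonTree's API): a search expands only the branches that lead to matching taxa, so users can keep browsing the surrounding hierarchy.

```python
class TaxonNode:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.expanded = False

def reveal_matches(node, query):
    """Expand only the branches that lead to names matching the query.

    Returns True if this subtree contains a match, so ancestors know to
    stay expanded (integrated search + browse).
    """
    hit = query.lower() in node.name.lower()
    child_hit = False
    for child in node.children:
        if reveal_matches(child, query):
            child_hit = True
    node.expanded = child_hit            # open the path down to the matches
    return hit or child_hit

tree = TaxonNode("Animalia", [TaxonNode("Chordata", [TaxonNode("Aves")]),
                              TaxonNode("Arthropoda")])
reveal_matches(tree, "aves")             # expands Animalia and Chordata only
```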
| Smooth Morphing of Handwritten Text | | BIBAK | Full-Text | 328-335 | |
| Conrad Pomm; Sven Werlen | |||
| There are several approaches for pen-based systems to improve legibility of
handwritten text, e.g. smoothing the strokes composing the characters and
words. A very challenging solution is the smooth morphing approach: handwritten
strokes are transformed gradually into perfectly legible characters provided by
a previously executed handwriting recognition process. In this paper we present
our approach to a smooth real-time metamorphosis of handwritten characters into
clean typography. Our main contributions are a new hybrid algorithm for
character mapping, heuristics for splitting and joining strokes, and finding a
stroke mapping with the use of radial distances. We implemented our methods in
a whiteboard application that provides easy-to-use editing operations through
floating menus and stroke gestures. The implementation is based on the
Microsoft Tablet PC SDK and the handwriting recognizer provided with the Tablet
PC operating system. Keywords: Tablet PC, animated interfaces, online handwriting recognition, stroke
morphing | |||
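The paper's contributions lie in the mapping and splitting heuristics; the final blending step, however, can be illustrated generically. A rough sketch under the assumption that both the handwritten stroke and the target glyph outline are available as polylines (resample and morph are illustrative helpers, not the authors' code):

```python
import math

def resample(points, n):
    """Resample a polyline to n points spaced evenly by arc length."""
    if len(points) < 2:                       # degenerate stroke: just repeat it
        return list(points) * n
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1] or 1.0
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        span = dists[j + 1] - dists[j] or 1.0
        t = (target - dists[j]) / span
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def morph(stroke, glyph, t, n=64):
    """Blend a handwritten stroke into a typographic outline, t in [0, 1]."""
    a, b = resample(stroke, n), resample(glyph, n)
    return [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
            for (ax, ay), (bx, by) in zip(a, b)]
```

Rendering morph(stroke, glyph, t) for increasing t gives the gradual transition; the hard part the paper addresses is deciding which stroke maps to which glyph part in the first place.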
| Dealing with geographic continuous fields: the way to a visual GIS environment | | BIBAK | Full-Text | 336-343 | |
| Robert Laurini; Luca Paolino; Monica Sebillo; Genoveffa Tortora; Giuliana Vitiello | |||
| Recently, much attention has been devoted to the management of continuous
fields, which describe geographic phenomena, such as temperature,
electromagnetism and pressure. While objects are distinguished by their
dimensions and can be associated with points, lines, or areas, such phenomena
are measurable at any point of their domain, differing in what varies and how
smoothly. Thus, when dealing with continuous fields, a basic requirement is
that users be able to capture features of a scenario by selecting an area of
interest and handling the phenomena involved.
The aim of our research is to provide GIS users with a visual environment where they can manage both continuous fields and discrete objects, by posing spatial queries which capture the heterogeneous nature of phenomena. In particular, in this paper we propose a visual query language Phenomena, which provides users with a uniform style of interaction with the world, which is conceptually modeled as a composition of continuous fields and discrete objects. The intuitiveness of the underlying operators as well as of the query formulation process is ensured by the choice of suitable metaphors and by the adoption of the paradigm of direct manipulation. A prototype of a visual environment running Phenomena has been realized, which allows users to query experimental data by following a SQL-like SELECT-FROM-WHERE scheme. Keywords: continuous fields, geographical information systems, visual interfaces,
visual query languages | |||
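As a rough illustration of querying a world modeled as continuous fields plus discrete objects (this is not the Phenomena language; City, temperature and select are hypothetical stand-ins), a field can be treated as a function evaluable at any point and combined with object locations in a SELECT-like filter.

```python
from dataclasses import dataclass

@dataclass
class City:                                # a discrete object
    name: str
    x: float
    y: float

def temperature(x, y):                     # a continuous field: a value at any point
    return 10.0 + 0.2 * x - 0.1 * y        # hypothetical analytic field

def select(objects, where):
    """SELECT ... FROM objects WHERE predicate(obj), mixing fields and objects."""
    return [o for o in objects if where(o)]

cities = [City("A", 10, 5), City("B", 120, 40), City("C", 60, 80)]
hot = select(cities, lambda c: temperature(c.x, c.y) > 25.0)   # hot == [City B]
```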
| Painting pictures to augment advice | | BIBAK | Full-Text | 344-349 | |
| Kevin Burns | |||
| I present an approach to designing decision support systems. The approach is
to dissect a decision from both a normative and a cognitive perspective, and
then to design a diagram that helps bridge the gap between the math and the
mind. The resulting diagram is ultimately implemented as a visual interface in
a support system. I apply the approach to two prototypical problems in "Command
and Control" and highlight two practical principles that were used to guide the
interface designs. One principle is that the system's interface should be
informative, i.e., it should show users the underlying reasons for algorithmic
results. The other principle is that the system's interface should be
interactive, i.e., it should let users see and set the inputs that affect
outputs. I discuss how interfaces designed by these principles can help users
understand system recommendations and overcome system limitations. Keywords: command and control, decisions, diagrams, mental models, support systems | |||
| Shrinking window operations for expanding display space | | BIBAK | Full-Text | 350-353 | |
| Dugald Ralph Hutchings; John Stasko | |||
| Recent research and technology advances indicate that multiple monitor
systems are likely to become commonplace in the near future. An important
property of such systems is that the physical separation of the displays prompts
users to place windows entirely within monitors, and thus does not fully
alleviate the problem of managing windows on smaller monitors. Another finding
about multiple monitor systems is that an additional monitor often holds
windows that help the user maintain awareness rather than support interaction
with information, but that multiple monitor users tend not to have many more
windows visible than their single-monitor counterparts. We therefore present a
window shrinking operation that is specifically intended to help users display a
window's relevant information. The operation should help to create smaller
windows to manage, helping the "small monitor management" problem and targeting
use of awareness windows on multiple monitor systems. Keywords: multiple monitors, relevant regions, window operations | |||
| Sim-U-Sketch: a sketch-based interface for SimuLink | | BIBA | Full-Text | 354-357 | |
| Levent Burak Kara; Thomas F. Stahovich | |||
| Sim-U-Sketch is an experimental sketch-based interface we developed for Matlab®'s Simulink® software package. With this tool, users can construct functional Simulink models simply by drawing sketches on a computer screen. To support iterative design, Sim-U-Sketch allows users to interact with their sketches in real time to modify existing objects and add new ones. The system is equipped with a domain-independent, trainable symbol recognizer that can learn new symbols from single prototype examples. This makes our system easily extensible and customizable to new domains and unique drawing styles. | |||
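Sim-U-Sketch's recognizer is described only at a high level above; a generic way to obtain single-prototype, trainable symbol recognition (illustrative only, not the authors' algorithm) is to reduce each sketched symbol to a position- and scale-normalized template and classify by nearest prototype.

```python
def grid_features(points, cells=5):
    """Project a symbol's points onto a cells x cells occupancy grid,
    normalised for position and size (a crude, style-tolerant template)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = (max(xs) - x0) or 1.0
    h = (max(ys) - y0) or 1.0
    grid = [[0.0] * cells for _ in range(cells)]
    for x, y in points:
        row = min(int((y - y0) / h * cells), cells - 1)
        col = min(int((x - x0) / w * cells), cells - 1)
        grid[row][col] += 1.0
    return [v / len(points) for r in grid for v in r]

class OneShotRecognizer:
    """Nearest-prototype classifier: one training example per symbol class."""
    def __init__(self):
        self.prototypes = {}                       # label -> feature vector
    def train(self, label, points):
        self.prototypes[label] = grid_features(points)
    def classify(self, points):
        f = grid_features(points)
        return min(self.prototypes, key=lambda lbl: sum(
            (a - b) ** 2 for a, b in zip(f, self.prototypes[lbl])))
```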
| Example-based programming: a pertinent visual approach for learning to program | | BIBAK | Full-Text | 358-361 | |
| Nicolas Guibert; Patrick Girard; Laurent Guittet | |||
| Computer Science introductory courses are known to be difficult for
students. Kaasboll [1] reports that drop-out or failure rates vary from 25 to
80% world-wide. The explanation is related to the very nature of programming:
"programming is having a task done by a computer" [2]. We can notice three
internal difficulties in this definition:
* The task itself: how do we define it and specify it?
* The abstraction process: in order to "have it done by..." students need to
create a static model covering each task behavior.
* The "cognitive gap": it is difficult for novice programmers to model the
computer and its "mindset", which is required to express the task model in a
computer-readable way. The poor usability of programming languages increases
this difficulty.
The lack of interactivity in the editing-running-debugging loop is often
pointed to as an important aggravating factor for these difficulties. In the
mid-seventies, Smith [3] introduced another programming paradigm with
Pygmalion: Programming by Example (PbE), where algorithms are not described
abstractly but are demonstrated through concrete examples. This approach
offers several advantages for novices. It allows them to work concretely and
to express the solution in their own way of thinking, instead of having to
embrace a computer-centered mindset. The programming process becomes
interactive, and as PbE languages are "animated" languages, no translation
from the dynamic process to any static representation is required. In this
paper we investigate both the novice programmer and existing PbE languages to
show how visual and example-based paradigms can be used to improve the
teaching of programming. Based on this study, we present elements of a new
example-based programming environment, called Melba, designed to help novice
programmers learn to program. Keywords: didactics for computer science, example-based programming, metaphors, visual
programming | |||
| An intelligent and adaptive virtual environment and its application in distance learning | | BIBAK | Full-Text | 362-365 | |
| Cássia T. dos Santos; Fernando S. Osório | |||
| This paper presents an intelligent and adaptive virtual environment, which
has its structure and presentation customized according to users' interests and
preferences (represented in a user model) and in accordance with insertion and
removal of contents in this environment. An automatic content categorization
process is applied to create content models, used in the spatial organization
of the contents in the environment. An intelligent agent assists users during
navigation in the environment and retrieval of relevant information. In order
to validate our proposal, a prototype of a distance learning environment, used
to make educational content available, was developed. Keywords: adaptive interfaces, content modeling, intelligent virtual agents,
intelligent virtual environments, user modeling, virtual reality | |||
| A visual adaptive interface to file systems | | BIBAK | Full-Text | 366-369 | |
| Rosario De Chiara; Ugo Erra; Vittorio Scarano | |||
| In this paper we present our experience in building a visual file manager,
VENNFS2, that offers users an adaptive interface for accessing files. Our file
manager was originally designed to overcome some of the limitations of
hierarchical file systems, since it allows users to categorize files in such a
way that a file may belong to multiple categories at once. Based on the history
of the files previously opened and modified by the user, VENNFS2 graphically
presents the user with a small number of choices for the next file the user
will modify. Preliminary testing, which yielded interesting insights, is also
reported. Keywords: Venn diagram, adaptivity, set, user interfaces, venn | |||
| Visualizing programs with Jeliot 3 | | BIBAK | Full-Text | 373-376 | |
| Andrés Moreno; Niko Myller; Erkki Sutinen; Mordechai Ben-Ari | |||
| We present a program visualization tool called Jeliot 3 that is designed to
help novice students learn procedural and object-oriented programming. The
key feature of Jeliot is the fully or semi-automatic visualization of data and
control flows. The development process of Jeliot has been research-oriented,
meaning that each version has had its own research agenda arising from the
design and empirical evaluation of the previous version. In this process, the
user interface and visualization have evolved to better suit the targeted
audience, which in the case of Jeliot 3 is novice programmers. In this paper
we explain the model for the system and
introduce the features of the user interface and visualization engine.
Moreover, we have developed an intermediate language that is used to decouple
the interpretation of the program from its visualization. This has led to a
modular design that permits both internal and external extensibility. Keywords: novice programming, program visualization | |||
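The intermediate language that decouples interpretation from visualization can be thought of as an event stream. A toy sketch of that architecture follows (the event vocabulary and the assignment-only toy language are invented for illustration; this is not Jeliot's actual format):

```python
def interpret(statements):
    """Yield (event, payload) tuples instead of drawing anything itself."""
    env = {}
    for name, expr in statements:            # toy language: assignments only
        value = eval(expr, {}, env)          # toy evaluation; a real interpreter walks an AST
        env[name] = value
        yield ("assign", {"variable": name, "value": value})
    yield ("end", {})

def visualize(events):
    """A console 'animation'; a graphical front end would draw these steps."""
    for kind, data in events:
        if kind == "assign":
            print(f"box[{data['variable']}] <- {data['value']}")
        elif kind == "end":
            print("program finished")

visualize(interpret([("x", "1 + 2"), ("y", "x * 10")]))
```

Because the visualizer only depends on the event vocabulary, either side can be replaced independently, which is the extensibility argument made above.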
| Sorting out sorting through concretization with robotics | | BIBAK | Full-Text | 377-380 | |
| Javier López; Niko Myller; Erkki Sutinen | |||
| We describe a novel approach to algorithm concretization that extends the
current mode of software visualization from computer screens to the real world.
The method combines hands-on robotics and traditional algorithm visualization
techniques to help diverse learners comprehend the basic idea of the given
algorithm. From this point of view the robots interpret an algorithm while
their internal program and external appearance determine the role they have in
it. This makes it possible to bring algorithms into the real physical world,
where students can even touch the data structures during execution. In the
first version, we have concentrated on a few sorting algorithms as a proof of
concept. Moreover, we have carried out an evaluation with 13- to 15-year-old
students who used the concretization to gain insight into one sorting
algorithm. The preliminary results indicate that the tool can
enhance learning. Now, our aim is to build an environment that supports both
visualizations and robotics based concretizations of algorithms at the same
time. Keywords: algorithms, concretization, robotics, sorting | |||
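One way to drive such a concretization is to have the sorting algorithm emit explicit physical actions rather than mutate data silently; robots (or students) can then replay the plan. A small illustrative sketch, not the authors' implementation:

```python
def bubble_sort_commands(values):
    """Run bubble sort and record the physical actions the robots would
    perform, instead of swapping silently."""
    commands = []
    data = list(values)
    for end in range(len(data) - 1, 0, -1):
        for i in range(end):
            commands.append(("compare", i, i + 1))
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
                commands.append(("swap_positions", i, i + 1))
    return data, commands

sorted_data, plan = bubble_sort_commands([3, 1, 2])
# plan == [('compare', 0, 1), ('swap_positions', 0, 1), ('compare', 1, 2),
#          ('swap_positions', 1, 2), ('compare', 0, 1)]
```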
| Combining visual techniques for Association Rules exploration | | BIBAK | Full-Text | 381-384 | |
| Dario Bruzzese; Paolo Buono | |||
| The abundance of data available nowadays fosters the need for tools and
methodologies that help users extract significant information. Visual data
mining moves in this direction, exploiting data mining algorithms and
methodologies together with information visualization techniques.
The demand for visual and interactive analysis tools is particularly pressing
in the Association Rules context, where the user often has to analyze hundreds
of rules in order to grasp valuable knowledge. This paper presents a visual
strategy to address this problem by exploiting graph-based techniques and
parallel coordinates to visualize the results of association rule mining
algorithms. The combination of the two approaches makes it possible both to
obtain an overview of the association structure hidden in the data and to
investigate in depth a specific set of rules selected by the user. Keywords: association rules, graph drawing, parallel coordinates, visual data mining | |||
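A sketch of the graph-based half of such a strategy (the data and function names are illustrative, not the paper's): mined rules can be turned into a directed item graph whose edges carry support and confidence, giving the overview from which a user selects rules for a detailed parallel-coordinates view.

```python
from collections import defaultdict

# Each mined rule: (antecedent_items, consequent_items, support, confidence).
rules = [
    ({"bread"}, {"butter"}, 0.12, 0.80),
    ({"bread", "butter"}, {"jam"}, 0.05, 0.55),
]

def rules_to_graph(rules, min_confidence=0.0):
    """Build an item-level directed graph for the overview display; edge
    weights keep (support, confidence) pairs for drill-down views."""
    edges = defaultdict(list)
    for antecedent, consequent, support, confidence in rules:
        if confidence < min_confidence:
            continue
        for a in antecedent:
            for c in consequent:
                edges[(a, c)].append((support, confidence))
    return edges

overview = rules_to_graph(rules, min_confidence=0.5)
# edges such as ('bread', 'butter') and ('butter', 'jam') form the overview graph
```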
| MVT: a system for visual testing of software | | BIBAK | Full-Text | 385-388 | |
| Jan Lönnberg; Ari Korhonen; Lauri Malmi | |||
| Software development is prone to time-consuming and expensive errors.
Finding and correcting errors in a program (debugging) is usually done by
executing the program with different inputs and examining its intermediate
and/or final results (testing). The tools that are currently available for
debugging (debuggers) do not fully make use of several potentially useful
visualisation and interaction techniques.
This article presents a prototype debugging tool (MVT-Matrix Visual Tester) based on a new interactive graphical software testing methodology called visual testing. A programmer can use a visual testing tool to examine and manipulate a running program and its data structures. The tool combines aspects of visual algorithm simulation, high-level data visualisation and visual debugging, and allows easier testing, debugging and understanding of software. Keywords: algorithm simulation, bytecode instrumentation, execution history logging,
visual debugging, visual testing | |||
| Task oriented visual interface for debugging timing problems in hardware design | | BIBAK | Full-Text | 389-392 | |
| Donna Nakano; Erric Solomon | |||
| We describe a graphical toolkit for debugging timing problems in hardware
design. The toolkit was developed as a part of the graphical user interface for
a static timing analysis tool PrimeTime from Synopsys Inc. A static timing
analysis tool identifies critical logical paths with timing violations in a
circuit design without simulating the design, thereby dramatically shortening
the time required for timing closure. The toolkit's visual organization of
multiple graphical views of timing data helps the user manage the complexity of
the data and the debugging process. Keywords: cognitive model of users, information visualization, visual interface design | |||
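At its core, static timing analysis is a longest-path computation over the gate-level netlist. A toy sketch of that core only (the netlist, delays and helper names are invented for illustration; PrimeTime's engine is far more elaborate):

```python
from functools import lru_cache

# Toy gate-level netlist: gate -> (propagation delay in ns, fan-in gates).
NETLIST = {
    "in_a": (0.0, []),
    "in_b": (0.0, []),
    "and1": (1.2, ["in_a", "in_b"]),
    "or1":  (0.9, ["and1", "in_b"]),
    "out":  (0.3, ["or1"]),
}

@lru_cache(maxsize=None)
def arrival(gate):
    """Latest signal arrival time at a gate: its delay plus the slowest fan-in."""
    delay, fanin = NETLIST[gate]
    return delay + max((arrival(g) for g in fanin), default=0.0)

def critical_path(gate):
    """Walk back through the slowest fan-in at every gate (the path to debug)."""
    path = [gate]
    while NETLIST[gate][1]:
        gate = max(NETLIST[gate][1], key=arrival)
        path.append(gate)
    return list(reversed(path))

# arrival("out") == 2.4 ns; critical_path("out") == ['in_a', 'and1', 'or1', 'out']
```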
| A domain model-driven approach for producing user interfaces to multi-platform information systems | | BIBAK | Full-Text | 395-398 | |
| Julien Stocq; Jean Vanderdonckt | |||
| User interfaces to information systems can be considered systematic as they
consist of two types of tasks performed on classes of a domain model: basic
tasks performed on one class at a time (such as insert, delete, modify, sort,
list, print) and complex tasks performed on parts or whole of one or several
classes (e.g., tasks involving various attributes of different classes with
constraints between and establishing relationships between). This paper
presents how a wizard tool can produce user interfaces to such tasks according
to a model-driven approach based on a domain model of the information system.
This process consists of seven steps: database selection, data source
selection, building the opening procedure, data source selection for control
widgets, building the closing procedure, setting the size of the widgets, and
laying them out. The wizard generates code for Visual Basic and eMbedded Visual
Basic, thus enabling to obtain support for both stationary and mobile tasks
simultaneously, while maintaining consistency. Keywords: RAD, code generation, data base, information system, model-driven approach,
multi-platform, wizard | |||
| Xface: MPEG-4 based open source toolkit for 3D Facial Animation | | BIBAK | Full-Text | 399-402 | |
| Koray Balci | |||
| In this paper, we present our open source, platform independent toolkit for
developing 3D talking agents, namely Xface. It relies on MPEG-4 Face Animation
(FA) standard. The toolkit currently incorporates three pieces of software. The
core Xface library is for developers who want to embed 3D facial animation
into their software, as well as researchers who want to focus on related
topics without the hassle of implementing a full framework from scratch. The
XfaceEd editor provides an easy-to-use interface for generating MPEG-4 ready
meshes from static 3D models. Last, XfacePlayer is a sample application that
demonstrates the toolkit in action. All the pieces are implemented in the C++
programming language and rely only on operating-system-independent libraries. The main
design principles for Xface are ease of use and extendibility. Keywords: 3D facial animation, MPEG-4, open source, talking heads | |||
| Generative Programming of graphical user interfaces | | BIBAK | Full-Text | 403-406 | |
| Max Schlee; Jean Vanderdonckt | |||
| Generative Programming (GP) is a computing paradigm allowing automatic
creation of entire software families utilizing the configuration of elementary
and reusable components. GP can be realized with different technologies, e.g.
C++ templates, JavaBeans, Aspect-Oriented Programming (AOP), or Frame
technology. This paper focuses on Frame Technology, which supports the
implementation and completion of software components. The purpose of this
paper is to introduce the GP paradigm into the area of GUI application
generation. It demonstrates how customized executable applications with GUI
parts can be generated automatically from an abstract specification.
generative programming, model-driven approach, object-oriented programming | |||
| A visual interface for a multimodal interactivity annotation tool: design issues and implementation solutions | | BIBAK | Full-Text | 407-410 | |
| Mykola Kolodnytsky; Niels Ole Bernsen; Laila Dybkjær | |||
| This paper discusses the user interface design for the NITE WorkBench for
Windows (NWB) which enables annotation and analysis of full natural interactive
communicative behaviour between humans and between humans and systems. The
system enables users to perceive voice and video data and to control their
presentation when performing multi-level, cross-level and cross-modality
annotation, information visualisation for data coding and analysis, information
retrieval, and data exploitation. Keywords: data annotation tools, data visualisation, interface design | |||
| A collaborative annotation system for data visualization | | BIBAK | Full-Text | 411-414 | |
| Sean E. Ellis; Dennis P. Groth | |||
| We present Collaborative Annotations on Visualizations (CAV), a system for
annotating visual data in remote and collocated environments. Our system
consists of a network framework and a client application built for tablet PCs.
CAV is designed to support the collection and sharing of annotations through
the use of mobile devices connected to visualization servers. We have developed
a working system prototype based on tablet PCs that supports digital
ink, voice and text annotation, and illustrates our approach in a variety of
application domains, including biology, chemistry, and telemedicine. We have
created an XML based open standard that supports access to a variety of client
devices by publishing visualizations (data and annotations) as streams of
images. CAV's primary goal is to enhance scientific discovery by supporting
collaboration in the context of data visualizations. Keywords: Computer Supported Collaborative Visualization (CSCV), Computer Supported
Collaborative Works (CSCW), annotation, visualization | |||
| Interactive visual tools to explore spatio-temporal variation | | BIBAK | Full-Text | 417-420 | |
| Natalia Andrienko; Gennady Andrienko | |||
| CommonGIS is an evolving software system for exploratory analysis of
spatial data. It includes a multitude of tools applicable to different data
types and helping an analyst to find answers to a variety of questions.
CommonGIS has been recently extended to support exploration of spatio-temporal
data, i.e. temporally variant data referring to spatial locations. The set of
new tools includes animated thematic maps, map series, value flow maps, time
graphs, and dynamic transformations of the data. We demonstrate the use of the
new tools by considering different analytical questions arising in the course
of analysis of thematic spatio-temporal data. Keywords: animated maps, exploratory data analysis, information visualisation,
temporal variation, time-series analysis, time-series spatial data | |||
| DOITrees revisited: scalable, space-constrained visualization of hierarchical data | | BIBAK | Full-Text | 421-424 | |
| Jeffrey Heer; Stuart K. Card | |||
| This paper extends previous work on focus+context visualizations of
tree-structured data, introducing an efficient, space-constrained, multi-focal
tree layout algorithm ("TreeBlock") and techniques at both the system and
interactive levels for dealing with scale. These contributions are realized in
a new version of the Degree-Of-Interest Tree browser, supporting real-time
interactive visualization and exploration of data sets containing on the order
of a million nodes. Keywords: focus+context, layout, scalability, tree, visualization | |||
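The degree-of-interest machinery behind such browsers is usually a Furnas-style score: intrinsic interest minus the distance to the nearest focus. A compact multi-focal sketch (the tree encoding and helper names are illustrative; the paper's TreeBlock layout adds space-constrained placement on top of this scoring):

```python
# Tree encoded as child -> parent; the root maps to None.
PARENT = {"root": None, "a": "root", "b": "root",
          "a1": "a", "a2": "a", "b1": "b"}

def path_to_root(node):
    path = []
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return path

def tree_distance(u, v):
    """Hops up from u to the lowest common ancestor, then down to v."""
    up = path_to_root(u)
    ancestors = set(up)
    for down, node in enumerate(path_to_root(v)):
        if node in ancestors:
            return up.index(node) + down
    raise ValueError("nodes are in different trees")

def degree_of_interest(node, foci):
    """Furnas-style DOI: intrinsic interest (-depth) minus the distance to
    the nearest focus; low-DOI nodes are candidates for elision."""
    depth = len(path_to_root(node)) - 1
    return -depth - min(tree_distance(node, f) for f in foci)

# degree_of_interest("a1", foci=["a2", "b1"]) == -2 - 2 == -4
```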
| Tuning a CBIR system for vector images: the interface support | | BIBAK | Full-Text | 425-428 | |
| Tania Di Mascio; Marco Francesconi; Daniele Frigioni; Laura Tarantino | |||
| This paper presents a system supporting tuning and evaluation of a
Content-Based Image Retrieval (CBIR) engine for vector images, by a graphical
interface providing query-by-sketch and query-by-example interaction with query
results, and analysis of result quality. Vector images are first modelled as an
inertial system and then they are associated with descriptors representing
visual features invariant to affine transformation. To support requirements of
different application domains, the engine offers a variety of moment sets as
well as different metrics for similarity computation. The graphical interface
offers tools that help in the selection of criteria and parameters necessary
to tune the system to a specific application domain. Keywords: CBIR, vector images, visual interfaces | |||
| Sketch-based retrieval of ClipArt drawings | | BIBAK | Full-Text | 429-432 | |
| Manuel Fonseca; Bruno Barroso; Pedro Ribeiro; Joaquim Jorge | |||
| These days there are a lot of vector drawings available for people to
integrate into documents. These come in a variety of formats, such as Corel,
Postscript, CGM, WMF and recently SVG. Typically, such ClipArt drawings tend to
be archived and accessed by categories (e.g. food, shapes, transportation,
etc.). However, to find a drawing among hundreds of thousands is not an easy
task. While text-driven attempts at classifying image data have been recently
supplemented with query-by-image content, these have been developed for
bitmap-type data and cannot handle vectorial information. In this paper we
present an approach to allow indexing and retrieving vector drawings by content
from large datasets. Our prototype can already handle databases with thousands
of drawings using commodity hardware. Furthermore, preliminary usability
assessments show promising results and suggest good acceptance of sketching as
a query mechanism by users. Keywords: drawing simplification, sketch and content-based retrieval | |||
| MediaBrowser: reclaiming the shoebox | | BIBAK | Full-Text | 433-436 | |
| Steven M. Drucker; Curtis Wong; Asta Roseway; Steven Glenner; Steven De Mar | |||
| Applying personal keywords to images and video clips makes it possible to
organize and retrieve them, as well as automatically create thematically
related slideshows. MediaBrowser is a system designed to help users create
annotations by uniting a careful choice of interface elements, an elegant and
pleasing design, smooth motion and animation, and a few simple tools that are
predictable and consistent. The result is a friendly, usable tool for turning
shoeboxes of old photos into labeled collections that can be easily browsed,
shared, and enjoyed. Keywords: annotation, digital photography, organization, visualization | |||
| X-Presenter: a tool for video-based hypermedia applications | | BIBAK | Full-Text | 437-440 | |
| Mario Bochicchio; Antonella Longo; Giuseppe Caldarazzo | |||
| Long video sequences can be difficult to navigate with typical video
players based on VTR-like controls and linear sliders. Hierarchical indexes,
i.e. multilevel tables of contents structured in topics and subtopics linked to
the corresponding video sub-sequences, can be very effective in supporting
users browsing long videos, but, in general, the effort required to index the
video sequences is resource-intensive (in terms of required skills, hardware
and software) and time-consuming. In the paper we propose X-Presenter, a tool
to easily create hierarchically indexed video sequences (hypervideos) in
real-time. The hypervideos created with X-Presenter can be easily enriched with
hypertextual and multimedia elements to produce effective hypermedia
applications suitable both for on-line (the Web) and off-line (CD, DVD, Kiosks,
...) purposes. Keywords: X-Presenter, hierarchical index, hypervideo, hypervideo authoring tool | |||
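The hierarchical index itself is a small data structure: topics and subtopics annotated with the time ranges the player can seek to. An illustrative sketch (not X-Presenter's actual format):

```python
# A hierarchical index: each entry is (title, start_seconds, end_seconds, children).
INDEX = ("Lecture", 0, 3600, [
    ("Introduction", 0, 420, []),
    ("Main topic", 420, 3000, [
        ("Example", 1200, 1500, []),
    ]),
    ("Questions", 3000, 3600, []),
])

def jump_target(entry, title):
    """Find the start time of a (sub)topic so the player can seek to it."""
    name, start, end, children = entry
    if name == title:
        return start
    for child in children:
        t = jump_target(child, title)
        if t is not None:
            return t
    return None

# jump_target(INDEX, "Example") == 1200
```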
| Perceiving awareness information through 3D representations | | BIBA | Full-Text | 443-446 | |
| Fabrizio Nunnari; Carla Simone | |||
| The paper describes a framework supporting the creation of 3D user interfaces to visualize awareness information about the cooperation context of distributed actors. The paper discusses the motivations behind the framework and illustrates ThreeDmap, an editor allowing the creation and customization of 3D interfaces supporting the perception of awareness information. | |||
| Two methods for enhancing mutual awareness in a group recommender system | | BIBA | Full-Text | 447-449 | |
| Anthony Jameson; Stephan Baldes; Thomas Kleinbauer | |||
| We present a group recommender system for vacations that helps group members who are not able to communicate synchronously to specify their preferences collaboratively and to arrive at an agreement about an overall solution. The system's design includes two innovations in visual user interfaces: 1. An interface for collaborative preference specification offers various ways in which one group member can view and perhaps copy the previously specified preferences of other users. This interface has been found to further mutual understanding and agreement. The same interface is used by the system to display recommended solutions and to visualize the extent to which a solution satisfies the preferences of the various group members. 2. In a novel application of animated characters, each character serves as a representative of a group member who is not currently available for communication. By responding with speech, facial expressions, and gesture to proposed solutions, a representative conveys to the current real user some key aspects of the corresponding real group member's responses to a proposed solution. Taken together, these two aspects of the interface provide complementary and partly redundant means by which a group member can achieve awareness of the preferences and responses of other group members: an abstract, complete, graphical representation and a concrete, selective, human-like representation. By allowing users to choose flexibly which representation they will attend to under what circumstances, we aim to move beyond the usual debates about the relative merits of these two general types of representation. | |||
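As a rough illustration of how a group recommender can score a candidate solution against everyone's stated preferences (a common heuristic, not necessarily this system's aggregation method; all names are invented):

```python
def aggregate(preferences, strategy="average_without_misery", misery=3):
    """Combine per-member ratings (1..10) of one candidate solution.

    'average_without_misery': veto anything a member rates below the
    misery threshold, otherwise use the group mean.
    """
    ratings = list(preferences.values())
    if strategy == "average_without_misery" and min(ratings) < misery:
        return 0.0
    return sum(ratings) / len(ratings)

candidate = {"anna": 8, "ben": 6, "carla": 7}
# aggregate(candidate) == 7.0; if ben had rated 2, the candidate would be vetoed
```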
| Extensible interfaces for mobile devices in an advanced platform for infomobility services | | BIBAK | Full-Text | 450-453 | |
| Luigi Mazzucchelli; Matteo Pace | |||
| Satellite position based applications, location based information services
and communication infrastructures, integrated in Infomobility systems, have had
a great influence on the development of new applications in various fields.
INStANT, "INfomobility Services for SafeTy-critical Application on Land and Sea
based on the use of iNtegrated GNSS Terminals for needs of OLYMPIC cities", is
a Pilot Project co-funded by European Commission and GALILEO Joint Undertaking;
it aims to provide a scalable and dynamic re-configurable system for
Infomobility services. The main innovation of the project is the design of an
infomobility architecture that allows scalability and dynamic modes of operation
to achieve robustness, service continuity and usability in specific contexts
(e.g. Emergency Services). A demanding task of the project was the design of an
innovative platform for the user terminal, based on the integration of advanced
software components capable of geo-positioning, mobile communications,
visualization and mapping.
Modeling dynamic user interfaces based on descriptions written in XUL, the
"XML-based User Interface Language", is the target of our study. The resulting
solution, which can run on mobile devices such as Pocket PCs and Tablet PCs,
offers flexibility, dynamic response to context and processes, usability and
on-demand features. Keywords: XUL, extensible user interfaces, integrated models for process driven
Architectures, mobile devices | |||
| A mobile system for non-linear access to time-based data | | BIBAK | Full-Text | 454-457 | |
| Saturnino Luz; Masood Masoodian | |||
| Conventional interfaces for visualisation of time-based media support access
to sequential data in a linear fashion. We present two visualisation interfaces
for a mobile application that supports non-linear, structured browsing of
multimedia recordings by exploiting certain features of concurrent multimedia
streams. The system is built on a content mapping framework which automatically
creates links between text and audio data by establishing "temporal
neighbourhoods". It illustrates how non-linear browsing may be particularly
valuable for devices with limited screen real estate. Keywords: collaborative writing, handheld devices, information visualisation | |||
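The "temporal neighbourhood" linking can be pictured as a simple time-window join between timestamped text events and audio segments (the window size and all names are illustrative, not the authors' content-mapping framework):

```python
def temporal_neighbourhood(text_events, audio_segments, window=5.0):
    """Link each timestamped text event to the audio segments whose time
    span lies within +/- window seconds of it."""
    links = {}
    for text_id, t in text_events:
        links[text_id] = [seg_id for seg_id, start, end in audio_segments
                          if start - window <= t <= end + window]
    return links

notes = [("note-1", 12.0), ("note-2", 93.0)]
speech = [("turn-1", 0.0, 14.0), ("turn-2", 14.0, 90.0), ("turn-3", 91.0, 120.0)]
# each note links to the two speech turns in its 5-second neighbourhood,
# giving the non-linear entry points into the recording
links = temporal_neighbourhood(notes, speech)
```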
| First prototype of conversational H.C. Andersen | | BIBAK | Full-Text | 458-461 | |
| Niels Ole Bernsen; Marcela Charfuelàn; Andrea Corradini; Laila Dybkjær; Thomas Hansen; Svend Kiilerich; Mykola Kolodnytsky; Dmytro Kupkin; Manish Mehta | |||
| This paper describes the implemented first prototype of a domain-oriented,
conversational edutainment system which allows users to interact via speech and
2D gesture input with life-like animated fairy-tale author Hans Christian
Andersen. Keywords: domain-oriented spoken conversation, life-like animated agents | |||