| Ambient mobility: human environment interface and interaction challenges | | BIBA | Full-Text | 3 | |
| José Luis Encarnação | |||
| Through the convergence of mobility, ubiquity and
multimediality/multimodality, a new information & communication technology
paradigm is emerging: Ambient Intelligence (AmI). AmI basically means that
computers move from the desktop into the infrastructure of our everyday life to
build networks of smart items serving "smart players" (humans, machines, smart
items, animals, etc.) in intelligent environments. AmI begins to influence the
way we interact with our environments. An important aspect of future human
interaction is therefore the way this interaction evolves from human-computer
interaction (HCI) to human-environment interaction (HEI), supporting us in
efficiently managing our personal environment, not only at home or in the
office, but also in public and industrial environments.
This contribution addresses the fundamental components involved in the forthcoming human-computer-environment interaction. The focus will be especially on the challenges that arise when the interaction takes place with mobile services, e.g. providing information, supporting work tasks, or enriching leisure, in public and possibly outdoor environments. | |||
| Computational photography and video: interacting and creating with videos and images | | BIBA | Full-Text | 4 | |
| Irfan Essa | |||
| Digital image capture, processing, and sharing have become pervasive in our society. This has had a significant impact on how we create novel scenes, how we share our experiences, and how we interact with images and videos. In this talk, I will present an overview of a series of ongoing efforts in the analysis of images and videos for rendering novel scenes. First, I will discuss (in brief) our work on Video Textures, where repeating information is extracted to generate extended video sequences. I will then describe some of our extensions to this approach that allow for the controlled generation of animations of video sprites. We have developed various learning and optimization techniques that allow for video-based animations of photorealistic characters. Using these approaches as a foundation, I will then show how new images and videos can be generated. I will show examples of photorealistic and non-photorealistic renderings of scenes (videos and images) and how these methods support the media reuse culture so common these days with user-generated content. Time permitting, I will also share some of our efforts on video annotation and how we have taken some of these new concepts of video analysis into undergraduate classrooms. | |||
| Principles of entertainment in inhabited television | | BIBAK | Full-Text | 5-12 | |
| Marco Fanciulli | |||
| Inhabited TV paradigms have been around for a while, and several
experimental implementations have been delivered around the world. The basic
model of these experiments involves the deployment of collaborative virtual
environments so that users can take part in TV shows from within these virtual,
shared environments. Unfortunately, although cheap and easy to implement, this
approach does little for engagement and leaves open issues such as the pace
gap between the real and virtual worlds, camera control techniques and, most
importantly, adequate TV formats.
The talk presents a new paradigm of basic principles for entertaining a virtually deployed audience, allowing for interaction while maintaining the sensation of entertainment. Keywords: inhabited TV, social design, virtual life | |||
| Flower menus: a new type of marking menu with large menu breadth, within groups and efficient expert mode memorization | | BIBAK | Full-Text | 15-22 | |
| Gilles Bailly; Eric Lecolinet; Laurence Nigay | |||
| This paper presents the Flower menu, a new type of Marking menu that
supports not only straight but also curved gestures for each of the 8 usual
orientations. Flower menus make it possible to put many commands at each menu
level and thus to create as large a hierarchy as needed for common
applications. Indeed, our informal analysis of menu breadth in popular
applications shows that a quarter of them have more than 16 items. Flower menus
can easily contain 20 items and even more (theoretical maximum of 56 items).
Flower menus also support within groups as well as hierarchical groups. They
can thus favor breadth organization (within groups) or depth organization
(hierarchical groups): as a result, the designers can lay out items in a very
flexible way in order to reveal meaningful item groupings. We also investigate
the learning performance of the expert mode of Flower menus. A user experiment
is presented that compares linear menus (baseline condition), Flower menus and
Polygon menus, a variant of Marking menus that supports a breadth of 16 items.
Our experiment shows that Flower menus are more efficient than both Polygon and
Linear menus for memorizing command activation in expert mode. Keywords: within groups, curved gestures, expert mode, flower menus, learning
performance, marking menus, novice mode, polygon menus | |||
| Efficient web browsing on small screens | | BIBAK | Full-Text | 23-30 | |
| Hamed Ahmadi; Jun Kong | |||
| A global increase in PDA and cell phone ownership and a rise in the use of
wireless services have caused mobile browsing to become an important means of
Internet access. However, the small screen of such mobile devices limits the
usability of information browsing and searching. This paper presents a novel
method that automatically adapts a desktop presentation to a mobile
presentation, proceeding in two steps: detecting boundaries between different
information blocks and then representing the information to fit in small
screens. Distinct from other approaches, our approach analyzes both the DOM
structure and the visual layout to divide the original Web page into several
subpages, each of which includes closely related content and is suitable for
display on the small screen. Furthermore, a table of contents is automatically
generated to facilitate the navigation between different subpages. An
evaluation of a prototype of our approach shows that the browsing usability is
significantly improved. Keywords: adaptive interface for small screens, mobile web browser | |||
| Bridging the gap between real printouts and digital whiteboard | | BIBAK | Full-Text | 31-38 | |
| Peter Brandl; Michael Haller; Juergen Oberngruber; Christian Schafleitner | |||
| In this paper, we describe a paper-based interface, which combines the
physical (real) with the digital world: while interacting with real paper
printouts, users can seamlessly work with a digital whiteboard at the same
time. Users are able to send data from real paper to the digital world by
picking up content (e.g. images) from real printouts and dropping it on the
digital surface. The reverse direction for transferring data from the
whiteboard to the real paper is supported through printouts of the whiteboard
page that are enhanced with integrated Anoto patterns. We present four
different interaction techniques that show the potential of this paper and
digital world combination. Moreover, we describe the workflow of our system
that bridges the gap between the two worlds in detail. Keywords: digital pen, interactive paper, paper interface | |||
| Exploring blog archives with interactive visualization | | BIBAK | Full-Text | 39-46 | |
| A Indratmo; Julita Vassileva; Carl Gutwin | |||
| Browsing a blog archive is currently not well supported. Users cannot gain
an overview of a blog easily, nor do they receive adequate support for finding
potentially interesting entries in the blog. To overcome these problems, we
developed a visualization tool that offers a new way to browse a blog archive.
The main design principles of the tool are twofold. First, a blog should
provide a rich overview to help users reason about the blog at a glance.
Second, a blog should utilize social interaction history preserved in the
archive to ease exploration and navigation. The tool was evaluated using a
tool-specific questionnaire and the Questionnaire for User Interaction
Satisfaction. Responses from the participants confirmed the utility of the
design principles: the user satisfaction was high, supported by a low error
rate in the given tasks. Qualitative feedback revealed that the decision to
select which entry to read was multidimensional, involving factors such as the
topic, the posting time, the length, and the number of comments on an entry. We
discuss the implications of these findings for the design of navigational
support for blogs, in particular to facilitate exploratory tasks. Keywords: blog visualization, social interaction history, social navigation | |||
| Using subjective and physiological measures to evaluate audience-participating movie experience | | BIBAK | Full-Text | 49-56 | |
| Tao Lin; Akinobu Maejima; Shigeo Morishima | |||
| In this paper we subjectively and physiologically investigate the effects
that audience members' 3D virtual actors in a movie have on their movie
experience, using the audience-participating movie DIM as the object of study.
In DIM, photo-realistic 3D virtual actors of audience members are constructed
by combining current computer graphics (CG) technologies, and these actors can
play different roles in a pre-rendered CG movie. To facilitate the
investigation, we presented three versions of a CG movie to an audience -- a
Traditional version, its Self-DIM (SDIM) version with the participation of the
audience member's virtual actor, and its Self-Friend-DIM (SFDIM) version with
the co-participation of the audience member's and his friends' virtual actors.
The results show that the participation of the audience's 3D virtual actors
indeed causes an increased subjective sense of presence, engagement, and
emotional reaction; moreover, SFDIM performs
significantly better than SDIM, due to increased social presence.
Interestingly, when watching the three movie versions, subjects experienced not
only significantly different galvanic skin response (GSR) changes on average --
changing trend over time, and number of fluctuations -- but they also
experienced phasic GSR increase when watching their own and friends' virtual 3D
actors appearing on the movie screen. These results suggest that the
participation of the 3D virtual actors in a movie can improve interaction and
communication between the audience and the movie. Keywords: audience experience evaluation, audience-participating movie, physiological
measures | |||
| Content aware video presentation on high-resolution displays | | BIBAK | Full-Text | 57-64 | |
| Clifton Forlines | |||
| We describe a prototype video presentation system that presents a video in a
manner consistent with the video's content. Our prototype takes advantage of
the physically large display and pixel space that current high-definition
displays and multi-monitor systems offer by rendering the frames of the video
into various regions of the display surface. The structure of the video informs
the animation, size, and the position of these regions. Additionally,
previously displayed frames are often allowed to remain on-screen and are
filtered over time. Our prototype presents a video in a manner that not only
preserves the continuity of the story, but also supports the structure of the
video; thus, the content of the video is reflected in its presentation,
arguably enhancing the viewing experience. Keywords: digital video, entertainment technology, video playback | |||
| SparTag.us: a low cost tagging system for foraging of web content | | BIBAK | Full-Text | 65-72 | |
| Lichan Hong; Ed H. Chi; Raluca Budiu; Peter Pirolli; Les Nelson | |||
| Tagging systems such as del.icio.us and Diigo have become important ways for
users to organize information gathered from the Web. However, despite their
popularity among early adopters, tagging still incurs a relatively high
interaction cost for general users. We introduce a new tagging system
called SparTag.us, which uses an intuitive Click2Tag technique to provide in
situ, low cost tagging of web content. SparTag.us also lets users highlight
text snippets and automatically collects tagged or highlighted paragraphs into
a system-created notebook, which can be later browsed and searched. We report
several user studies aimed at evaluating Click2Tag and SparTag.us. Keywords: Web 2.0, annotation, highlighting, social bookmarking, tagging | |||
| Timeline trees: visualizing sequences of transactions in information hierarchies | | BIBAK | Full-Text | 75-82 | |
| Michael Burch; Fabian Beck; Stephan Diehl | |||
| In many applications transactions between the elements of an information
hierarchy occur over time. For example, the product offers of a department
store can be organized into product groups and subgroups to form an information
hierarchy. A market basket consisting of the products bought by a customer
forms a transaction. Market baskets of one or more customers can be ordered by
time into a sequence of transactions. Each item in a transaction is associated
with a measure, for example, the amount paid for a product.
In this paper we present a novel method for visualizing sequences of these kinds of transactions in information hierarchies. It uses a tree layout to draw the hierarchy and a timeline to represent progression of transactions in the hierarchy. We have developed several interaction techniques that allow the users to explore the data. Smooth animations help them to track the transitions between views. The usefulness of the approach is illustrated by examples from several very different application domains. Keywords: hierarchy, time, visualization | |||
| Visualizing antenna design spaces | | BIBAK | Full-Text | 83-90 | |
| Kent Wittenburg; Tom Lanning; Darren Leigh; Kathy Ryall | |||
| This paper describes a long-term project exploring advanced visual
interfaces for antenna design. MERL developed three successive prototypes that
embodied an evolution towards larger scales and more concrete semantics for
visualization of large sets of candidate designs and then winnowing them down.
We experimented with multidimensional scaling and then collective line graphs
before settling on linked scatterplots to visualize performance in a design
space of up to 10 million antennas at a time. In the end, the scatterplot
solution was most successful at balancing intelligibility with visualization of
the space as a whole. The design allows for adding more 1D or 2D linked feature
visualizations if needed, and it smoothly transitions to other "details on
demand" views for final tweaking. Keywords: antenna design, human-guided search, information visualization, line graphs,
multivariate visualization | |||
| The in-context slider: a fluid interface component for visualization and adjustment of values while authoring | | BIBAK | Full-Text | 91-99 | |
| Andrew Webb; Andruid Kerne | |||
| As information environments grow in complexity, we yearn for simple
interfaces that streamline human cognition and effort. Users need to perform
complex operations on thousands of objects. Human attention and available
screen real estate are constrained. We develop a new fluid interface component
for the visualization and adjustment of values while authoring, the In-Context
Slider, which reduces physical effort and demand on attention by using fluid
mouse gestures and in-context interaction. We hypothesize that such an
interface will make adjusting values easier for the user. We evaluated the
In-Context Slider as an affordance for adjusting values of interest in text and
images, compared with a more typical interface. Participants performed faster
with the In-Context Slider. They found the new interface easier to use and more
natural for expressing interest. We then integrated the In-Context Slider in
the information composition platform, combinFormation. Participants experienced
the In-Context Slider as easier to use while developing collections to answer
open-ended information discovery questions. This research is relevant for many
applications in which users provide ratings, such as recommender systems, as
well as for others in which users' adjustment of values on concurrently
displayed objects is integrated with extensive interactive functionality. Keywords: fluid gestures, in-context interface, in-context slider, interaction design,
interest expression | |||
| Exploring the feasibility of video mail for illiterate users | | BIBAK | Full-Text | 103-110 | |
| Archana Prasad; Indrani Medhi; Kentaro Toyama; Ravin Balakrishnan | |||
| We present work that explores whether the asynchronous peer-to-peer
communication capabilities of email can be made accessible to illiterate
populations in the developing world. Building on metaphors from traditional
communication systems such as postal mail, and relevant design principles
established by previous research into text-free interfaces, we designed and
evaluated a prototype asynchronous communication application built on standard
email protocols. We considered different message formats -- text, freeform ink,
audio, and video + audio -- and via iterative usage and design sessions,
determined that video + audio was the most viable. Design alternatives for
authentication processes were also explored. Our prototype was refined over
three usability iterations, and the final version evaluated in a two-stage
study with 20 illiterate users from an urban slum in Bangalore, India. Our
results are mixed: On the one hand, the results show that users can understand
the concept of video mail. They were able to successfully complete tasks
ranging from account setup to login to viewing and creating mail, but required
assistance from an online audio assistant. On the other hand, there were some
surprising challenges such as a consistent difficulty understanding the notion
of asynchronicity. The latter suggests that more work on the paradigm is
required before the benefits of email can be brought to illiterate users. Keywords: ICT for development, illiterate users, video mail | |||
| The inspection of very large images by eye-gaze control | | BIBAK | Full-Text | 111-118 | |
| Nicholas Adams; Mark Witkowski; Robert Spence | |||
| The increasing availability and accuracy of eye gaze detection equipment has
encouraged its use for both investigation and control. In this paper we present
novel methods for navigating and inspecting extremely large images solely or
primarily using eye gaze control. We investigate the relative advantages and
comparative properties of four related methods: Stare-to-Zoom (STZ), in which
control of the image position and resolution level is determined solely by the
user's gaze position on the screen; Head-to-Zoom (HTZ) and Dual-to-Zoom (DTZ),
in which gaze control is augmented by head or mouse actions; and Mouse-to-Zoom
(MTZ), using conventional mouse input as an experimental control.
The need to inspect large images occurs in many disciplines, such as mapping, medicine, astronomy and surveillance. Here we consider the inspection of very large aerial images, of which Google Earth is both an example and the one employed in our study. We perform comparative search and navigation tasks with each of the methods described, and record user opinions using the Swedish User-Viewer Presence Questionnaire. We conclude that, while gaze methods are effective for image navigation, they as yet lag behind more conventional methods, and interaction designers may well consider combining these techniques for greatest effect. Keywords: eye-gaze control, image space navigation, user interaction studies, visual
interaction | |||
| Evaluation of pointing performance on screen edges | | BIBAK | Full-Text | 119-126 | |
| Caroline Appert; Olivier Chapuis; Michel Beaudouin-Lafon | |||
| Pointing on screen edges is a frequent task in our everyday use of
computers. Screen edges can help stop cursor movements, requiring less precise
movements from the user. Thus, pointing at elements located on the edges should
be faster than pointing in the central screen area. This article presents two
experiments to better understand the foundations of "edge pointing". The first
study assesses several factors both on completion time and on users' mouse
movements. The results highlight some weaknesses in the current design of
desktop environments (such as the cursor shape) and reveal that movement
direction plays an important role in users' performance. The second study
quantifies the gain of edge pointing by comparing it with other models such as
regular pointing and crossing. The results not only show that the gain can be
up to 44%, but also reveal that movement angle has an effect on performance for
all tested models. This leads to a generalization of the 2D Index of Difficulty
of Accot and Zhai that takes movement direction into account to predict
pointing time using Fitts' law. Keywords: Fitts' law, edge pointing, performance modelling, screen edges | |||
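As background for the model this abstract builds on, Fitts' law and the standard Accot-Zhai bivariate index of difficulty can be written as follows (a sketch of the standard formulations only; the direction-dependent generalization introduced in the paper is not reproduced here):

```latex
% Fitts' law: movement time MT as a function of distance D and target width W
MT = a + b\,\mathrm{ID}, \qquad \mathrm{ID} = \log_2\!\left(\frac{D}{W} + 1\right)

% Accot--Zhai bivariate ID, weighting target height H by a factor \eta
\mathrm{ID}_{2D} = \log_2\!\left(\sqrt{\left(\frac{D}{W}\right)^{2} + \eta\left(\frac{D}{H}\right)^{2}} + 1\right)
```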
| Starburst: a target expansion algorithm for non-uniform target distributions | | BIBAK | Full-Text | 129-137 | |
| Patrick Baudisch; Alexander Zotov; Edward Cutrell; Ken Hinckley | |||
| Acquiring small targets on a tablet or touch screen can be challenging. To
address the problem, researchers have proposed techniques that enlarge the
effective size of targets by extending targets into adjacent screen space. When
applied to targets organized in clusters, however, these techniques show little
effect because there is no space to grow into. Unfortunately, target clusters
are common in many popular applications. We present Starburst, a space
partitioning algorithm that works for target clusters. Starburst identifies
areas of available screen space, grows a line from each target into the
available space, and then expands that line into a clickable surface. We
present the basic algorithm and extensions. We then present 2 user studies in
which Starburst led to a reduction in error rate by factors of 9 and 3 compared
to traditional target expansion. Keywords: Voronoi, labeling, mouse, pen, target acquisition, target expansion, touch
input | |||
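Starburst itself is more involved; for contrast, the traditional Voronoi-style target expansion it is compared against can be sketched in a few lines (a minimal illustration; the function name and toy target layout are ours, not from the paper):

```python
import numpy as np

def voronoi_expansion(targets, width, height):
    """Traditional target expansion: every pixel of the screen is
    assigned to its nearest target center, so each target's effective
    clickable area is its Voronoi cell."""
    ys, xs = np.mgrid[0:height, 0:width]
    # squared distance to each target: one (height, width) plane per target
    d = [(xs - tx) ** 2 + (ys - ty) ** 2 for tx, ty in targets]
    return np.argmin(np.stack(d), axis=0)  # index of the owning target

# Two targets on a 16x16 screen; a click anywhere in a cell selects
# that cell's target.
cells = voronoi_expansion([(2, 2), (13, 13)], width=16, height=16)
```

With clustered targets the cells collapse to slivers, which is exactly the failure case Starburst addresses by routing each target's expansion into free screen space.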
| Physical handles at the interactive surface: exploring tangibility and its benefits | | BIBAK | Full-Text | 138-145 | |
| Lucia Terrenghi; David Kirk; Hendrik Richter; Sebastian Krämer; Otmar Hilliges; Andreas Butz | |||
| In this paper we investigate tangible interaction on interactive tabletops.
These afford the support and integration of physical artefacts for the
manipulation of digital media. To inform the design of interfaces for
interactive surfaces we think it is necessary to deeply understand the benefits
of employing such physical handles, i.e., the benefits of employing a third
spatial dimension at the point of interaction.
To this end we conducted an experimental study by designing and comparing two versions of an interactive tool on a tabletop display, one with a physical 3D handle, and one purely graphical (but direct touch enabled). Whilst hypothesizing that the 3D version would provide a number of benefits, our observations revealed that users developed diverse interaction approaches and attitudes about hybrid and direct touch interaction. Keywords: GUI, design, hybrid, interfaces, tangible | |||
| TapTap and MagStick: improving one-handed target acquisition on small touch-screens | | BIBAK | Full-Text | 146-153 | |
| Anne Roudaut; Stéphane Huot; Eric Lecolinet | |||
| We present the design and evaluation of TapTap and MagStick, two thumb
interaction techniques for target acquisition on mobile devices with small
touch-screens. These two techniques address all the issues raised by the
selection of targets with the thumb on small tactile screens: screen
accessibility, visual occlusion and accuracy. A controlled experiment shows
that TapTap and MagStick allow the selection of targets in all areas of the
screen in a fast and accurate way. They were found to be faster than four
previous techniques except Direct Touch which, although faster, is too error
prone. They also produced the lowest error rate of all the tested techniques.
Finally, the paper provides a comprehensive study of various techniques for
thumb-based touch-screen target selection. Keywords: interaction techniques, mobile devices, one-handed interaction, thumb
interaction, touch-screens | |||
| Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces | | BIBAK | Full-Text | 154-161 | |
| Peter Brandl; Clifton Forlines; Daniel Wigdor; Michael Haller; Chia Shen | |||
| Many research projects have demonstrated the benefits of bimanual
interaction for a variety of tasks. When choosing bimanual input, system
designers must select the input device that each hand will control. In this
paper, we argue for the use of pen and touch two-handed input, and describe an
experiment in which users were faster and committed fewer errors using pen and
touch input in comparison to using either touch and touch or pen and pen input
while performing a representative bimanual task. We present design principles
and an application in which we applied our design rationale toward the creation
of a learnable set of bimanual, pen and touch input commands. Keywords: bimanual input, pen and touch, self revealing gestures | |||
| Semiotic engineering in practice: redesigning the CoScripter interface | | BIBAK | Full-Text | 165-172 | |
| Clarisse Sieckenius de Souza; Allen Cypher | |||
| Semiotic Engineering uses semiotic theories to characterize human-computer
interaction and support research and development of interactive systems. In
order to show the value of Semiotic Engineering in design, we illustrate how
semiotic concepts have been used in the analysis and generation of redesign
alternatives for a web browser-based program called CoScripter. We also discuss
how specific perspectives and expectations about the design process can
increase the benefit from Semiotic Engineering in design activities, and
describe our future steps in this research. Keywords: UI design methodology, end user programming, graphical user interface
design, semiotic engineering | |||
| Affective geographies: toward a richer cartographic semantics for the geospatial web | | BIBAK | Full-Text | 173-180 | |
| Elisa Giaccardi; Daniela Fogli | |||
| Due to the increasing sophistication of web technologies, maps can easily be
created, modified, and shared. This possibility has popularized the power of
maps by enabling people to add and share cartographic content, giving rise to
the geospatial web. People are increasingly using web maps to connect with each
other and with the urban and natural environment in ways no one had predicted.
As a result, web maps are growing into a venue in which knowledge and meanings
can be traced and visualized. However, the cartographic semantics of current
web mapping services are not designed to elicit and visualize what we call
affective meaning. Contributing a new perspective for the geospatial web, the
authors argue for affective geographies capable of allowing richer and multiple
readings of the same territory. This paper illustrates the cartographic
semantics developed by the authors and discusses it through a case study in
natural heritage interpretation. Keywords: collaborative web mapping, information visualization, map-based interaction,
web cartography | |||
| Recognition and processing of hand-drawn diagrams using syntactic and semantic analysis | | BIBAK | Full-Text | 181-188 | |
| Florian Brieler; Mark Minas | |||
| We present an approach to the processing of hand-drawn diagrams. Hand
drawing is inherently imprecise; we rely on syntactic and semantic analysis
to resolve the inevitable ambiguities arising from this imprecision. Based on
the specification of a diagram language (containing aspects like concrete and
abstract syntax, grammar rules for a parser, and attributes for semantics),
editors supporting free hand drawing are generated. Since the generation
process relies on the specifications only, our approach is fully generic. In
this paper the overall architecture and concepts of our approach are explained
and discussed. The user-drawn strokes (forming the diagram) are transformed
into a number of independent models. The drawn components are recognized in
these models, directed by the specification. Then the set of all components is
analyzed to find the interpretation that best fits the whole diagram. We build
upon DiaGen, a generic diagram editor generator enabling syntax and semantic
analysis for diagrams, and extend it to support hand drawing. Case studies
(done with a fully working implementation in Java) confirm the strength and
applicability of our approach. Keywords: DiaGen, ambiguity resolution, hand drawing, model, recognition, sketching | |||
| Exploring video streams using slit-tear visualizations | | BIBAK | Full-Text | 191-198 | |
| Anthony Tang; Saul Greenberg; Sidney Fels | |||
| Video slicing -- a variant of slit scanning in photography -- extracts a
scan line from a video frame and successively adds that line to a composite
image over time. The composite image becomes a time line, where its visual
patterns reflect changes in a particular area of the video stream. We extend
this idea of video slicing by allowing users to draw marks anywhere on the
source video to capture areas of interest. These marks, which we call
slit-tears, are used in place of a scan line, and the resulting composite
timeline image provides a much richer visualization of the video data.
Depending on how tears are placed, they can accentuate motion, small changes,
directional movement, and relational patterns. Keywords: information visualization, timelines, video analysis, video history | |||
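The core slicing step described above can be sketched with NumPy (a minimal illustration, not the authors' implementation; it uses a single vertical scan line, whereas slit-tears generalize the column index to arbitrary user-drawn marks):

```python
import numpy as np

def slit_scan(frames, column):
    """Build a slit-scan composite: take one pixel column from each
    video frame and stack the columns left-to-right over time.
    `frames` is a sequence of HxW grayscale arrays; `column` is the
    x position of the vertical scan line."""
    slits = [frame[:, column] for frame in frames]
    return np.stack(slits, axis=1)  # composite is H x num_frames

# Toy "video": 8 frames of 16x16, a bright dot moving down column 5.
frames = []
for t in range(8):
    f = np.zeros((16, 16), dtype=np.uint8)
    f[t, 5] = 255
    frames.append(f)

composite = slit_scan(frames, column=5)
# In the composite timeline image, the moving dot appears as a diagonal:
# motion across the slit becomes a visible pattern over time.
```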
| Exploring the role of individual differences in information visualization | | BIBAK | Full-Text | 199-206 | |
| Cristina Conati; Heather Maclaren | |||
| In this paper, we describe a user study aimed at evaluating the
effectiveness of two different data visualization techniques developed for
describing complex environmental changes in an interactive system designed to
foster awareness in sustainable development. While several studies have
compared alternative visualizations, the distinguishing feature of our research
is that we try to understand whether individual user differences may be used as
predictors of visualization effectiveness in choosing among alternative
visualizations for a given task. We show that the cognitive ability known as
perceptual speed can predict which one of our target visualizations is most
effective for a given user. This result suggests that tailored visualization
selection can be an effective way to improve user performance. Keywords: evaluation of visualization techniques, individual differences | |||
| An empirical evaluation of interactive visualizations for preferential choice | | BIBAK | Full-Text | 207-214 | |
| Jeanette Bautista; Giuseppe Carenini | |||
| Many critical decisions for individuals and organizations are often framed
as preferential choices: the process of selecting the best option out of a set
of alternatives. This paper presents a task-based empirical evaluation of
ValueCharts, a set of interactive visualization techniques to support
preferential choice. The design of our study is grounded in a comprehensive
task model and we measure both task performance and insights. In the
experiment, we not only tested the overall usefulness and effectiveness of
ValueCharts, but we also assessed the differences between two versions of
ValueCharts, a horizontal and a vertical one. The outcome of our study is that
ValueCharts seem very effective in supporting preferential choice and the
vertical version appears to be more effective than the horizontal one. Keywords: empirical evaluation, preferential choice, user studies, visualization
techniques | |||
| Model-based layout generation | | BIBAK | Full-Text | 217-224 | |
| Sebastian Feuerstack; Marco Blumendorf; Veit Schwartze; Sahin Albayrak | |||
| Offering user interfaces for interactive applications that are flexible
enough to be adapted to various context-of-use scenarios such as supporting
different display sizes or addressing various input styles requires an adaptive
layout. We describe an approach for layout derivation that is embedded in a
model-based user interface generation process. By an interactive and
tool-supported process we can efficiently create a layout model that is
composed of interpretations of the other design models and is consistent with the
application design. By shifting the decision about which interpretations are
relevant to support a specific context-of-use scenario from design-time to
run-time, we can flexibly adapt the layout to consider new device capabilities,
user demands and user interface distributions. We present our run-time
environment, which evaluates the relevant layout-model information into
constraints as they are required and reassembles the user interface parts
according to the updated containment, order, orientation and size information
of the layout model. Finally, we present results of an evaluation we performed to
test the design and run-time efficiency of our model-based layouting approach. Keywords: constraint generation, context-of-use, human-computer interaction,
layouting, model-based user interfaces | |||
| A mixed-fidelity prototyping tool for mobile devices | | BIBAK | Full-Text | 225-232 | |
| Marco de Sá; Luís Carriço; Luís Duarte; Tiago Reis | |||
| In this paper we present a software framework which supports the
construction of mixed-fidelity (from sketch-based to software) prototypes for
mobile devices. The framework is available for desktop computers and mobile
devices (e.g., PDAs, Smartphones). It operates with low-fidelity sketch based
prototypes or mid to high-fidelity prototypes with some range of functionality,
providing several dimensions of customization (e.g., visual components,
audio/video files, navigation, behavior) and targeting specific usability
concerns. Furthermore, it allows designers and users to test the prototypes on
actual devices, gathering usage information, both passively (e.g., logging) and
actively (e.g., questionnaires/Experience Sampling). Overall, it combines common
prototyping procedures with effective data-gathering methods that can be used
in ubiquitous scenarios, supporting in-situ prototyping and participatory
design on the go. We address the framework's features and its contributions to the
design and evaluation of applications for mobile devices and the field of
mobile interaction design, presenting real-life case studies and results. Keywords: evaluation, mobile interaction design, prototyping, usability | |||
| Gummy for multi-platform user interface designs: shape me, multiply me, fix me, use me | | BIBAK | Full-Text | 233-240 | |
| Jan Meskens; Jo Vermeulen; Kris Luyten; Karin Coninx | |||
| Designers still often create a specific user interface for every target
platform they wish to support, which is time-consuming and error-prone. The
need for a multi-platform user interface design approach that designers feel
comfortable with increases as people expect their applications and data to go
where they go. We present Gummy, a multi-platform graphical user interface
builder that can generate an initial design for a new platform by adapting and
combining features of existing user interfaces created for the same
application. Our approach makes it easy to target new platforms and keep all
user interfaces consistent without requiring designers to considerably change
their work practice. Keywords: GUI builder, UIML, design tools, multi-platform design | |||
| KMVQL: a visual query interface based on Karnaugh map | | BIBAK | Full-Text | 243-250 | |
| Jiwen Huo | |||
| Extracting information from data is an interactive process. Visualization
plays an important role, particularly during data inspection. Querying is also
important, allowing the user to isolate promising portions of the data. As a
result, data exploration environments normally include both, integrating them
tightly.
This paper presents KMVQL, a Karnaugh-map-based visual query language designed to support the interactive exploration of multidimensional datasets. KMVQL uses the Karnaugh map as the visual representation for Boolean queries, providing a visual query interface that helps users formulate arbitrarily complex Boolean queries through direct manipulation. With KMVQL, users no longer need to worry about logical operators, which makes Boolean query specification much easier. The Karnaugh maps also function as visualization spreadsheets that seamlessly integrate queries with their results, helping users better understand the data and refine their queries efficiently. Keywords: Karnaugh map, direct manipulation, query formulation, visual query,
visualization | |||
| Query-through-drilldown: data-oriented extensional queries | | BIBAK | Full-Text | 251-259 | |
| Alan Dix; Damon Oram | |||
| Traditional database query formulation is intensional: at the level of
schemas, table and column names. Previous work has shown that filters can be
created using a query paradigm focused on interaction with data tables. This
paper presents a technique, Query-through-Drilldown, to enable join formulation
in a data-oriented paradigm. Instead of formulating joins at the level of
schemas, the user drills down through tables of data and the query is
implicitly created based on the user's actions. Query-through-Drilldown has
been applied to a large relational database, but similar techniques could be
applied to semi-structured data or semantic web ontologies. Keywords: SQL, data structure mining, data-oriented interaction, database query,
extensional query, query-by-browsing, tabular interface | |||
| Automatically adapting web sites for mobile access through logical descriptions and dynamic analysis of interaction resources | | BIBAK | Full-Text | 260-267 | |
| Fabio Paternò; Carmen Santoro; Antonio Scorcia | |||
| While several solutions for desktop user interface adaptation for mobile
access have been proposed, there is still a lack of solutions able to
automatically generate mobile versions taking semantic aspects into account. In
this paper, we propose a general solution able to dynamically build logical
descriptions of existing desktop Web site implementations, adapt the design to
the target mobile device, and generate an implementation that preserves the
original communications goals while taking into account the actual resources
available in the target device. We describe the novel transformations supported
by our new solution, show example applications and report on first user tests. Keywords: mobile interfaces, model-based design, multi-device web interfaces, user
interface adaptation | |||
| A physics-based approach for interactive manipulation of graph visualizations | | BIBAK | Full-Text | 271-278 | |
| Andre Suslik Spritzer; Carla M. D. S. Freitas | |||
| This paper presents an interactive physics-based technique for the
exploration and dynamic reorganization of graph layouts that takes into account
semantic properties which the user might need to emphasize. Many techniques
have been proposed that take a graph as input and produce a visualization
solely based on its topology, seldom relying on the semantic attributes of
nodes and edges. These automatic topology-based algorithms might generate
aesthetically interesting layouts, but they neglect information that might be
important for the user. Among these are the force-directed or energy
minimization algorithms, which use physics analogies to produce satisfactory
layouts. They consist of applying forces on the nodes, which move until the
physical system enters a state of mechanical equilibrium. We propose an
extension of this metaphor to include tools for the interactive manipulation of
such layouts. These tools consist of magnets, which attract nodes matching
user-specified criteria to the regions surrounding them. Magnets can be
nested and also used to perform set operations such as union and
intersection, thus becoming an intuitive visual tool for sorting through
datasets. To evaluate the technique, we discuss how it can be used to perform
common graph visualization tasks. Keywords: graph visualization, interaction | |||
| Agent warp engine: formula based shape warping for networked applications | | BIBAK | Full-Text | 279-286 | |
| Alexander Repenning; Andri Ioannidou | |||
| Computer visualization and networking have advanced dramatically. 3D
hardware acceleration has reached the point where even low-power handheld
computers can render and animate complex 3D graphics efficiently.
Unfortunately, end-user computing does not yet provide the necessary tools and
conceptual frameworks to let end-users access these technologies and build
their own networked interactive 2D and 3D applications such as rich
visualizations, animations and simulations. The Agent Warp Engine (AWE) is a
formula-based shape-warping framework that combines end-user visualization and
end-user networking. AWE is a spreadsheet-inspired framework based on Web
sharable variables. To build visualizations, users define these variables,
relate them through equations and connect them to 2D and 3D shapes. In addition
to basic shape control such as rotation, size, and location, AWE enables the
creation of rich shape warping visualizations. We motivate the AWE approach
with the Mr. Vetro human physiology simulation supporting collaborative
learning through networked handheld computers. Keywords: 3D graphics, collective simulations, end-user development, end-user
programming, real-time image warping, spreadsheets | |||
| Image geo-mashups: the example of an augmented reality weather camera | | BIBAK | Full-Text | 287-294 | |
| Jana Gliet; Antonio Krüger; Otto Klemm; Johannes Schöning | |||
| This paper presents the general idea of image geo-mashups, which combines
concepts from web mashups and augmented reality by adding geo-referenced data
to a perspective image. The paper shows how to design and implement an
augmented reality weather cam that combines data from a steerable weather cam
with additional sensor information retrieved from the web. Keywords: augmented reality, geo-mashups, image composition processes | |||
| How coherent environments support remote gestures | | BIBAK | Full-Text | 297-300 | |
| Naomi Yamashita; Keiji Hirata; Toshihiro Takada; Yasunori Harada | |||
| Previous studies have demonstrated the importance of providing users with a
coherent environment across distant sites. To date, it remains unclear how such
an environment affects people's gestures and their comprehension. In this
study, we investigate how a coherent environment across distant sites affects
people's hand gestures when collaborating on physical tasks. We present
video-mediated technology that provides distant users with a coherent
environment in which they can freely gesture toward remote objects by the
unmediated representations of hands. Using this system, we examine the values
of a coherent environment by comparing remote collaboration on physical tasks
in a fractured setting versus a coherent setting. The results indicate that a
coherent environment facilitates gesturing toward remote objects and that the
use of such gestures improves task performance. The results further suggest that a coherent
environment improves the sense of co-presence across distant sites and enables
quick recovery from misunderstandings. Keywords: coherent environment, collaborative physical task, computer-supported
collaborative work, remote gesture, video-mediated communication | |||
| SLMeeting: supporting collaborative work in Second Life | | BIBAK | Full-Text | 301-304 | |
| Andrea De Lucia; Rita Francese; Ignazio Passero; Genoveffa Tortora | |||
| Second Life is a virtual world that is often used for synchronous team
meetings. However, supporting distributed meetings goes beyond
supporting user activities during the meeting itself; it is also
necessary to facilitate their coordination, arrangement and set-up.
In this paper we investigate how teams can work together more effectively in Second Life. We also propose a system, named SLMeeting, which enhances the communication facilities of Second Life to support the management of collaborative activities, organized as conferences or job meetings that can later be replayed, queried, analyzed and visualized. The meeting organization and management functionalities are performed by ad-hoc developed Second Life objects and by the communication between these objects and a supporting web site. As a result, the functionalities offered by Second Life are enriched with the capabilities of organizing meetings and recording all the information concerning the event. Keywords: 3D interfaces, CSCW, Second Life, collaborative virtual environment,
collaborative work, groupware, multimedia meetings | |||
| Balancing physical and digital properties in mixed objects | | BIBAK | Full-Text | 305-308 | |
| Céline Coutrix; Laurence Nigay | |||
| Mixed interactive systems seek to smoothly merge physical and digital
worlds. In this paper we focus on mixed objects that take part in the
interaction. Based on our Mixed Interaction Model, we introduce a new
characterization space of the physical and digital properties of a mixed object
from an intrinsic viewpoint without taking into account the context of use of
the object. The resulting enriched Mixed Interaction Model aims at balancing
physical and digital properties in the design process of mixed objects. The
model extends and generalizes previous studies on the design of mixed systems
and covers existing approaches of mixed systems including tangible user
interfaces, augmented reality and augmented virtuality. A mixed system called
ORBIS that we developed is used to illustrate the discussion: we highlight how
the model informs the design alternatives of ORBIS. Keywords: augmented reality, design space, mixed objects, mixed systems, tangible user
interfaces | |||
| A flexible, declarative presentation framework for domain-specific modeling | | BIBAK | Full-Text | 309-312 | |
| Tamás Mészáros; Gergely Mezei; Tihamér Levendovszky | |||
| Domain-Specific Modeling has gained increasing popularity in software
modeling. Domain-Specific Modeling Languages can simplify the design and the
implementation of systems in various domains. Consequently, domain-specific
visualization helps domain specialists understand the models. However,
the efficiency of domain-specific modeling is often limited by the
capabilities of the editor applications -- e.g. the lack of interactive design
elements and of customization facilities.
This paper introduces the Presentation Framework of the Visual Modeling and Transformation System, which provides a flexible environment for model visualization as well as a declarative solution for describing appearance. Keywords: domain-specific modeling, metamodeling, model visualization, modeling
framework, software modeling | |||
| Advanced visual systems supporting unwitting EUD | | BIBAK | Full-Text | 313-316 | |
| Maria Francesca Costabile; Piero Mussio; Loredana Parasiliti Provenza; Antonio Piccinno | |||
| The ever-increasing use of interactive software systems and the evolution of
the World Wide Web into the so-called Web 2.0 have given rise to new roles
for users, who evolve from information consumers into information producers.
The distinction between users and designers becomes fuzzy. Users are
increasingly involved in the design and development of the tools they use, so
users and developers are no longer two mutually exclusive groups of people.
This paper analyzes types of users that lie between pure end users and
software developers. Some users take a very active role in shaping software
tools to their needs, but they do so without being aware that they are
programming: they are unwitting programmers who need appropriate development
techniques and environments. A meta-design participatory approach for supporting unwitting
end-user development through advanced visual systems is briefly discussed. Keywords: end user, end-user development, user classification | |||
| VCode and VData: illustrating a new framework for supporting the video annotation workflow | | BIBAK | Full-Text | 317-321 | |
| Joey Hagedorn; Joshua Hailpern; Karrie G. Karahalios | |||
| Digital tools for annotation of video have the promise to provide immense
value to researchers in disciplines ranging from psychology to ethnography to
computer science. With traditional methods for annotation being cumbersome,
time-consuming, and frustrating, technological solutions are situated to aid in
video annotation by increasing reliability, repeatability, and workflow
optimizations. Three notable limitations of existing video annotation tools are
lack of support for the annotation workflow, poor representation of data on a
timeline, and poor interaction techniques with video, data, and annotations.
This paper details a set of design requirements intended to enhance video
annotation. Our framework is grounded in existing literature, interviews with
experienced coders, and ongoing discussions with researchers in multiple
disciplines. Our model is demonstrated in a new system called VCode and VData.
The benefit of our system is that it directly addresses the workflow and needs
of both researchers and video coders. Keywords: Graphical User Interfaces (GUI), annotation, video | |||
| An investigation of dynamic landmarking functions | | BIBAK | Full-Text | 322-325 | |
| Philip Quinn; Andy Cockburn; A Indratmo; Carl Gutwin | |||
| It is easy for users to lose awareness of their location and orientation
when navigating large information spaces. Providing landmarks is one common
technique that helps users remain oriented, alleviating the mental workload and
reducing the number of redundant interactions. But how many landmarks should be
displayed? We conducted an empirical evaluation of several relationships
between the number of potential landmarked items in the display and the number
of landmarks rendered at any one time, with results strongly favouring a
logarithmic relationship. Keywords: information scent, landmarks, navigation, navigational aids, visual clutter,
visual search | |||
| Eulr: a novel resource tagging facility integrated with Flickr | | BIBAK | Full-Text | 326-330 | |
| Rosario De Chiara; Andrew Fish; Salvatore Ruocco | |||
| We have developed a novel information storage and display structure called
EulerView, which can be used for the systematic management of tagged resources.
The storage model is a non-hierarchical classification structure, based on
Euler diagrams, which can be especially useful if overlapping categories are
commonplace. Keeping the constraints on the display structure relaxed enables
its use as a categorisation structure which provides the user with flexibility
and facilitates quick user tagging of multiple resources. As one instance of
the application of this theory, in the case when the resources are photos, we
have developed the Eulr tool which we have integrated with Flickr. User
feedback indicates that the Eulr representation is intuitive and that users
would be keen to use Eulr again. Keywords: Euler diagrams, EulerView, Flickr, categorisation, classification, metadata,
tagging | |||
| Ambiguity detection in multimodal systems | | BIBAK | Full-Text | 331-334 | |
| Maria Chiara Caschera; Fernando Ferri; Patrizia Grifoni | |||
| Multimodal systems support users to communicate in a natural way according
to their needs. However, the naturalness of the interaction implies that it is
hard to find one and only one interpretation of the users' input. Consequently,
methods for interpreting users' input and detecting ambiguities need to be
defined. This paper proposes a theoretical approach based on a
Constraint Multiset Grammar combined with Linear Logic, for representing and
detecting ambiguities, and in particular semantic ambiguities, produced by the
user's input. It considers user's input as a set of primitives defined as
terminal elements of the grammar, composing multimodal sentences. The Linear
Logic is used to define rules that allow detecting ambiguities connected to the
semantics of the user's input. In particular, the paper presents the main
features of the user's input and the connections between the elements belonging
to a multimodal sentence, and enables the detection of ambiguities that can
arise during the interpretation process. Keywords: grammar-based language, interpretation of multimodal input, multimodal
ambiguity, multimodal interfaces | |||
| Fostering conversation after the museum visit: a WOZ study for a shared interface | | BIBAK | Full-Text | 335-338 | |
| Cesare Rocchi; Daniel Tomasini; Oliviero Stock; Massimo Zancanaro | |||
| According to recent studies, a museum visit by a small group (e.g. a family
or a few friends) can be considered successful if conversation about the
experience develops among its members. Often people stop at the museum
café to have a break during the visit or before leaving. The museum
café is the location that we foresee as ideal for introducing a tabletop
interface meant to foster conversation among the visitors.
We describe a Wizard of Oz study that illustrates people's reactions to visual stimuli (floating words, images, text snippets) projected on a tabletop interface. The stimuli, dynamically selected according to the topic discussed and a set of communicative strategies, are meant to support conversation about the exhibition and the visit, or to foster a topic change in case the group is discussing something unrelated to the visit. The results of the Wizard of Oz study show that people recognized the visuals on the table as "cues" for a group conversation about the visit, and interesting insights about the design emerged. Keywords: conversation, museum visit, tabletop interface | |||
| Exploring emotions and multimodality in digitally augmented puppeteering | | BIBAK | Full-Text | 339-342 | |
| Lassi A. Liikkanen; Giulio Jacucci; Eero Huvio; Toni Laitinen; Elisabeth Andre | |||
| Recently, multimodal and affective technologies have been adopted to support
expressive and engaging interaction, bringing up a plethora of new research
questions. Among the challenges, two essential topics are 1) how to devise
truly multimodal systems that can be used seamlessly for customized performance
and content generation, and 2) how to utilize the tracking of emotional cues
and respond to them in order to create affective interaction loops. We present
PuppetWall, a multi-user, multimodal system intended for digitally augmented
puppeteering. This application allows natural interaction to control puppets
and manipulate playgrounds comprising background, props, and puppets.
PuppetWall utilizes hand-movement tracking, a multi-touch display and emotional
speech recognition as input for interfacing. Here we document the technical
features of the system and an initial evaluation. The evaluation involved two
professional actors and also aimed at exploring naturally emerging expressive
speech categories. We conclude by summarizing challenges in tracking emotional
cues from acoustic features and their relevance for the design of affective
interactive systems. Keywords: affective computing, gestural interaction, interactive installations | |||
| Face bubble: photo browsing by faces | | BIBAK | Full-Text | 343-346 | |
| Jun Xiao; Tong Zhang | |||
| Face recognition technology presents an opportunity in computer automation
to help people better organize their personal photo collections. However, the
robustness of the technology needs to improve and how users interact with face
clusters needs to go beyond the traditional file folder metaphor. We designed a
visualization called face bubble that supports both fast one-glance view and
filtering and exploration of photo collections based on face clustering
results. Our clustering algorithm provides a better accuracy rate than previous
work, and our circular space-filling visual design offers an alternative UI
to the traditional weighted list view. Other visualization techniques
such as a fisheye view are also integrated into the interface for fast image
browsing. Finally, fine tuning of the design based on user feedback improved
the aesthetics of the visualization. Keywords: clustering, face detection, face recognition, filtering, visualization | |||
| Browsing a website with topographic hints | | BIBA | Full-Text | 347-350 | |
| S. Rossi; A. Inserra; E. Burattini | |||
| This work proposes an adaptive web site in the field of cultural heritage that can dynamically suggest links, based on non-intrusive profiling methodologies integrated with topographical information. A fundamental issue, typical of web sites that refer to real sites, is helping users orient themselves geographically. Our system can support users in their exploration of physical/virtual space by suggesting new physical locations structured as a thematic itinerary through the excavations. | |||
| Visual tag authoring: picture extraction via localized, collaborative tagging | | BIBAK | Full-Text | 351-354 | |
| Andrea Bellucci; Stefano Levialdi Ghiron; Ignacio Aedo; Alessio Malizia | |||
| In this work we present a system to encode location based information
extracted from a media collection (the Flickr tagging system) into a single 2D
physical label. This information is clustered using location metadata
(geotags) and keywords (tags) associated with pictures. Our system helps two
types of users: the user authoring the physical label and the final user who
retrieves up-to-date information scanning the label with his/her camera phone.
Preliminary results for a given seed word (the tag Napoli) on 3000 photographs
are presented, together with some ad-hoc weighting factors that help in finding
significant pictures (representing places) that can be associated with a specific
area. Keywords: clustering, collaborative tagging, geo-referenced photographs | |||
| Time2Hide: spatial searches and clutter alleviation for the desktop | | BIBAK | Full-Text | 355-358 | |
| George Lepouras; Aggelos Papatriantafyllou; Akrivi Katifori; Alan Dix | |||
| With information abundance the user's desktop is often cluttered with files
and folders. Existing tools partially address the clutter problem. Time2Hide
enhances desktop functionality by allowing icons that are not used for a long
time to gradually fade and merge with the background. This aims to alleviate
the problem of icon clutter. Users can also perform spatial searches, defining
areas of the desktop they wish to search for icons; reveal one or more
hidden files; or go back in time, animating the desktop and its changes. With
Time2Hide users can still use the desktop as a place for storing files and
folders, without worrying about the possible clutter and without being afraid
that the files might be moved to an unknown location. The new desktop has been
implemented and evaluated. Evaluation results reveal that such an enhanced
desktop can significantly support users, and suggest directions for further
improvements. Keywords: desktop tool, icon clutter, personal information management, spatial search | |||
| Users' quest for an optimized representation of a multi-device space | | BIBAK | Full-Text | 359-362 | |
| Dzmitry Aliakseyeu; Andrés Lucero; Jean-Bernard Martens | |||
| A plethora of reaching techniques, intended for moving objects between
locations distant to the user, have recently been proposed and tested. One of
the most promising techniques is the Radar View. Until now, the focus has
been mostly on how a user can interact efficiently with a given radar map, not
on how these maps are created and maintained. It is for instance unclear
whether or not users would appreciate the possibility of adapting such radar
maps to particular tasks and personal preferences. In this paper we address
this question by means of a prolonged user study with the Sketch Radar
prototype. The study demonstrates that users do indeed modify the default maps
in order to improve interactions for particular tasks. It also provides
insights into how and why the default physical map is modified. Keywords: interaction techniques, large-display systems, map, multi-display systems,
reaching, spatial | |||
| Multiview user interfaces with an automultiscopic display | | BIBAK | Full-Text | 363-366 | |
| Wojciech Matusik; Clifton Forlines; Hanspeter Pfister | |||
| Automultiscopic displays show 3D stereoscopic images that can be viewed from
any viewpoint without special glasses. These displays are becoming widely
available and affordable. In this paper, we describe how an automultiscopic
display, built for viewing 3D images, can be repurposed to display 2D
interfaces that appear differently from different points-of-view. For
single-user applications, point-of-view becomes a means of input and a user is
able to reveal different views of an application by simply moving their head
left and right. For multi-user applications, a single-display application can
show each member of the group a different variation of the interface. We
outline three types of multi-view interfaces and illustrate each with example
applications. Keywords: automultiscopic, display, multi-view | |||
| Adapting a single-user, single-display molecular visualization application for use in a multi-user, multi-display environment | | BIBA | Full-Text | 367-371 | |
| Clifton Forlines; Ryan Lilien | |||
| In this paper, we discuss the adaptation of an open-source single-user, single-display molecular visualization application for use in a multi-display, multi-user environment. Jmol, a popular, open-source Java applet for viewing PDB files, is modified in such a manner that allows synchronized coordinated views of the same molecule to be displayed in a multi-display workspace. Each display in the workspace is driven by a separate PC, and coordinated views are achieved through the passing of RasMol script commands over the network. The environment includes a tabletop display capable of sensing touch-input, two large vertical displays, and a TabletPC. The presentation of large molecules is adapted to best take advantage of the different qualities of each display, and a set of interaction techniques that allow groups working in this environment to better collaborate are also presented. | |||
| As time goes by: integrated visualization and analysis of dynamic networks | | BIBAK | Full-Text | 372-375 | |
| Mathias Pohl; Florian Reitz; Peter Birke | |||
| The dynamics of networks have become more and more important in all research
fields that depend on network analysis. Standard network visualization and
analysis tools usually do not offer a suitable interface to network dynamics.
These tools do not incorporate specialized visualization algorithms for dynamic
networks but only algorithms for static networks. This results in layouts that
bother the user with too many layout changes, which makes them very hard to
work with.
To handle dynamic networks the D Keywords: dynamic network visualization, dynamics of networks, human-centered visual
analytics, multiple and integrated views | |||
| Revealing uncertainty for information visualization | | BIBAK | Full-Text | 376-379 | |
| Meredith Skeels; Bongshin Lee; Greg Smith; George Robertson | |||
| Uncertainty in data occurs in domains ranging from natural science to
medicine to computer science. By developing ways to include uncertainty in our
information visualizations we can provide more accurate visual depictions of
critical datasets. One hindrance to visualizing uncertainty is that we must
first understand what uncertainty is and how it is expressed by users. We
reviewed existing work from several domains on uncertainty and conducted
qualitative interviews with 18 people from diverse domains who self-identified
as working with uncertainty. We created a classification of uncertainty that
represents commonalities across domains and that will be useful for developing
appropriate visualizations of uncertainty. Keywords: information visualization, qualitative research, uncertainty visualization,
user-centered design | |||
| Perceptual usability: predicting changes in visual interfaces & designs due to visual acuity differences | | BIBAK | Full-Text | 380-383 | |
| Mike Bennett; Aaron Quigley | |||
| When designing interfaces and visualizations how does a human or automatic
visual interface designer know how easy or hard it will be for viewers to see
the interface? In this paper we present a perceptual usability measure of how
easy or hard visual designs are to see when viewed over different distances.
The measure predicts the relative perceivability of sub-parts of a visual
design by using simulations of human visual acuity coupled with an information
theoretic measure. We present results of the perceptual measure predicting the
perceivability of optometrists' eye charts, a webpage and a small network graph. Keywords: evaluation, methodology, methods, screen design | |||
| Illustrative halos in information visualization | | BIBAK | Full-Text | 384-387 | |
| Martin Luboschik; Heidrun Schumann | |||
| In many interactive scenarios, the fast recognition and localization of
crucial information is very important to effectively perform a task. However,
in information visualization the visualization of permanently growing large
data volumes often leads to a simultaneously growing amount of presented
graphical primitives. Besides the fundamental problem of limited screen space,
the effective localization of single or multiple items of interest by a user
becomes more and more difficult. Therefore, different approaches have been
developed to emphasize those items -- mainly by manipulating the item's size, by
suppressing the whole context, or by adding supplemental visual elements (e.g.,
contours, arrows). This paper introduces the well-known illustrative technique
of haloing to information visualization to address the localization problem.
Applying halos emphasizes items without a manipulation of size or an
introduction of additional visual elements and reduces the context suppression
to a locally defined region. This paper also presents the results of a first
user study that gives an impression of the usefulness of halos for faster
recognition. Keywords: halos, illustrative rendering, illustrative visualization, information
visualization | |||
| Shadow tracking on multi-touch tables | | BIBAK | Full-Text | 388-391 | |
| Florian Echtler; Manuel Huber; Gudrun Klinker | |||
| Multi-touch interfaces have been a focus of research in recent years,
resulting in development of various innovative UI concepts. Support for
existing WIMP interfaces, however, should not be overlooked. Although several
approaches exist, there is still room for improvement, particularly regarding
implementation of the "hover" state, commonly used in mouse-based interfaces.
In this paper, we present a multi-touch system which is designed to address this problem. A multi-touch table based on FTIR (frustrated total internal reflection) is extended with a ceiling-mounted light source to create shadows of hands and arms. By tracking these shadows with the rear-mounted camera which is already present in the FTIR setup, users can control multiple cursors without touching the table and trigger a "click" event by tapping the surface with any finger of the corresponding hand. An informal evaluation with 15 subjects found an improvement in accuracy when compared to an unaugmented touch screen. Keywords: FTIR, direct-touch, mouse emulation, multi-touch, shadow tracking, tabletop
interfaces | |||
| LocaweRoute: an advanced route history visualization for mobile devices | | BIBAK | Full-Text | 392-395 | |
| Taina M. Lehtimäki; Timo Partala; Mika Luimula; Pertti Verronen | |||
| In this research, we addressed the problem of visualizing route histories on
a mobile device. We developed a solution, which combines the visualization of
three route history parameters: speed, direction, and location. The
visualization was tested in a laboratory evaluation with 12 subjects. The
results showed that by using the visualization the subjects were able to
estimate actual driving speeds accurately. The subjects also evaluated that the
visualization supported their knowledge of the speed, location, and direction
quite well. The results suggest that the presented visualization is an
improvement over currently used route history visualizations. Keywords: mobile device, route history, visualization | |||
| The effect of animated transitions in zooming interfaces | | BIBAK | Full-Text | 396-399 | |
| Maruthappan Shanmugasundaram; Pourang Irani | |||
| Zooming interfaces use animated transitions to smoothly shift the user's view
between different scales of the workspace. Animated transitions assist in
preserving the spatial relationships between views. However, they also increase
the overall interaction time. To identify whether zooming interfaces should
take advantage of animations, we carried out one experiment that explores the
effects of smooth transitions on a spatial task. With metro maps, users were
asked to identify the number of metro stops between different subway lines with
and without animated zoom-in/out transitions. The results of the experiment
show that animated transitions can have significant benefits on user
performance -- participants in the animation conditions were twice as fast and
overall made fewer errors than in the non-animated conditions. In addition,
short animations were found to be as effective as long ones, suggesting that
some of the costs of animations can be avoided. Users also preferred
interacting with animated transitions rather than without them. Our study gives empirical
evidence on the benefits of animated transitions in zooming interfaces. Keywords: animation, information visualization, zooming interfaces | |||
| Visual design of service deployment in complex physical environments | | BIBAK | Full-Text | 400-403 | |
| Augusto Celentano; Fabio Pittarello | |||
| In this paper we discuss the problem of deploying appliances for interactive
services in complex physical environments using a knowledge-based approach to
define the relations between the environment and the services, and a visual
interface to check the associated constraints, in order to design a solution
satisfactory for the user. Keywords: X3D, navigation, virtual and augmented reality, visual interaction | |||
| Visualizing program similarity in the Ac plagiarism detection system | | BIBAK | Full-Text | 404-407 | |
| Manuel Freire | |||
| Programming assignments are easy to plagiarize in such a way as to foil
casual reading by graders. Graders can resort to automatic plagiarism detection
systems, which can generate a "distance" matrix that covers all possible
pairings. Most plagiarism detection programs then present this information as a
simple ranked list, losing valuable information in the process.
The Ac system uses the whole distance matrix to provide graders with multiple linked visualizations. The graph representation can be used to explore clusters of highly related submissions at different filtering levels. The histogram representation presents compact "individual" histograms for each submission, complementing the graph representation in aiding graders during analysis. Although Ac's visualizations were developed with plagiarism detection in mind, they should also prove effective for visualizing distance matrices from other domains, as demonstrated by preliminary experiments. Keywords: software plagiarism, visualization | |||
| Visual representation of web design patterns for end-users | | BIBAK | Full-Text | 408-411 | |
| Paloma Díaz; Ignacio Aedo; Mary Beth Rosson | |||
| In this paper, we discuss the use of visual representations of web design
patterns to help end-users and casual developers to identify the patterns they
can apply in a specific project. The main goal is to promote design knowledge
reuse by facilitating the identification of the right patterns, taking into
account that these users have little or no knowledge about web design, and
certainly not about design patterns, and that each pattern might include some
trade-offs users should consider to make more rational decisions. Keywords: design patterns, goal-oriented design, web design | |||
| Memoria mobile: sharing pictures of a point of interest | | BIBAK | Full-Text | 412-415 | |
| Rui Jesus; Ricardo Dias; Rute Frias; Arnaldo J. Abrantes; Nuno Correia | |||
| This paper presents the Memoria mobile interface, an application to share
and access personal memories when visiting historical sites, museums or other
points of interest. With the proposed interface people can navigate the memory
space of the place they are visiting and, using their camera-phones or Personal
Digital Assistants (PDA), view what has interested them or other people in past
occasions. The system consists of a retrieval engine and a mobile user
interface that allows capture and automatic annotation of images. Experimental
results are presented to show the performance of the retrieval mechanisms and
the usability of the interface. Keywords: mobile user interfaces, multimedia information retrieval, personal memories | |||
| An eye tracking approach to image search activities using RSVP display techniques | | BIBAK | Full-Text | 416-420 | |
| Simone Corsato; Mauro Mosconi; Marco Porta | |||
| Rapid Serial Visual Presentation (RSVP) is now a well-established category
of image display methods. In this paper we compare four RSVP techniques when
applied to very large collections of images (thousands), in order to extract
the highest quantity of items that match a textual description. We report on
experiments with more than 30 testers, in which we exploit an eye tracking
system to perform the selection of images, thus obtaining quantitative and
qualitative data about the efficacy of each presentation mode with respect to
this task. Our study aims at confirming the feasibility and convenience of an
eye tracking approach for effective image selection in RSVP techniques,
compared to the mouse-click "traditional" selection method, in view of a future
where eye trackers might become nearly as common as LCD displays are now. We
propose an interpretation of the experimental data and provide short
considerations on technical issues. Keywords: eye tracking, image browsing, image database, image presentation, rapid
serial visual presentation | |||
| Advanced interfaces for music enjoyment | | BIBAK | Full-Text | 421-424 | |
| Adriano Baratè; Luca A. Ludovico | |||
| Music enjoyment in a digital format is more than listening to a binary file.
An overall music description is made of many interdependent aspects that
should be taken into account in an integrated and synchronized way. In this
article, a proposal for an advanced interface to enjoy music in all its aspects
will be described. The encoding language that allows the design and
implementation of such an interface is the IEEE P1599 standard, an XML-based
format known as MX. Keywords: MX, XML, multimedia, music, synchronization | |||
| Funky wall: presenting mood boards using gesture, speech and visuals | | BIBAK | Full-Text | 425-428 | |
| Andrés Lucero; Dzmitry Aliakseyeu; Jean-Bernard Martens | |||
| In our studies aimed at understanding design practice we have identified the
creation of mood boards as a relevant task for designers. In this paper we
introduce an interactive wall-mounted display system that supports the
presentation of mood boards. The system allows designers to easily record their
mood board presentations while capturing the richness of their individual
presentation skills and style. Designers and clients can play back, explore and
comment on different aspects of the presentation using an intuitive and
flexible interaction based on hand gestures, thus supporting two-way
communication. The system records the presentation and organizes it into three
information layers (i.e. gesture, sound and visuals), which are first used to
segment the presentation into meaningful parts, and later for playback.
Exploratory evaluations show that designers are able to use the system with no
prior training, and see a practical use of the proposed system in their design
studios. Keywords: gesture-based interaction, wall projection displays | |||
| Toward a natural interface to virtual medical imaging environments | | BIBAK | Full-Text | 429-432 | |
| Luigi Gallo; Giuseppe De Pietro; Antonio Coronato; Ivana Marra | |||
| Immersive Virtual Reality environments are suitable to support activities
related to medicine and medical practice. The immersive visualization of
information-rich 3D objects, coming from patient scanned data, provides
clinicians with a clear perception of depth and shapes. However, to benefit
from immersive visualization in medical imaging, where inspection and
manipulation of volumetric data are fundamental tasks, medical experts have to
be able to act in the virtual environment by exploiting their real life
abilities. In order to reach this goal, it is necessary to take into account
user skills and needs so as to design and implement usable and accessible
human-computer interaction interfaces. In this paper we present a natural
interface for a semi-immersive virtual environment. This interface is based on
an off-the-shelf handheld wireless device and a speech recognition component,
and provides clinicians with intuitive interaction modes for inspecting
volumetric medical data. Keywords: 3D interaction, 3D user interface, VTK, medical imaging, virtual reality,
wireless | |||
| Music selection using the PartyVote democratic jukebox | | BIBAK | Full-Text | 433-436 | |
| David Sprague; Fuqu Wu; Melanie Tory | |||
| PartyVote is a democratic music jukebox designed to give all participants an
equal influence on the music played at social gatherings or parties. PartyVote
is designed to provide appropriate music in established social groups with
minimal user interventions and no pre-existing user profiles. The visualization
uses dimensionality reduction to show song similarity and overlays information
about how votes affect the music played. Visualizing voting decisions allows
users to link music selections with individuals, providing social awareness.
Traditional group norms can subsequently be leveraged to maintain fair system
use and empower users. Keywords: CSCW, entertainment, group dynamics, information visualization, music map,
music systems, social interaction, voting | |||
| A haptic rendering engine of web pages for blind users | | BIBAK | Full-Text | 437-440 | |
| Nikolaos Kaklanis; Juan González Calleros; Jean Vanderdonckt; Dimitrios Tzovaras | |||
| To overcome the shortcomings posed by audio rendering of web pages for blind
users, this paper presents an interaction technique in which web pages are
parsed so as to automatically generate a virtual reality scene that is
augmented with a haptic feedback. All elements of a web page are transformed
into a corresponding "hapget" (haptically-enhanced widget), a three dimensional
widget exhibiting a behavior that is consistent with its web counterpart and
having a haptic extension governed by usability guidelines for haptic
interaction. A set of implemented hapgets is described and used in some
examples. All hapgets introduce an extension to UsiXML, an XML-compliant User
Interface Description Language that fosters model-driven engineering of user
interfaces. In this way, it is possible to render any UsiXML-compatible user
interface thanks to the interaction technique described, and not only web
pages. Keywords: haptic interaction, haptically enhanced widget, user interface extensible
markup language, virtual reality | |||
| Realizing the hidden: interactive visualization and analysis of large volumes of structured data | | BIBA | Full-Text | 441-444 | |
| Olaf Noppens; Thorsten Liebig | |||
| An emerging trend in Web computing aims at collecting and integrating distributed data. For instance, various communities recently have built large repositories of structured and interlinked data sets from different Web sources. However, to date there is virtually no support for navigating, visualising or even analysing structured data sets of this size appropriately. This paper describes novel rendering techniques enabling a new level of visual analytics combined with interactive exploration principles. The underlying visualisation rationale is driven by the principle of providing detail information with respect to qualitative as well as quantitative aspects on user demand while offering an overview at any time. By means of our prototypical implementation and two real-world data sets we show how to answer several data-specific tasks by interactive visual exploration. | |||
| A wearable Malossi alphabet interface for deafblind people | | BIBAK | Full-Text | 445-448 | |
| Nicholas Caporusso | |||
| Deafblind people have a severe degree of combined visual and auditory
impairment resulting in problems with communication, (access to) information
and mobility. Moreover, in order to interact with other people, most of them
need the constant presence of a caregiver who plays the role of an interpreter
with an external world organized for hearing and sighted people. As a result,
they usually live behind an invisible wall of silence, in a unique and
inexplicable condition of isolation.
| In this paper, we describe DB-HAND, an assistive hardware/software system that enables users to autonomously interact with the environment, to establish social relationships and to gain access to information sources without an assistant. DB-HAND consists of an input/output wearable peripheral (a glove equipped with sensors and actuators) that acts as a natural interface since it enables communication using a language that is easily learned by deafblind people: the Malossi method. Interaction with DB-HAND is managed by a software environment, whose purpose is to translate text into sequences of tactile stimuli (and vice versa), to execute commands and to deliver messages to other users. It also provides multi-modal feedback on several standard output devices to support interaction with hearing and sighted people. Keywords: deafblindness, multimodal feedback, tactile alphabet, ubiquitous computing | |||
| SyncDecor: communication appliances for virtual cohabitation | | BIBAK | Full-Text | 449-453 | |
| Hitomi Tsujita; Koji Tsukada; Itiro Siio | |||
Despite the fact that various means of communication such as mobile phones,
instant messaging and e-mail are now widespread, many romantic couples
separated by long distances worry about the health of their relationships.
Likewise, these couples have a greater desire to feel a sense of connection and
synchronicity with their partners than traditional inter-family bonds. In many
prior research projects, unique devices were developed that required a level of
interpretation which did not directly affect one's daily routine -- and
therefore were more casual in nature. However, this paper concentrates on the
use of common, day-to-day items and modifying them to communicate everyday
actions while maintaining a sustained and natural usage pattern for strongly
paired romantic couples. For this purpose, we propose the "SyncDecor" system,
which pairs traditional appliances and allows them to remotely synchronize and
provide awareness of their partners -- thereby creating a
virtual "living together" feeling. We present evidence from a 3-month-long
field study in which traditional appliances yielded significantly more
natural, varied and sustained usage patterns, ultimately enhancing
communication between the couples. Keywords: awareness, communication, synchronization | |||
| Toward haptic mathematics: why and how | | BIBAK | Full-Text | 454-457 | |
| C. Bernareggi; A. Marcante; P. Mussio; L. Parasiliti Provenza; Sara Vanzi | |||
| Understanding a mathematical concept, expressed in a written form, requires
the exploration of the whole symbolic expression to recognize its component
significant patterns as well as its overall structure. This exploration is
difficult for visually impaired people whether the symbolic expression is
materialized as an oral description or a Braille expression. The paper
introduces the notion of Haptic Mathematics as a digital medium of thought and
communication of mathematical concepts that adopts the nomenclature and
language of Mathematics and makes its expressions perceptible as sets of haptic
signals. As a first step toward Haptic Mathematics, the paper presents a system
adopting an audio-haptic interaction whose goal is to enable visually impaired or
blind people to reason on graph structures and communicate their reasoning with
sighted people. The paper describes a first system prototype and some
preliminary usability results aimed at evaluating the effectiveness of the
proposal. Keywords: blind users, haptic, multimodal interactive systems | |||
| The need for an interaction cost model in adaptive interfaces | | BIBAK | Full-Text | 458-461 | |
| Bowen Hui; Sean Gustafson; Pourang Irani; Craig Boutilier | |||
| The development of intelligent assistants has largely benefited from the
adoption of decision-theoretic (DT) approaches that enable an agent to reason
and account for the uncertain nature of user behaviour in a complex software
domain. At the same time, most intelligent assistants fail to consider the
numerous factors relevant from a human-computer interaction perspective. While
DT approaches offer a sound foundation for designing intelligent agents, these
systems need to be equipped with an interaction cost model in order to reason
about how (static or adaptive) interaction is perceived by different
users. In a DT framework, we formalize four common interaction factors --
information processing, savings, visual occlusion, and bloat. We empirically
derive models for bloat and occlusion based on the results of two user
experiments. These factors are incorporated in a simulated help assistant where
decisions are modeled as a Markov decision process. Our simulation results
reveal that our model can easily adapt to a wide range of user types with
varying preferences. Keywords: bloat, information processing, interaction models and techniques, perceived
savings, user interaction studies, visual occlusion | |||
| Theia: open environment for multispectral image analysis | | BIBAK | Full-Text | 462-465 | |
| Vito Roberto; Massimiliano Hofer | |||
| Preliminary results of Theia, a software system for multispectral image
visualization and analysis, are presented. A new approach is adopted, based on
modern design techniques and better tuned to the recent advancements in
hardware. A careful implementation in the C++ language addresses the issues of
time efficiency, openness to personalizations and portability by exploiting the
advances of Open Source technologies. Experimental tests on multispectral
images have given promising results towards the use of the system as a dynamic,
interactive interface to massive data visualization, mining and processing. Keywords: hyperspectral, image processing, image processing environment, interactive
interfaces, multispectral, multispectral analysis, object oriented design, open
source, visualization | |||
| The multi-touch SoundScape renderer | | BIBA | Full-Text | 466-469 | |
| Katharina Bredies; Nick Alexander Mann; Jens Ahrens; Matthias Geier; Sascha Spors; Michael Nischt | |||
| In this paper, we introduce a direct manipulation tabletop multi-touch user interface for spatial audio scenes. Although spatial audio rendering has existed for several decades, mass market applications have not been developed and the user interfaces still address a small group of expert users. We implemented an easy-to-use direct manipulation interface for multiple users, taking full advantage of the object-based audio rendering mode. Two versions of the user interface have been developed to explore variations in information architecture and will be evaluated in user tests. | |||
| Interactive visual interfaces for evacuation planning | | BIBAK | Full-Text | 472-473 | |
| Gennady Andrienko; Natalia Andrienko; Ulrich Bartling | |||
| To support planning of massive transportations under time-critical
conditions, in particular, evacuation of people from a disaster-affected area,
we have developed a software module for automated generation of transportation
schedules and a suite of visual analytics tools that enable the verification of
a schedule by a human expert. We combine computational, visual, and interactive
techniques to help the user to deal with large and complex data involving
geographical space, time, and heterogeneous objects. Keywords: coordinated multiple views, geovisualization, task-centered visualization
design, transportation planning, visual analytics | |||
| Supporting visual exploration of massive movement data | | BIBAK | Full-Text | 474-475 | |
| Natalia Andrienko; Gennady Andrienko | |||
| To make sense from large amounts of movement data (sequences of positions of
moving objects), a human analyst needs interactive visual displays enhanced
with database operations and methods of computational analysis. We present a
toolkit for analysis of movement data that enables a synergistic use of the
three types of techniques. Keywords: aggregation, cluster analysis, exploratory data analysis, interactive
displays, movement behavior, movement data, movement patterns, trajectory,
visual analytics, visualization | |||
| Scenique: a multimodal image retrieval interface | | BIBAK | Full-Text | 476-477 | |
| Ilaria Bartolini; Paolo Ciaccia | |||
| Searching for images by using low-level visual features, such as color and
texture, is known to be a powerful, yet imprecise, retrieval paradigm. The same
is true if search relies only on keywords (or tags), either derived from the
image context or user-provided annotations. In this demo we present Scenique, a
multimodal image retrieval system that provides the user with two basic
facilities: 1) an image annotator, that is able to predict keywords for new
(i.e., unlabelled) images, and 2) an integrated query facility that allows the
user to search for images using both visual features and tags, possibly
organized in semantic dimensions. We demonstrate the accuracy of image
annotation and the improved precision that Scenique obtains with respect to
querying with either only features or keywords. Keywords: multi-structural databases, semantic dimensions, visual features | |||
| Multimodal user interfaces for smart environments: the multi-access service platform | | BIBAK | Full-Text | 478-479 | |
| Marco Blumendorf; Sebastian Feuerstack; Sahin Albayrak | |||
| User interface modeling is a well accepted approach to handle increasing
user interface complexity. The approach presented in this paper utilizes user
interface models at runtime to provide a basis for user interface distribution
and synchronization. Task and domain models synchronize workflow and dynamic
content across devices and modalities. A cooking assistant serves as example
application to demonstrate multimodality and distribution. Additionally a
debugger allows the inspection of the underlying user interface models at
runtime. Keywords: human-computer interaction, interface design, model-based user interfaces,
multimodal interaction, runtime interpretation, smart home environments,
ubiquitous computing, usability | |||
| Interactive shape specification for pattern search in time series | | BIBAK | Full-Text | 480-481 | |
| Paolo Buono; Adalberto Lafcadio Simeone | |||
| Time series analysis is a process whose goal is to understand phenomena. The
analysis often involves the search for a specific pattern. Finding patterns is
one of the fundamental steps for time series observation or forecasting. The
way in which users are able to specify a pattern to use for querying the time
series database is still a challenge. We hereby propose an enhancement of the
SearchBox, a widget used in TimeSearcher, a well-known tool developed at the
University of Maryland that allows users to find patterns similar to the one of
interest. Keywords: information visualization, interactive system, interactive visualization,
visual querying | |||
| A system for dynamic 3D visualisation of speech recognition paths | | BIBAK | Full-Text | 482-483 | |
| Saturnino Luz; Masood Masoodian; Bill Rogers; Bo Zhang | |||
| This paper presents an interactive visualisation system that assists users
of semi-automatic speech transcription systems to assess alternative
recognition results in real time and provide feedback to the speech recognition
back-end in an intuitive manner. This prototype uses the OpenGL libraries to
implement an animated 3D visual representation of alternative recognition
results generated by the Sphinx automatic speech recognition system. It is
expected that displaying alternatives dynamically will facilitate early
detection of recognition errors and encourage user interaction, which in turn
can be used to improve future recognition performance. Keywords: animated interfaces, automatic speech transcription, error correction,
interactive visualisation | |||
| Perspective change: a system for switching between on-screen views by closing one eye | | BIBAK | Full-Text | 484-485 | |
| Fabian Hemmert; Danijela Djokic; Reto Wettach | |||
| This project explores the change of on-screen views through single-sided eye
closure. A prototype was developed, and three different applications are presented:
activating a sniper scope in a 3D shooter game, zooming out to an overview
perspective of a web page, and filtering out icons on a cluttered desktop.
Initial user testing results are presented. Keywords: eye, eye closure, eyelid, perspective change, prototype, screen interface | |||
| Improving citizens' interactions in an e-deliberation environment | | BIBAK | Full-Text | 486-487 | |
| Fiorella De Cindio; Cristian Peraboni; Leonardo Sonnante | |||
| In an e-deliberation environment it is particularly important to conceive
tools and web interfaces able to facilitate social online interactions between
citizens and public officers. In this paper we present some choices made in the
development of an e-deliberation platform. In particular we will focus on the
use of maps to facilitate citizens' interaction based on geo-localized
discussions, and on the design of an ad hoc interface for online discussion to
increase citizens' participation. Keywords: e-deliberation, e-participation, map-based interaction, web interfaces,
web-based social interaction | |||
| "Isn't this archaeological site exciting!": a mobile system enhancing school trips | | BIBAK | Full-Text | 488-489 | |
| Carmelo Ardito; Rosa Lanzilotti | |||
| Explore! is an m-learning system that aims to improve young visitors'
experience of historical sites. It exploits the imaging and multimedia
capabilities of latest-generation cell phones, creating electronic games
that support learning of ancient history during a visit to historical sites.
Explore! consists of two main components: 1) the Game Application running on
cellular phones, to be used during the game and 2) the Master Application
running on a notebook, used by the game master (i.e. a teacher) to perform a
reflection phase, which follows the game. Since the Game Application has been
described in previous papers, in this work we mainly illustrate the Master
Application. Keywords: learning game, mobile system | |||
| MedioVis: visual information seeking in digital libraries | | BIBAK | Full-Text | 490-491 | |
| Mathias Heilig; Mischa Demarmels; Werner A. König; Jens Gerken; Sebastian Rexhausen; Hans-Christian Jetter; Harald Reiterer | |||
| MedioVis is a visual information seeking system that aims to support users'
natural seeking behavior, particularly in complex information spaces. To
achieve this goal we introduce multiple complementary visualization techniques
together with an easy-to-use and consistent interaction concept. Over the last
four years, MedioVis was developed in the context of digital libraries
following a user-centered design process. The focus of this paper is the
presentation of our interaction model and further to give an overview of the
applied visualization techniques. Keywords: coordinated views, interaction design, semantic zooming | |||
| End-user visualizations | | BIBAK | Full-Text | 492-493 | |
| Alexander Repenning; Andri Ioannidou | |||
| Computer visualization has advanced dramatically over the last few years,
partially driven by the exploding video game market. 3D hardware acceleration
has reached the point where even low-power handheld computers can render and
animate complex 3D graphics efficiently. Unfortunately, end-user computing does
not yet provide the necessary tools and conceptual frameworks to let end-user
developers access these technologies and build their own interactive 2D and 3D
applications such as rich visualizations, animations and simulations. In this
paper, we demonstrate the Agent Warp Engine (AWE), a formula-based
shape-warping framework for end-user visualization. Keywords: 3D graphics, end-user programming, real-time image warping | |||
| Agrafo: a visual interface for grouping and browsing digital photos | | BIBAK | Full-Text | 494-495 | |
| João Mota; Manuel J. Fonseca; Daniel Gonçalves; Joaquim A. Jorge | |||
| With the growing popularity of digital cameras, the organization, browsing,
management, and grouping of photos have become a problem for every photographer
(professional or amateur), because their collections easily reach the order
of thousands. Here, we present a system to automate these processes, which
relies on photo information such as semantic features (extracted from
content), meta-information, and low-level features. Keywords: image analysis, image grouping, user interface | |||