| Special issue on selected papers from HCC 2003 | | BIB | Full-Text | 1-2 | |
| Philip Cox; John Hosking | |||
| Interactive, visual fault localization support for end-user programmers | | BIBAK | Full-Text | 3-40 | |
| Joseph R. Ruthruff; Shrinu Prabhakararao; James Reichwein; Curtis Cook; Eugene Creswick; Margaret Burnett | |||
| End-user programmers are writing an unprecedented number of programs,
primarily using languages and environments that incorporate a number of
interactive and visual programming techniques. To help these users debug these
programs, we have developed an entirely visual, interactive approach to fault
localization. This paper presents the approach. We also present the results of
a think-aloud study that examined interactive, human-centric issues that arise
in end-user debugging using a fault localization approach. Our results provide
insights into the contributions such approaches can make to the end-user
debugging process. Keywords: End-user programming; Visual fault localization; Debugging; End-user software engineering; Testing; Slicing; Form-based visual programs | |||
| A framework and methodology for studying the causes of software errors in programming systems | | BIBA | Full-Text | 41-84 | |
| Andrew J. Ko; Brad A. Myers | |||
| An essential aspect of programmers' work is the correctness of their code. This makes current HCI techniques ill-suited to analyze and design the programming systems that programmers use every day, since these techniques focus more on problems with learnability and efficiency of use, and less on error-proneness. We propose a framework and methodology that focuses specifically on errors by supporting the description and identification of the causes of software errors in terms of chains of cognitive breakdowns. The framework is based on both old and new studies of programming, as well as general research on the mechanisms of human error. Our experiences using the framework and methodology to study the Alice programming system have directly inspired the design of several new programming tools and interfaces. This includes the Whyline debugging interface, which we have shown to reduce debugging time by a factor of 8 and help programmers get 40% further through their tasks. We discuss the framework's and methodology's implications for programming system design, software engineering, and the psychology of programming. | |||
| Adding speech recognition support to UML tools | | BIBAK | Full-Text | 85-118 | |
| Samuel Lahtinen; Jari Peltonen | |||
| In the Unified Modeling Language (UML), models are constructed graphically
by drawing diagrams. However, it is not always easy to manipulate diagrams
using today's CASE tools. Typically, functionality and information are
hidden behind complex menu or dialog hierarchies, diminishing the usability of
the tools. Speech recognition can be used to improve the usability of an
existing tool as a complementary user interface that allows simultaneous use
of other interfaces.
In this paper we present an approach to developing speech interfaces for UML tools. We also show that UML is a favorable target for speech recognition and that speech recognition is applicable and mature enough to be used to enhance the use of UML tools. To support our claims, we present a spoken language created for editing UML models, a prototype of a speech control system integrated with Rational Rose, and the results of the early user tests on the system and the language. Keywords: Speech recognition; Multimodal interaction; UML modeling; CASE-tool; Visual language | |||
| The JOpera visual composition language | | BIBAK | Full-Text | 119-152 | |
| Cesare Pautasso; Gustavo Alonso | |||
| Composing Web services into a coherent application can be a tedious and
error-prone task when using traditional textual scripting languages or emerging
XML-based approaches. As an alternative, complex interaction patterns and data
exchanges between different Web services can be effectively modeled using a
visual language. In this paper, we discuss the requirements of such an
application scenario and we fully describe the JOpera Visual Composition
Language. An extensive set of visual editing tools, a compiler and a debugger
for the language have been implemented as part of the JOpera system with the
goal of providing a true visual environment for Web service composition with
usability features emphasizing rapid development and visual scalability. Keywords: Visual composition languages; Web services; JOpera; Data flow | |||
| Using end-user visualization environments to mediate conversations: a 'Communicative Dimensions' framework | | BIBA | Full-Text | 153-185 | |
| Christopher D. Hundhausen | |||
| An end-user visualization environment aims to empower end users to create graphical representations of phenomena within a scientific domain of interest. Research into end-user visualization environments has traditionally focused on developing the human-computer interaction necessary to enable the quick and easy construction of domain-specific visualizations. That traditional focus has left open the question of how such environments might support human-human interaction. Especially in situations in which end-user visualization environments are enlisted to facilitate learning and to build design consensus, we hypothesize that a key benefit is their ability to mediate conversations about a scientific domain of interest. In what ways might end-user visualization environments support human communication, and what design features make them well-suited to do so? Drawing both on a theory of communication, and on empirical studies in which end-user environments were enlisted to support human communication, we propose a provisional framework of six 'Communicative Dimensions' of end-user visualization environments: programming salience, provisionality, story content, modifiability, controllability, and referencability. To illustrate the value of these dimensions as an analytic and design tool, we use them to map a sample of publicly available end-user visualization environments into the 'Communicative' design space. By characterizing those aspects of end-user visualization environments that impact social interaction, our framework provides an important extension to Green and Petre's (J. Visual Lang. Comput. 7 (1996) 131-174) 'Cognitive Dimensions'. | |||
| A framework for visual notation exchange | | BIBAK | Full-Text | 187-212 | |
| Hermann Stoeckle; John Grundy; John Hosking | |||
| A wide range of software tools provide software engineers with different
views (static and dynamic) of software systems. Much recent work has focused on
software information model exchange. However, most software tools lack support
for exchange of information about visualisation notations (both definitions of
notations and instances of them). Some basic converters have been developed to
support the exchange of notation information between software tools but almost
all are custom-built to support specific notations and difficult to maintain.
We describe the development of several notation exchange converters for tools
supporting software architecture notations. This has led to the development of
a unified converter generator framework for notation exchange. Keywords: Visual notation exchange; Notation converters; Tool integration; Visual language representation | |||
| Visual design and programming for Web applications | | BIBAK | Full-Text | 213-230 | |
| Takao Shimomura | |||
| With the development of the information society, it has become necessary to
release software that satisfies users at an early stage. Therefore, it has become important
to develop the software quickly so that the users can try it, and give the
developers feedback. Recently, instead of the conventional waterfall-model
development, new development techniques such as aspect-oriented programming
have been researched. The software development techniques that make use of
graphics have also been researched in a variety of fields. This paper presents
the image-oriented programming method that uses graphics as a tool of designing
software, and enables users to easily develop software according to their image
of what they want to develop. It also describes the BioPro system that
implements this method for Web applications. The BioPro system has the
following features: (1) users can develop programs according to their image,
(2) they can easily verify the completeness of components that make up the
program and the consistency of those relationships, and (3) they can easily
confirm what they have developed, regardless of which stage of development they
are currently at. Keywords: Visual programming; Web applications; Program generator; Prototyping | |||
| A universal fast graphical user interface building tool for arbitrary interpreters | | BIBAK | Full-Text | 231-244 | |
| L. Pere; M. Koniorczyk | |||
| We consider the issue of implementing graphical user interfaces (GUIs): we
present an easy-to-use and fast GUI building tool, specially designed to be
used with interpreters. It supports a variety of communication methods and
interaction models, therefore being able to collaborate with a huge diversity
of interpreters in a natural way, in POSIX-compliant (or similar) environments.
Thus it enables the programmer to easily create a GUI, no matter what kind of
language or model the actual interpreter implements. Event-driven programs in
UNIX shells and graphical user interfaces in a data-oriented language are
presented as example applications. Keywords: User interface; GUI; Interpreters; Shells; Event-driven programming | |||
| Ontology-driven map generalization | | BIBAK | Full-Text | 245-267 | |
| Lars Kulik; Matt Duckham; Max Egenhofer | |||
| Different users of geospatial information have different requirements of
that information. Matching information to users' requirements demands an
understanding of the ontological aspects of geospatial data. In this paper, we
present an ontology-driven map generalization algorithm, called DMin, that can
be tailored to particular users and users' tasks. The level of detail in a
generated map is automatically adapted by DMin according to the semantics of
the features represented. The DMin algorithm is based on a weighting function
that has two components: (1) a geometric component that differs from previous
approaches to map generalization in that no fixed threshold values are needed
to parameterize the generalization process and (2) a semantic component that
considers the relevance of map features to the user. The flexibility of DMin is
demonstrated using the example of a transportation network. Keywords: Cartographic generalization; Line simplification; Geospatial information semantics; Task-oriented | |||
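The DMin abstract describes a weighting function with a threshold-free geometric component and a semantic relevance component. The paper defines the actual function; purely as an illustrative sketch (the perpendicular-deviation geometric term, the function names, and the multiplicative combination are all our own assumptions, not DMin itself), such a weighting for line simplification might look like:

```python
import math

def vertex_weight(prev, v, nxt, relevance):
    """Illustrative weight for a polyline vertex: geometric salience
    (perpendicular distance of v from the chord prev-nxt) scaled by a
    semantic relevance factor in [0, 1]. The actual DMin weighting is
    defined in the paper; this is only a hypothetical stand-in."""
    (x1, y1), (x2, y2), (x0, y0) = prev, nxt, v
    chord = math.hypot(x2 - x1, y2 - y1)
    if chord == 0:
        return 0.0
    # Perpendicular distance from v to the line through prev and nxt.
    dist = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / chord
    return dist * relevance

def simplify(points, relevances, keep):
    """Iteratively drop the lowest-weight interior vertex until only
    `keep` vertices remain -- no fixed distance threshold is needed,
    mirroring the abstract's claim about the geometric component."""
    pts = list(zip(points, relevances))
    while len(pts) > keep:
        weights = [vertex_weight(pts[i - 1][0], pts[i][0], pts[i + 1][0],
                                 pts[i][1])
                   for i in range(1, len(pts) - 1)]
        del pts[1 + weights.index(min(weights))]
    return [p for p, _ in pts]
```

Lowering a vertex's relevance makes it a candidate for removal even when it is geometrically salient, which is the semantic effect the abstract describes for task-tailored generalization.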
| Editorial | | BIB | Full-Text | 269-270 | |
| Christelle Vangenot | |||
| A model for uncertain lines | | BIBA | Full-Text | 271-288 | |
| Eliseo Clementini | |||
| The paper introduces a geometric model for uncertain lines that is capable of describing all the sources of uncertainty in spatial objects of linear type. We define uncertain lines as lines that incorporate a description of uncertainty in both their boundary and their interior. These objects can model all the uncertainty by which spatial data are commonly affected and allow computations in the presence of uncertainty without rough simplifications of reality. The proposed model is an extension of the model for regions with a broad boundary and can be easily integrated into existing data models for spatial databases. We use the model as a basis for the study of topological relations between uncertain lines. | |||
| Aggregations and constituents: geometric specification of multi-granular objects | | BIBA | Full-Text | 289-309 | |
| Hedda R. Schmidtke | |||
| The article presents an approach for modeling the regions of large-scale aggregations like forests that is based on an axiomatic geometry formalizing concepts of granularity and local perception: based on a planar geometry of incidence and ordering, congruence of size is characterized using certain geometric entities called places, which are specified to have the same extension in every direction. Size is axiomatized in a comparative and non-numeric manner suitable for qualitative spatial reasoning over granularity. A specification for the granular interior of an aggregation in a global (GGI) and a local variant (LGI) is given. The specification was derived from the observation that objects like forests are perceived locally in an environment. The expressiveness of the characterization is tested on different commonsense notions concerning the representation of aggregations of natural objects. | |||
| Wayfinding choremes -- a language for modeling conceptual route knowledge | | BIBA | Full-Text | 311-329 | |
| Alexander Klippel; Heike Tappe; Lars Kulik; Paul U. Lee | |||
| The emergent interest in ontological and conceptual approaches to modeling
route information results from new information technologies as well as from a
multidisciplinary interest in spatial cognition. Linguistics investigates
verbal route directions; cartography carries out research on route maps and on
the information needs of map users; and computer science develops formal
representations of routes with the aim of building new wayfinding applications. In
concert with geomatics, ontologies of spatial domain knowledge are assembled
while sensing technologies for location-aware wayfinding aids are developed
simultaneously (e.g. cell phones, GPS-enabled devices or PDAs). These joint
multidisciplinary efforts have enhanced cognitive approaches for route
directions.
In this article, we propose an interdisciplinary approach to modeling route information, the wayfinding choreme theory. Wayfinding choremes are mental conceptualizations of functional wayfinding and route direction elements. With the wayfinding choreme theory, we propose a formal treatment of (mental) conceptual route knowledge that is based on qualitative calculi and refined by behavioral experimental research. This contribution has three parts: First, we introduce the theory of wayfinding choremes. Second, we present term rewriting rules that are grounded in cognitive principles and can tailor route directions to different user requirements. Third, we exemplify various application scenarios for our approach. | |||
| A critical evaluation of ontology languages for geographic information retrieval on the Internet | | BIBA | Full-Text | 331-358 | |
| Alia I. Abdelmoty; Philip D. Smart; Christopher B. Jones; Gaihua Fu; David Finch | |||
| A geo-ontology has a key role to play in the development of a spatially aware search engine, with regard to providing support for query disambiguation, query term expansion, relevance ranking and web resource annotation. This paper reviews those functions and identifies the challenges arising in the construction and maintenance of such an ontology. Two current contenders for the representation of the geo-ontology are GML, a specific markup language for geographic domains, and OWL, a generic ontology representation language. Both languages are used to model the geo-ontology designed for supporting web retrieval of geographic concepts. The powers and limitations of the languages are identified. In particular, the paper highlights the lack of representation and reasoning abilities for different types of rules needed for supporting the geo-ontology. | |||
| Visual analysis for ontology engineering | | BIBA | Full-Text | 359-381 | |
| A. Johannes Pretorius | |||
| An ontology may be decomposed into a layer of binary fact types and a layer of application specific constraints imposed on these fact types. An ontology base is a large set of binary fact types called lexons. This paper presents LexoVis, a lexon visualization tool that addresses the inherent size and scale of ontology bases. LexoVis facilitates the analysis of lexons by providing an ordered visual representation. This representation offers overview and detail by employing the graphical fisheye view. Different ordering and clustering heuristics incorporated in LexoVis lead to insights not explicit in text-based representations of lexons. | |||
| Introduction to the special issue on "Context and emotion aware visual computing" | | BIB | Full-Text | 383-385 | |
| Nadia Bianchi-Berthouze; Piero Mussio | |||
| Environments to support context and emotion aware visual interaction | | BIBA | Full-Text | 386-405 | |
| Daniela Fogli; Antonio Piccinno | |||
| The interaction with software systems is often affected by many types of hurdles that induce users to make errors and mistakes, and to break the continuity of their reasoning while carrying out a working task with the computer. As a consequence, negative emotional states, such as frustration, dissatisfaction, and anxiety, may arise. In this paper, we illustrate how the Software Shaping Workshop (SSW) methodology can represent a solution to the problem of developing interactive systems that are correctly perceived and interpreted by end-users, thus becoming more acceptable and favouring positive emotional states. In the methodology, a key role is played by domain-expert users, that is, experts in a specific domain who are not necessarily experts in computer science. Domain-expert users' skills and background, including their knowledge of the domain and users' needs and habits, are exploited to create context and emotion aware visual interactive systems. Examples of these systems are illustrated by referring to a case study in the automation field. | |||
| Enhancing experiential and subjective qualities of discrete structure representations with aesthetic computing | | BIBAK | Full-Text | 406-427 | |
| Paul Fishwick | |||
| The task of visualization, as it applies to computing, includes by default
the notion of pluralism and perspectivism since there is an explicit attempt at
representing one, often textual, interface in terms of a more graphical one.
This desire for alternate, subjective perspectives is consistent with art
theory and practice, and even though rigor and formalism generally mean
different things to artists and computer scientists, there is room for
collaboration and connection by applying artistic aesthetics to computing,
while maintaining that which makes computing a viable, usable field. This new
area is called aesthetic computing. Within this area, there is an attempt to
balance qualitative with quantitative representational aspects of visual
computing, recognizing that aesthetics creates a dimension that is consistent
with supporting numerous visual perspectives. We introduce one aspect of
aesthetic computing, with specific examples from our research and teaching to
illustrate the potential and possibilities associated with alternate
representations of discrete structures such as finite state automata and a data
flow network. We limit ourselves, and our methodology, to model notations with
components that bear a largely symbolic connection to what they represent, thus
providing greater degrees of representational freedom. We show that by
exploring aesthetics, we surface some important philosophical and cultural
questions regarding notation, which turn out to be at least as important as the
algorithmic and procedural means of achieving customized model component
representations. Keywords: Discrete structure; Modeling; Aesthetics; Customization | |||
| Is the subjective feel of "presence" an uninteresting goal? | | BIBA | Full-Text | 428-441 | |
| Roberto Casati; Elena Pasquinelli | |||
| An ideal goal of virtual reality technology is to deliver a complete visual and sensorimotor duplicate of an object: a fully integrated haptic and visual set of stimuli that would make us feel as if we are in the "presence" of the real object in an ordinary situation. The goal is very ambitious, but what is a measure of success? An analysis of presence is much needed, and one of the main tenets of our paper is that an empirical study of the psychological aspects of the feel of presence would constitute the pivotal element of such an analysis; we shall argue that some interesting lessons can be learned about the ideal goal. To sustain our argument, we consider two case studies in turn. The tunnel effect case teaches us that actual stimulation is neither necessary nor sufficient to convey presence. The picture case teaches us that it is possible to learn how to interact to a high degree of success with very impoverished stimuli and successfully compensate for poor stimulation. Research should thus be oriented not towards potentially useless and costly "duplication" of reality, but towards the unexplored potentialities offered by new and complex interfaces. | |||
| A hand gesture recognition system based on local linear embedding | | BIBAK | Full-Text | 442-454 | |
| Xiaolong Teng; Bian Wu; Weiwei Yu; Chongqing Liu | |||
| Even after more than two decades of input device development, many people
still find the interaction with computers an uncomfortable experience. Efforts
should be made to adapt computers to our natural means of communication: speech
and body language. The aim of this paper is to propose a real-time vision
system within visual interaction environments through hand gesture recognition,
using general-purpose hardware and low-cost sensors, like a simple computer and
a USB web camera, so that any user can make use of it in the office or at home.
The basis of our method is a fast detection process to obtain the meaningful
hand region from the whole image, which is able to deal with a large number of
hand gestures against different indoor backgrounds and lighting conditions, and
a recognition process that identifies the hand gestures from the images of the
normalized hand. The most important part of the recognition method is a feature
extraction process using local linear embedding. This paper includes
experimental evaluations of the recognition process for 30 hand gestures that
belong to the Chinese sign language (CSL) alphabet and discusses the results.
Experiments show that the new approach achieves a 90% average recognition rate and is
suitable for real-time application. Keywords: Human-computer interaction; Hand gesture recognition; Chinese sign language; Local linear embedding | |||
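The core feature-extraction step named in the abstract is local linear embedding (LLE). The paper's full pipeline (hand detection, normalization, CSL classification) is not reproduced here; the following is only a minimal NumPy-only sketch of standard LLE on generic feature vectors, with default parameters chosen by us rather than taken from the paper:

```python
import numpy as np

def lle(X, n_neighbors=5, n_components=2, reg=1e-3):
    """Minimal locally linear embedding sketch (Roweis & Saul style):
    reconstruct each sample from its neighbors, then embed so that the
    same reconstruction weights are preserved in low dimension."""
    n = X.shape[0]
    # 1. k nearest neighbors of each sample (excluding the sample itself).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    # 2. Reconstruction weights expressing each sample as an affine
    #    combination of its neighbors (regularized local Gram matrix).
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                     # centered neighbors
        C = Z @ Z.T
        C += np.eye(n_neighbors) * reg * np.trace(C)
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()
    # 3. Embedding: eigenvectors of (I - W)^T (I - W) with the smallest
    #    nonzero eigenvalues (the bottom eigenvector is the constant one).
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]
```

In a gesture-recognition setting, each row of `X` would be a flattened normalized hand image; the embedded coordinates then feed a classifier over the 30 CSL gestures.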
| Distributed visibility culling technique for complex scene rendering | | BIBAK | Full-Text | 455-479 | |
| Tainchi Lu; Chenghe Chang | |||
| This paper describes a complex scene rendering system that can
comprehensively render larger and more complex 3D scenes in an
output-sensitive way by means of a distributed visibility culling
technique. The proposed visibility calculation is explicitly
divided into two distinct phases: a preprocessing stage and an
on-the-fly stage. At the preprocessing stage, the whole scene is partitioned
into numerous regions, namely spatial cells, by adopting a BSP tree algorithm.
Accordingly, the complexity weight of each cell is estimated in advance
depending on the number of geometric polygons within the cell. Afterwards we
find out possible occluders in each cell for accelerating the real-time
occlusion culling at run time. Moreover, instant visibility is taken into
account to quickly calculate the tight potentially visible set (PVS) which is
valid for several frames during the on-the-fly phase. As far as dynamic load
balancing is concerned, we employ the cell arrangement mechanism to dynamically
assign a specific amount of service demand to each calculating machine. The
amount of service demand is estimated when a calculating machine is dynamically
inserted into or removed from the distributed calculating cluster. Finally,
after the drawing machines gather the PVS results from every calculating
machine, they render the scene for users to view over the next frames. From
the simulation results, we can see that the proposed real-time walkthrough
environment takes good advantage of the distributed visibility culling
technique for displaying large, complex 3D scenes in real time and avoids
a troublesome computation delay problem. Keywords: Spatial subdivision; Occlusion culling; Load balancing; Distributed computing; Walkthrough system | |||
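The abstract assigns each spatial cell a complexity weight (its polygon count) and distributes cells across calculating machines. As a hedged stand-in for the paper's cell arrangement mechanism (the greedy strategy below is our own assumption, not the published algorithm), the weighted assignment could be sketched as:

```python
def assign_cells(cell_weights, n_machines):
    """Greedy longest-processing-time assignment: hand each spatial cell
    (weighted by its polygon count, heaviest first) to the currently
    least-loaded calculating machine. Re-running this after a machine is
    inserted into or removed from the cluster models the paper's dynamic
    re-estimation of service demand."""
    machines = [[] for _ in range(n_machines)]
    loads = [0] * n_machines
    for cell, weight in sorted(cell_weights.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))   # least-loaded machine so far
        machines[i].append(cell)
        loads[i] += weight
    return machines, loads
```

With cells weighted 900, 500, 400, 300, and 100 polygons across two machines, the greedy pass keeps the per-machine loads within a small fraction of each other.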
| Special issue on selected papers from VLFM'04 | | BIB | Full-Text | 483-484 | |
| Mark Minas | |||
| High-level replacement units and their termination properties | | BIBAK | Full-Text | 485-507 | |
| Paolo Bottoni; Kathrin Hoffmann; Francesco Parisi Presicce; Gabriele Taentzer | |||
| Visual rewriting techniques, in particular graph transformations, are
increasingly used to model transformations of systems specified through
diagrammatic sentences. Several rewriting models have been proposed, differing
in the expressivity of the types of rules and in the complexity of the
rewriting mechanism; yet, for many of them, basic results concerning the formal
properties of these models are still missing. In this paper, we contribute
towards solving the termination problem for rewriting systems with
external control mechanisms. In particular, we obtain results of more general
validity by extending the concept of transformation unit to high-level
replacement systems, a generalization of graph transformation systems. For
high-level replacement units, we state and prove several abstract properties
based on termination criteria. Then, we instantiate the high-level replacement
systems by attributed graph transformation systems and present concrete
termination criteria. We explore some types of rules and replacement units for
which the criterion can be established. These are used to show the termination
of some replacement units needed to express model transformations formalizing
refactoring. Keywords: Visual transformations; Transformation units; High level replacement; Termination; Refactoring | |||
| Building syntax-aware editors for visual languages | | BIBAK | Full-Text | 508-540 | |
| Gennaro Costagliola; Vincenzo Deufemia; Giuseppe Polese; Michele Risi | |||
| Syntax-aware editors are a class of editors prompting users into writing
syntactically correct programs by exploiting visual language syntax. They are
particularly useful in those application domains where the way a visual symbol
spatially relates to others depends on the context. This does not mean
constraining users to enter only correct syntactic states in a visual sentence;
rather, it means detecting both syntax and potential semantic errors as early as
possible, and providing error feedback in a non-intrusive way during editing.
As a consequence, error handling strategies are an essential part of this
editing style.
In this work, we present a strategy for the automatic generation of syntax-aware visual language editors integrating incremental subsentence parsers into freehand editors. The proposed parsing strategy has turned out to be useful in many application domains involving spatial information systems, thanks to the possibility of interactively prompting feasible visual sentence extensions, and to the presence of a non-correcting error recovery strategy. A first experimental prototype implementing the whole approach has been embedded into the VLDesk system, and empirical studies have been performed in order to verify the performance and the effectiveness of the proposed approach. Keywords: Visual language parsing; Syntax-aware editing; Error-handling | |||
| The semantics of augmented constraint diagrams | | BIBAK | Full-Text | 541-573 | |
| Andrew Fish; Jean Flower; John Howse | |||
| Constraint diagrams are a diagrammatic notation which may be used to express
logical constraints. They generalize Venn diagrams and Euler circles, and
include syntax for quantification and navigation of relations. The notation was
designed to complement the Unified Modelling Language in the development of
software systems.
Since symbols representing quantification in a diagrammatic language can be naturally ordered in multiple ways, some constraint diagrams have more than one intuitive meaning in first-order predicate logic. Any equally expressive notation which is based on Euler diagrams and conveys logical statements using explicit quantification will have to address this problem. We explicitly augment constraint diagrams with reading trees, which provides a partial ordering for the quantifiers (determining their scope as well as their relative ordering). Alternative approaches using spatial arrangements of components, or alphabetical ordering of symbols, for example, can be seen as implicit representations of a reading tree. Whether the reading tree accompanies the diagram explicitly (optimizing expressiveness) or implicitly (simplifying diagram syntax), we show how to construct unambiguous semantics for the augmented constraint diagram. Keywords: Visual formalisms; Software specification; Formal methods; Constraint diagrams | |||