VLC Tables of Contents: 16 17 18 19 20 21 22 23 24 25

Journal of Visual Languages & Computing 17

Editors: S.-K. Chang; Stefano Levialdi
Standard No.: ISSN 1045-926X
Links: Table of Contents
  1. VLC 2006-02 Volume 17 Issue 1
  2. VLC 2006-04 Volume 17 Issue 2
  3. VLC 2006-06 Volume 17 Issue 3
  4. VLC 2006-08 Volume 17 Issue 4
  5. VLC 2006-10 Volume 17 Issue 5
  6. VLC 2006-12 Volume 17 Issue 6

VLC 2006-02 Volume 17 Issue 1

A web-centric semantic mediation approach for spatial information systems BIBAKFull-Text 1-24
  Kokou Yétongnon; Seksun Suwanmanee; Djamal Benslimane; Pierre-Antoine Champin
Semantics-related issues are at the heart of web-centric information systems and emerging spatial applications that require integrated access to collections of heterogeneous data sources. We present an ontology-based semantic mediation approach and its application to spatial system interoperability. The approach is based on three contexts described by ontologies used to capture the semantics of data sources and to resolve their semantic discrepancies. The global ontology context defines generic application-domain mediation concepts, while the local ontology contexts are devoted to the description of local concepts. The cooperation contexts provide semantic concepts that encapsulate three key components: (1) semantic roles defined by ontological agreements on the global ontology, (2) virtual views defined on the local ontologies, and (3) context transformation mappings used to define inter-ontology relationships. We illustrate the proposed mediation approach on spatial information systems, relying on the description and reasoning capabilities of OWL to establish relations between concepts of the different ontologies.
Keywords: Ontologies; Semantic interoperability; Spatial data; OWL
VIREX: visual relational to XML conversion tool BIBAKFull-Text 25-45
  Anthony Lo; Reda Alhajj; Ken Barker
Developing user-friendly transformation tools for converting all or part of a given relational database into XML has not received enough attention. This paper presents a flexible user interface called VIREX (VIsual RElational to XML), which facilitates converting a selected portion of a given relational database into XML. VIREX works even when the catalogue of the underlying relational database is missing; in that case, VIREX extracts the required catalogue information by analyzing the database content. From the catalogue information, whether available or extracted, VIREX derives and displays on the screen a graph similar to an entity-relationship diagram. VIREX provides a user-friendly interface for specifying on the graph certain factors to be considered while converting relational data into XML, including: (1) selecting the relations/attributes to be converted into XML; (2) specifying a predicate to be satisfied by the information to be converted; (3) deciding on the order of nesting between the relations to be converted. All of these are specified by a sequence of mouse clicks with minimal keyboard input. As a result, VIREX displays on the screen the XML schema that satisfies the specified characteristics and generates the XML document from the underlying relational database. Finally, VIREX can minimize the amount of information transferred over a network by giving the user the flexibility to specify exactly how much relational data is converted into XML. VIREX can also be used to teach XML to beginners.
Keywords: Data conversion; Flexible interface; Graphical user interface; Relational database; Visual query language; XML
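The abstract does not specify VIREX's internals; the sketch below only illustrates the general idea it describes of nesting selected relations into XML under a user-supplied predicate. The schema (customers/orders), the predicate, and the nesting order are all hypothetical:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical schema standing in for "a selected portion of a relational database":
# customers(id, name) and orders(id, customer_id, total).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 9.5), (11, 1, 3.0), (12, 2, 7.25);
""")

def relations_to_xml(conn, predicate="total > 0"):
    """Nest orders under customers, keeping only order rows matching the predicate
    (string-built here for brevity; a real tool would validate the predicate)."""
    root = ET.Element("customers")
    for cid, name in conn.execute("SELECT id, name FROM customers"):
        cust = ET.SubElement(root, "customer", id=str(cid))
        ET.SubElement(cust, "name").text = name
        for oid, total in conn.execute(
                f"SELECT id, total FROM orders "
                f"WHERE customer_id = ? AND ({predicate})", (cid,)):
            ET.SubElement(cust, "order", id=str(oid), total=str(total))
    return root

xml_doc = ET.tostring(relations_to_xml(conn), encoding="unicode")
```

Choosing which relation becomes the outer element corresponds to the nesting-order decision the abstract mentions; swapping the loops would nest customers under orders instead.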
Coordination for multi-person visual program development BIBAKFull-Text 46-77
  Jeffrey D. Campbell
Typically, visual programming has been limited to only one person developing one program at a time. This article describes a technique for helping multiple people coordinate working together on the same diagram at the same time. This approach identifies transactions based on domain syntax. These transactions are used to notify people when someone else changes the diagram in a way that is likely to impact other people's work. In particular, the system assigns ownership of each syntactically incorrect element to the person who last acted upon that element. This ownership can be transferred between people. The potential problem of incomplete transactions when work extends beyond a single session is resolved by restarting transactions when work resumes. This syntax-based approach is particularly appropriate for visual languages. Various domain constraints are described as alternatives or supplements to the syntactic criteria. The technique was validated with data from 20 groups of three people using CoDiagram, a proof of concept system.
Keywords: Computer supported cooperative work; Collaboration; Groupware; Semantic concurrency control; Consistency maintenance; Shared diagrams
On translating UML models into graph transformation systems BIBAKFull-Text 78-105
  Karsten Hölscher; Paul Ziemann; Martin Gogolla
In this paper we present a concept of a rigorous approach that provides a formal semantics for a fundamental subset of UML. This semantics is derived by translating a given UML model into a graph transformation system, allowing modelers to actually execute their UML model. The graph transformation system comprises graph transformation rules and a working graph which represents the current state of the modeled system. In order to support UML models which use OCL, we introduce a specific graph transformation approach that incorporates full OCL in the common UML fashion. The considered UML subset is defined by means of a metamodel similar to the UML 1.5 metamodel. The concept of a system state that represents the state of the system at a specific point in time during execution is likewise introduced by means of a metamodel. The simulated system run is performed by applying graph transformation rules on the working graph. The approach has been implemented in a research prototype which allows the modeler to execute the specified model and to validate the basic aspects of the model in an early software development phase.
Keywords: Graph transformation; UML semantics; Validation; CASE tool

VLC 2006-04 Volume 17 Issue 2

Filter co-ordinations for exploring multi-dimensional data BIBAKFull-Text 107-125
  Mark Sifer
Many interface designs for exploring multi-dimensional data sets are based on finding subsets by filtering attribute values. Systems such as dynamic queries use a collection of independent filters to interactively query by restricting attribute values. For large data sets, however, there is a need for an alternative style of filtering that better supports stepwise query refinement. This article introduces a new filter coordination which supports both stepwise query refinement and independent filters. Our filter visualization also supports attribute value hierarchies, enabling multi-level overviews of data distributions. The coordination design is implemented in our SGViewer query tool, which we demonstrate with a multi-dimensional web log data set. An evaluation of SGViewer showed that after a short learning period users were able to read trends and proportions and make drill-down queries.
Keywords: View co-ordination; Visual query; Query refinement; Multi-dimensional data
Anatomy-based face reconstruction for animation using multi-layer deformation BIBAKFull-Text 126-160
  Yu Zhang; Terence Sim; Chew Lim Tan; Eric Sung
This paper presents a novel multi-layer deformation (MLD) method for reconstructing animatable, anatomy-based human facial models with minimal manual intervention. Our method is based on adapting a prototype model with the multi-layer anatomical structure to the acquired range data in an "outside-in" manner: deformation applied to the external skin layer is propagated along with the subsequent transformations to the muscles, with the final effect of warping the underlying skull. The prototype model has a known topology and incorporates a multi-layer structure hierarchy of physically based skin, muscles, and skull. In the MLD, a global alignment is first carried out to adapt the position, size, and orientation of the prototype model to align it with the scanned data based on measurements between a subset of specified anthropometric landmarks. In the skin layer adaptation, the generic skin mesh is represented as a dynamic deformable model which is subjected to internal force stemming from the elastic properties of the surface and external forces generated by input data points and features. A fully automated approach has been developed for adapting the underlying muscle layer which consists of three types of physically based facial muscle models. MLD deforms a set of automatically generated skull feature points according to the adapted external skin and muscle layers. The new positions of these feature points are then used to drive a volume morphing applied to the template skull model. We demonstrate our method by applying it to generate a wide range of different facial models on which various facial expressions are animated.
Keywords: Face reconstruction; Facial animation; Multi-layer deformation; Anatomy-based model; Multi-layer skin/muscle/skull structure; Scanned data
Visual type inference BIBAKFull-Text 161-186
  Martin Erwig
We describe a type-inference algorithm that is based on labeling nodes with type information in a graph that represents type constraints. This algorithm produces the same results as the well-known algorithm of Milner, but is much simpler to use, which is especially important for teaching type systems and type inference.
   The proposed algorithm employs a more concise notation and yields inferences that are shorter than applications of the traditional algorithm. Simplifications result, in particular, from three facts: (1) We do not have to maintain an explicit type environment throughout the algorithm because the type environment is represented implicitly through node labels. (2) The use of unification is simplified through label propagation along graph edges. (3) The typing decisions in our algorithm are dependency-driven (and not syntax-directed), which reduces notational overhead and bookkeeping.
Keywords: Type-inference algorithm; Lambda calculus; Polymorphic type system; Graph
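A core operation shared by Milner-style inference and graph-based formulations alike is unification of type terms. The sketch below is illustrative only (no occurs check) and is not the paper's graph-labeling algorithm; type variables are strings and compound types are tuples such as `('int',)` or `('->', arg, result)`:

```python
def resolve(t, subst):
    """Follow substitution chains for type variables."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Extend subst so that t1 and t2 denote the same type, or raise TypeError."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):          # bind a type variable
        return {**subst, t1: t2}
    if isinstance(t2, str):
        return {**subst, t2: t1}
    if t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):  # unify componentwise
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} and {t2}")

# Constraint from equating a function of type a -> int with one of type int -> b:
subst = unify(("->", "a", ("int",)), ("->", ("int",), "b"), {})
```

In the graph view the abstract describes, this substitution is what node labels and label propagation along edges make implicit.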
Impact of high-intensity negotiated-style interruptions on end-user debugging BIBAKFull-Text 187-202
  T. J. Robertson; Joseph Lawrance; Margaret Burnett
Extending our previous work [T. Robertson, S. Prabhakararao, M. Burnett, C. Cook, J. Ruthruff, L. Beckwith, A. Phalgune, Impact of interruption style on end-user debugging, ACM Conference on Human Factors in Computing Systems (2004)], we delve deeper into the question of which interruption style best supports end-user debugging. Previously, we found no advantages of immediate-style interruptions (which force the user to divert attention to the interruption at hand) over negotiated-style interruptions (which notify users without actually preventing them from working) in supporting end-user debugging. In this study, we altered our negotiated-style interruptions [A. Wilson, M. Burnett, L. Beckwith, O. Granatir, L. Casburn, C. Cook, M. Durham, G. Rothermel, Harnessing curiosity to increase correctness in end-user programming, Proceedings of the CHI 2003 (2003), 305-312] (which were shown to help end-user debuggers learn about and use debugging features of our programming language) such that they were more intense (larger, blinking, and/or accompanied by text).
Keywords: Interruptions; End-user software engineering; Debugging; Surprise-Explain-Reward; Spreadsheets

VLC 2006-06 Volume 17 Issue 3

Automatic visualisation of metro maps BIBAKFull-Text 203-224
  Seok-Hee Hong; Damian Merrick; Hugo A. D. do Nascimento
We investigate the new problem of automatic metro map layout. In general, a metro map consists of a set of lines which have intersections or overlaps. We define a set of aesthetic criteria for good metro map layouts and present a method to produce such layouts automatically. Our method uses a variation of the spring algorithm with a suitable preprocessing step. Experimental results with real-world data sets show that our method produces good metro map layouts quickly.
Keywords: Metro map layout; Metro map labelling; Metro map metaphor
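The paper's specific spring variation and preprocessing step are not given in the abstract; the following is only a generic spring-embedder iteration of the kind such methods build on, applied to a toy three-station line:

```python
import math

# Toy metro graph: station -> position; edges join adjacent stations on a line.
pos = {"A": [0.0, 0.0], "B": [1.0, 0.1], "C": [2.0, -0.1]}
edges = [("A", "B"), ("B", "C")]

def spring_step(pos, edges, rest=1.0, k_attract=0.1, k_repel=0.05):
    """One iteration of a generic spring embedder: springs pull adjacent
    stations toward the rest length, and all station pairs repel."""
    force = {v: [0.0, 0.0] for v in pos}
    for u, v in edges:  # spring force along each edge
        dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
        d = math.hypot(dx, dy) or 1e-9
        f = k_attract * (d - rest)
        force[u][0] += f * dx / d; force[u][1] += f * dy / d
        force[v][0] -= f * dx / d; force[v][1] -= f * dy / d
    nodes = list(pos)
    for i, u in enumerate(nodes):  # pairwise repulsion keeps stations apart
        for v in nodes[i + 1:]:
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d2 = dx * dx + dy * dy or 1e-9
            force[u][0] -= k_repel * dx / d2; force[u][1] -= k_repel * dy / d2
            force[v][0] += k_repel * dx / d2; force[v][1] += k_repel * dy / d2
    for v in pos:  # move each station along its net force
        pos[v][0] += force[v][0]
        pos[v][1] += force[v][1]

for _ in range(50):
    spring_step(pos, edges)
```

Metro-map-specific aesthetic criteria (e.g. preferred edge directions) would enter as additional force terms or as the preprocessing the abstract mentions.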
Clustering graphs for visualization via node similarities BIBAKFull-Text 225-253
  Xiaodi Huang; Wei Lai
Graph visualization is commonly used to visually model relations in many areas, including Web sites, CASE tools, and knowledge representation. When the amount of information in these graphs becomes too large, however, users cannot perceive all elements at the same time. A clustered graph can greatly reduce visual complexity by temporarily replacing a set of nodes in clusters with abstract nodes. This paper proposes a new approach to clustering graphs. The approach constructs the node similarity matrix of a graph, derived from a novel metric of node similarity. The linkage pattern of the graph is thus encoded into the similarity matrix, and a hierarchical abstraction of densely linked subgraphs is obtained by applying the k-means algorithm to the matrix. A heuristic method is developed to overcome the inherent drawbacks of the k-means algorithm. For clustered graphs, we present a multilevel, multi-window approach that draws them hierarchically at different levels of abstraction to improve their readability. The proposed approaches demonstrate good results in our experiments. As application examples, visualizations of partial Java class diagrams and of Web graphs are provided. We also conducted usability experiments on our algorithm and approach; the results show that the hierarchically clustered graphs used in our system can improve user performance for certain types of tasks.
Keywords: Graph clustering; Similarity metric; k-Means algorithm; Multilevel graph drawing; Graph visualization
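The pipeline the abstract describes (similarity matrix, then k-means on it) can be sketched on a toy graph of two triangles joined by one edge. The similarity used here, a node's closed neighbourhood row, is a crude stand-in for the paper's metric, and the deterministic initialization sidesteps the k-means drawbacks the paper addresses heuristically:

```python
import numpy as np

# Two densely linked triangles {0,1,2} and {3,4,5} joined by the edge 2-3.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Crude similarity signature: each node described by its closed neighbourhood
# (adjacency row plus itself). The paper's metric is more refined.
S = A + np.eye(len(A))

def kmeans_rows(S, k=2, iters=10):
    """Plain k-means on the rows of S, initialized with the first and last rows."""
    centers = S[[0, len(S) - 1]].copy()  # deterministic init for k = 2
    labels = np.zeros(len(S), dtype=int)
    for _ in range(iters):
        dists = ((S[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)            # assign rows to nearest center
        for c in range(k):
            if (labels == c).any():
                centers[c] = S[labels == c].mean(axis=0)  # recompute centers
    return labels

labels = kmeans_rows(S)
```

Each resulting cluster would then be collapsed into an abstract node for the multilevel drawing stage.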
Exploring personal media: A spatial interface supporting user-defined semantic regions BIBAKFull-Text 254-283
  Hyunmo Kang; Ben Shneiderman
Graphical mechanisms for spatially organizing personal media data could enable users to fruitfully apply their conceptual models. This paper introduces Semantic regions, an innovative way for users to construct display representations of their conceptual models by drawing regions on 2D space and specifying the semantics for each region. Then users can apply personal categorizations to personal media data using the fling-and-flock metaphor. This allows personal media to be dragged to the spatially organized display and automatically grouped according to time, geography, family trees, groups of friends, or other spatially organized display representations of conceptual models. The prototype implementation for semantic regions, MediaFinder, was refined based on two small usability tests for usage and construction of user-defined conceptual models.
Keywords: User interfaces; Personal media management; Spatial information management; Fling-and-flock; Dynamic queries
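The fling-and-flock behaviour described above amounts to routing each dragged item to the region whose user-defined semantics it satisfies. A minimal sketch, in which the region names, predicates, and metadata fields are all hypothetical rather than taken from MediaFinder:

```python
# Regions as user-defined semantics: name plus a membership predicate.
regions = [
    {"name": "1990s", "match": lambda item: 1990 <= item["year"] < 2000},
    {"name": "2000s", "match": lambda item: 2000 <= item["year"] < 2010},
]
photos = [
    {"file": "beach.jpg", "year": 1994},
    {"file": "party.jpg", "year": 2003},
    {"file": "hike.jpg", "year": 1998},
]

def fling_and_flock(items, regions):
    """Group each dropped item into the first region whose semantics it matches."""
    grouped = {r["name"]: [] for r in regions}
    for item in items:
        for r in regions:
            if r["match"](item):
                grouped[r["name"]].append(item["file"])
                break
    return grouped

grouped = fling_and_flock(photos, regions)
```

Regions keyed on geography or family trees would differ only in their predicates.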

VLC 2006-08 Volume 17 Issue 4

Ten years of cognitive dimensions in visual languages and computing: Guest Editor's introduction to special issue BIBFull-Text 285-287
  Alan F. Blackwell
Aims, achievements, agenda -- where CDs stand now BIBFull-Text 288-291
  Thomas Green
Cognitive dimensions 'beyond the notation' BIBAKFull-Text 292-301
  Marian Petre
This personal reflection on cognitive dimensions (CDs) over the past decade is in three parts: an account of how empirical studies of professional software developers informed the development of CDs in the early 1990s; an articulation of unresolved issues 'beyond the notation' which were emphasized by the empirical studies and which might yet be addressed by CDs; and a speculation on the application of facet theory to CDs as a possible approach to those issues.
Keywords: Cognitive dimensions; Empirical studies; Software development; Facet theory
Using cognitive dimensions: Advice from the trenches BIBAKFull-Text 302-327
  Jason Dagit; Joseph Lawrance; Christoph Neumann; Margaret Burnett; Ronald Metoyer; Sam Adams
Many researchers have analyzed visual language design using Cognitive Dimensions (CDs), but some have reinterpreted the purpose, vocabulary, and use of CDs, potentially creating confusion. In particular, those who have used CDs to convince themselves or others that their language is usable have tended to ignore or downplay the tradeoffs inherent in design, resulting in evaluations that provide few insights. Researchers who do not consider who, when, and how best to analyze a visual language using CDs are likely to miss the most useful opportunities to uncover problems in their visual languages. In this paper, we consider common breakdowns when using CDs in analysis. Then, using three case studies, we demonstrate how the who, when, and how circumstances under which CDs are applied impact the gains that can be expected.
Keywords: Cognitive dimensions; Visual programming language; Language design; Case study; Pitfalls
Cognitive dimensions: Achievements, new directions, and open questions BIBAFull-Text 328-365
  T. R. G. Green; A. E. Blandford; L. Church; C. R. Roast; S. Clarke
The cognitive dimensions framework has inspired research both more and less varied than expected. In this paper, we revisit the original aims and briefly describe some subsequent research, to consider whether the original aims were too austere in rejecting knowledge-based dimensions; whether the dimensions can be shown to have real-world relevance; and whether their definitions can be improved, either piecemeal or by refactoring the entire set. We mention some issues that remain unexplored, and conclude by describing two different ventures into defining clear procedures for real-life application, operating in very different milieux but both accepting that the framework should be developed from its original formulation.
Correlates of the cognitive dimensions for tangible user interface BIBAKFull-Text 366-394
  Darren Edge; Alan Blackwell
We describe an application of the cognitive dimensions (CDs) of notations framework to tangible user interfaces (TUIs) -- interaction with computers using physical devices other than mice and keyboards. We are particularly interested in situations where the TUI is used to construct some information structure (a manipulable solid diagram) and where that structure is intended to specify computer behaviour other than by direct manipulation (a tangible programming language). We analyse several tangible programming languages that have been described in previous research, considering the ways in which their physical properties influence the manipulability of the diagrammatic structure. This is the contribution that a CDs analysis would provide for any notation, but we find consistent ways in which particular dimensions can be predicted to apply to any solid diagram. We describe these as the tangible correlates of those dimensions. We then demonstrate that the tangible correlates can be used for both generative and analytic purposes in early stages of TUI design, much as CDs are applicable to the design of visual notations, but more immediately recognisable in their physical implications.
Keywords: Cognitive dimensions; Tangible user interfaces; Tangible programming languages

VLC 2006-10 Volume 17 Issue 5

Introduction to the special issue on "Context and Emotion Aware Visual Computing" BIBFull-Text 395-397
  Nadia Bianchi-Berthouze; Piero Mussio
Mediators in visual interaction: An analysis of the "Poietic Generator" and "Open Studio" BIBAKFull-Text 398-429
  Elisa Giaccardi
The Poietic Generator and Open Studio are examples of interactive art, a form of art intended for the viewer's direct participation. They are based on distributed applications for visual interaction that enable participants to collaborate on the creation of visual images and narratives. This paper reports an analysis of the visual activity generated by their users. The analysis is founded on the phenomenological hypothesis that the visual activity generated by the participants in the Poietic Generator and Open Studio allows the interaction process to be studied in terms of a co-determining relationship between perception and action. The results of this analysis indicate five classes of mediators capable of tuning the development of the interaction process according to the context and emotional state of the participants. These classes are based on: (1) spatial relationships, (2) chromatic relationships, (3) figurative elements, (4) textual elements, and (5) temporal events.
   By sustaining the intersubjective processing of information among participants, mediators sustain their socially intelligent ability of constructing and sharing meaningful activities; that is, they sustain co-creation. For this reason, mediators are particularly important in the design of social interactive systems that have purposes but not explicit goals (as in the case of art and creative activities in general).
Keywords: Interactive art; Visual interaction; Distributed applications; Collaborative systems; Poietic Generator; Open Studio; Phenomenology; Intersubjectivity; Embodied interaction; Mediator; Co-creation
MAUI avatars: Mirroring the user's sensed emotions via expressive multi-ethnic facial avatars BIBAKFull-Text 430-444
  Fatma Nasoz; Christine L. Lisetti
In this paper we describe the multimodal affective user interface (MAUI) we created to capture its users' emotional physiological signals via wearable computers and to visualize the categorized signals in terms of recognized emotions. MAUI aims at (1) giving feedback to users about their emotional states via various modalities (e.g. mirroring the users' facial expressions and verbally describing the emotional state via an anthropomorphic avatar) and (2) animating the avatar's facial expressions based on the users' captured signals. We first describe a version of MAUI developed as an in-house tool for conducting and testing affective computing research. We also discuss applications for which building intelligent user interfaces similar to MAUI can be useful, and we suggest ways of adapting the MAUI approach to fit those specific applications.
Keywords: Affective intelligent user interfaces; Emotion recognition
Expressive image generation: Towards expressive Internet communications BIBAFull-Text 445-465
  Zhe Xu; David John; A. C. Boucouvalas
The use of the Internet as a means to socialize and communicate is now widespread. In real life we have learnt to 'read' faces and guess feelings, but in cyberspace it is not as easy to do so. The use of expressive images to accompany text may provide some of the missing visual cues. In this paper, we describe an expressive image generator that produces a set of expressive images from a single default image. For each expression category the generator can generate images at three different intensities. Our method requires users to provide only one default image, six control points, and three control shapes. A series of experiments tested the recognition rates of the expressions depicted in the generated images. The results demonstrate that more than 60% of the participants correctly recognized the synthesized human facial expressions in the middle- and high-intensity categories, and more than 70% correctly recognized the cartoon expressions at all intensities.
A diagrammatic approach to investigate interval relations BIBAKFull-Text 466-502 (Erratum for Figure 2)
  Zenon Kulpa
This paper describes several diagrammatic tools developed by the author for representing the space of intervals and, especially, interval relations. The basic tool is a two-dimensional diagrammatic representation of the space of intervals, called an MR-diagram. A diagrammatic notation based on it, called a W-diagram, is the main tool for representing arrangement (or Allen's) interval relations. Other auxiliary diagrams, such as conjunction and lattice diagrams, are also introduced. All these diagrammatic tools are evaluated by applying them to various representational and reasoning tasks in interval relations research, also producing some new results in the field.
Keywords: Interval relations; Interval algebra; Time intervals; Interval diagrams; Diagrammatic representation; Diagrammatic reasoning; Diagrammatic notation
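The arrangement (Allen) relations that W-diagrams depict can be computed directly from interval endpoints. A minimal sketch for proper intervals given as (start, end) pairs:

```python
# The 13 Allen relations between two proper intervals a = (a1, a2) and
# b = (b1, b2), assuming a1 < a2 and b1 < b2.
def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"
    if a2 == b1:
        return "meets"
    if a1 < b1 < a2 < b2:
        return "overlaps"
    if a1 == b1 and a2 < b2:
        return "starts"
    if b1 < a1 and a2 < b2:
        return "during"
    if b1 < a1 and a2 == b2:
        return "finishes"
    if a1 == b1 and a2 == b2:
        return "equals"
    # The remaining six relations are the inverses of the first six.
    return allen_relation(b, a) + "-inverse"
```

Every pair of proper intervals hits exactly one of the seven direct cases or, after one swap, one of their six inverses, giving the full set of 13 relations.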

VLC 2006-12 Volume 17 Issue 6

Visual modeling for software intensive systems BIBFull-Text 503-507
  Kendra M. L. Cooper; Holger Giese; Ingolf H. Krüger
AutoGen: Easing model management through two levels of abstraction BIBAKFull-Text 508-527
  Guanglei Song; Jun Kong; Kang Zhang
Due to its many potential applications, model management has attracted much research interest and made great progress. To provide easy-to-use interfaces, we have proposed a graph transformation-based model management approach that offers intuitive interfaces for manipulating graphical data models. The approach consists of two levels of graphical operators: low-level customizable operators and high-level generic operators, both of which consist of sets of graph transformation rules. Users need to program or tune the low-level operators to obtain the desired results, so automatic generation of low-level operators would further improve the ease of use of graphical model management. This paper formalizes the specifications of low- and high-level operators and proposes a generator that automatically transforms high-level operators into low-level operators for specific input data models. Building on graph transformation theory, we design an algorithm for the generator to automatically produce low-level operators from input data models and mappings according to a high-level operator. The generator, called AutoGen, thereby eliminates many tedious specifications and thus eases the use of the graphical model management system.
Keywords: Model management; Graph transformation; Graph grammar; Visual programming; Schema interoperation
A survey of approaches for the visual model-driven development of next generation software-intensive systems BIBAKFull-Text 528-550
  Holger Giese; Stefan Henkler
Software-intensive systems of the future are expected to be highly distributed and to exhibit adaptive and anticipatory behavior when operating in highly dynamic environments and interfacing with the physical world. Therefore, visual modeling techniques to address these software-intensive systems require a mix of models from a multitude of disciplines such as software engineering, control engineering, and business process engineering. As in this concert of techniques software provides the most flexible element, the integration of these different views can be expected to happen in the software. The software thus includes complex information processing capabilities as well as hard real-time coordination between distributed technical systems and computers.
   In this article, we identify a number of general requirements for the visual model-driven specification of next generation software-intensive systems. As business process engineering and software engineering are well integrated areas and in order to keep this survey focused, we restrict our attention here to approaches for the visual model-driven development of adaptable software-intensive systems where the integration of software engineering with control engineering concepts and safety issues are important. In this survey, we identify requirements and use them to classify and characterize a number of approaches that can be employed for the development of the considered class of software-intensive systems.
Keywords: Survey; Software-intensive systems; Adaptable; MDD; MDA; Visual modeling
Integrating visual goal models into the Rational Unified Process BIBAKFull-Text 551-583
  K. Cooper; S. P. Abraham; R. S. Unnithan; L. Chung; S. Courtney
The Rational Unified Process is a comprehensive process model that is tailorable, provides templates for the software engineering products, and integrates the use of the Unified Modeling Language (UML); it is rapidly becoming a de facto standard for developing software. The process supports the definition of requirements at multiple levels. Currently, the early requirements, or goals, are captured in a textual document called the Vision Document, as the UML does not include a goal modeling diagram. The goals are subsequently refined into software requirements, captured in UML Use Case Diagrams. Given the well documented advantages of visual modeling techniques in requirements engineering, including the efficient communication and understanding of complex information among numerous diverse stakeholders, the need for an enhanced version of the Vision Document template which supports the visual modeling of goals is identified. Here, an Enhanced Vision Document is proposed which integrates two existing visual goal models: AND/OR Graph for functional goals and Softgoal Interdependency Graph for non-functional goals. A specific approach to establishing traceability relationships from the goals to the Use Cases is presented. Tool support has been developed for the Enhanced Vision Document template; the approach is illustrated using an example system called the Quality Assurance Review Assistant Tool.
Keywords: Goal-oriented requirements engineering; Rational Unified Process; Unified Modeling Language; Visual modeling; Requirement traceability
Modeling real-time communication systems: Practices and experiences in Motorola BIBAKFull-Text 584-605
  Michael Jiang; Michael Groble; Andrij Neczwid; Allan Willey
Visual modeling languages and techniques have been increasingly adopted for software specification, design, development, and testing. With the major improvements of UML 2.0 and tools support, visual modeling technologies have significant potential for simplifying design, facilitating collaborations, and reducing development cost. In this paper, we describe our practices and experiences of applying visual modeling techniques to the design and development of real-time wireless communication systems within Motorola. A model-driven engineering approach of integrating visual modeling with development and validation is described. Results, issues, and our viewpoints are also discussed.
Keywords: UML modeling; SDL modeling; MDE code generation; Model validation; Real-time communication systems; TTCN; Structured methods