
Proceedings of the 1996 International Conference on Advanced Visual Interfaces

Fullname: AVI'96 Working Conference on Advanced Visual Interfaces
Editors: Tiziana Catarci; Maria F. Costabile; Stefano Levialdi; Giuseppe Santucci
Location: Gubbio, Italy
Dates: 1996-May-27 to 1996-May-29
Publisher: ACM
Standard No: ISBN 0-89791-834-7
Papers: 37
Pages: 281
  1. Invited papers and panel
  2. Navigating within the data
  3. Interfaces to databases
  4. Interacting with the WWW
  5. Interface tools
  6. Applications
  7. Empowering the interface
  8. Pictorial interfaces
  9. Descriptions of prototypes

Invited papers and panel

Information visualization and information foraging BIBFull-Text 12
  Stuart Card
Visualizing the World Wide Web BIBAFull-Text 13-19
  Alberto O. Mendelzon
We discuss some principles that we believe are important in creating useful visualizations of the World Wide Web. They are: layout, abstraction, focus, and interaction. We illustrate these points with examples from the work of our group at the University of Toronto.
Closing the loop: modelling action, perception and information BIBAKFull-Text 20-28
  Alan Dix
Visual interfaces to computer systems are interactive. The cycle of visual interaction involves both visual perception and action. This paper examines formal models of interactive systems and cognitive models of users. Neither completely captures the special nature of visual interaction. In order to investigate this, the paper examines two forms of non-visual interaction: mathematics for the blind and interaction by smell (nasal interaction). Finally, three more pragmatic, design-oriented methods are considered: information-rich task analysis (what information is required), status-event analysis (when it is perceived) and models of information (how to visually interact with it).
Keywords: aural interfaces, cognitive models, formal methods, status-event analysis
Elastic windows: improved spatial layout and rapid multiple window operations BIBAKFull-Text 29-38
  Eser Kandogan; Ben Shneiderman
Most windowing systems follow the independent overlapping windows approach, which emerged as an answer to the application and technology needs of the 1980s. Advances in computers, display technology, and applications demand more functionality from window management systems. Based on these changes and the problems of current windowing approaches, we have updated the requirements for multi-window systems to guide new methods of window management. We propose elastic windows with improved spatial layout and rapid multi-window operations. Multi-window operations are achieved by issuing operations on window groups hierarchically organized in a space-filling tiled layout. Sophisticated multi-window operations and spatial layout dynamics help users handle fast task switching and structure their work environment according to their rapidly changing needs. We claim that these multi-window operations and the improved spatial layout decrease the cognitive load on users. Users found our prototype system to be comprehensible and enjoyable as they playfully explored the way multiple windows are reshaped.
Keywords: CAD, elastic windows, multi-window operations, personal role manager, programming environment, task switching, window manager
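To make the space-filling idea concrete, the following sketch (not taken from the paper; all names and numbers are assumptions) lays out a hierarchy of window groups by sharing each parent's rectangle among its children, so a single operation on a group rescales every window inside it:

    # Illustrative sketch (not from the paper): a hierarchical, space-filling
    # tiled layout in which an operation on a group affects all windows in it.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        name: str
        weight: float = 1.0                 # share of the parent's space
        horizontal: bool = True             # split direction for children
        children: List["Node"] = field(default_factory=list)

    def layout(node: Node, x: float, y: float, w: float, h: float, out: dict):
        """Recursively assign each leaf window a rectangle inside (x, y, w, h)."""
        if not node.children:
            out[node.name] = (x, y, w, h)
            return
        total = sum(c.weight for c in node.children)
        offset = 0.0
        for c in node.children:
            share = c.weight / total
            if node.horizontal:
                layout(c, x + offset * w, y, w * share, h, out)
            else:
                layout(c, x, y + offset * h, w, h * share, out)
            offset += share

    def resize_group(node: Node, factor: float):
        """A multi-window operation: grow or shrink a whole group at once."""
        node.weight *= factor

    # Example: a group holding two tool windows next to an editor.
    root = Node("root", horizontal=True, children=[
        Node("editor", weight=2.0),
        Node("tools", weight=1.0, horizontal=False,
             children=[Node("mail"), Node("calendar")]),
    ])
    resize_group(root.children[1], 1.5)     # enlarge the whole tool group
    rects = {}
    layout(root, 0, 0, 1280, 800, rects)
    print(rects)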
Multimedia user interfaces BIBFull-Text 39
  Isabel F. Cruz

Navigating within the data

Zoom navigation exploring large information and application spaces BIBAKFull-Text 40-48
  Michael Rüger; Bernhard Preim; Alf Ritter
We present the concept of ZOOM NAVIGATION, a new interaction paradigm to cope with visualization and navigation problems as found in large information and application spaces. It is based on the pluggable zoom, an object-oriented component derived from the variable zoom fisheye algorithm.
   Working with limited screen space, we apply a degree-of-interest (DOI) function to guide the level of detail used in presenting information. Furthermore, we determine the user's information and navigation needs by analysing the interaction history. This leads to the definition of the aspect-of-interest (AOI) function. The AOI is evaluated in order to choose one of the several information aspects under which an item can be studied. This allows us to change navigational affordance and thereby enhance navigation.
   In this paper we describe the ideas behind the pluggable zoom and the definition of DOI and AOI functions. The application of these functions is demonstrated within two case studies, the ZOOM ILLUSTRATOR and the ZOOM NAVIGATOR. We discuss our experience with these implemented systems.
Keywords: detail + context technique, fisheye display, human-computer interfaces, information navigation, screen layout, zoom navigation, zooming interfaces
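As an illustrative aside, a degree-of-interest computation in the classic fisheye spirit that the abstract builds on can be sketched as follows; the thresholds and names are assumptions, not the authors' code:

    # Minimal sketch of a degree-of-interest (DOI) computation in the fisheye
    # spirit: interest = a-priori importance minus distance from the focus.

    def doi(api: float, distance: int, weight: float = 1.0) -> float:
        """Classic fisheye DOI: a-priori interest reduced by distance to the focus."""
        return api - weight * distance

    def detail_level(score: float) -> str:
        """Map a DOI score onto a presentation level (full / label / hidden)."""
        if score >= 2:
            return "full detail"
        if score >= 0:
            return "label only"
        return "hidden"

    # items: (name, a-priori importance, graph distance from the current focus)
    items = [("focus node", 3.0, 0), ("neighbour", 2.0, 1), ("far node", 1.0, 4)]
    for name, api, dist in items:
        print(name, "->", detail_level(doi(api, dist)))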
VINETA: navigation through virtual information spaces BIBAFull-Text 49-58
  Uwe Krohn
Vineta is a system prototype allowing navigation through bibliographic data without the typing and revising of keyword-based queries. Our approach to visualizing documents and terms in navigational retrieval includes the representation of documents and terms as graphical objects, and dynamic positioning of these objects in the 3D virtual navigation space. Users can navigate through this virtual navigation space examining individual documents and clusters of documents at various levels of detail. Users can utilize their natural sense of space to interact with the system.
Modal navigation for hypermedia applications BIBAFull-Text 59-66
  Franca Garzotto; Luca Mainetti; Paolo Paolini
Hypermedia applications combine the flexibility of navigation-based access to information, typical of hypertext, with the communication power of multiple media, typical of multimedia systems. By their very nature, hypermedia applications support multimode interaction, i.e., interaction based on a combination of multiple modalities that are induced by different media and different navigation paradigms. The potentially huge number of mode combinations in hypermedia can accommodate a large variety of user needs and tasks. Multimode interaction, however, is intrinsically complex for users if several multimode paradigms coexist within the same application. This paper discusses the concept of modal navigation as a technique that makes it possible to achieve both simplicity in user interaction and flexibility in tuning navigation styles to the specific needs of different categories of users. Under modal navigation, the semantics of navigation commands depends upon the current setting of modes. Various paradigms for modal navigation are discussed that take into account different degrees of user control over the definition of mode configuration and mode resetting. The approach is exemplified by discussing a real-life hypermedia application under development at HOC in cooperation with the Poldi Pezzoli Museum in Milano.
Table lens as a tool for making sense of data BIBAKFull-Text 67-80
  Peter Pirolli; Ramana Rao
The Table Lens is a visualization for searching for patterns and outliers in multivariate datasets. It supports a lightweight form of exploratory data analysis (EDA) by integrating a familiar organization, the table, with graphical representations and a small set of direct manipulation operators. We examine the EDA process as a special case of a generic process, which we call sensemaking. Using a GOMS methodology, we characterize a few central EDA tasks and compare the performance of the Table Lens with that of one of the best traditional graphical tools for EDA, Splus. This analysis reveals that the Table Lens is roughly on par with Splus in power, while requiring fewer specialized graphical representations. It essentially combines the graphical power of Splus with the direct manipulation and generic properties of spreadsheets and relational database front ends. We also propose a number of design refinements suggested by our task characterizations and analyses.
Keywords: GOMS, database visualization, evaluation, exploratory data analysis, information visualization, multivariate visualization
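The focus+context idea behind a table lens can be sketched as follows (an illustration under assumed numbers, not the authors' implementation): every record keeps a thin row, rows in the focus set get readable height, and numeric cells are drawn as bars.

    # Illustrative focus+context allocation in the style of a table lens.

    def row_heights(n_rows, focus, total_px=400, focus_px=18, min_px=1):
        """Give focused rows a readable height; the rest share what is left."""
        rest = max(total_px - focus_px * len(focus), 0)
        thin = max(rest // max(n_rows - len(focus), 1), min_px)
        return [focus_px if i in focus else thin for i in range(n_rows)]

    def bar(value, vmax, width=20):
        """Encode a numeric cell as a proportional bar (text stand-in for pixels)."""
        return "#" * int(round(width * value / vmax))

    heights = row_heights(n_rows=300, focus={42, 43})   # 300 records, 2 in focus
    print("focused row height:", heights[42], "px; context row height:", heights[0], "px")
    print("cell value 75 of 100 ->", bar(75, 100))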

Interfaces to databases

A framework for user-interfaces to databases BIBAKFull-Text 81-90
  Kenneth J. Mitchell; Jessie B. Kennedy; Peter J. Barclay
A framework for user interfaces to database systems (IDSs) is proposed which draws on existing research in human-computer interaction (HCI) and database systems. The framework is described in terms of a classification of the characteristic components of an IDS. These components, when progressively refined, may be mapped to a conceptual object-oriented language for the precise specification of the IDS. A prototype system is presented, showing the potential for automated mapping of a language specification to a fully functional implementation. As well as providing general support to any database interface developer, we believe that the framework will prove useful for researching a number of IDS issues.
Keywords: conceptual modelling, direct manipulation interfaces, human-computer interaction (HCI), user-interfaces to databases
A semantics-based approach to designing presentations for multimedia database query results BIBAFull-Text 91-100
  N. Aloia; M. Matera; F. Paternò
The problem of presenting database query results has not been sufficiently investigated. The purpose of this work is to propose an approach that identifies semantically correct presentations by composing Elementary Presentation Types corresponding to data attributes. Some effectiveness criteria are considered in order to identify those presentations which best match the user's goals and cognitive abilities.
Visualization of large answers in text databases BIBAKFull-Text 101-107
  Ricardo Baeza-Yates
Current user interfaces of full-text retrieval systems do not help users filter the result of a query, which is usually very large. We address this problem and propose a visual interface for handling query results, based on a hybrid model for text. This graphical user interface provides several visual representations of the answer and its elements (queries, documents, and text), easing the analysis and filtering process.
Keywords: set visualization, visual analysis, visual browsing, visual query languages, visual representations, visual text database, visual tools

Interacting with the WWW

Flexible, dynamic user interfaces for Web-delivered training BIBAKFull-Text 108-118
  Srdjan Kovacevic
One of the critical parts of a tutoring system is its user interface (UI), which must neither constrain an author in developing lessons nor impede a student during practice. A system providing training over the Web must also address issues of interface transport, providing feedback, and managing local context. We have developed a system, MUSE, that applies a model-based technology to address the above requirements. It supports a wide range of interface styles. The resulting UIs can be customized and capture enough application semantics to provide local feedback and manage the context required for evaluating a student's work and providing coaching.
Keywords: UI components, UI design tools, UI models, UI representation, Web interfaces, Web-delivered training, application semantics, intelligent tutoring system, model-based design
Looking for convenient alternatives to forms for querying remote databases on the Web: a new iconic interface for progressive queries BIBAFull-Text 119-124
  Fabrizio Capobianco; Mauro Mosconi; Lorenzo Pagnin
The enormous popularity of the World Wide Web has made putting public access databases on the Web practically mandatory. Forms embedded within the Web clients (e.g. Netscape) are therefore emerging as the most common interfaces in database querying. Should this solution be considered completely satisfactory?
   We highlight some of the important limits we experienced with forms and propose a convenient alternative solution based on direct manipulation of icons. The system we have developed is easy to use and provides comfortable mechanisms for browsing, manipulating and reusing query results as well as previous queries, thus making effective non-monotonic, progressive query processes feasible.

Interface tools

Interacting with a visual editor BIBAKFull-Text 125-131
  Roberta Mancini
In this paper, we investigate the problem of querying a database of images. In order to improve the communication between human and computer, we propose a visual editor as an interaction tool. Indeed, the simplest way to formulate a query to a database of images is to allow the user to draw a sketch of the picture he is interested in. This sketch is then used to formulate a query within the visual query system. This editor, called VisEd, has been developed following a formal model (the PIE model), where properties such as completeness, reachability, and particularly undo, hold.
Keywords: formal model, reachability, undo, visual editor, visual query system
Distributed architectures for pen-based input and diagram recognition BIBAKFull-Text 132-140
  Wayne Citrin; Mark D. Gross
We present a system supporting pen-based input and diagram recognition that employs a personal digital assistant (PDA) as an intelligent input device for the system. Functionality is distributed between the PDA and the main computer, with the PDA performing low-level shape recognition and editing functions, and the back-end computer performing high-level recognition functions, including recognition of spatial relations between picture elements. This organization provides a number of advantages over conventional pen-based systems employing simple digitizing tablets. It provides the opportunity to use hardware specially designed for shape recognition and editing in a general diagram recognition system, it allows for improved performance through parallel processing, and it allows diagram entry to be performed remotely through use of the PDA front end in the field, with recognized shapes subsequently downloaded to the main diagram recognizer. We discuss the overall organization of the system, as well as the individual pieces and the communication between them, and describe two ongoing projects employing this architecture.
Keywords: diagram recognition, graphical editors, pen-based interfaces
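A minimal sketch of the described division of labour, with hypothetical classes and rules: the front end classifies strokes into shapes, while the back end infers spatial relations between the recognized shapes.

    # Sketch of the split the abstract describes: a front end turns raw strokes
    # into shapes, a back end infers spatial relations between them.
    from dataclasses import dataclass

    @dataclass
    class Shape:
        kind: str           # e.g. "box" or "circle", as classified on the PDA
        x: float
        y: float
        w: float
        h: float

    def recognize_stroke(points):
        """Front-end step (PDA): classify a stroke from its bounding box."""
        xs, ys = [p[0] for p in points], [p[1] for p in points]
        w, h = max(xs) - min(xs), max(ys) - min(ys)
        kind = "circle" if abs(w - h) < 0.2 * max(w, h, 1) else "box"
        return Shape(kind, min(xs), min(ys), w, h)

    def contains(a: Shape, b: Shape) -> bool:
        """Back-end step: a high-level spatial relation between two shapes."""
        return (a.x <= b.x and a.y <= b.y and
                a.x + a.w >= b.x + b.w and a.y + a.h >= b.y + b.h)

    outer = recognize_stroke([(0, 0), (20, 0), (20, 10), (0, 10)])
    inner = recognize_stroke([(3, 3), (9, 3), (9, 7), (3, 7)])
    print(outer.kind, "contains", inner.kind, "=", contains(outer, inner))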
Dynamic interpretations in translucent patches: representation-based applications BIBAKFull-Text 141-147
  Axel Kramer
Our goal is to empower individuals involved in design activities using the written medium, by amending it carefully with computational facilities. To preserve the fluidity and swiftness of design activities, we let users dynamically associate marks on the display surface with interpretations that provide interesting operations to the user.
   Inherent to typical computer applications is a very static relationship between internal data structures and presentation. In contrast, applications in our system (we call them interpretations) have to be able to deal with a much more dynamic relationship between these areas.
   This paper motivates this idea, presents challenges faced by such an approach, explains a framework for designing and implementing such interpretations, and illustrates how exemplary interpretations make use of this framework.
Keywords: application design, gestural interfaces, interaction techniques, interpretations, pen based interfaces, translucent patches

Applications

Expanding the utility of spreadsheets through the integration of visual programming and user interface objects BIBAFull-Text 148-155
  Trevor J. Smedley; Philip T. Cox; Shannon L. Byrne
One of the primary uses of spreadsheets is in forecasting future events. This involves investigating "what-if" scenarios -- creating a spreadsheet, experimenting with different values for inputs, and observing how they affect the computed values. Unfortunately, current spreadsheets provide little support for this type of interaction. Data values must be typed in, and computed values can be observed only as numbers or on simple charts. In this work we extend a spreadsheet that uses a visual language for expressing formulae to also incorporate user interface objects. This allows users to create any type of input and output interface they wish, increasing the utility of spreadsheets for investigating "what-if" scenarios.
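The "what-if" loop the abstract describes can be illustrated with a small sketch (assumed names, not the paper's system) in which an input control is bound to a cell and every change re-evaluates the dependent formula cells.

    # Minimal sketch: binding an input control to a spreadsheet cell so that
    # changing the control re-evaluates every formula that depends on it.

    class Sheet:
        def __init__(self):
            self.values = {}      # cell -> constant value
            self.formulas = {}    # cell -> function of the sheet

        def set_value(self, cell, value):
            self.values[cell] = value

        def get(self, cell):
            if cell in self.formulas:
                return self.formulas[cell](self)
            return self.values[cell]

    class Slider:
        """Stand-in for a user-interface object wired to one input cell."""
        def __init__(self, sheet, cell, lo, hi):
            self.sheet, self.cell, self.lo, self.hi = sheet, cell, lo, hi

        def drag_to(self, fraction):
            self.sheet.set_value(self.cell, self.lo + fraction * (self.hi - self.lo))

    sheet = Sheet()
    sheet.set_value("price", 100.0)
    sheet.formulas["revenue"] = lambda s: s.get("price") * s.get("units")
    slider = Slider(sheet, "units", lo=0, hi=1000)
    for f in (0.25, 0.5, 0.9):                  # experiment with input values
        slider.drag_to(f)
        print(f"units={sheet.get('units'):6.1f}  revenue={sheet.get('revenue'):9.1f}")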
A visual interface for synchronous collaboration and negotiated transactions BIBAFull-Text 156-165
  Lutz Wegner; Manfred Paul; Jens Thamm; Sven Thelemann
This paper introduces a visual interface for computer-supported cooperative work (CSCW). The interface is an extension of the editor interface of ESCHER, a prototype database system based on the extended non-first-normal-form data model. In ESCHER, the nested table approach is the paradigm for presenting data, where presenting includes browsing, editing and querying the database. Interaction is achieved by fingers generalising the well-known cursor concept. When several users are involved, the concept permits synchronous collaboration with the nested table acting as "whiteboard". We discuss its use in applications which require negotiated transactions, i.e. where the isolation principle of ACID-transactions gives way to negotiations. We also give examples of how interactive query formulation in a QBE-like fashion can support the collaboration. The arguments in the paper are mainly supported with screenshots taken from two applications, one of them also with non-textual data types which are seamlessly integrated into the nested tabular display paradigm.
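To illustrate the finger concept (with an assumed data model, not ESCHER's), a finger can be thought of as a path into a nested table, generalizing the flat cursor; several users' fingers can point into the same shared table at once.

    # Sketch of the "finger" idea over a nested (NF2-style) table.

    nested_table = {
        "customer": [
            {"name": "Rossi", "orders": [{"item": "book"}, {"item": "pen"}]},
            {"name": "Bianchi", "orders": [{"item": "lamp"}]},
        ]
    }

    def resolve(table, finger):
        """Follow a finger (a list of keys/indices) down into the nested table."""
        node = table
        for step in finger:
            node = node[step]
        return node

    fingers = {                            # one finger per collaborating user
        "alice": ["customer", 0, "orders", 1, "item"],
        "bob":   ["customer", 1, "name"],
    }
    for user, finger in fingers.items():
        print(f"{user} points at:", resolve(nested_table, finger))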
Exploring virtual ecosystems BIBAKFull-Text 166-174
  Antão Vaz Almada; António Eduardo Dias; João Pedro Silva; Emanuel Marques dos Santos; Pedro José Pedrosa; António Sousa Câmara
Browsing In Time & Space (BITS) is an interface designed to explore virtual ecosystems. A virtual ecosystem includes a three-dimensional terrain model background, collections of man-made and natural objects, and behavior and interaction rules between the objects and the background. BITS is based on a virtual notepad and pen metaphor and is inspired by the concept of logging. Physical props are used to represent the notepad and the pen. The notepad includes a Time & Space Slider to facilitate time and space traveling, a set of buttons and a list of commands to control the interaction and enable the manipulation of objects, and a Notes Area. The handwritten notes can be referenced in time and space with the use of logging marks. BITS is being implemented on a PC-based architecture using sensors to track the pen's movement and the notepad's position. BITS' major problem is the poor representation of the notes written in the notepad using the sensor-based tracking system.
Keywords: browsing in time & space, logging, metaphors, pen-based input, props, user interface components, virtual ecosystems, virtual reality

Empowering the interface

Cocktailmaps: a space-filling visualization method for complex communicating systems BIBAKFull-Text 175-183
  Christopher Ahlberg
Cocktailmaps is a method for visualizing communicative behavior in complex communication systems such as human conversation, cocktail parties, parallel computers, and telecommunication networks. Cocktailmaps are space-filling in that they effectively utilize the available screen real estate to communicate properties such as which communicators dominate a communication over time, what topics are communicated, and how agents move between subcommunications. Cocktailmaps have been implemented using the Information Visualization and Exploration Environment (IVEE), which provides users of cocktailmaps with interactive techniques such as zooming, panning, filtering, and details-on-demand.
Keywords: cocktailmap, dynamic queries, information visualization, spoken communication
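The space-filling allocation can be illustrated with a small sketch (the data and proportions are made up): in each time slice, the available width is divided among communicators in proportion to how much each one talked.

    # Illustrative sketch of the space-filling idea for a conversation map.

    def slice_widths(talk_time, total_width=60):
        """Allocate columns of a time slice proportionally to speaking time."""
        total = sum(talk_time.values()) or 1
        return {who: round(total_width * t / total) for who, t in talk_time.items()}

    conversation = [                       # seconds spoken per 10-second slice
        {"Ann": 8, "Bob": 2, "Eve": 0},
        {"Ann": 3, "Bob": 4, "Eve": 3},
        {"Ann": 0, "Bob": 1, "Eve": 9},
    ]
    for t, talk in enumerate(conversation):
        widths = slice_widths(talk)
        row = "".join(who[0] * w for who, w in widths.items())
        print(f"t={t}: {row}")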
User-oriented visual layout at multiple granularities BIBAFull-Text 184-193
  Yannis Ioannidis; Miron Livny; Jian Bao; Eben M. Haber
Among existing tools for laying out large collections of visual objects, some perform automatic layouts, possibly following some rules prespecified by the user, e.g., graph layout tools, while others let users specify layouts manually, e.g., CAD design tools. Most of them can only deal with specific types of visualizations, e.g., graphs, and some of them allow users to view visual objects at various levels of detail, e.g., tree-structure visualization tools. In this paper, we develop techniques that strike a balance between user specification and automatic generation of layouts, work at multiple granularities, and are generally applicable. In particular, we introduce a general framework and layout algorithm that (a) deals with arbitrary types of visual objects, (b) allows objects to be viewed in any one of several different visual representations (at different levels of detail), and (c) uses a small number of user-specified layouts to guide heuristic decisions for automatically deriving many other layouts in a manner that attempts to be consistent with the user's preferences. The algorithm has been implemented within the OPOSSUM database schema manager and has been rather effective in capturing the intuition of scientists from several disciplines who have used it to design their database and experiment schemas.
A seamless integration of algorithm animation into a visual programming language BIBAFull-Text 194-202
  Paul Carlson; Margaret Burnett; Jonathan Cadiz
Until now, only users of textual programming languages have enjoyed the fruits of algorithm animation. Users of visual programming languages (VPLs) have been deprived of the unique semantic insights algorithm animation offers, insights that would foster the understanding and debugging of visual programs. To begin addressing this shortcoming, we have seamlessly integrated algorithm animation capabilities into Forms/3, a declarative VPL in which evaluation is the continuous maintenance of a network of one-way constraints. Our results show that a VPL that uses this constraint-based evaluation model can provide features not found in other algorithm animation systems.
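A minimal sketch of evaluation by one-way constraints, the model the abstract names (the structure below is an assumption, not the Forms/3 implementation): each cell is a function of other cells, a change to an input re-fires its dependents, and each re-fire is a natural hook for drawing an animation step.

    # One-way constraint (dataflow) cells; every refresh is an animation hook.

    class Cell:
        def __init__(self, name, formula=None, sources=()):
            self.name, self.formula, self.sources = name, formula, list(sources)
            self.value, self.dependents = 0, []    # inputs default to 0 for simplicity
            for s in self.sources:
                s.dependents.append(self)

        def set(self, value):
            self.value = value
            for d in self.dependents:
                d.refresh()

        def refresh(self):
            self.value = self.formula(*[s.value for s in self.sources])
            print(f"animate: {self.name} -> {self.value}")   # animation hook
            for d in self.dependents:
                d.refresh()

    a, b = Cell("a"), Cell("b")
    total  = Cell("sum",    formula=lambda x, y: x + y, sources=(a, b))
    double = Cell("double", formula=lambda s: 2 * s,    sources=(total,))
    a.set(3)        # fires sum and double
    b.set(4)        # fires them again with the new input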
Algorithm animation over the World Wide Web BIBAFull-Text 203-212
  James E. Baker; Isabel F. Cruz; Giuseppe Liotta; Roberto Tamassia
In this paper we propose a new model, called Mocha, for providing algorithm animation over the World Wide Web. Mocha is a distributed model with a client-server architecture that optimally partitions the software components of a typical algorithm animation system, and leverages the power of the Java language, an emerging standard for distributing interactive platform-independent applications across the Web.
   Mocha provides high levels of security, protects the algorithm code, places a light communication load on the Internet, and allows users with limited computing resources to access animations of computationally expensive algorithms. The user interface combines fast responsiveness and user friendliness with the powerful authoring capabilities of hypertext narratives.
   We describe the architecture of Mocha and show its advantages over previous methods for algorithm animation over the Internet. We also present a prototype of an animation system for geometric algorithms that can be accessed by any user with a WWW browser supporting Java (currently Netscape 2.0 and HotJava) at URL http://www.cs.brown.edu/people/jib/Mocha.html.
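The client-server partition typical of such systems can be sketched as follows (the message format and names are assumptions, not Mocha's protocol): the server runs the algorithm and streams events, and a thin client only renders them, so the algorithm code never leaves the server.

    # Sketch of an algorithm-animation split: server emits events, client draws.
    import json

    def server_bubble_sort(data):
        """Server side: run the algorithm, yielding one event per visual step."""
        a = list(data)
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                yield json.dumps({"op": "compare", "i": j, "j": j + 1})
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    yield json.dumps({"op": "swap", "i": j, "j": j + 1})
        yield json.dumps({"op": "done", "result": a})

    def client_render(message):
        """Client side: turn each event into a (stand-in) drawing action."""
        event = json.loads(message)
        print("draw:", event)

    for msg in server_bubble_sort([3, 1, 2]):
        client_render(msg)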

Pictorial interfaces

Image query by semantical color content BIBAFull-Text 213-222
  J. M. Corridoni; A. Del Bimbo; S. De Magistris
The availability of large image databases is emphasizing the relevance of filters, which make it possible to focus on a small subset of the data. Taking advantage of the pictorial features of images, visual specification of such filters provides a powerful and natural way to express content-oriented queries. Albeit direct, the query-by-example paradigm does not allow high-level assertions to be expressed about the pictorial content of images and, specifically, paintings. To retain visual interaction without losing expressive power, an original visual language is proposed here for the symbolic representation of the semantics induced by the colour quality and arrangement of a painting. The proposed language is based on the concepts of colour semantics introduced by artists in the twentieth century and is developed to support a visual query paradigm. The paper formalizes the grammar of the language and describes its implementation in a prototype system for painting retrieval by colour content.
Assisted browsing in a diagnostic image database BIBAFull-Text 223-232
  A. F. Abate; M. Nappi; G. Tortora; M. Tucci
The paper describes a significant part of an experimental system for producing digital medical images, processing them to extract suitable spatial indexes, and storing and retrieving such images by content, in order to provide users with an assisted visual browser for navigating a distributed archive. A prerequisite for the system described in this paper is that a physician should be able to manipulate the diagnostic images by simple visual commands that allow content-based access. In particular, the physician has to identify abnormalities (hot spots) in each image by determining their spatial locations, opacities, shapes and geometrical measures.
   Since our system needs the capability of retrieving images based on the presence of given patterns, it is necessary to define a similarity matching between the query and an image to be retrieved. To perform such matching efficiently, each image is stored together with a collection of metadata that form a very compact representation of the spatial contents of the image. These metadata form the index of the image.
   We illustrate an experimental image browser for medical imaging diagnosis that implements the query-by-pictorial-example philosophy for its user interface.
A pictorial query language for geographical databases BIBAKFull-Text 233-244
  Fabrizio Di Loreto; Fernando Ferri; Fernanda Massari; Maurizio Rafanelli
In this paper a Pictorial Query Language (PQL) for Geographic Information Systems (GIS) is proposed. The user queries the GIS by drawing symbolic objects, combining them, and selecting the derived result among those proposed by the PQL. The interface is part of the Scenario GIS, developed using an object-oriented environment. This PQL makes the formulation of complex queries easier and simplifies the user's approach to the system while maintaining strong expressive power. A brief overview of the data structure types, the operators and the relations among geographic entities is given. The Visual Algebra and its operators are defined, and the pictorial operations associated with this algebra are described. Finally, an example of a query and its visual composition on the screen is shown.
Keywords: geographical database, pictorial query language, user friendly interface
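One relation of such a visual algebra can be illustrated with a sketch (the relation set and the rectangle approximation are assumptions made for the example): two symbolic objects drawn by the user are tested for disjointness, containment, or overlap, which is the kind of condition a pictorial query expresses.

    # Sketch of evaluating a spatial relation between two symbolic objects.
    from dataclasses import dataclass

    @dataclass
    class SymbolicObject:
        name: str
        x1: float       # bounding rectangle of the drawn object
        y1: float
        x2: float
        y2: float

    def relation(a: SymbolicObject, b: SymbolicObject) -> str:
        """Classify the relation between two objects' bounding rectangles."""
        if a.x2 < b.x1 or b.x2 < a.x1 or a.y2 < b.y1 or b.y2 < a.y1:
            return "disjoint"
        if a.x1 <= b.x1 and a.y1 <= b.y1 and a.x2 >= b.x2 and a.y2 >= b.y2:
            return "contains"
        return "overlaps"

    region = SymbolicObject("park", 0, 0, 10, 10)
    road = SymbolicObject("road", 8, -2, 20, 2)
    print(relation(region, road))    # query condition: does the road cross the park?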

Descriptions of prototypes

The PPP persona: a multipurpose animated presentation agent BIBAFull-Text 245-247
  Elisabeth André; Jochen Müller; Thomas Rist
Animated agents -- whether based on real video, cartoon-style drawings, or even model-based 3D graphics -- are likely to become integral parts of future user interfaces. We present the PPP Persona, a tool which can be used for showing, explaining, and verbally commenting on textual and graphical output in a window-based interface. The realization of the module follows the client/server paradigm, i.e., client applications can send requests for executing presentation tasks to the server. However, to achieve lively and appealing behaviour of the animated agent, the server autonomously performs some actions, e.g., to bridge pauses or to react immediately to user interactions.
The Mocha algorithm animation system BIBAFull-Text 248-250
  James E. Baker; Isabel F. Cruz; Giuseppe Liotta; Roberto Tamassia
We describe the implementation of a new system, called Mocha, for providing algorithm animation over the World Wide Web. Mocha is a distributed system with a client-server architecture that optimally partitions the software components of a typical algorithm animation system, and leverages the power of the Java language, an emerging standard for distributing interactive platform-independent applications across the Web.
Model-oriented visual interface design BIBAFull-Text 251-253
  Licia Calvi
The paper presents a collaborative effort towards the implementation of a hypermedia application provided with a model-oriented visual interface.
Automatic construction of user interfaces for pen-based computers BIBFull-Text 254-256
  Sitt Sen Chok; Kim Marriott
VICKI: the VIsualisation Construction KIt BIBAFull-Text 257-259
  Huw Dawkes; Lisa A. Tweedie; Bob Spence
The human acquisition of insight into multivariate data can be greatly enhanced if users can view and interact with that data graphically. Many Interactive Visualisation Artifacts (IVAs) have been developed for such activities, but they tend to focus on a single task. The flexibility of the VICKI (Visualisation Construction Kit) environment allows users to create IVAs with a level of functionality and appearance suitable for their specific needs. This paper introduces the concepts behind VICKI and discusses issues of future development.
Hyperlog: a system for database querying and browsing BIBFull-Text 260-262
  Stefan G. Hild; Alexandra Poulovassilis
Virgilio: a VR-based system for database visualization BIBAFull-Text 263-265
  Antonio Massari; Lorenzo Saladini
In this paper we introduce Virgilio, a system which generates VR-based visualizations of complex data objects representing the result of a query. Virgilio takes as input the dataset resulting from a query on a generic database and creates a corresponding visual representation composed of a collection of VRML (VR Modeling Language) scenes. The system uses a repository of real world objects (e.g., rooms, tables, portrait cases) which includes their visual aspect, the types of data they can support as well as a containment relationship among pairs of objects. Virgilio works in the following way: (i) attribute values of the dataset are displayed on virtual world objects according to the capability of these objects to represent the proper type of data, (ii) semantic relationships among the objects in the dataset are represented using the containment relationship.
   The main features of Virgilio are that it is parametric with respect to the explored database, automatically produces a user-oriented view of the dataset, and describes visualized data by means of the VRML language.
   A system prototype is currently being implemented. As an example, we provide a set of snapshots showing the scenes built by Virgilio to represent the result of queries defined on a database of musical CD records.
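The two mapping rules can be illustrated with a small sketch (object names, type capabilities and the sample data are assumptions): attribute values are placed on virtual objects able to carry that type of data, and nesting in the query result becomes containment between objects in the scene.

    # Sketch of the mapping rules the abstract outlines.

    CAN_CARRY = {                     # which virtual object can show which type
        "room":   {"group"},
        "poster": {"text"},
        "shelf":  {"list"},
    }

    def choose_object(value_type):
        """Pick a virtual-world object capable of representing a value type."""
        for obj, types in CAN_CARRY.items():
            if value_type in types:
                return obj
        raise ValueError(f"no object can carry {value_type}")

    def build_scene(record, depth=0):
        """Render one query-result record as nested (contained) objects."""
        pad = "  " * depth
        for name, (value_type, value) in record.items():
            obj = choose_object(value_type)
            if value_type == "group":
                print(f"{pad}{obj} '{name}' contains:")
                build_scene(value, depth + 1)
            else:
                print(f"{pad}{obj}: {name} = {value}")

    query_result = {
        "artist": ("group", {
            "name":   ("text", "Miles Davis"),
            "albums": ("list", ["Kind of Blue", "In a Silent Way"]),
        }),
    }
    build_scene(query_result)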
Interactive and visual environment supporting conceptual modeling of complex OODB applications BIBAFull-Text 266-268
  M. Missikoff; R. Pizzicannella
In this paper we present the graphical user interface of Mosaico, an environment for the analysis and conceptual modeling of object-oriented database applications. Mosaico is based on a formalism, the object-oriented conceptual language TQL++, that appears friendlier than others. Nevertheless, to relieve the designer from having to know the details of TQL++, we developed an iconic interface that guides the construction of a database application specification. The output of the conceptual modeling phase is a knowledge base, which can be verified statically and, once transformed into executable code, can be tested with sample data. Furthermore, Mosaico is capable of presenting the content of a conceptual model in diagrammatic form. This facility has been implemented within an abstract diagram approach, which guarantees a high level of independence with respect to the drawing tool.
Simulation of face-to-face interaction BIBAFull-Text 269-271
  Catherine Pelachaud
We present an implemented system that automatically generates verbal and nonverbal behaviors during a conversation between 3D synthetic agents. Dialogue with its appropriate intonation, as well as the accompanying facial expressions, gaze and gestures, is computed. Our system integrates rules linking words and intonation, facial expression and intonation, gesture and words, gesture and intonation, and gaze and intonation, extracted from cognitive science studies. In the present paper we concentrate on gaze patterns during speech.
HyperPro: an intelligent hypermedia system for learning logic programming BIBAFull-Text 272-274
  Teresa Roselli; Antonietta Di Donfrancesco; Stefania Loverro
Hypermedia technology has aroused considerable interest in didactic environments, owing to its versatility in realizing didactic software where the attention is shifted away from the teaching towards the learning process. Interacting with a didactic hypermedia, the learner can construct an entirely personal instructional path, in a wide variety of formats, tailored to his own aims and a priori knowledge.
   However, while hypertext/hypermedia makes it possible to construct learning paths through exploration, which stimulates more talented students, it penalizes those less able to manage their learning paths alone. It is thus not the ideal environment for teaching knowledge such as programming languages.
   This consideration led us to realize HyperPro, an intelligent hypermedia system for learning logic programming and Prolog, which contains a tutorial component built with A.I. techniques that can follow the student along his instructional path and suggest the next step if necessary. Since the system is integrated with the Prolog environment, it enables real training activities to be performed, monitored by the tutorial component. A fundamental characteristic of HyperPro is its graphical user interface, which is easy to use, so that the learner can concentrate on the instructional aims without wasting time trying to understand the implementation environment.
   It is realized in the Toolbook-Openscript environment and integrated with the Prolog-2 interpreter by Expert Systems International.