
Proceedings of the 1997 International Conference on Intelligent User Interfaces

Fullname: International Conference on Intelligent User Interfaces
Editors: Johanna Moore; Ernest Edmonds; Angel Puerta
Location: Orlando, Florida
Dates: 1997-Jan-06 to 1997-Jan-09
Publisher: ACM
Standard No: ACM ISBN 0-89791-839-8; ACM Order Number 608970; ACM DL: Table of Contents hcibib: IUI97
Papers: 41
Pages: 271
  1. Plenary Address
  2. Planning Based Approaches
  3. Debate: Direct Manipulation vs. Interface Agents
  4. Interface Agents
  5. Presentation Aids/Coordination
  6. I/O Support/Spatial Awareness
  7. Panel
  8. Automation of Presentations
  9. Applications
  10. Panel
  11. Web / Hypermedia
  12. Plenary Address
  13. Short Papers

Plenary Address

Technology Transfer from University to Industry BIBAPDF 3-4
  James Foley
KEY QUESTIONS
  • 1. How should we define successful technology transfer? By the extent to which
        the research affects the success of a project in the marketplace. This is
        a very pragmatic answer based on the belief that the most important measure
        of success is impact.
  • 2. What should researchers, developers, and management do to make success more
        likely? Start with developing people relationships. Then:
        * Facilitate bottom-up initiation of projects;
        * Develop the prototype using the sponsor's hardware and software;
        * Help industry R&D staff understand the values, motivations, and structure
          of university R&D;
        * Help university researchers understand the tech transfer process and the
          entire product development cycle;
        * Provide the rewards and financial resources to encourage industry managers
          and staff to take the extra risk of establishing a research collaboration.
  • 3. What are some of the danger signals that suggest a research collaboration is
        in trouble?
        * Industry sponsor is too busy to come see you;
        * Professor is too busy to visit the company;
        * Undefined tech transfer process;
        * Funding for "feel good" reasons;
        * A high-level manager directing that external funding be directed to a
          particular project, person, or university.

    Planning Based Approaches

    Local Plan Recognition in Direct Manipulation Interfaces BIBAKPDF 7-14
      Annika Wærn
    Plan recognition in direct manipulation interfaces must deal with the problem that the information obtained is of low quality with respect to the plan recognition task. There are two main reasons for this: the individual interactions from the user are on a low level as compared to the user's task and users may frequently change their intentions. We present two example applications where this is the case.
       The fact that users change their intentions could be used to motivate an explicit representation of user intentions. However, the low quality of available information makes such an approach unfeasible in direct manipulation interfaces. This paper addresses the same problem by maintaining a plan-parsing approach to plan recognition, but making it local to the user's most recent actions by imposing a limited attention span. Two different approaches to implementation are given, in the context of the two presented applications.
    Keywords: Plan recognition, Intelligent interfaces, Task adaptation
    Interaction with a Mixed-Initiative System for Exploratory Data Analysis BIBAKPDF 15-22
      Robert St. Amant; Paul R. Cohen
    Exploratory data analysis (EDA) plays an increasingly important role in statistical analysis. EDA is difficult, however, even with the help of modern statistical software. We have developed an assistant for data exploration, based on AI planning techniques, that addresses some of the strategic shortcomings of conventional software. This paper illustrates the behavior of the system, gives a high level description of its design, and discusses its experimental evaluation.
    Keywords: Artificial intelligence, Planning, Data exploration
    Segmented Interaction History in a Collaborative Interface Agent BIBAKPDF 23-30
      Charles Rich; Candace L. Sidner
    We have developed an application-independent toolkit, called Collagen, based on the SharedPlan theory of collaborative discourse, in which interaction histories are hierarchically structured according to a user's goals and intentions. We have used Collagen to implement an example collaborative interface agent with discourse processing, but not natural language understanding. In this paper, we concentrate on how a segmented interaction history supports user orientation, intelligent assistance, and transformations, such as returning to earlier points in the problem solving process and replaying segments in a new context.
    Keywords: Interaction history, Discourse, Segment, Collaboration, Interface agent, Undo, Replay

    Debate: Direct Manipulation vs. Interface Agents

    Direct Manipulation for Comprehensible, Predictable and Controllable User Interfaces BIBAKPDF 33-39
      Ben Shneiderman
    Direct manipulation user interfaces have proven their worth over two decades, but they are still in their youth. Dramatic opportunities exist to develop direct manipulation programming to create end-user programming tools, dynamic queries to perform information search in large databases, and information visualization to support network database browsing. Direct manipulation depends on visual representation of the objects and actions of interest, physical actions or pointing instead of complex syntax, and rapid incremental reversible operations whose effect on the object of interest is immediately visible. This strategy can lead to user interfaces that are comprehensible, predictable and controllable. Direct manipulation interfaces are seen as more likely candidates to influence advanced user interfaces than adaptive, autonomous, intelligent agents. User control and responsibility are highly desirable.
       Note: This paper is adapted, with permission of the publisher, from the forthcoming book: Designing the User Interface: Strategies for Effective Human-Computer Interaction (3rd Edition), Addison Wesley, Reading, MA (1997).
    Keywords: User interface, Direct manipulation, Agents
    Intelligent Software BIBPDF 41-43
      Pattie Maes

    Interface Agents

    The Selection Recognition Agent: Instant Access to Relevant Information and Operations BIBAKPDF 47-52
      Milind S. Pandit; Sameer Kalbag
    We present the Selection Recognition Agent (SRA), a personal computer application which recognizes meaningful words and phrases in text, and enables useful operations on them. The SRA includes six recognition modules for geographic names, dates, email addresses, phone numbers, Usenet newsgroup name components, and URLs, as well as a module that enables useful operations on text in general. The SRA runs on Microsoft Windows 95 and Windows NT and is currently available free from Intel's home page (http://www.intel.com).
    Keywords: Selection, Recognition, Agent, Object-oriented interface, Geographic name, Date, Email address, Phone number, Usenet news, URL, Web
    Using Agents to Personalize the Web BIBAKPDF 53-60
      Christoph G. Thomas; Gerhard Fischer
    Users build personal information spaces (stored as bookmarks, hotlists, or as a personal page of links) as their subset of, and interface to, the World-Wide Web. As the WWW is a living "creature" that evolves and grows constantly, users have to work to keep their personal information spaces manageable and up-to-date.
       Our prototype system BASAR (Building Agents Supporting Adaptive Retrieval) provides users with assistance when managing their personal information spaces. This assistance is user-specific and done by software agents called web assistants and active views. Users delegate tasks to web assistants that perform actions on their views of the WWW, on the WWW itself, and on the history of all user actions.
       In this paper, we discuss aspects of the design-evaluation-redesign cycle of BASAR by focusing on questionnaires, assessment studies, and system evaluations.
    Keywords: Agents and agent-based interaction, Evaluation of agent-based interfaces, World-Wide Web, Information overload, Personal information spaces
    Multimodal User Interfaces in the Open Agent Architecture BIBAKPDF 61-68
      Douglas B. Moran; Adam J. Cheyer; Luc E. Julia; David L. Martin; Sangkyu Park
    The design and development of the Open Agent Architecture (OAA) system has focused on providing access to agent-based applications through intelligent, cooperative, distributed, and multimodal agent-based user interfaces. The current multimodal interface supports a mix of spoken language, handwriting and gesture, and is adaptable to the user's preferences, resources and environment. Only the primary user interface agents need run on the local computer, thereby simplifying the task of using a range of applications from a variety of platforms, especially low-powered computers such as Personal Digital Assistants (PDAs). An important consideration in the design of the OAA was to facilitate mix-and-match: to facilitate the reuse of agents in new and unanticipated applications, and to support rapid prototyping by facilitating the replacement of agents by better versions.
       The utility of the agents and tools developed as part of this ongoing research project has been demonstrated by their use as infrastructure in unrelated projects.
    Keywords: Agent architecture, Multimodal, Speech, Gesture, Handwriting, Natural language

    Presentation Aids/Coordination

    Generating Web-Based Presentations in Spatial Hypertext BIBAKPDF 71-78
      Frank M. Shipman, III; Richard Furuta; Catherine C. Marshall
    Presentations frequently include material appropriated from external sources; they may incorporate tabular data from published reports, photographs from books, or clip art from purchased collections. With the growing use of the World-Wide Web to disseminate information, there is the emerging potential for a new style of presentation: one that interprets and organizes materials produced by others and published on-line. Authoring such presentations requires the analysis of the source information. However, current presentation authoring software is designed to support traditional presentations, where analysis is assumed to be a separate task, at best supported by separate software. This paper discusses experiences with using VIKI, a system designed to support information analysis, for the authoring of such presentations. VIKI includes a spatial parser to recognize implicit spatial structure generated during analysis. This paper describes how initial experiences with its use for path authoring led to VIKI enhancements, including the adaptation of implicit spatial structure recognition for the creation of presentations.
    Keywords: Presentation authoring, Analysis tools, Spatial parsing, Implicit structure, Presentation models, Spatial hypertext, World-Wide Web, Walden's paths, VIKI
    Adding Animated Presentation Agents to the Interface BIBAKPDF 79-86
      Thomas Rist; Elisabeth André; Jochen Müller
    A growing number of research projects both in academia and industries have started to investigate the use of animated agents in the interface. Such agents, either based on real video, cartoon-style drawings or even model-based 3D graphics, are likely to become integral parts of future user interfaces. To be useful, however, interface agents have to be intelligent in the sense that they exhibit a reasonable behavior. In this paper, we present a system that uses a lifelike character, the so-called PPP Persona, to present multimedia material to the user. This material has been either automatically generated or fetched from the web and modified if necessary. The underlying approach is based on our previous work on multimedia presentation planning. This core approach is complemented by additional concepts, namely the temporal coordination of presentation acts and the consideration of the human-factors dimension of the added visual metaphor.
    Keywords: Animated user interface agents, Presentation techniques for web applications, Automated multimedia authoring
    Dynamic Dramatization of Multimedia Story Presentations BIBAKPDF 87-94
      Nikitas M. Sgouros; George Papakonstantinou; Panagiotis Tsanakas
    We describe a novel dynamic dramatization method for narrative presentations. This method accepts as input the original story material, along with a description of its plot written in a special-purpose language. It then analyzes the plot to identify interesting dramatic situations in the story. Based on this content analysis, a presentation manager organizes the presentation and enriches it with appropriate multimedia effects. These effects are associated with interesting dramatic situations, and serve to increase suspense and emphasize plot developments in the narrative. Our method can be used for the development of intelligent front-ends to story databases, for directing assistants in computer-based renditions of narrative works, or for real-time direction of interactive entertainment systems. We are integrating this system in an interactive storytelling environment for Greek mythology.
    Keywords: Art & entertainment, Intelligent front-ends for storytelling

    I/O Support/Spatial Awareness

    Description and Recognition Methods for Sign Language Based on Gesture Components BIBAKPDF 97-104
      Hirohiko Sagawa; Masaru Takeuchi; Masaru Ohki
    Sign language gestures are inflected in accordance with the context. To recognize such sign language properly, the structure of sign language must be made clear. It is well known that the structure of sign language is represented as a combination of basic components of gestures. Sign language can be recognized by using such components. In this paper, a format to describe sign language gestures and a method to recognize the meaning of the gesture based on the components of gestures are discussed.
    Keywords: Sign language, Gesture description, Chereme, Pattern recognition
    Haptic Output in Multimodal User Interfaces BIBAKPDF 105-112
      Stefan Münch; Rüdiger Dillmann
    This paper presents an intelligent adaptive system for the integration of haptic output in graphical user interfaces. The system observes the user's actions, extracts meaningful features, and generates a user and application specific model. When the model is sufficiently detailed, it is used to predict the widget which is most likely to be used next by the user. Upon entering this widget, two magnets in a specialized mouse are activated to stop the movement, so target acquisition becomes easier and more comfortable. Besides the intelligent control system, we will present several methods to generate haptic cues which might be integrated in multimodal user interfaces in the future.
    Keywords: Haptic output, User modelling, Adaptive interfaces, Intelligent feedback, Multimodality
    Helping Users Think in Three Dimensions: Steps Toward Incorporating Spatial Cognition in User Modelling BIBAKPDF 113-120
      Michael Eisenberg; Ann Nishioka; M. E. Schreiner
    Historically, efforts at user modelling in educational systems have tended to employ knowledge representations in which symbolic (or "linguistic") cognition is emphasized, and in which spatial/visual cognition is underrepresented. In this paper, we describe our progress in developing user models for an explicitly "spatial" educational application named HyperGami, in which students design (and construct, out of paper) an endless variety of three-dimensional polyhedra. This paper gives a brief description of the HyperGami system; discusses our observations (and experimental results) in understanding what makes certain polyhedral shapes difficult or easy to visualize; and describes the ideas through which we plan to augment HyperGami with user models that could eventually form the computational basis for "intelligent spatial critics."
    Keywords: Spatial cognition, User modelling, HyperGami, Polyhedra

    Panel

    Computational Approaches to Interface Design: What Works, What Doesn't, What Should and What Might BIBAKPDF 123-126
      Christopher A. Miller; Kevin Corker; Mark Maybury; Angel R. Puerta
    Tools which make use of computational processes -- mathematical, algorithmic and/or knowledge-based -- to perform portions of the design, evaluation and/or construction of interfaces have become increasingly available and powerful. Nevertheless, there is little agreement as to the appropriate role for a computational tool to play in the interface design process. Current tools fall into broad classes depending on which portions, and how much, of the design process they automate. The purpose of this panel is to view and generalize about computational approaches developed to date, discuss the tasks for which they are suited, and suggest methods to enhance their utility and acceptance. Panel participants represent a wide diversity of application domains and methodologies. This should provide for lively discussion about implementation approaches, accuracy of design decisions, acceptability of representational tradeoffs and the optimal role for a computational tool to play in the interface design process.
    Keywords: Interface design, Adaptive interfaces, Human performance modeling, User interface generation, Information management

    Automation of Presentations

    Top-Down Hierarchical Planning of Coherent Visual Discourse BIBAKPDF 129-136
      Michelle X. Zhou; Steven K. Feiner
    A visual discourse is a series of connected visual displays. A coherent visual discourse requires smooth transitions between displays, consistent design within and across displays, and successful integration of new information into existing displays. We present an approach for automatically designing a coherent visual discourse. A top-down, hierarchical-decomposition partial-order planner is used to efficiently plan the visual discourse. Visual representations are modelled as visual objects, graphical techniques are employed as planning operators, and design policies are encoded as constraints. This approach not only improves the computational efficiency compared to search-based approaches, but also facilitates knowledge encoding, and ensures global coherency.
    Keywords: Top-down hierarchical planning, Automated graphics generation, Knowledge-based user interfaces
    Declarative Models of Presentation BIBAKPDF 137-144
      Pablo Castells; Pedro Szekely; Ewald Salcher
    Current interface development tools cannot be used to specify complex displays without resorting to programming using a toolkit or graphics package. Interface builders and multi-media authoring tools only support the construction of static displays where the components of the display are known at design time (e.g., buttons, menus). This paper describes a presentation modeling system where complex displays of dynamically changing data can be modeled declaratively. The system incorporates principles of graphic design such as guides and grids, supports constraint-based layout and automatic update when data changes, has facilities for easily specifying the layout of collections of data, and has facilities for making displays sensitive to the characteristics of the data being presented and the presentation context (e.g., amount of space available). Finally, the models are designed to be amenable to interactive specification and specification using demonstrational techniques.
    Keywords: Model-based user interfaces, User interface design techniques, User interface development tools, Graphic design
    Integrating Planning and Task-Based Design for Multimedia Presentation BIBAKPDF 145-152
      Stephan Kerpedjiev; Giuseppe Carenini; Steven F. Roth; Johanna D. Moore
    We claim that automatic multimedia presentation can be modeled by integrating two complementary approaches to automatic design: hierarchical planning to achieve communicative goals, and task-based graphic design. The interface between the two approaches is a domain and media independent layer of communicative goals and actions. A planning process decomposes domain-specific goals to domain-independent goals, which in turn are realized by media-specific techniques. One of these techniques is task-based graphic design. We apply our approach to presenting information from large data sets using natural language and information graphics.
    Keywords: Multimedia presentation, Information seeking tasks, Media allocation, Information graphics, Presentation planning

    Applications

    The Pedagogical Design Studio: Exploiting Artifact-Based Task Models for Constructivist Learning BIBAKPDF 155-162
      James C. Lester; Patrick J. FitzGerald; Brian A. Stone
    Intelligent learning environments that support constructivism should provide active learning experiences that are customized for individual learners. To do so, they must determine learner intent and detect misconceptions, and this diagnosis must be performed as non-invasively as possible. To this end, we propose the pedagogical design studio, a design-centered framework for learning environment interfaces. Pedagogical design studios provide learners with a rich, direct manipulation design experience. By exploiting an artifact-based task model that preserves a tight mapping between the interface state and design sub-tasks, they non-invasively infer learners' intent and detect misconceptions. The task model is then used to tailor problem presentation, produce a customized musical score, and modulate problem-solving intervention. To explore these notions, we have implemented a pedagogical design studio for a constructivist learning environment that provides instruction to middle school students about botanical anatomy and physiology. Evaluations suggest that the design studio framework constitutes an effective approach to interfaces that support constructivist learning.
    Keywords: Learning environments, Tutoring systems, Design, Task models
    Some Interface Issues in Developing Intelligent Communications Aids for People with Disabilities BIBAKPDF 163-170
      Kathleen F. McCoy; Patrick Demasco; Christopher A. Pennington; Arlene Luberoff Badman
    Augmentative and Alternative Communication (AAC) is the field of study concerned with providing devices and techniques to augment the communicative ability of a person whose disability makes it difficult to speak in an understandable fashion. For several years, we have been applying natural language processing techniques to the field of AAC in order to develop intelligent communication aids that attempt to provide linguistically "correct" output while speeding communication rate. In this paper we describe some of the interface issues that must be considered when developing such a device. We focus on a project aimed at a group of users who have cognitive impairments that affect their linguistic ability. A prototype system is under development which will hopefully not only prove to be an effective communication aid, but may provide some language intervention benefits for this population.
    Keywords: Intelligent augmentative communication devices, Natural language processing, Interfaces for people with disabilities

    Panel

    Compelling Intelligent User Interfaces: How Much AI? BIBAPDF 173-175
      Joe Marks; Larry Birnbaum; Eric Horvitz; David Kurlander; Henry Lieberman; Steve Roth
    Efforts to incorporate intelligence into the user interface have been underway for decades, but the commercial impact of this work has not lived up to early expectations, and is not immediately apparent. This situation appears to be changing. However, so far the most interesting intelligent user interfaces (IUIs) have tended to use minimal or simplistic AI. In this panel we consider whether more or less AI is the key to the development of compelling IUIs.
       The panelists will present examples of compelling IUIs that use a selection of AI techniques, mostly simple, but some complex. Each panelist will then comment on the merits of different kinds and quantities of AI in the development of pragmatic interface technology.

    Web / Hypermedia

    Evaluating the Utility and Usability of an Adaptive Hypermedia System BIBAKPDF 179-186
      Kristina Höök
    We have evaluated an adaptive hypermedia system, PUSH, and compared it to a non-adaptive variant of the same system. Based on an inferred information seeking task, PUSH chooses what to show and what to hide in a page using a stretchtext technique, thus attempting to avoid information overload.
       We studied how successful the subjects were in retrieving the most relevant information, and found that the subjects' solutions were influenced by the choices made by the adaptive system. We also studied how much the adaptivity reduced the number of actions needed, and found that subjects performed substantially fewer actions in the adaptive case. A third measure was the subjects' subjective preference for the adaptive or the non-adaptive system, where we found that the subjects clearly preferred the adaptive system. It seems to demand fewer decisions of the subjects, thereby reducing their cognitive load.
    Keywords: Adaptive hypermedia, Empirical evaluation, Intelligent interfaces, Usability
    Multi-Level User Support through Adaptive Hypermedia: A Highly Application-Independent Help Component BIBAKPDF 187-194
      L. Miguel Encarnação
    Adaptive help components are an essential extension for complex systems that aim to provide usability to a broad range of users with different levels of expertise. The increasing availability of distributed hypermedia information generates new challenges for the development of adaptive help systems, including the realization of appropriate presentation, navigation, user modeling, and the integration with existing applications.
       After identifying the shortcomings of current hypermedia systems, we present an adaptive hypermedia help system that supports context-sensitive and user-adaptive presentation of hypermedia help variants on different levels of the user's dialog with the application. The system supports user-controlled help adaptation and agent-based retrieval of additional hypermedia help information, and can easily be integrated into new and existing applications. This has already been realized with sample application systems in the area of medical imaging and CAD.
    Keywords: Adaptive hypermedia help systems, Multi-level user support, Graphical user interfaces, Distributed hypermedia help, User modeling, Navigation support, User-controlled help adaptation, Help agent, Development framework, Medical and CAD applications
    Decision Making in Intelligent User Interfaces BIBAKPDF 195-202
      Constantine Stephanidis; Charalampos Karagiannidis; Adamantios Koumpis
    Intelligent user interfaces are characterised by their capability to adapt at run-time and make several communication decisions concerning 'what', 'when', 'why' and 'how' to communicate, through a certain adaptation strategy. In this paper, we present a methodological approach to assist this decision making process, which is based on a clear separation of the important attributes that characterise the adaptation strategy, namely the adaptation determinants, constituents, goals and rules. Based on this separation, we also present a methodological approach for the formulation of adaptation rules, which utilises techniques from the domain of multiple criteria decision making. It is argued that, following the proposed approach, the adaptation strategy can be easily customised to the requirements of different application domains and user groups, and can be re-used with minor modifications in different applications. As a result, developers of intelligent user interfaces can be significantly assisted, and users can be empowered to exploit the benefits of intelligent interfaces.
    Keywords: Run-time adaptation, Adaptation strategy, Decision making

    Plenary Address

    What Makes an Intelligent User Interface Intelligent? BIBAPDF 205
      Doug Riecken
    In this talk I wish to consider and examine both current research and "state of the art" technologies applied in advancing human-computer interaction. With a focus on user interfaces, a critical question will be addressed. What makes an Intelligent User Interface (IUI) intelligent? This question provides two distinct venues of investigation. First, the implications of a user's "human intelligence" as applied in a set of dialogs and goal-directed tasks performed collectively by both a user and a computer. Second, the potential ability of computers to perform in such a manner that they lead users to interpret a computer's actions as exhibiting a type of "conscious" behavior. The essence of this investigation will attempt to determine: "where is the intelligence?"
       During this investigation several key questions will act as guides. How can user interfaces engage users to act more intelligently? What empirical knowledge regarding the presentation of information can be applied in the design of user interfaces and dialog models? How will users perceive and accept the evolving ability of computers to perform surrogate tasks correctly? What are current and future potential models of interactive computing? How useful are sophisticated reasoning and knowledge-base technologies in UI design? What impact do the "WIRED" and "CYBER" generations of users have on the evolution of IUIs?
       A closing issue motivated in this talk addresses the "what if?" and the "why not now?" opportunities for research and development. As the year 2000 approaches, significant changes have occurred in academic and industrial research policies and practices. Future growth in the domain of IUIs is plausible based on the effective focus of research to: (1) identify the right problems and (2) introduce generalized solutions to the "real world." If IUIs are to be successful, then a pragmatic timeline defining their deployment as essential technologies for everyone is necessary.

    Short Papers

    An Adaptive Short List for Documents on the World Wide Web BIBAKPDF 209-211
      Matjaz Debevc; Beth Meyer; Rajko Svecko
    Since the World Wide Web (WWW) is so popular and growing so quickly, users have an almost infinite number of sites to choose from. Bookmark features in web browsers allow users to easily record the sites that they would like to be able to view again, without having to repeatedly search through the WWW. However, bookmark lists for active web users can grow very long very quickly. Since user-maintained bookmark lists can easily grow long and become harder to use, it is useful to have an automatically maintained shorter list of useful sites.
       This paper describes an Adaptive Short List of commonly used sites. This feature, when integrated into web browsing software, would enable users to check the most probable sites quickly, without having to search through every bookmark they've ever created. We also present a decision algorithm for selecting sites to include in this list. The goal of this system is to determine the most appropriate sites to include in the Adaptive Short List, based on usage data which the system collects and analyzes while the user works.
    Keywords: User interface, Adaptive user interface, World Wide Web, Intelligent system, User modelling
    An Interface Agent for Nonroutine Tasks BIBAKPDF 213-216
      Yuzo Fujishima
    To assist an application software user in nonroutine tasks, a method named GIO was devised. With this method, the user can automatically achieve again a goal that was once achieved. The goal is represented in terms of input to and output from the application. The pieces of input that are collectively sufficient to achieve the goal are extracted from a record of input to and output from the application by a sub-method called input slicing. Input slicing uses knowledge about the relevance between pieces of input. The extracted pieces are then input to the application, and the goal is achieved. To evaluate the method, an interface agent named DIA was implemented. DIA assists a user of a program debugger in reproducing a state of the program being debugged.
    Keywords: Interface agent, Programming-by-demonstration, Goal-oriented interface, Debugger
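    The abstract does not specify how input slicing works; a minimal sketch of one way such a backward slice over a recorded session might look is given below. The record representation (triples of an input event, the outputs it produced, and the earlier outputs it consumed) is entirely an illustrative assumption, not the GIO method itself.

```python
def input_slice(record, goal_outputs):
    """Backward slice over a recorded session.

    `record` is a list of (input_event, produced_outputs,
    consumed_outputs) triples; this representation and the
    dependency-closure rule are illustrative assumptions.
    """
    needed = set(goal_outputs)
    kept = []
    # Walk the record backwards, keeping an input event if it produced
    # something still needed, and adding its own dependencies.
    for event, produced, consumed in reversed(record):
        if needed & set(produced):
            kept.append(event)
            needed -= set(produced)
            needed |= set(consumed)
    kept.reverse()
    return kept
```

    Replaying only the kept events would then suffice to reproduce the goal state, which is the kind of assistance DIA provides for debugging.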
    An Interface for Collaborative and Coached Approaches to Learning Critical Inquiry BIBAKPDF 217-220
      Dan Suthers
    The software to be demonstrated has been designed to support learning of critical inquiry skills, particularly in science. The software includes a graphical "inquiry diagram" interface for the construction of arguments, a coach that comments on the inquiry process via this interface, and associated tools for collaborative learning. Research issues include interface design in support of coaching and collaborative learning.
    Keywords: Coaching, Collaborative learning, Critical inquiry
    Automating a Classification Task Based on an Augmented Thesaurus BIBAKPDF 221-224
      Eunok Paek; Hye-Jeong Jeon
    Most classification tasks that have been tackled for automation involve complex chains of causal reasoning. However, our daily lives are filled with simple classification tasks. We propose a lexicon-based solution to the problem of automating simple classifications, and hence provide an intelligent user interface for personal information management. Although a classical organization of a lexical database constitutes the backbone of our system, as it provides a basis for semantic inheritance, we found that its information content was far from sufficient. In particular, we felt it necessary to augment the existing lexical database with various kinds of contextual information, such as the user's preference for classification criteria, personal information regarding the user, knowledge about the actions and objects associated with certain lexical items, and certain kinds of commonsense knowledge. We believe that our system provides a simple and easy-to-use interface to the personal information management task.
    Keywords: Automatic classification, Knowledge-based tools, Lexical semantics, Inheritance
    Easing Interaction through User-Awareness BIBAKPDF 225-228
      Alain Karsenty
    In the context of CSCW (Computer-Supported Cooperative Work), we aim to ease interaction between users through the use of user-aware agents. The purpose of these agents is to be aware of the user's state (e.g. whether the user is typing on the keyboard, meeting with other people, on the phone, etc.). We first describe EasyMeeting, an application based on user-aware agents that we developed on top of a Mediaspace. Second, we present the implementation (multi-agent architecture, language) and conclude with a discussion of various aspects of the agents. We believe that user-aware agents are a step toward better man-machine communication. Instead of the usual approach in which users consciously interact with the machine, we make the computer aware of the users and thus let users interact with the computer unconsciously.
    Keywords: Groupware, Computer-supported cooperative work, Computer-human interaction, Intelligent agents, Mediaspace
    Individual User Interfaces and Model-Based User Interface Software Tools BIBAKPDF 229-232
      Egbert Schlungbaum
    Currently, most model-based user interface software tools use task, application, and presentation models to generate the running user interface. The point of this paper is to use an additional user model to create individual user interfaces. To that end, individual user interfaces and model-based tools are briefly analyzed to define the starting point for this research.
       The viability of this approach is discussed with an example using the TADEUS environment. Furthermore, some ideas are presented to extend the MASTERMIND system.
    Keywords: Model-based user interface software tools, Model-based user interface development, Explicit user model, MASTERMIND, TADEUS
    Inductive Task Modeling for User Interface Customization BIBAKPDF 233-236
      David Maulsby
    This paper describes ActionStreams, a system for inducing task models from observations of user activity. The model can represent several task structures: hierarchy, variable sequencing, mandatory vs. optional actions, and interleaved sequences. The task models can be used for just-in-time automation and for guidance in user interface design.
    Keywords: Adaptive user interface, Machine learning, Task models, Model-based user interface design, Programming by demonstration
    Intelligent Network News Reader BIBAKPDF 237-240
      Hitoshi Isahara; Hiromi Ozaku
    We are developing an Intelligent Network News Reader which extracts news articles for the user. In contrast to ordinary information retrieval and abstract generation, this method utilizes an "information context" to select articles from newsgroups on the internet.
       The salient feature of this system is that it retrieves articles dynamically, adapting to the user's interests rather than classifying them beforehand. Since the system measures the semantic distance between articles, it is possible to refer to the necessary information without being constrained to a particular newsgroup.
       We will finish our prototype of the Intelligent Network News Reader in March 1997 and its practical version in March 1998.
    Keywords: Network news reader, Natural language processing, Information retrieval
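    The abstract does not say how semantic distance between articles is computed; a common stand-in, shown here purely as an illustrative assumption, is cosine distance over bag-of-words vectors.

```python
import math
import re
from collections import Counter

def semantic_distance(text_a, text_b):
    """Cosine distance between bag-of-words vectors (0 = identical,
    1 = no shared vocabulary).

    The paper does not describe its distance measure; this
    bag-of-words cosine distance is only an illustrative stand-in.
    """
    words_a = Counter(re.findall(r"\w+", text_a.lower()))
    words_b = Counter(re.findall(r"\w+", text_b.lower()))
    shared = set(words_a) & set(words_b)
    dot = sum(words_a[w] * words_b[w] for w in shared)
    norm_a = math.sqrt(sum(c * c for c in words_a.values()))
    norm_b = math.sqrt(sum(c * c for c in words_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 1.0  # maximally distant if either text is empty
    return 1.0 - dot / (norm_a * norm_b)
```

    A distance of this kind makes it possible to pull in related articles from any newsgroup, rather than being confined to one group's hierarchy.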
    Intelligent Word-Prediction to Enhance Text Input Rate (A Syntactic Analysis-Based Word-Prediction Aid for People with Severe Motor and Speech Disability) BIBAKPDF 241-244
      Nestor Garay-Vitoria; Julio Gonzalez-Abascal
    Word-prediction is a technique commonly used to reduce the number of keystrokes needed to input text by people with severe physical disabilities. Several methods based on word frequencies have been developed so far, but many of them do not take advantage of the information inherent in the syntactic structure of the sentence. This paper puts forth a word-prediction method based on syntactic analysis of the sentence, carried out using the "chart" parsing method proposed by Allen. The method also adapts its behaviour to the user's lexicon. The results are compared with those obtained from a purely statistical predictor.
    Keywords: Motor disabilities, Input speed enhancement, Word-prediction, Syntax analysis, Chart technique, Adaptation
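    The purely statistical predictor the paper compares against is, in essence, prefix completion by word frequency. The sketch below is an illustrative assumption of such a baseline (class and method names are invented here), not the authors' implementation; the paper's contribution is to improve on this by using syntactic context.

```python
from collections import Counter

class FrequencyPredictor:
    """Prefix completion by word frequency -- the kind of purely
    statistical baseline the paper compares against (illustrative
    sketch, not the authors' implementation).
    """

    def __init__(self):
        self.freq = Counter()

    def train(self, text):
        # Adapt to the user's lexicon by counting every word typed.
        self.freq.update(text.lower().split())

    def predict(self, prefix, n=3):
        # Offer the n most frequent words starting with the prefix,
        # breaking frequency ties alphabetically.
        candidates = [w for w in self.freq if w.startswith(prefix.lower())]
        candidates.sort(key=lambda w: (-self.freq[w], w))
        return candidates[:n]
```

    Accepting a predicted word replaces the remaining keystrokes of that word with a single selection, which is where the keystroke savings for motor-impaired users come from.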
    Interactive Model-Based Coding for Face Metaphor User Interface in Network Communications BIBAKPDF 245-248
      Kazuo Ohzeki; Takahiro Saito; Masahide Kaneko; Hiroshi Harashima
    Model-based coding is a new semantic-based coding technique that utilizes a common knowledge database held at both the transmitter and the receiver. We have proposed interactive model-based coding for face metaphor user interface applications such as multimedia e-mail, WWW and agents. Three points are addressed in this paper. (1) We have constructed a framework for an interactive model-based coding tool in a network communications environment. (2) We show that the motion parameter description for facial expressions should support both CC-based synthesis and natural image-based synthesis; corresponding relative and absolute descriptions are both necessary for sufficient face metaphor expression. (3) We have developed the encoder and decoder. Motion parameters are transmitted from an experimental server to a receiver, and the decoded motion picture can be demonstrated both on videotape and in on-line browsing on a PC through the internet.
    Keywords: Interactive, Intelligent coding, Motion detection, Face metaphor, Face expression, Agent
    Management of Interface Design Knowledge with MOBI-D BIBAKPDF 249-252
      Angel R. Puerta; David Maulsby
    Effective guidelines for interface construction require developers to apply a user-centered approach in their designs. Yet, developers lack integrated tools that would allow them to work with high-level concepts, such as user tasks, and to relate them to lower level elements, such as widgets, in their interface designs.
       The Model-Based Interface Designer (MOBI-D) is a suite of tools for the management, visualization, editing, and interactive refinement of interface-design knowledge at multiple levels of abstraction. MOBI-D represents knowledge via declarative interface models that assign specific knowledge roles to each model component. Developers work in an integrated environment with abstract concepts such as user tasks, domain objects, presentation styles, dialogs, and user types while being able to relate those concepts to concrete interface elements such as push buttons. MOBI-D is the first development environment to integrate the disparate elements of interface design into structured conceptual units -- interface models -- and to define an interface design as an explicit declarative element of such units.
    Keywords: Model-based interface development, Interface models, User interface development tools
    Providing User Support for Interactive Applications with FUSE BIBAKPDF 253-256
      Frank Lonczewski
    FUSE (Formal User Interface Specification Environment) is an integrated user interface development environment that offers tool-based support for all phases of the interface design process. PLUG-IN forms one part of FUSE; its purpose is to support the end-user working with user interfaces generated by FUSE. PLUG-IN produces dynamic on-line help pages and animation sequences on the fly: the dynamic help pages display textual help for the user, whereas the animation sequences show how the user can interact with the application. In the presentation, the architecture of FUSE is discussed, and PLUG-IN's user guidance capabilities are demonstrated using the user interface of an interactive ISDN telephone simulation.
    Keywords: Intelligent user interfaces, Model-based user interface design, User guidance, Generated on-line help systems
    A Response Model for a CG Character Based on Timing of Interactions in a Multimodal Human Interface BIBAKPDF 257-260
      Kenji Sakamoto; Haruo Hinode; Keiko Watanuki; Susumu Seki; Jiro Kiyama; Fumio Togawa
    In this paper, we propose a response model for a multimodal human interface that inserts listener responses at particular times by detecting keywords in the user's utterances, and that controls the face direction of a human-like Computer Graphics (CG) character according to the direction of the user's attention, which is determined by tracking the user's face. We then integrated the response model into a prototype system and evaluated its efficiency.
    Keywords: Multimodal human interface, Timing of interaction, Listener response, Face direction
    The Stick-e Note Architecture: Extending the Interface Beyond the User BIBAKPDF 261-264
      Jason Pascoe
    This paper proposes a redefinition of the human-computer interface, extending its boundaries to encompass interaction with the user's physical environment. This extension to the interface enables computers to become aware of their context of use and intelligently adapt their activities and interface to suit their current circumstances.
       Context-awareness promises to greatly enhance user interfaces, but the complexity of capturing, representing and processing contextual data presents a major obstacle to its further development. The Stick-e Note Architecture is proposed as a solution to this problem, offering a universal means of providing context-awareness through an easily understood metaphor based on the Post-It note.
    Keywords: Context-aware computing, Stick-e note architecture, Mobile computing, Ubiquitous computing, Situated information spaces
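    The Post-It-note metaphor can be made concrete as a note attached to a context rather than to a place on screen. The sketch below is only an illustrative assumption about how such a note might be represented and triggered; the field names and matching rule are invented here and are not the published architecture.

```python
from dataclasses import dataclass, field

@dataclass
class StickENote:
    """A note attached to a context rather than to a screen location.

    Field names and the matching rule are illustrative assumptions
    about the Post-It-note metaphor, not the published architecture.
    """
    content: str
    context: dict = field(default_factory=dict)  # e.g. {"location": "lab"}

    def triggered_by(self, current_context):
        # The note fires when every contextual condition it carries
        # is satisfied by the currently sensed context.
        return all(current_context.get(k) == v
                   for k, v in self.context.items())
```

    A context-aware system would then continually match sensed context (location, time, nearby people) against its stored notes and surface those that fire.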
    Wizards, Guides, and Beyond: Rational and Empirical Methods for Selecting Optimal Intelligent User Interface Agents BIBAKPDF 265-268
      D. Christopher Dryer
    User interface (UI) agents are new intelligent user interface technologies that can help prevent people from making mistakes by guiding them through information system tasks. To be effective, UI agents must be applied to tasks that exploit the potentials of UI agents without expecting them to perform beyond their constraints. The potentials and constraints of two kinds of UI agents (wizards and guides) were considered, and criteria for the application of these UI agents were outlined. An empirical study of 61 OS/2 Warp beta testers with varying degrees of computer experience is reported that assessed respondents' perceptions of the importance, frequency, and difficulty of 57 information system tasks. Empirical and rational analyses were used to select a set of tasks to which UI agent technologies were applied in the latest release of IBM's OS/2 Warp operating system.
    Keywords: Agents, Artificial intelligence, Graphical user interface, Guides, Intelligent user interfaces, Wizards