
Proceedings of the 2002 International Conference on Intelligent User Interfaces

Fullname: International Conference on Intelligent User Interfaces
Editors: Yolanda Gil; David Leake
Location: San Francisco, California, USA
Dates: 2002-Jan-13 to 2002-Jan-16
Publisher: ACM
Standard No: ACM ISBN 1-58113-382-0; ACM Order Number 459010; hcibib: IUI02
Papers: 61
Pages: 243
Links: Conference Home Page
  1. Plenary Speakers
  2. Full Papers
  3. Short Papers
  4. Demonstration Descriptions

Plenary Speakers

The interactive conversation interface (ICI): a proposed successor to GUI for an interactive broadband world BIBAFull-Text 2
  Harry Gottlieb
This presentation will demonstrate and discuss the design principles used to create interactive programs featuring a doctor talking to you about healthcare... a florist talking to you about flowers... a teacher talking to you about geometry. The Interactive Conversation Interface is a more engaging way to provide services and information for nearly any major site on the Web and to provide an answer to the question: "What is an interactive television show?" (And no, it isn't clicking on Jennifer Aniston's sweater so your daughter can stop the program, right in the middle of a funny scene, and buy it while you sit there waiting to get back to the show.) ICI is a form of interactive mass communication that can be used on any platform, whether it is a PC, wireless device, interactive television, or just a regular phone. It's cool: why interact with a "page" when you can interact with a "person"?
Uncertainty, intelligence, and interaction BIBAFull-Text 3
  Eric Horvitz
Uncertainty about a user's knowledge, intentions, and attention is inescapable in human-computer interaction. I will survey challenges and opportunities of harnessing explicit representations of uncertainty and preferences in intelligent user interfaces. After reviewing representative projects at Microsoft, I will describe longer-term research directions aimed at embedding representation, inference, and learning under uncertainty more deeply into the fabric of computer systems and interfaces.
Complexity versus difficulty: where should the intelligence be? BIBAFull-Text 4
  Don Norman
Complexity refers to the internal workings of the system; difficulty refers to the face presented to the user -- the factors that affect ease of use. The history of technology demonstrates that making usage simpler and less difficult often requires more sophisticated, more intelligent, and more complex insides. Do we need intelligent interfaces? I don't think so: the intelligence should be inside, internal to the system. The interface is the visible part of the system, where people need stability, predictability, and a coherent system image that they can understand and thereby learn.

Full Papers

A writer's collaborative assistant BIBAFull-Text 7-14
  Tamara Babaian; Barbara J. Grosz; Stuart M. Shieber
In traditional human-computer interfaces, a human master directs a computer system as a servant, telling it not only what to do, but also how to do it. Collaborative interfaces attempt to realign the roles, making the participants collaborators in solving the person's problem. This paper describes Writer's Aid, a system that deploys AI planning techniques to enable it to serve as an author's collaborative assistant. Writer's Aid differs from previous collaborative interfaces in both the kinds of actions the system partner takes and the underlying technology it uses to do so. While an author writes a document, Writer's Aid helps by identifying and inserting citation keys and by autonomously finding and caching potentially relevant papers and their associated bibliographic information from various on-line sources. This autonomy, enabled by the use of a planning system at the core of Writer's Aid, distinguishes this system from other collaborative interfaces. The collaborative design and its division of labor result in more efficient operation: faster and easier writing on the user's part and more effective information gathering on the part of the system. Subjects in our laboratory user study found the system effective and the interface intuitive and easy to use.
A resource-adaptive mobile navigation system BIBAFull-Text 15-22
  Jörg Baus; Antonio Krüger; Wolfgang Wahlster
The design of mobile navigation systems adapting to limited resources will be an important future challenge. Since typically several different means of transportation have to be combined in order to reach a destination, the user interface of such a system has to adapt to the user's changing situation. This applies especially to the alternating use of different technologies to detect the user's position, which should be as seamless as possible. This article presents a hybrid navigation system that relies on different technologies to determine the user's location and that adapts the presentation of route directions to the limited technical resources of the output device and the limited cognitive resources of the user.
Domain, task, and user models for an adaptive hypermedia performance support system BIBAFull-Text 23-30
  Peter Brusilovsky; David W. Cooper
Electronic Performance Support Systems (EPSS) are a challenging application area for developing intelligent interfaces. Several possible scenarios for using domain, task, and user models for adaptive performance support were explored in the context of the Adaptive Diagnostics and Personalized Technical Support (ADAPTS) project. ADAPTS provides an intelligent, adaptive EPSS for maintaining complex equipment.
Navigational blocks: navigating information space with tangible media BIBAFull-Text 31-38
  Ken Camarata; Ellen Yi-Luen Do; Brian R. Johnson; Mark D. Gross
The Navigational Blocks project demonstrates a tangible user interface that facilitates retrieval of historical stories in a tourist spot. Orientation, movement, and relative positions of physical Blocks support visitor navigation and exploration in a virtual gallery. The Navigational Blocks system provides a physical embodiment of digital information through tactile manipulation and haptic feedback. The simple cubic form of the Blocks is easy to understand and therefore easy to use to manipulate complex digital information. Electromagnets embedded in the Blocks and wireless communication encourage users to quickly rearrange the Blocks to form different database queries.
New paradigms in problem solving environments for scientific computing BIBAFull-Text 39-46
  George Chin, Jr.; L. Ruby Leung; Karen Schuchardt; Debbie Gracio
Computer and computational scientists at Pacific Northwest National Laboratory (PNNL) are studying and designing collaborative problem solving environments (CPSEs) for scientific computing in various domains. While most scientific computing efforts focus at the level of scientific codes, file systems, data archives, and networked computers, our analysis and design efforts are aimed at developing enabling technologies that are directly meaningful and relevant to domain scientists at the level of the practice and the science. We seek to characterize the nature of scientific problem solving and look for innovative ways to improve it. Moreover, we aim to glimpse beyond current systems and technical limitations to derive a design that expresses the scientist's own perspective on research activities, processes, and resources. The product of our analysis and design work is a conceptual scientific CPSE prototype that specifies a complete simulation and modeling user environment and a suite of high-level problem solving tools.
Agents and GUIs from task models BIBAFull-Text 47-54
  Jacob Eisenstein; Charles Rich
This work unifies two important threads of research in intelligent user interfaces which share the common element of explicit task modeling. On the one hand, longstanding research on task-centered GUI design (sometimes called model-based design) has explored the benefits of explicitly modeling the task to be performed by an interface and using this task model as an integral part of the interface design process. More recently, research on collaborative interface agents has shown how an explicit task model can be used to control the behavior of a software agent that helps a user perform tasks using a GUI. This paper describes a collection of tools we have implemented which generate both a GUI and a collaborative interface agent from the same task model. Our task-centered GUI design tool incorporates a number of novel features which help the designer to integrate the task model into the design process without being unduly distracted. Our implementation of collaborative interface agents is built on top of the COLLAGEN middleware for collaborative interface agents.
Device-dependant modality selection for user-interfaces: an empirical study BIBAFull-Text 55-62
  Christian Elting; Jan Zwickel; Rainer Malaka
The presentation of information using multiple modalities influences users' perception, their comfort, and their performance in using a computer-based information system. This paper presents a user study investigating the effects of different output modality combinations on the effectiveness of conveying information and on the user's acceptance of the system. We chose a tourist information system as a test environment and conducted the study on three different devices (PDA, TV set, and desktop computer) to investigate whether the best modality combination depends on the device used. It turned out that the combination of spoken text with a picture was the most effective in terms of recall performance. This effect was strongest for users working with PDAs, which can be explained by cognitive load theory. In contrast, participants ranked different modality combinations as most appealing, namely those with written text.
Light widgets: interacting in every-day spaces BIBAFull-Text 63-69
  Jerry Alan Fails; Dan Olsen, Jr.
This paper describes a system for ubiquitous interaction that does not require users to carry any physical devices. In this system, the environment is instrumented with camera/processor combinations that watch users while protecting their privacy. Any visible surface can be turned into an interactive widget triggered by skin-colored objects. Light widgets are tied to the XWeb cross-modal interaction platform to empower them with interactive feedback.
Sketching for knowledge capture: a progress report BIBAFull-Text 71-77
  Kenneth D. Forbus; Jeffrey Usher
Many concepts and situations are best explained by sketching. This paper describes our work on sKEA, the sketching Knowledge Entry Associate, a system designed for knowledge capture via sketching. We discuss the key ideas of sKEA: blob semantics for glyphs to sidestep recognition for visual symbols, qualitative spatial reasoning to provide richer visual and conceptual understanding of what is being communicated, arrows to express domain relationships, layers to express within-sketch segmentation (including a meta-layer to express subsketch relationships themselves via sketching), and analogical comparison to explore similarities and differences between sketched concepts. Experiences with sKEA to date and future plans are also discussed.
Plan-based interfaces: keeping track of user tasks and acting to cooperate BIBAFull-Text 79-86
  David Franklin; Jay Budzik; Kristian Hammond
The ability to reason about the activity of a user is crucial to the implementation of any Intelligent User Interface. If it is able to recognize what a user is doing, a computer can act to cooperate. Most computer systems limit themselves to command-response interactions; their trivial understandings of their users cannot support a more complicated interaction. However, by looking at the tasks that their users are performing and reasoning about sequences of actions, a computer system can provide a more interesting level of interaction that is more efficient and does not demand as much of its users. Furthermore, the understanding of the user's activity provides a context within which to better understand future actions and to tune the sensing systems to look and listen for the actions that the user is most likely to take next. Finally, in many domains, such computer systems can recognize user tasks and act to cooperate without requiring a deep, goal-oriented understanding. In this paper, we look at the process-based interface used in the Intelligent Classroom, focusing on how a human lecturer can control it simply by going about her presentation. We also look at how the general ideas have been adapted to Jabberwocky, a speech-based interface to Microsoft PowerPoint that automatically switches slides, and how they are being applied to extend the functionality of Watson, an autonomous web research tool that uses the document a user is viewing as a search context.
NuggetMine: intelligent groupware for opportunistically sharing information nuggets BIBAFull-Text 87-94
  Jeremy Goecks; Dan Cosley
NuggetMine is an intelligent groupware application that collaborates with a workgroup to increase information nugget sharing among the group. Information nuggets are small amounts of self-contained information, such as the URL of an interesting news article, a book title, or the time and location of a local art event. NuggetMine and the workgroup work together to build, maintain, and utilize a repository, or "mine," of information nuggets. Group members submit nuggets to NuggetMine, which organizes and augments the submitted nuggets and provides a desktop interface to each group member. This interface makes it easy for group members to submit nuggets, view nuggets, and explore the mine. NuggetMine distributes the tasks necessary to share nuggets between itself and the workgroup so as to best utilize the skills of each collaborator. In this paper, we describe the NuggetMine application and interface and present a pilot study of the application.
Annotating and sketching on 3D web models BIBAFull-Text 95-102
  Thomas Jung; Mark D. Gross; Ellen Yi-Luen Do
This paper reports on our progress and findings in building a Web annotation system for non-immersive 3D virtual environments. Over the last two years, we developed and tested two systems for collaborating designers to comment on virtual 3D models. Our first system, Redliner [12] lets design team members browse and leave text annotations on surfaces in three-dimensional models. Experience with Redliner, including two user evaluations in different settings, led us to develop Space Pen [13], a second annotation system with improved interaction capabilities. It goes beyond the post-it note metaphor, allowing users to draw in and on the virtual environment.
Multiple selections in smart text editing BIBAFull-Text 103-110
  Robert C. Miller; Brad A. Myers
Multiple selections, though heavily used in file managers and drawing editors, are virtually nonexistent in text editing. This paper describes how multiple selections can automate repetitive text editing. Selection guessing infers a multiple selection from positive and negative examples provided by the user. The multiple selection can then be used for inserting, deleting, copying, pasting, or other editing commands. Simultaneous editing uses two levels of inference, first inferring a group of records to be edited, then inferring multiple selections with exactly one selection in each record. Both techniques have been evaluated by user studies and shown to be fast and usable for novices. Simultaneous editing required only 1.26 examples per selection in the user study, approaching the ideal of 1-example PBD. Multiple selections bring many benefits, including better user feedback, fast, accurate inference, novel forms of intelligent assistance, and the ability to override system inferences with manual corrections.
Intelligent analysis of user interactions with web applications BIBAFull-Text 111-118
  Laila Paganelli; Fabio Paternò
In this paper, we describe a tool able to perform intelligent analysis of Web browser logs using the information contained in the task model of the application. We show how this approach supports remote usability evaluation of Web sites.
Design visual thinking tools for mixed initiative systems BIBAFull-Text 119-126
  Pearl Pu; Denis Lalanne
Visual thinking tools are visualization-enabled mixed initiative systems that empower people in solving complex problems by engaging them in the entire resolution process, suggesting appropriate actions with visual cues, and reducing their cognitive load with visual representations of their tasks. At the same time, the visual interaction style provides an alternative to the dialog-based model employed in most mixed-initiative (MI) systems. Visual thinking tools avoid complex analyses of turn taking, and put users in control all the time. We are especially interested in implementing visual "affordances" in such systems and present three examples used in COMIND, a visual MI system that we have developed. We show how humans can more effectively concentrate on synthesizing problems, selecting resolution paths that were unseen by the machine, and reformulating problems if solutions cannot be found or are unsatisfactory. We further discuss our evaluation of the techniques at the end of the paper.
Getting to know you: learning new user preferences in recommender systems BIBAFull-Text 127-134
  Al Mamunur Rashid; Istvan Albert; Dan Cosley; Shyong K. Lam; Sean M. McNee; Joseph A. Konstan; John Riedl
Recommender systems have become valuable resources for users seeking intelligent ways to search through the enormous volume of information available to them. One crucial unsolved problem for recommender systems is how best to learn about a new user. In this paper we study six techniques that collaborative filtering recommender systems can use to learn about new users. These techniques select a sequence of items for the collaborative filtering system to present to each new user for rating. The techniques include the use of information theory to select the items that will give the most value to the recommender system, aggregate statistics to select the items the user is most likely to have an opinion about, balanced techniques that seek to maximize the expected number of bits learned per presented item, and personalized techniques that predict which items a user will have an opinion about. We study the techniques through offline experiments with a large pre-existing user data set, and through a live experiment with over 300 users. We show that the choice of learning technique significantly affects the user experience, in terms of both user effort and the accuracy of the resulting predictions.
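The abstract above mentions using information theory to select the items that will give the most value to the recommender system. As an illustration only, not the authors' code, the following minimal sketch shows one such strategy: presenting a new user with the items whose existing ratings have the highest entropy. The item names, ratings, and parameters are invented for the example.

```python
# Minimal sketch (assumed details, not the paper's implementation): pick the
# items whose existing ratings are most informative (highest entropy) to ask
# a new user about.
import math
from collections import Counter

# Hypothetical item -> list of ratings (1-5) from existing users.
RATINGS = {
    "Titanic":     [5, 5, 5, 4, 5, 5],        # widely liked: low entropy
    "Blair Witch": [1, 5, 2, 5, 1, 4, 3, 5],  # divisive: high entropy
    "Casablanca":  [4, 4, 5, 4, 4],
}

def entropy(ratings):
    """Shannon entropy (in bits) of the empirical rating distribution."""
    counts = Counter(ratings)
    n = len(ratings)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def items_to_ask(ratings_by_item, k=2):
    """Return the k items with the most informative (highest-entropy) ratings."""
    return sorted(ratings_by_item,
                  key=lambda item: entropy(ratings_by_item[item]),
                  reverse=True)[:k]

print(items_to_ask(RATINGS))  # divisive items come first
```

A real system, as the abstract notes, would balance informativeness against how likely the user is to have an opinion about the item at all.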
Toward automated exploration of interactive systems BIBAFull-Text 135-142
  Mark O. Riedl; Robert St. Amant
The ease with which a user interface can be navigated strongly contributes to its usability. In this paper we describe preliminary results of a project aimed at making the evaluation of user interfaces from this perspective more routine. We have designed a system to carry out an autonomous, exploratory navigation through the graphical user interface of interactive, off-the-shelf software applications. The system is not a robust tool, but rather a proof of concept that can exhibit interesting behaviors. The traversal process generates a representation of the connectivity of the user interface, as well as navigational paths to specific commands. The reasoning component of the system is based on the ACT-R architecture, while the perceptual and motor components of the system are built on top of the SegMan perception/action substrate. We present the design of the system and its use in exploring a simple user interface.
Hosting activities: experience with and future directions for a robot agent host BIBAFull-Text 143-150
  Candace L. Sidner; Myroslava Dzikovska
This paper discusses hosting activities. Hosting activities are a general class of collaborative activity in which an agent provides guidance in the form of information, entertainment, education or other services in the user's environment (which may be an artificial or the natural world) and may also request that the human user undertake actions to support the fulfillment of those services. This paper reports on experience in building a robot agent for hosting activities, both the architecture and applications being used. The paper then turns to a range of issues to be addressed in creating hosting agents, especially robotic ones. The issues include the tasks and capabilities needed for hosting agents, and social relations, especially human trust of agent hosts. Lastly the paper proposes a new evaluation metric for hosting agents.
Exposing document context in the personal web BIBAFull-Text 151-158
  David Wolber; Michael Kepe; Igor Ranitovic
Reconnaissance agents show context by displaying documents with similar content to the one(s) the user currently has open. Research paper search engines show context by displaying documents that cite or are cited by the currently open document(s). We present a tool that applies such ideas to the personal web, that is, the space rooted in user documents but tightly connected to web documents as well. The tool organizes the personal web with a single topic hierarchy based on direct links, instead of the traditional file, bookmark, and (hidden) direct link hierarchies. The tool allows a user to easily navigate through related user and web documents, no matter whether the documents are related by directory-document, bookmark-document, direct-link, or even similar content relationships.
Information delivery in support of learning reusable software components on demand BIBAFull-Text 159-166
  Yunwen Ye; Gerhard Fischer
An inherent dilemma exists in the design of high-functionality applications (such as repositories of reusable software components). In order to be useful, high-functionality applications have to provide a large number of features, creating huge learning problems for users. We address this dilemma by developing intelligent interfaces that support learning on demand by enabling users to learn new features when they are needed during work. We support learning on demand with information delivery by identifying learning opportunities of which users might not be aware. The challenging issues in implementing information delivery are discussed and techniques to address them are illustrated with the CodeBroker system. CodeBroker supports Java programmers in learning reusable software components in the context of their normal development environments and practice by proactively delivering task-relevant and personalized information. Evaluations of the system have shown its effectiveness in supporting learning on demand.
A semantic approach to the dynamic design of interaction controls in conversation systems BIBAFull-Text 167-174
  Michelle X. Zhou; Keith Houck
To support a full-fledged, multimedia human-computer conversation, we are building an intelligent framework, called Responsive Information Architect (RIA), which can automatically synthesize multimedia responses during the conversation. As part of its visual response generation, RIA dynamically creates context-sensitive interaction controls that are visual interfaces through which users can further interact with RIA. To enable the systematic design of interaction controls, we study and abstract control properties. In particular, in this paper we present a semantic model that captures the intentional, presentational, and behavioral characteristics of interaction controls. Using this model, we show how to systematically sketch the semantics of an interaction control.

Short Papers

Exploiting information access patterns for context-based retrieval BIBAFull-Text 176-177
  Travis Bauer; David B. Leake
In order for intelligent interfaces to provide proactive assistance, they must customize their behavior based on the user's task context. Existing systems often assess context based on a single snapshot of the user's current activities (e.g., examining the content of the document that the user is currently consulting). However, an accurate picture of the user's context may depend not only on this local information, but also on information about the user's behavior over time. This paper discusses work on a recommender system, Calvin, which learns to identify broader contexts by relating documents that tend to be accessed together. Calvin's text analysis algorithm, WordSieve, develops term vector descriptions of these contexts in real time, without needing to accumulate comprehensive statistics about an entire corpus. Calvin uses these descriptions (1) to index documents to suggest them in similar future contexts and (2) to formulate context-based queries for search engines. Results of initial experiments are encouraging for the approach's improved ability to associate documents with the research tasks in which they were consulted, compared to methods using only local information. This paper sketches the project goals, the current implementation of the system, and plans for its continued development and evaluation.
User acceptance of a decision-theoretic location-aware shopping guide BIBAFull-Text 178-179
  Thorsten Bohnenberger; Anthony Jameson; Antonio Krüger; Andreas Butz
We are exploring a class of decision-theoretic handheld systems that give a user personalized advice about how to explore an indoor area in search of products or information. An initial user test in a simple mockup of a shopping mall showed that even novice PDA users accepted the system immediately and were able to achieve their shopping goals faster than when using a paper map of the mall. A key issue is the extent to which spontaneous user behavior can be accommodated within this framework.
Automatically indexing documents: content vs. reference BIBAFull-Text 180-181
  Shannon Bradshaw; Kristian Hammond
Authors cite other work in many types of documents. Notable among these are research papers and web pages. Recently, several researchers have proposed using the text surrounding citations (references) as a means of automatically indexing documents for search engines, claiming that this technique is superior to indexing documents based on their content [1,2]. While we ourselves have made this claim, we acknowledge that little empirical data has been presented to support it. Therefore, in the limited space available we present a terse overview of a study comparing reference to content as bases for indexing documents. This study indicates that reference identifies the value of documents more accurately and with a greater diversity of language than content.
An intelligent interface for sorting electronic mail BIBAFull-Text 182-183
  Elisabeth Crawford; Judy Kay; Eric McCreath
Classification of email is an important everyday task for a large and growing number of users. This paper describes the i-ems (Intelligent-Electronic Mail Sorter) mail interface, which offers a view of the inbox based on predicted classifications of messages. The interface is designed to ensure user control over the prediction processes by supporting scrutiny of the system's certainty and details of the mechanisms used.
Flytrap: intelligent group music recommendation BIBAFull-Text 184-185
  Andrew Crossen; Jay Budzik; Kristian J. Hammond
Flytrap is a group music environment that knows its users' musical tastes and can automatically construct a soundtrack that tries to please everyone in the room. The system works by paying attention to what music people listen to on their computers. Users of the system have radio frequency ID badges that let the system know when they are nearby. Using the preference information it has gathered from watching its users, and knowledge of how genres of music interrelate, how artists have influenced each other, and what kinds of transitions between songs people tend to make, the 'virtual DJ' finds a compromise and chooses a song. The system tries to satisfy the tastes of people in the room, but it also makes a playlist that fits its own notion of what should come next. Once it has chosen a song, music is automatically broadcast over the network and played on the closest machine.
Linking dynamic query interfaces to knowledge models BIBAFull-Text 186-187
  Maria De Carvalho; J. Tan; J. Domingue; H. Petursson
This research aims to improve dynamic-query-based information access by using knowledge modelling to narrow the search space. Our objective is to allow users to quickly browse web pages and obtain information content related to their profile within semantically relevant dimensions. We have developed an application that links a dynamic query interface to ontologies containing knowledge about customers, products, and shopping tasks in an online shop.
Measuring task models in designing intelligent products BIBAFull-Text 188-189
  Elyon DeKoven; David V. Keyson
As part of a design process, designers may create a model of user tasks. If the task model were built into an intelligent product's reasoning capabilities, the product could provide timely assistance specific to the user's current tasks. Thus, for both the design and the intelligence of the product, the more accurate the task model, the more likely the product is to fit users' needs. In this paper we describe a new approach for evaluating the usability of intelligent products, based on information available in the designer's task model. This measure is then used to determine the degree to which users are able to access the task support provided by the product, and to identify users' needs for additional assistance. In this way, the usability measures presented in this paper can contribute to an iterative user-centered design process for designing and building product intelligence.
Information programming for personal user interfaces BIBAFull-Text 190-191
  Stephen Farrell; Volkert Buchmann; Christopher S. Campbell; Paul P. Maglio
With widespread access to e-mail, the world-wide web, and other information sources, people now use computers more for managing information than for managing applications. To support how people naturally and routinely organize information, computers ought to be able to reflect the categories, relationships, and cues that people rely on when thinking about and remembering facts. Toward this end, we created an Information Programming Toolkit (IPtk) that collects application-independent properties, indexes documents along many dimensions to create a personal record of information use, and provides convenient means for information access. The IPtk enables the development of smart user interfaces that automatically tailor information to a user's history and context of information use.
An empirical evaluation of an adaptive web site BIBAFull-Text 192-193
  Cristina Gena
This paper describes the evaluation of an adaptive commercial web site offering a set of utilities tailored to user needs. We compared the site with its non-adaptive variant in order to study how adaptivity increases success in retrieving information and reduces the number of actions needed to solve the tasks. Moreover, we considered user preference between the two versions and overall user satisfaction.
Language modeling for soft keyboards BIBAFull-Text 194-195
  Joshua Goodman; Gina Venolia; Keith Steury; Chauncey Parker
Language models predict the probability of letter sequences. Soft keyboards are images of keyboards on a touch screen for input on Personal Digital Assistants. When a soft keyboard user hits a key near the boundary of a key position, the language model and key press model are combined to select the most probable key sequence. This leads to an overall error rate reduction by a factor of 1.67 to 1.87. An extended version of this paper [4] is available.
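The abstract above describes combining a language model with a key-press model to pick the most probable key when a touch lands near a key boundary. The sketch below is purely illustrative, not the authors' implementation: it combines a character bigram model with a Gaussian key-press model in log space. The key layout, bigram probabilities, and sigma are invented for the example.

```python
# Minimal sketch (assumed details): noisy-channel style combination of a
# character bigram language model and a Gaussian key-press model.
import math

# Hypothetical key centers on a soft keyboard (x, y), arbitrary units.
KEY_CENTERS = {"q": (0, 0), "w": (1, 0), "e": (2, 0), "a": (0.3, 1), "s": (1.3, 1)}

# Hypothetical bigram probabilities P(next_char | prev_char); a real system
# would estimate these from a large text corpus.
BIGRAM = {("t", "q"): 0.001, ("t", "w"): 0.05, ("t", "e"): 0.20,
          ("t", "a"): 0.10, ("t", "s"): 0.04}

def keypress_logprob(touch, key, sigma=0.4):
    """Log-likelihood of a touch point under a Gaussian centered on the key."""
    (tx, ty), (kx, ky) = touch, KEY_CENTERS[key]
    d2 = (tx - kx) ** 2 + (ty - ky) ** 2
    return -d2 / (2 * sigma ** 2)  # constant terms cancel when comparing keys

def most_probable_key(touch, prev_char):
    """Combine language-model and key-press scores and pick the best key."""
    def score(key):
        lm = math.log(BIGRAM.get((prev_char, key), 1e-6))
        return lm + keypress_logprob(touch, key)
    return max(KEY_CENTERS, key=score)

# A touch landing between 'w' and 'e' after typing 't' resolves to 'e',
# because the language model strongly prefers the "te" sequence.
print(most_probable_key((1.6, 0.1), prev_char="t"))
```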
Themometers and themostats: characterizing and controlling thematic attributes of information BIBAFull-Text 196-197
  Marko Krema; Larry Birnbaum; Jay Budzik; Kristian J. Hammond
Contents of documents can be characterized not just by subject matter or topic, but also in terms of more thematic or even stylistic attributes, such as level of detail, emotional vs. unemotional presentation, personal vs. impersonal voice, positive or negative attitude or "spin," theoretical vs. practical presentation, etc. As implied by these examples, such attributes often form continuous dimensions along which documents can be distinguished. In the best case, the appropriate place on these dimensions -- i.e., which documents will be of most value to the user at the current time -- can be determined by the user's current task, prior knowledge, etc. Does the user need more detail, or less, for example? In the current state of the art, it is difficult to compute this automatically. Instead, we propose that users should be enabled to view these dimensions using themometers and to navigate them using themostats -- controls that offer users a choice of documents on the same topic, but with differing levels of detail, positive or negative spin, personal or impersonal voice, etc. The dimensions along which we can provide this kind of control must be easily computable. We have been able to characterize documents with respect to a number of these dimensions using simple statistical measures as well as specialized dictionaries, and describe these methods herein.
Generating and presenting user-tailored plans BIBAFull-Text 198-199
  Detlef Küpper; Alfred Kobsa
The paper describes methods for generating user-tailored advice and for suitably presenting it to users, taking their capabilities and knowledge into account.
IIPS: an intelligent information presentation system BIBAFull-Text 200-201
  Yuangui Lei; Enrico Motta; John Domingue
This paper presents the framework of an Intelligent Information Presentation System (IIPS), which provides intelligent interface presentation support for data-intensive web-based applications through the use of ontologies to drive the web site generation and maintenance process. IIPS defines a comprehensive set of ontologies to model the navigational structure, the compositional structure, and the user interfaces of data-intensive web sites, and provides a suite of tools to support site generation, maintenance, and personalization.
The AIL automated interface layout system BIBAFull-Text 202-203
  Simon Lok; Steven K. Feiner
We describe an automated layout system called AIL that generates the user interface for the PERSIVAL digital library project. AIL creates a layout based on a variety of content components and associated meta-data information provided by the PERSIVAL generation and retrieval modules. By leveraging semantic links between the content components, the layout that AIL provides is both context and user-model aware. In addition, AIL is capable of interacting intelligently with the natural language generation components of PERSIVAL to tailor the length of the text content for a given layout.
Design and evaluation of just-in-time help in a multi-modal user interface BIBAFull-Text 204-205
  Judith Masthoff; Ashok Gupta
In order to optimally support learning, help should be given at an appropriate level: providing the users with new information, relevant to and needed for their task. This paper discusses the design and evaluation of such a help system, applied in the Radiology domain.
Storyboard frame editing for cinematic composition BIBAFull-Text 206-207
  Scott McDermott; Junwei Li; William Bares
We are developing an intelligent virtual cinematography interface that can be used to compose sequences of shots and automatically evaluate individual shots and transitions, reporting possible deviations from widely accepted cinematic composition guidelines. Authors compose shots using a Storyboard Frame Editor to place subject objects as they should appear in the frame. Then the Storyboard Sequencer is used to design shot-to-shot transitions.
User interface tailoring for multi-platform service access BIBAFull-Text 208-209
  Guido Menkhaus; Wolfgang Pree
Due to the diversity of display capabilities and input devices, mobile computing gadgets have caused a dramatic increase in the development effort of interactive services. User interface (UI) tailoring and multi-platform access represent two promising concepts for coping with this challenge. The paper presents the MUSA (multiple user interfaces, single application) prototype system, which addresses both issues by introducing an event graph as the basis of a UI tailoring process.
Dynamic "intelligent handler" of frequently asked questions BIBAFull-Text 210-211
  Dick Ng'ambi
This paper describes ongoing research into a dynamic intelligent handler of frequently asked questions (FAQs). Although FAQs are widely used, there is no evidence that most FAQs actually contain frequently asked questions. This doubt arises from the lack of a count of how many times particular questions are asked, of indicators of the most recently asked questions, of profiles of the users who asked them, and of records of when they were last asked. These inadequacies render FAQs less useful for gauging user information needs and for devising appropriate interventions for different categories of users. Thus, we are developing a consulting environment in which an "intelligent handler" receives questions, dynamically creates FAQs with views based on user profiles, and allows users to respond to questions and choose the best responses.
Personalized navigation of heterogeneous product spaces using SmartClient BIBAFull-Text 212-213
  Pearl Pu; Boi Faltings
Personalization in e-commerce has so far been server-centric, requiring users to create a separate individual profile on each server that they wish to access. As product information is increasingly coming from multiple and heterogeneous sources, the number of profiles becomes unmanageably large. We present SmartClient, a technology based on constraint programming in which a thin but intelligent client provides personalized information access for its user. As the process can run on the user's side, it allows much stronger filtering and visualization support with a wider range of personalization options than existing tools. It also eliminates the need to personalize many sites individually with different parameters, and supports product configuration and integration of different information sources in the same framework. We illustrate the technology using an application in travel e-commerce, which is currently under commercial deployment.
XIML: a common representation for interaction data BIBAFull-Text 214-215
  Angel Puerta; Jacob Eisenstein
We introduce XIML (eXtensible Interface Markup Language), a proposed common representation for interaction data. We claim that XIML fulfills the requirements that we have found essential for a language of its type: (1) it supports design, operation, organization, and evaluation functions, (2) it is able to relate the abstract and concrete data elements of an interface, and (3) it enables knowledge-based systems to exploit the captured data.
Do users tolerate errors from their assistant?: experiments with an E-mail classifier BIBAFull-Text 216-217
  Jean-David Ruvini; Jean-Marc Gabriel
Smartlook, an e-mail classifier assistant, helps users file their e-mails into folders. For a given message, it predicts the six most likely folders for that message and provides shortcut buttons that facilitate filing into one of the predicted folders. In this paper, we report results from user tests showing that although Smartlook does not achieve 100% prediction accuracy, a small percentage of errors does not hurt, since users tolerate some errors from such an assistant.
Camera agents in a theatre of work BIBAFull-Text 218-219
  Leonie Schafer; Stefan Kuppers
We describe a novel approach for personalized navigation in a social virtual environment, in which a camera agent enables context-dependent exploration in a three-dimensional information landscape. This type of concept has potential applications for dynamic virtual environments, virtual narratives and awareness in virtual teams. We apply this approach within TOWER, a Theatre of Work, which allows project members to be aware of project relevant activities as well as to establish social relationships to intensify team coherence.
Exploiting visual information in programming by demonstration BIBFull-Text 220-221
  Eric Schwarzkopf; Mathias Bauer; Dietmar Dengler
GUI prototype generation by merging use cases BIBAFull-Text 222-223
  Junko Shirogane; Yoshiaki Fukazawa
In developing application software, it is important to pay attention to end-users' viewpoints. In particular, when designing Graphical User Interfaces (GUIs), it is effective to develop a prototype and show it to end-users in order to reflect their viewpoints, because the design of the GUI strongly affects the usability of the application. Use case diagrams and scenarios are described from end-users' viewpoints, so GUIs that reflect those viewpoints can be developed using use case diagrams. In this paper, we propose a method for generating GUI prototypes from given use case diagrams and scenarios. GUI prototypes are generated from the extracted control structure, with simple widget information appended.
Shared reality: spatial intelligence in intuitive user interfaces BIBAFull-Text 224-225
  Tom Stocky; Justine Cassell
In this paper, we describe an interface that demonstrates spatial intelligence. This interface, an embodied conversational kiosk, builds on research in embodied conversational agents (ECAs) and on information displays in mixed reality and kiosk format. ECAs leverage people's abilities to coordinate information displayed in multiple modalities, particularly information conveyed in speech and gesture. Mixed reality depends on users' interactions with everyday objects that are enhanced with computational overlays. We describe an implementation, MACK (Media lab Autonomous Conversational Kiosk), an ECA who can answer questions about and give directions to the MIT Media Lab's various research groups, projects and people. MACK uses a combination of speech, gesture, and indications on a normal paper map that users place on a table between themselves and MACK. Research issues involve users' differential attention to hand gestures, speech and the map, and how reference using these modalities can be fused in input and generation.
Intelligent elicitation of military lessons BIBAFull-Text 226-227
  Rosina Weber; David W. Aha
We introduce LET (Lesson Elicitation Tool), which uses domain and linguistic knowledge to guide users during their submission of lessons learned. LET can detect a user's need for instructions and disambiguates expressions while collecting taxonomic domain knowledge.
Designing dynamic web pages and persistence in the WYSIWYG interface BIBAFull-Text 228-229
  David Wolber; Yingfeng Su; Yih Tsung Chiang
WebSheets is a programming-in-the-WYSIWYG-interface tool for building dynamic web pages that access and modify databases. Without programming, designers can specify not only the presentation of a page, but the dynamic content as well. This capability is facilitated through a novel application of Programming by Example (PBE), Query by Example (QBE), and spreadsheet formulas within the WYSIWYG HTML editor environment.
Intelligent user interface for a web search engine by organizing page information agents BIBAFull-Text 230-231
  Seiji Yamada; Fumihiko Murase
This paper describes an organization method for page information agents that provides an adaptive interface between a user and a Web search engine. Though a Web search engine returns a hit list of relevant Web pages, the list includes many useless ones. Thus a user often needs to select useful Web pages using the page information on the hit list, such as the title and URL, and to actually fetch the Web pages to check their relevance. Since this page information alone is not sufficient for valid selection, more adequate information is necessary. Hence we propose AOAI, an adaptive interface in which different page information agents are organized through human evaluation.

Demonstration Descriptions

The interactive chef: a task-sensitive assistant BIBAFull-Text 234
  Leonard Chen; Sandra Cheng; Larry Birnbaum; Kristian J. Hammond
In this paper, we describe the Interactive Chef (IChef), an intelligent system that attempts to emulate a human cooking assistant. The user navigates through a recipe using voice, while IChef reads aloud each step. IChef is designed to be aware of the present context and exploit this knowledge by answering questions about ingredients or techniques posed by the user in natural spoken language.
mpME!: music recommendation and exploration BIBAFull-Text 235
  Jared Dunne; Louis Lapat; Marc Flury; Mustafa Shabib; Tom Warner; Kris Hammond; Lawrence Birnbaum
mpME! introduces users to new music and interweaves it with older music the listener already knows and enjoys. mpME! exposes listeners to new music by using existing music information on the Internet, such as musical artist directories. mpME! also provides complementary information about the artist to educate the user throughout the listening experience. mpME! uses a feature intersection algorithm to make recommendations for artists, using features of artists found on the web.
A GUI editor that generates tutoring agents BIBAFull-Text 236
  Jacob Eisenstein; Charles Rich
Tutoring agents can provide a dynamic and engaging way to help users understand an application. However, integrating tutoring agents into applications is difficult. It requires the expertise to create the tutoring agent, and also an understanding of the inner workings of the application itself. This demo presents a task-based GUI editor that produces a software agent tutor for free. The designer need only create a task model, and then use the editor to produce the GUI. A tutoring agent will automatically be included in the new application.
Sketching for knowledge capture: a demonstration BIBAFull-Text 237
  Kenneth D. Forbus; Jeffrey Usher
Many concepts and situations are best explained by sketching. This demonstration will show the key ideas underlying sKEA, the sketching knowledge entry associate, a system we have built for knowledge capture via sketching. In particular, we will demonstrate
  • How glyph bars and blob semantics are used to sidestep the need for recognition of visual symbols.
  • The use of qualitative spatial reasoning to provide richer visual and conceptual understanding of what is being communicated.
  • How arrows are used to express domain relationships.
  • The use of layers to express within-sketch segmentation, including a meta-layer to express subsketch relationships themselves via sketching.
  • Using analogical comparison to explore similarities and differences between sketched concepts.
The active learning framework BIBAFull-Text 238
  Russell Maulitz; Debra McGrath
The Active Learning Framework (ALF) creates a technologically and educationally sophisticated learning environment in which nurse practitioners, medical students and other health professional students can participate in virtual patient encounters anytime, anywhere. Learners are presented with web-based case studies and engage in open-ended, interactive learning activities that are orchestrated by intelligent agent technology. Formative evaluation is provided. ALF has been tested for usability and learner satisfaction by 13 medical students, with generally very positive results.
Jambalaya: an interactive environment for exploring ontologies BIBFull-Text 239
  Margaret-Anne Storey; Natasha F. Noy; Mark Musen; Casey Best; Ray Fergerson; Neil Ernst
Java settlers: a research environment for studying multi-agent negotiation BIBAFull-Text 240
  Robert Thomas; Kristian Hammond
Java Settlers is an environment for doing research in the area of multi-agent negotiation. The vehicle for this research is a Web-deployed Java program for playing the popular German board game, Settlers of Catan.
Emotional dialogue simulator BIBAFull-Text 241
  William R. Wiltschko
Demonstration of a fielded web-based mixed-initiative emotional natural language dialogue simulator in a game-like computer interface for pedagogical purposes.
autoCAID: a model-based GUI tool for machine tools BIBFull-Text 242
  Detlef Zuehlke; Martin Wahl