| Preface | | BIB | Full-Text | v-viii | |
| Alfred Kobsa | |||
| User modelling and user-adapted interaction in an intelligent tutoring system | | BIBAK | Full-Text | 1-32 | |
| Hyacinth S. Nwana | |||
| User modelling and user-adapted interaction are crucial to the provision of
truly individualised instruction, which intelligent tutoring systems strive to
achieve. This paper shows how user (student) modelling and student-adapted
instruction are achieved in FITS, an intelligent tutoring system for the
fractions domain. Some researchers have begun questioning both the need for
detailed student models and the pragmatic possibility of building them.
The key contributions of this paper are in its attempt to rehabilitate student
modelling/adaptive tutoring within ITSs and in FITS's practical use of simple
techniques to realise them with seemingly encouraging results; some
illustrations are given to demonstrate the latter. Keywords: student modelling; tutoring strategies; intelligent tutoring systems | |||
| Beliefs, stereotypes and dynamic agent modeling | | BIBAK | Full-Text | 33-65 | |
| Afzal Ballim; Yorick Wilks | |||
| In many domains (such as dialogue participation and multi-agent cooperative
planning) it is often necessary that the system maintain complex models of the
beliefs of the agents with whom it is interacting. In particular, it is normally
necessary to model agents' beliefs about other agents' beliefs. While in
limited domains it is possible to have such nested
belief models pregenerated, in general it is more reasonable to have a
mechanism for generating the nested models on demand. Two methods for such
generation are discussed, one based on triggering stereotypes, and the other
based on perturbation of the system's beliefs. Both of these approaches have
limitations. An alternative is proposed that merges the two approaches, thus
gaining the benefits of each and using those benefits to avoid the problems of
either of the individual methods. Keywords: belief modeling; nested beliefs; stereotypes; multi-agent cooperation | |||
| Conversation with and through computers | | BIBAK | Full-Text | 67-86 | |
| Susan E. Brennan | |||
| People design what they say specifically for their conversational partners,
and they adapt to their partners over the course of a conversation. A
comparison of keyboard conversations involving a simulated computer partner (as
in a natural language interface) with those involving a human partner (as in
teleconferencing) yielded striking differences and some equally striking
similarities. For instance, there were significantly fewer acknowledgments in
human/computer dialogue than in human/human. However, regardless of the
conversational partner, people expected connectedness across conversational
turns. In addition, the style of a partner's response shaped what people
subsequently typed. These results suggest some issues that need to be addressed
before a natural language computer interface will be able to hold up its end of
a conversation. Keywords: discourse modeling; human/computer interaction; natural language interfaces;
recipient design | |||
| Revising deductive knowledge and stereotypical knowledge in a student model | | BIBAK | Full-Text | 87-115 | |
| Xueming Huang; Gordon I. McCalla | |||
| A user/student model must be revised when new information about the
user/student is obtained. But a sophisticated user/student model is a complex
structure that contains different types of knowledge. Different techniques may
be needed for revising different types of knowledge. This paper presents a
student model maintenance system (SMMS) which deals with revision of two
important types of knowledge in student models: deductive knowledge and
stereotypical knowledge. In the SMMS, deductive knowledge is represented by
justified beliefs. Its revision is accomplished by a combination of techniques
involving reason maintenance and formal diagnosis. Stereotypical knowledge is
represented in the Default Package Network (DPN). The DPN is a knowledge
partitioning hierarchy in which each node contains concepts in a sub-domain.
Revision of stereotypical knowledge is realized by propagating new information
through the DPN to change default packages (stereotypes) of the nodes in the
DPN. A revision of deductive knowledge may trigger a revision of stereotypical
knowledge, which results in a desirable student model in which the two types of
knowledge exist harmoniously. Keywords: user/student model revision; deductive knowledge; stereotypical knowledge;
reason maintenance; diagnosis; default package network | |||
| Introduction to the special issues on plan recognition | | BIB | Full-Text | 121-123 | |
| David N. Chin | |||
| Exploiting temporal and novel information from the user in plan recognition | | BIBAK | Full-Text | 125-148 | |
| Robin Cohen; Fei Song; Bruce Spencer | |||
| This paper is concerned with the general topic of recognizing a user's plan
so that a representation of the user's plans can be included in a user model.
We focus on extending the coverage of plan recognition, by allowing for
additional detail in the user's plan beyond fixed specifications of possible
plans in a system's library. We provide procedures for handling two distinct
extensions: recognizing temporal constraints from the user and admitting novel
information. We conclude by commenting on the importance of these extensions
when including plans in a user model in order to enhance communication between
the system and the user. Keywords: plan recognition; response tailoring; temporal analysis; novel plans;
representation of a user's plan | |||
| Active acquisition of user models: Implications for decision-theoretic dialog planning and plan recognition | | BIBAK | Full-Text | 149-172 | |
| Dekai Wu | |||
| This article investigates the implications of active user model acquisition
for plan recognition, domain planning, and dialog planning in dialog
architectures. A dialog system performs active user model acquisition by
querying the user during the course of the dialog. Existing systems employ
passive strategies that rely on inferences drawn from passive observation of
the dialog. Though passive acquisition generally reduces unnecessary dialog, in
some cases the system can effectively shorten the overall dialog length by
selectively initiating subdialogs for acquiring information about the user.
We propose a theory identifying conditions under which the dialog system should adopt active acquisition goals. Active acquisition imposes a set of rationality requirements not met by current dialog architectures. To ensure rational dialog decisions, we propose significant extensions to plan recognition, domain planning, and dialog planning models, incorporating decision-theoretic heuristics for expected utility. The most appropriate framework for active acquisition is a multi-attribute utility model wherein plans are compared along multiple dimensions of utility. We suggest a general architectural scheme, and present an example from a preliminary implementation. Keywords: active acquisition; decision-theoretic planning; decision theory; dialog
planning; dialog systems; expected utility; multi-attribute utility; plan
recognition; subdialogs; user modeling | |||
| Recognizing intentions, interactions, and causes of plan failures | | BIBAK | Full-Text | 173-202 | |
| Gudula Retz-Schmidt | |||
| Natural language systems for the description of image sequences have been
developed (e.g. Neumann and Novak, 1986; Herzog et al., 1989). Even though
these systems were used to verbalize the behaviour of human agents, they were
limited in that they could only describe the purely visual, i.e.
spatio-temporal, properties of the behaviour observed. For many applications of
such systems (e.g. co-driver systems in traffic, expert systems in
high-performance sports, tutorial systems that give "apprentices" instructions
in construction tasks, etc.), it seems useful to extend their capabilities to
cover a greater part of the performance of a human observer and thus make the
system more helpful to the user. In particular, an interpretation process ought
to be modelled that yields hypotheses about intentional entities from
spatio-temporal information about agents. Its results should be verbalized in
natural language.
This article presents an integrated approach for the recognition and natural language description of plans, intentions, interactions between multiple agents, plan failures, and causes of plan failures. The system described takes observations from image sequences as input. This type of input poses specific problems for the recognition process. After moving objects have been extracted from the image sequences by a vision system and spatio-temporal entities (such as spatial relations and events) have been recognized by an event-recognition system, a focussing process selects interesting agents to be concentrated on in the plan-recognition process. The set of plan hypotheses can be reduced by a hypothesis-selection component. Plan recognition serves as the basis for intention recognition, interaction recognition, and plan-failure analysis. The recognized intentional entities are described in natural language. The system is designed to extend the range of capabilities of the system SOCCER, which verbalizes real-world image sequences of soccer games in natural language. Keywords: observation of actions; plan recognition; intention recognition; multiple
agents; recognition of interactions; recognition and analysis of plan failures;
natural language description of intentional entities | |||
| Building a user model implicitly from a cooperative advisory dialog | | BIBAK | Full-Text | 203-258 | |
| Robert Kass | |||
| This paper reviews existing methods for building user models to support
adaptive, interactive systems, identifies significant problems with these
approaches, and describes a new method for implicitly acquiring user models
from an ongoing user-system dialog. Existing explicit user model acquisition
methods, such as user-edited models or model-building dialogs, put an additional
burden on the user and introduce artificial model-acquisition dialogs.
Hand-coding stereotypes, another explicit acquisition method, is a tedious and
error-prone process. On the other hand, implicit acquisition techniques such as
computing presuppositions or entailments either draw too few inferences to be
generally useful, or too many to be trusted.
In contrast, this paper describes GUMAC, a General User Model Acquisition Component that uses heuristic rules to make default inferences about users' beliefs from their interaction with an advisory expert system. These rules are based on features of human action and conversation that constrain people's behavior and establish expectations about their knowledge. The application of these rules is illustrated with two examples of extended dialogs between users and an investment advisory system. During the course of these conversations, GUMAC is able to acquire an extensive model of the users' beliefs about the aspects of the domain considered in the dialog. These models, in turn, provide the sort of information needed by an explanation generator to tailor explanations the advisory system gives to its users. Keywords: User Model Acquisition; Implicit Acquisition; Belief Modelling; User Model
Representation; General User Modelling; User Modelling Shells; Advisory
Systems; Explanation Tailoring | |||
| Modeling default reasoning using defaults | | BIBAK | Full-Text | 259-288 | |
| Paul Van Arragon | |||
| User modeling research can benefit from formal automated reasoning tools.
However, existing formal tools may need to be modified to suit the needs of user
modeling. Theorist is a simple framework for default reasoning. It can be used
as a tool for building and maintaining a user model, and as a model of a user's
default reasoning. To apply Theorist to both tasks, we develop Nested Theorist
(NT), a simple tool based on Theorist that allows default reasoning on
arbitrarily many levels. We extend NT in two ways: we allow prioritized
defaults, and we allow reasoning about agents with limited reasoning
capabilities. This paper focusses on applications, and uses wide-ranging
examples from user-modeling literature to illustrate the usefulness of the
tools presented. Keywords: User Modeling; Theorist; default reasoning; nested reasoning; limited
reasoning | |||
| Utilizing user models to handle ambiguity and misconceptions in robust plan recognition | | BIBAK | Full-Text | 289-322 | |
| Randall J. Calistri-Yeh | |||
| User models play a critical role in robust plan recognition by controlling
ambiguity. They allow an observer to prefer one plan explanation over another,
and they provide a means of measuring the believability of misconceptions when
the user's plan is flawed. This paper discusses the motivation for including a
user model in robust plan recognition, and shows how a probabilistic
interpretation offers a practical means of incorporating a user model into the
plan recognition process. Keywords: user modeling; plan recognition; probabilities; misconceptions | |||
| Generation and selection of likely interpretations during plan recognition in task-oriented consultation systems | | BIBAK | Full-Text | 323-353 | |
| Bhavani Raskutti; Ingrid Zukerman | |||
| This paper presents a mechanism that infers a user's plans from his/her
utterances by directing the inference process towards the more likely of the
many possible interpretations of a speaker's statements.
Our mechanism uses Bayesian probability theory to assess the likelihood of
an interpretation, and it complements this assessment by taking into
consideration two aspects of an interpretation: its coherence and its
information content. The coherence of an interpretation is determined by the
relationships between the different statements in the discourse. The
information content of an interpretation is a measure of how well defined the
interpretation is in terms of the actions to be performed on the basis of this
interpretation. This measure is used to guide the inference process towards
interpretations with higher information content. The information content of an
interpretation depends on the specificity and the certainty of the inferences
in it, where the certainty of an inference depends on the knowledge on which
the inference is based. Our mechanism has been developed for use in
task-oriented consultation systems. The particular domain that we have chosen
for exploration is that of travel booking. Keywords: Intelligent Interfaces; Plan Inference; Plan Recognition in NLI; Cooperative
Domains; Plausible Interpretations; Multiple Interpretations; Inference
Classification; Bayesian Theory | |||