| Preface to UMUAI Special Issue on Machine Learning for User Modeling | | BIB | Full-Text | 1-3 | |
| G. Webb | |||
| Bayesian Models for Keyhole Plan Recognition in an Adventure Game | | BIBAK | Full-Text | 5-47 | |
| David W. Albrecht; Ingrid Zukerman | |||
| We present an approach to keyhole plan recognition which uses a dynamic
belief (Bayesian) network to represent features of the domain that are needed
to identify users' plans and goals. The application domain is a Multi-User
Dungeon adventure game with thousands of possible actions and locations. We
propose several network structures which represent the relations in the domain
to varying extents, and compare their power to predict a user's current
goal, next action and next location. The conditional probability
distributions for each network are learned during a training phase, which
dynamically builds these probabilities from observations of user behaviour.
This approach allows the use of incomplete, sparse and noisy data during both
training and testing. We then apply simple abstraction and learning techniques
in order to speed up the performance of the most promising dynamic belief
networks without a significant change in the accuracy of goal predictions. Our
experimental results in the application domain show a high degree of predictive
accuracy. This indicates that dynamic belief networks in general show promise
for predicting a variety of behaviours in domains which have similar features
to those of our domain, while reduced models, obtained by means of learning and
abstraction, show promise for efficient goal prediction in such domains.
Keywords: Plan recognition; Bayesian Belief Networks; language learning; abstraction; performance evaluation | |||
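As a rough illustration of the kind of conditional model described in this abstract, the following sketch learns a frequency-based next-action table from an observed action sequence; the class name, variables and toy data are assumptions for illustration, not the authors' actual network structures.

```python
# Illustrative sketch only: a frequency-based conditional model for
# next-action prediction, loosely in the spirit of learning conditional
# probability tables from observed user behaviour. The names and the toy
# action sequence are assumptions, not taken from the paper.
from collections import defaultdict

class NextActionModel:
    def __init__(self):
        # counts[previous_action][next_action] -> observed frequency
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, action_sequence):
        """Accumulate transition counts from an observed action sequence."""
        for prev, nxt in zip(action_sequence, action_sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, current_action):
        """Return the most probable next action and its estimated probability."""
        nexts = self.counts.get(current_action)
        if not nexts:
            return None, 0.0          # unseen context: no prediction
        total = sum(nexts.values())
        best = max(nexts, key=nexts.get)
        return best, nexts[best] / total

model = NextActionModel()
model.train(["open_door", "enter_room", "pick_up_key", "open_door", "enter_room"])
print(model.predict("open_door"))     # ('enter_room', 1.0) on this toy data
```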
| Bayesian Update of Recursive Agent Models | | BIBAK | Full-Text | 49-69 | |
| Piotr J. Gmytrasiewicz; Sanguk Noh | |||
| We present a framework for Bayesian updating of beliefs about models of
agent(s) based on their observed behavior. We work within the formalism of the
Recursive Modeling Method (RMM) that maintains and processes models an agent
may use to interact with other agent(s), the models the agent may think the
other agent has of the original agent, the models the other agent may think the
agent has, and so on. The beliefs about which model is the correct one are
incrementally updated based on the observed behavior of the modeled agent and,
as a result, the probability of the model that best predicted the observed
behavior is increased. Analogously, the models on deeper levels of modeling can
be updated; the models that the agent thinks another agent uses to model the
original agent are revised based on how the other agent is expected to observe
the original agent's behavior, and so on. We have implemented and tested our
method in two domains, and the results show a marked improvement in the quality
of interactions with the belief update in both domains.
Keywords: Bayesian learning; probabilistic updating; agent models; coordination; air defense; decision theory; multi-agent; artificial intelligence | |||
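The belief-update scheme summarised above is essentially Bayes' rule applied over a set of candidate agent models. A minimal sketch, with invented model names and likelihood values:

```python
# Illustrative sketch of Bayesian belief update over candidate agent models:
# the probability of each model is revised by how well it predicted the
# observed behaviour. The candidate models and numbers are invented.

def update_beliefs(priors, likelihoods):
    """priors: {model: P(model)}; likelihoods: {model: P(observation | model)}.
    Returns the normalised posterior P(model | observation)."""
    unnormalised = {m: priors[m] * likelihoods[m] for m in priors}
    z = sum(unnormalised.values())
    if z == 0:
        return dict(priors)            # no model explains the observation
    return {m: p / z for m, p in unnormalised.items()}

priors = {"defensive": 0.5, "aggressive": 0.5}
# How likely each candidate model considered the observed action to be:
likelihoods = {"defensive": 0.2, "aggressive": 0.7}
print(update_beliefs(priors, likelihoods))
# The model that better predicted the behaviour ('aggressive') gains probability.
```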
| Exploring Versus Exploiting when Learning User Models for Text Recommendation | | BIBAK | Full-Text | 71-102 | |
| Marko Balabanović | |||
| The text recommendation task involves delivering sets of documents to users
on the basis of user models. These models are improved over time, given
feedback on the delivered documents. When selecting documents to recommend, a
system faces an instance of the exploration/exploitation tradeoff: whether to
deliver documents about which there is little certainty, or those which are
known to match the user model learned so far. In this paper, a simulation is
constructed to investigate the effects of this tradeoff on the rate of learning
user models, and the resulting compositions of the sets of recommended
documents, in particular World-Wide Web pages. Document selection strategies
are developed which correspond to different points along the tradeoff. Using an
exploitative strategy, our results show that simple preference functions can
successfully be learned using a vector-space representation of a user model in
conjunction with a gradient descent algorithm, but that increasingly complex
preference functions lead to a slowing down of the learning process.
Exploratory strategies are shown to increase the rate of user model acquisition
at the expense of presenting users with suboptimal recommendations; in addition
they adapt to user preference changes more rapidly than exploitative
strategies. These simulated tests suggest an implementation for a simple
control that is exposed to users, allowing them to vary a system's document
selection behavior depending on individual circumstances.
Keywords: Recommender systems; Information filtering; User modeling; Relevance feedback; Selective Dissemination of Information; Machine learning; adaptive information retrieval | |||
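A minimal sketch of the exploration/exploitation choice with a vector-space user model follows; the epsilon-greedy mixing rule, the toy vectors and the learning rate are assumptions for illustration and are not the paper's specific selection strategies.

```python
# Illustrative sketch of exploration vs. exploitation in document selection
# with a vector-space user model. Names, vectors and parameter values are
# invented; they are not the strategies studied in the paper.
import math
import random

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def select_document(user_model, documents, epsilon=0.2):
    """Exploit (pick the best-matching document) with probability 1 - epsilon,
    otherwise explore by picking a document at random."""
    if random.random() < epsilon:
        return random.choice(documents)
    return max(documents, key=lambda d: cosine(user_model, d))

def update_user_model(user_model, document, feedback, rate=0.1):
    """Gradient-descent-style update: move the profile toward (or away from)
    the document vector in proportion to the feedback signal."""
    return [u + rate * feedback * d for u, d in zip(user_model, document)]

user = [0.1, 0.9, 0.0]                       # toy term-weight profile
docs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.2]]    # toy document vectors
chosen = select_document(user, docs)
user = update_user_model(user, chosen, feedback=+1)
```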
| Discovering Error Classes from Discrepancies in Novice Behaviors Via Multistrategy Conceptual Clustering | | BIBAK | Full-Text | 103-129 | |
| Raymund Sison; Masayuki Numao | |||
| The automatic discovery of classes of errors that represent misconceptions
and other knowledge errors underlying discrepancies in novice behavior is not a
trivial task. A novel approach to this problem is described, in which
relationships among behavioral discrepancies are analyzed and inductively
generalized via an unsupervised, incremental, relational multistrategy
conceptual clustering method that takes into account similarities as well as
causalities in the data. Performance results on the classification of
discrepancy sets and discovery of error classes from discrepancies of buggy
PROLOG programs demonstrate the potential of the approach.
Keywords: student modeling; multistrategy learning; unsupervised learning; conceptual clustering | |||
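A greatly simplified sketch of incremental clustering of discrepancy sets is given below; the real method is relational and also exploits causal knowledge, whereas this version only groups feature sets by a Jaccard-similarity threshold chosen for illustration.

```python
# Greatly simplified sketch: cluster discrepancy sets incrementally into
# candidate error classes. Discrepancies are plain sets of symbolic
# features; the threshold and the toy bug features are assumptions.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def incremental_cluster(discrepancy_sets, threshold=0.5):
    clusters = []                          # each cluster: list of feature sets
    for d in discrepancy_sets:
        best, best_sim = None, 0.0
        for c in clusters:
            # Compare against the cluster's pooled features.
            sim = jaccard(d, set().union(*c))
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best.append(d)                 # extend an existing error class
        else:
            clusters.append([d])           # start a new error class
    return clusters

bugs = [{"missing_base_case"}, {"missing_base_case", "wrong_arg_order"},
        {"off_by_one"}]
print(incremental_cluster(bugs))
```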
| Using Decision Trees for Agent Modeling: Improving Prediction Performance | | BIBAK | Full-Text | 131-152 | |
| Bark Cheung Chiu; Geoffrey I. Webb | |||
| A modeling system may be required to predict an agent's future actions under
constraints of inadequate or contradictory relevant historical evidence. This
can result in low prediction accuracy or, alternatively, low prediction rates,
leaving a set of cases for which no predictions are made. A previous study that
explored techniques for improving prediction rates in the context of modeling
students' subtraction skills using Feature Based Modeling showed a tradeoff
between prediction rate and prediction accuracy. This paper presents research
that aims to improve prediction rates without affecting prediction accuracy.
The FBM-C4.5 agent modeling system was used in this research. However, the
techniques explored are applicable to any Feature Based Modeling system, and
the most effective technique developed is applicable to most agent modeling
systems. The default FBM-C4.5 system models agents' competencies with a set of
decision trees, trained on all historical data. Each tree predicts one
particular aspect of the agent's action. Predictions from multiple trees are
compared for consensus. FBM-C4.5 makes no prediction when predictions from
different trees contradict one another. This strategy trades off reduced
prediction rates for increased accuracy. To make predictions in the absence of
consensus, three techniques have been evaluated. They include using voting,
using a tree quality measure and using a leaf quality measure. An alternative
technique that merges multiple decision trees into a single tree provides an
advantage of producing models that are more comprehensible. However, all of
these techniques demonstrated the previous encountered trade-off between rate
of prediction and accuracy of prediction, albeit less pronounced. It was
hypothesized that models built on more current observations would outperform
models built on earlier observations. Experimental results support this
hypothesis. A Dual-model system, which takes this temporal factor into account,
has been evaluated. This fifth approach achieved a significant improvement in
prediction rate without significantly affecting prediction accuracy.
Keywords: Agent modeling; Student modeling; Inductive learning; Decision tree | |||
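The consensus strategy and the voting fallback described above can be sketched as follows; this is not the FBM-C4.5 implementation, and the toy predictor objects and labels are assumptions for illustration.

```python
# Illustrative sketch of consensus-based prediction from several models,
# with majority voting as one possible fallback when the models disagree.
# The predictor objects are assumed to expose a simple predict(case) method.
from collections import Counter

def predict_with_consensus(models, case):
    """Predict only when all models agree (the accuracy-first default)."""
    predictions = [m.predict(case) for m in models]
    return predictions[0] if len(set(predictions)) == 1 else None

def predict_with_voting(models, case):
    """Fallback: when there is no consensus, take the majority vote instead."""
    predictions = [m.predict(case) for m in models]
    return Counter(predictions).most_common(1)[0][0]

class ConstantModel:
    """Toy stand-in for a trained decision tree."""
    def __init__(self, label): self.label = label
    def predict(self, case): return self.label

models = [ConstantModel("borrow"), ConstantModel("borrow"), ConstantModel("no_borrow")]
print(predict_with_consensus(models, case={}))   # None: no consensus, no prediction
print(predict_with_voting(models, case={}))      # 'borrow': majority vote fallback
```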
| Context and Consciousness: Activity Theory and Human Computer Interaction, Bonnie A. Nardi (ed.) | | BIB | Full-Text | 153-157 | |
| Antonio Rizzo; Marco Palmonari | |||
| Case Based Reasoning, by Janet Kolodner | | BIB | Full-Text | 157-160 | |
| M. Sasikumar | |||
| Preface | | BIB | Full-Text | 167-170 | |
| Susan Haller; Susan McRoy | |||
| What is Initiative? | | BIBAK | Full-Text | 171-214 | |
| Robin Cohen; Coralee Allaby; Christian Cumbaa | |||
| This paper presents some alternate theories for explaining the term
'initiative', as it is used in the design of mixed-initiative AI systems.
Although there is now active research in the area of mixed initiative
interactive systems, there appears to be no true consensus in the field as to
what the term 'initiative' actually means. In describing different possible
approaches to the modeling of initiative, we aim to show the potential
importance of each particular theory for the design of mixed initiative
systems. The paper concludes by summarizing some of the key points in common to
the theories, and by commenting on the inherent difficulties of the exercise,
thereby elucidating the limitations necessarily encountered in developing such theories as the basis for designing mixed-initiative systems.
Keywords: Initiative; discourse; goals and plans | |||
| An Evidential Model for Tracking Initiative in Collaborative Dialogue Interactions | | BIBAK | Full-Text | 215-254 | |
| Jennifer Chu-Carroll; Michael K. Brown | |||
| In this paper, we argue for the need to distinguish between task initiative
and dialogue initiative, and present an evidential model for tracking shifts in
both types of initiatives in collaborative dialogue interactions. Our model
predicts the task and dialogue initiative holders for the next dialogue turn
based on the current initiative holders and the effect that observed cues have
on changing them. Our evaluation across various corpora shows that the use of
cues consistently provides significant improvement in the system's prediction
of task and dialogue initiative holders. Finally, we show how this initiative
tracking model may be employed by a dialogue system to enable the system to
tailor its responses to user utterances based on application domain, system's
role in the domain, dialogue history, and user characteristics.
Keywords: Initiative; control; dialogue systems; collaborative interactions | |||
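A very simplified sketch of cue-based initiative tracking follows; the cue names, weights and threshold are invented for illustration and do not reproduce the paper's evidential model.

```python
# Very simplified sketch of predicting the next initiative holder from
# observed cues. The cue inventory, weights and threshold are assumptions.

CUE_WEIGHTS = {              # positive: evidence that initiative shifts to the other party
    "question": +0.3,
    "no_new_info": +0.2,
    "explicit_takeover": +0.6,
    "detailed_proposal": -0.4,
}

def predict_next_holder(current_holder, observed_cues):
    """Return the predicted initiative holder for the next dialogue turn."""
    shift_score = sum(CUE_WEIGHTS.get(cue, 0.0) for cue in observed_cues)
    if shift_score > 0.5:                    # threshold is an assumption
        return "user" if current_holder == "system" else "system"
    return current_holder

print(predict_next_holder("system", ["question", "no_new_info"]))
# 0.5 does not exceed the threshold, so initiative stays with 'system'.
```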
| An Analysis of Initiative Selection in Collaborative Task-Oriented Discourse | | BIBAK | Full-Text | 255-314 | |
| Curry I. Guinn | |||
| In this paper we propose a number of principles and conjectures for
mixed-initiative collaborative dialogs. We explore some methodologies for
managing initiative between conversational participants. We mathematically
analyze specific initiative-changing mechanisms based on a probabilistic
knowledge base and user model. We look at the role of negotiation in managing
initiative and quantify how the negotiation process is useful toward modifying
user models. Some experimental results using computer-computer simulations are
presented along with some discussion of how such studies are useful toward
building human-computer systems.
Keywords: Dialog; mixed-initiative; collaboration; dialog initiative; task initiative; negotiation; computer-computer dialogs | |||
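One simple initiative-setting mechanism in the spirit of this analysis is to hand initiative for a goal to the participant judged more likely to complete it; the sketch below uses invented participant names and probability estimates rather than anything from the paper.

```python
# Minimal sketch of one initiative-setting mechanism: for each (sub)goal,
# give task initiative to the participant estimated more likely to solve it.
# Participant names, the goal and the probabilities are invented.

def assign_initiative(goal, success_estimates):
    """success_estimates: {participant: estimated P(participant can solve goal)}."""
    return max(success_estimates, key=success_estimates.get)

estimates = {"system": 0.8, "user": 0.55}
print(assign_initiative("diagnose_circuit_fault", estimates))   # 'system'
```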
| COLLAGEN: A Collaboration Manager for Software Interface Agents | | BIBAK | Full-Text | 315-350 | |
| Charles Rich; Candace L. Sidner | |||
| We have implemented an application-independent collaboration manager, called
Collagen, based on the SharedPlan theory of discourse, and used it to build a
software interface agent for a simple air travel application. The software
agent provides intelligent, mixed initiative assistance without requiring
natural language understanding. A key benefit of the collaboration manager is
the automatic construction of an interaction history which is hierarchically
structured according to the user's and agent's goals and intentions.
Keywords: Agent; collaboration; mixed initiative; SharedPlan; discourse; segment; interaction history | |||
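A toy sketch of a goal-structured interaction history is shown below; it only illustrates the idea of events grouped into nested segments labelled by purpose and is not Collagen's actual discourse representation. All names and events are invented.

```python
# Toy sketch of a hierarchically structured interaction history: events are
# grouped into segments labelled with the purpose they serve, and segments
# may nest. Purely illustrative; not Collagen's representation.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Segment:
    purpose: str                                   # the goal this segment serves
    children: List[Union["Segment", str]] = field(default_factory=list)

    def render(self, indent=0):
        lines = [" " * indent + self.purpose]
        for child in self.children:
            if isinstance(child, Segment):
                lines.extend(child.render(indent + 2))
            else:
                lines.append(" " * (indent + 2) + child)
        return lines

history = Segment("book a flight", [
    Segment("choose itinerary", ["user: display flights to Boston",
                                 "agent: show timetable"]),
    "user: reserve the 9am flight",
])
print("\n".join(history.render()))
```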