| Lifelike Pedagogical Agents for Mixed-Initiative Problem Solving in Constructivist Learning Environments | | BIBAK | Full-Text | 1-44 | |
| James C. Lester; Brian A. Stone | |||
| Mixed-initiative problem solving lies at the heart of knowledge-based
learning environments. While learners are actively engaged in problem-solving
activities, learning environments should monitor their progress and provide
them with feedback in a manner that contributes to achieving the twin goals of
learning effectiveness and learning efficiency. Mixed-initiative interactions
are particularly critical for constructivist learning environments in which
learners participate in active problem solving. We have recently begun to see
the emergence of believable agents with lifelike qualities. Featured
prominently in constructivist learning environments, lifelike pedagogical
agents could couple key feedback functionalities with a strong visual presence
by observing learners' progress and providing them with visually contextualized
advice during mixed-initiative problem solving. For the past three years, we
have been engaged in a large-scale research program on lifelike pedagogical
agents and their role in constructivist learning environments. In the resulting
computational framework, lifelike pedagogical agents are specified by
(1) a behavior space containing animated and vocal behaviors,
(2) a design-centered context model that maintains constructivist problem
representations, multimodal advisory contexts, and evolving problem-solving
tasks, and (3) a behavior sequencing engine that dynamically selects and
assembles agents' actions in real time to create pedagogically effective,
lifelike behaviors. To empirically investigate this framework, it has been
instantiated in a full-scale implementation of a lifelike pedagogical agent
for Design-A-Plant, a learning environment for the domain of botanical
anatomy and physiology developed for middle school students. Focus group
studies conducted with middle school students interacting with the
implemented agent suggest that lifelike pedagogical agents hold much promise
for mixed-initiative learning.
Keywords: Lifelike agents; pedagogical agents; animated agents; knowledge-based
learning environments; mixed-initiative interaction; intelligent tutoring
systems; intelligent multimedia presentation; intelligent interfaces; task
models | |||
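As an editorial illustration of the three-part framework this abstract describes, the Python sketch below assumes a toy behavior space of tagged animated and vocal behaviors and a minimal advisory context; the class names and the selection policy are hypothetical, not drawn from the Design-A-Plant implementation.

```python
from dataclasses import dataclass

@dataclass
class Behavior:
    """One animated or vocal behavior in the agent's behavior space."""
    name: str
    modality: str        # "animated" or "vocal"
    topics: set          # pedagogical topics this behavior can address
    specificity: int     # higher values give more specific advice

@dataclass
class AdvisoryContext:
    """Design-centered context: the learner's current problem state."""
    topic: str           # current problem-solving topic
    errors: int          # recent learner errors on this topic

def sequence_behaviors(space, ctx):
    """Select and order behaviors for the current context.

    Hypothetical policy: keep behaviors relevant to the current topic,
    escalate specificity with the learner's error count, and lead with
    animated behaviors so the advice is visually contextualized.
    """
    relevant = [b for b in space
                if ctx.topic in b.topics and b.specificity <= ctx.errors + 1]
    return sorted(relevant,
                  key=lambda b: (b.modality != "animated", b.specificity))

space = [
    Behavior("point_at_roots", "animated", {"roots"}, 1),
    Behavior("explain_root_depth", "vocal", {"roots"}, 2),
    Behavior("encourage", "animated", {"roots", "leaves"}, 1),
]
print([b.name for b in sequence_behaviors(space, AdvisoryContext("roots", 1))])
```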
| Mixed-Initiative Issues in an Agent-Based Meeting Scheduler | | BIBAK | Full-Text | 45-78 | |
| Amedeo Cesta; Daniela D'Aloisi | |||
| This paper concerns mixed-initiative interaction between users and agents.
After classifying agents according to their task and their interactivity with
the user, the critical aspects of delegation-based interaction are outlined.
Then MASMA, an agent system for distributed meeting scheduling, is described,
and the solutions developed to control interaction are explained in detail. The
issues addressed concern: the agent's capability of adapting its behavior to the
user it is supporting; the solution adopted to control the shift of initiative
between personal agents, their users and other agents in the environment; the
availability of features, e.g. the inspection mechanism, that endow the user
with a further level of control to enhance his sense of trust in the agent.
Keywords: personal assistants; mixed-initiative interaction; multi-agent systems;
human computer interaction | |||
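A minimal sketch of the kind of initiative shift the abstract mentions, assuming a made-up confidence threshold and an inspection trace; MASMA's actual control mechanisms are not specified here, so everything below is illustrative.

```python
class MeetingAgent:
    """Toy personal assistant in the spirit of delegation-based scheduling.

    Hypothetical behavior: the agent acts autonomously while its
    confidence in a proposed slot is high, and shifts the initiative
    back to the user (leaving an inspectable trace) when it is not.
    """

    def __init__(self, autonomy_threshold=0.8):
        self.autonomy_threshold = autonomy_threshold
        self.trace = []                     # inspection mechanism

    def negotiate(self, slot, confidence):
        self.trace.append((slot, confidence))
        if confidence >= self.autonomy_threshold:
            return f"accepting {slot} on the user's behalf"
        # Initiative shifts back to the user on uncertain proposals.
        return f"deferring: please confirm or reject {slot}"

agent = MeetingAgent()
print(agent.negotiate("Tue 10:00", 0.9))
print(agent.negotiate("Fri 17:00", 0.4))
print(agent.trace)   # the user can inspect why the agent behaved as it did
```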
| Exploring Mixed-Initiative Dialogue Using Computer Dialogue Simulation | | BIBAK | Full-Text | 79-91 | |
| Masato Ishizaki; Matthew Crocker | |||
| This paper experimentally shows that mixed-initiative dialogue is not always
more efficient than non-mixed initiative dialogue in route finding tasks. Based
on the dialogue model proposed in Conversation Analysis and Discourse Analysis
à la the Birmingham school and on Whittaker and Stenton's definition of
initiative, we implement dialogue systems and obtain experimental results by
making the systems interact with each other. Across a variety of instantiations
of the dialogue model, the results show that with easy problems, the efficiency
of mixed-initiative dialogue is a little better than or equal to that of
non-mixed-initiative dialogue, while with difficult problems mixed-initiative
dialogue is less efficient than non-mixed-initiative dialogue.
Keywords: mixed-initiative dialogue; computer dialogue simulation; efficiency of
dialogue; discourse analysis; task-oriented dialogue | |||
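To show the shape of such a dialogue simulation (not the paper's model or results), here is a toy Python experiment in which simulated parties complete a task turn by turn; the solve probabilities and takeover cost are invented parameters, so the printed numbers only illustrate the methodology of making systems interact with each other.

```python
import random

def avg_turns(difficulty, mixed_initiative, trials=2000, seed=1):
    """Average number of turns to finish a toy route-finding dialogue.

    Hypothetical model: each turn solves the task with probability
    1/(2*difficulty); under mixed initiative a partner may take over,
    spending an extra turn for a modest boost to the solve probability.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        turns = 0
        while True:
            turns += 1
            p = 1.0 / (2 * difficulty)
            if mixed_initiative and rng.random() < 0.3:
                turns += 1                  # the takeover itself costs a turn
                p = min(1.0, 1.3 * p)
            if rng.random() < p:
                break
        total += turns
    return total / trials

for difficulty in (1, 4):                   # easy vs. difficult problems
    print(difficulty,
          round(avg_turns(difficulty, False), 2),
          round(avg_turns(difficulty, True), 2))
```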
| A Computational Mechanism for Initiative in Answer Generation | | BIBAK | Full-Text | 93-132 | |
| Nancy Green; Sandra Carberry | |||
| Initiative in dialogue can be regarded as the speaker taking the opportunity
to contribute more information than was his obligation in a particular
discourse turn. This paper describes the use of stimulus conditions as a
computational mechanism for taking the initiative to provide unrequested
information in responses to Yes-No questions, as part of a system for
generating answers to Yes-No questions. Stimulus conditions represent types of
discourse contexts in which a speaker is motivated to add unrequested
information to his answer. Stimulus conditions may be triggered not only by the
discourse context at the time when the question was asked, but also by the
anticipated context resulting from providing part of the response. We define a
set of stimulus conditions based upon previous linguistic studies and a corpus
analysis, and describe how evaluation of these stimulus conditions makes use of
information from a User Model. Also, we show how the stimulus conditions are
used by the generation component of the system. An evaluation was conducted of
the implemented system. The results indicate that the responses generated by
our system containing extra information provided on the basis of this
initiative mechanism are viewed more favorably by users than responses without
the extra information.
Keywords: natural language dialogue; discourse initiative | |||
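A minimal sketch of how stimulus conditions might gate extra information in a yes-no answer, assuming two invented conditions (an expectation-violating "no" and an anticipated follow-up); the paper derives its actual conditions from linguistic studies and a corpus analysis.

```python
def answer_yes_no(direct_answer, context, user_model):
    """Compose a yes-no answer, adding unrequested information when a
    stimulus condition fires. Both conditions below are invented."""
    response = [direct_answer]

    # Invented condition 1: a bare "No" violates the user's expectation,
    # so the speaker is motivated to include an explanation.
    if direct_answer.lower().startswith("no") and user_model.get("expects_yes"):
        response.append(context["explanation"])

    # Invented condition 2: the anticipated context after answering
    # raises an obvious follow-up, so volunteer that information now.
    if context.get("anticipated_follow_up"):
        response.append(context["anticipated_follow_up"])

    return " ".join(response)

print(answer_yes_no(
    "No.",
    {"explanation": "CS360 is only offered in the spring.",
     "anticipated_follow_up": "CS362 covers similar material this term."},
    {"expects_yes": True}))
```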
| User-Tailored Planning of Mixed Initiative Information-Seeking Dialogues | | BIBAK | Full-Text | 133-166 | |
| Adelheit Stein; Jon Atle Gulla; Ulrich Thiel | |||
| Intelligent dialogue systems usually concentrate on user support at the
level of the domain of discourse, following a plan-based approach. Whereas this
is appropriate for collaborative planning tasks, the situation in interactive
information retrieval systems is quite different: there is no inherent
plan-goal hierarchy, and users are known to often opportunistically change
their goals and strategies during and through interaction. We need to allow for
mixed-initiative retrieval dialogues, where the system evaluates the user's
individual dialogue behavior and performs situation-dependent interpretation of
user goals, to determine when to take the initiative and to change the control
of the dialogue, e.g., to propose (new) problem-solving strategies to the user.
In this article, we present the dialogue planning component of a concept-
oriented, logic-based retrieval system (MIRACLE). Users are guided through the
global stages of the retrieval interaction but may depart, at any time, from
this guidance and change the direction of the dialogue. When users submit
ambiguous queries or enter unexpected dialogue control acts, abductive
reasoning is used to generate interpretations of these user inputs in light of
the dialogue history and other internal knowledge sources. Based on these
interpretations, the system initiates a short dialogue offering the user
suitable options and strategies for proceeding with the retrieval dialogue.
Depending on the user's choice and constraints resulting from the history, the
system adapts its strategy accordingly.
Keywords: Conversational retrieval; mixed initiative; dialogue planning; dialogue act
interpretation; abduction | |||
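A toy rendering of the abductive interpretation step, assuming cost-based abduction over candidate readings; MIRACLE's logic-based machinery is far richer, so the data structures and scoring here are purely illustrative.

```python
def interpret(user_input, dialogue_history, explanations):
    """Pick the candidate explanation that best accounts for an
    ambiguous input given the dialogue history.

    Hypothetical scoring: assumptions already supported by the history
    are free, the rest are penalized, and the cheapest reading wins.
    """
    def cost(candidate):
        return sum(1 for a in candidate["assumptions"]
                   if a not in dialogue_history)

    applicable = [c for c in explanations if c["covers"] == user_input]
    return min(applicable, key=cost)

history = {"user_browsing_topic:databases"}
candidates = [
    {"covers": "show more", "assumptions": {"user_browsing_topic:databases"},
     "reading": "expand the current result list"},
    {"covers": "show more", "assumptions": {"user_changed_topic"},
     "reading": "start a new query"},
]
print(interpret("show more", history, candidates)["reading"])
```

Based on the winning interpretation, the system would then offer the user suitable options for proceeding, as the abstract describes.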
| An Approach to Mixed Initiative Spoken Information Retrieval Dialogue | | BIBAK | Full-Text | 167-213 | |
| Eli Hagen | |||
| We present an approach to mixed initiative dialogue in acoustic user
interfaces to databases. First, we discuss how we distinguish between
initiative and control in mixed initiative information retrieval dialogue and
how the notions of taking, keeping, and relinquishing initiative and control
are reflected in our approach. Based on this discussion, we develop a dialogue
planning algorithm. This algorithm distinguishes between resources and routines
and between the type and the content of an utterance; type and content are
calculated separately by routines that reason on the resources -- a dialogue
model, a dialogue history, and an application description. Through this
division we achieve a dialogue where the system adapts to the user's attempts
at changing the direction of a dialogue. Finally, we argue that automatic
segmentation of the dialogue and automatic tracking of initiative and control
are inherent to our approach.
Keywords: dialogue management; dialogue planning; mixed initiative dialogue; spoken
dialogue systems | |||
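To illustrate the separation of type and content computed by distinct routines over shared resources, here is a hypothetical Python sketch; the resource structures (dialogue model, history, application description) are simplified stand-ins, not the paper's algorithm.

```python
def plan_next_utterance(dialogue_model, history, app_description):
    """Compute the *type* and the *content* of the next system
    utterance by separate routines reasoning over shared resources."""
    utt_type = select_type(dialogue_model, history)
    content = select_content(utt_type, history, app_description)
    return utt_type, content

def select_type(dialogue_model, history):
    # Routine over the dialogue model: the last user move determines
    # which system move is legal next (answer, clarify, ...).
    last_move = history[-1]["type"]
    return dialogue_model[last_move]

def select_content(utt_type, history, app_description):
    # Routine over the application description: fill the chosen type.
    slot = history[-1]["slot"]
    if utt_type == "clarify":
        return f"Which {slot} do you mean: {app_description[slot]}?"
    return app_description[slot]

dialogue_model = {"ambiguous_query": "clarify", "query": "answer"}
history = [{"type": "ambiguous_query", "slot": "artist"}]
app_description = {"artist": "Bach (Johann Sebastian) or Bach (C.P.E.)"}
print(plan_next_utterance(dialogue_model, history, app_description))
```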
| Logic-Based Representation and Reasoning for User Modeling Shell Systems | | BIBAK | Full-Text | 217-282 | |
| Wolfgang Pohl | |||
| Core services of user modeling shell systems include the provision of
representations for user model contents and for other relevant knowledge, and
of reasoning mechanisms. These representation and reasoning facilities should
be powerful and flexible, in order to satisfy both complex and specialized
needs that developers of user modeling systems may have. This article first
identifies these needs through a comprehensive overview of logic-based
representation and reasoning in user modeling systems. Then, the AsTRa
(Assumption Type Representation) framework for logic-based user model
representation and reasoning is presented. This framework obtains its power and
flexibility through an integration of the two main scientific approaches that
have been pursued to date, namely the partition approach and the modal logic
approach. The central notion of the framework is the 'assumption type', a
partition-like partial knowledge base for storing all assumptions about the
user that are of the same type. Within assumption types, logic-based
representation formalisms can be employed. The semantics of assumption types
and content formalisms can be characterized in terms of modal logic, so that an
extension to full modal logic is possible. Moreover, special mechanisms for
handling so-called 'negative assumptions' are developed, which are also firmly
grounded in modal logic semantics. The paper concludes with a description of
the user modeling shell BGP-MS as a prototypical implementation of AsTRa, and a
discussion of the approach in the light of other user modeling shells.
Keywords: modal logic approach; partition approach; user model representation and
reasoning; user modeling shell systems | |||
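A minimal sketch of assumption types as partition-like partial knowledge bases; the type names SB and SBUB follow the convention familiar from BGP-MS (system believes / system believes the user believes), but the string formulas and the membership-only `ask` are simplifications, not AsTRa's logic-based reasoning.

```python
class UserModel:
    """Toy AsTRa-style store: one partition-like partial knowledge base
    per assumption type."""

    def __init__(self):
        self.types = {}                    # assumption type -> set of formulas

    def tell(self, assumption_type, formula):
        self.types.setdefault(assumption_type, set()).add(formula)

    def ask(self, assumption_type, formula):
        # A real implementation would run logic-based inference within
        # the type; this sketch only checks explicit membership.
        return formula in self.types.get(assumption_type, set())

um = UserModel()
um.tell("SBUB", "knows(unix_rm)")          # system believes the user believes
um.tell("SB", "novice(user)")              # the system's own assumption
print(um.ask("SBUB", "knows(unix_rm)"))    # True
print(um.ask("SBUB", "knows(unix_grep)"))  # False
```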
| Erratum | | BIB | Full-Text | 283 | |
| A Fuzzy-Based Approach to Stereotype Selection in Hypermedia | | BIBAK | Full-Text | 285-320 | |
| Luigi Di Lascio; Enrico Fischetti | |||
| This paper presents a stereotype-based user model for adaptive hypermedia
systems. We use a suitable algebraic fuzzy structure which can adequately
reflect some features of the user in the model, and apply this model to adapt
the navigation and the content of hypermedia nodes to the user's needs. The
model includes temporal representations of the user and approximates every real
user by a set of stereotypes, from which the one realizing the best
approximation can always be extracted. The set of stereotypes is the support
set of the structure and the operations defined therein may be supplied with
adequate semantics which allow for the selection of the stereotype.
Keywords: fuzzy sets; user modeling; adaptive hypermedia systems; stereotype selection | |||
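A toy version of fuzzy stereotype selection, assuming min-based membership over normalized feature values; the paper's algebraic fuzzy structure and temporal representations are richer than this illustration.

```python
def best_stereotype(observations, stereotypes):
    """Grade the user's membership in each stereotype and return the
    one realizing the best approximation.

    Hypothetical scheme: membership is the minimum, over a stereotype's
    features, of how closely the observed value matches the expected one.
    """
    def membership(st):
        degrees = [1.0 - abs(observations[f] - v)
                   for f, v in st["features"].items()]
        return min(degrees)                # fuzzy AND as minimum

    return max(stereotypes, key=membership)

observations = {"speed": 0.8, "error_rate": 0.2}   # normalized to [0, 1]
stereotypes = [
    {"name": "novice", "features": {"speed": 0.2, "error_rate": 0.7}},
    {"name": "expert", "features": {"speed": 0.9, "error_rate": 0.1}},
]
print(best_stereotype(observations, stereotypes)["name"])   # expert
```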
| Human Plausible Reasoning for Intelligent Help | | BIBAK | Full-Text | 321-375 | |
| Maria Virvou; Benedict Du Boulay | |||
| This paper is about providing intelligent help to users interacting with an
operating system. Its main focus is an investigation of Human Plausible
Reasoning Theory (Collins & Michalski, 1989) to infer the commands the user
should have typed, given what they did type. The theory has been adapted and
incorporated into a prototype Intelligent Help System (IHS) for UNIX users,
called RESCUER, and has been used for the generation and evaluation of
hypotheses about users' beliefs underlying their observed actions on the
UNIX file store. The hypotheses generated by RESCUER were compared to those
made by human experts on the sample scripts from UNIX user sessions. The
potential for Human Plausible Reasoning as a mechanism to reason about slips
and misconceptions is discussed.
Keywords: user modelling; intelligent help systems; human plausible reasoning; error
diagnosis; plan recognition | |||
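To suggest how such hypothesis generation might look, here is a hypothetical sketch that ranks plausible intended commands by string similarity plus a precondition check on the file store; RESCUER's actual mechanism applies Human Plausible Reasoning transforms rather than these heuristics.

```python
import difflib

def plausible_commands(typed, known_commands, file_store):
    """Rank commands the user plausibly meant, given what they typed.

    Invented heuristics: string similarity catches slips, and checking
    that arguments name existing files catches some misconceptions.
    """
    hypotheses = []
    cmd, *args = typed.split()
    for known in known_commands:
        similarity = difflib.SequenceMatcher(None, cmd, known).ratio()
        if similarity < 0.5:
            continue
        preconditions_ok = all(a in file_store for a in args)
        hypotheses.append((known, similarity, preconditions_ok))
    return sorted(hypotheses, key=lambda h: h[1], reverse=True)

known = ["rm", "mv", "cp", "ls"]
print(plausible_commands("rn draft.txt", known, {"draft.txt"}))
# [('rm', 0.5, True)] -- "rn" was plausibly a slip for "rm"
```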