UMUAI Tables of Contents: 01 02 03 04 05 06 07 08 09 10 11 12

User Modeling and User-Adapted Interaction 2

Editors: Alfred Kobsa
Dates: 1992
Volume: 2
Publisher: Springer
Standard No: ISSN 0924-1868 (print); EISSN 1573-1391 (online)
Papers: 12
Links: link.springer.com | Table of Contents
  1. UMUAI 1992 Volume 2 Issue 1/2
  2. UMUAI 1992 Volume 2 Issue 3
  3. UMUAI 1992 Volume 2 Issue 4

UMUAI 1992 Volume 2 Issue 1/2

Weighted abduction for plan ascription BIBAKFull-Text 1-25
  Douglas E. Appelt; Martha E. Pollack
We describe an approach to abductive reasoning called weighted abduction, which uses inference weights to compare competing explanations for observed behavior. We present an algorithm for computing a weighted-abductive explanation, and sketch a model-theoretic semantics for weighted abduction. We argue that this approach is well suited to problems of reasoning about mental state. In particular, we show how the model of plan ascription developed by Konolige and Pollack can be recast in the framework of weighted abduction, and we discuss the potential advantages and disadvantages of this encoding.
Keywords: Plan recognition; Plan evaluation; Mental-state ascription; Abduction; Evaluation metrics
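The core idea of weighted abduction — distributing a goal's assumability cost over a rule's antecedents by fractional weights, and preferring the cheapest explanation — can be sketched as follows. The rule names, literals, and weights are invented for illustration; they are not from the paper.

```python
# Minimal sketch of cost comparison in weighted abduction: antecedents
# already proven cost nothing; the rest must be assumed at
# weight * goal_cost, and the cheapest total wins.

def explanation_cost(goal_cost, antecedents, proven):
    """Cost of explaining a goal via a rule whose antecedents carry
    fractional assumability weights."""
    return sum(weight * goal_cost
               for literal, weight in antecedents
               if literal not in proven)

def best_explanation(goal_cost, rules, proven):
    """Among competing rules (explanations), pick the one with the
    lowest total assumption cost."""
    return min(rules, key=lambda name: explanation_cost(goal_cost,
                                                        rules[name],
                                                        proven))

# Two competing ascriptions of an observed action (invented example):
rules = {
    "going_to_cafe":    [("wants_coffee", 0.5), ("near_cafe", 0.5)],
    "going_to_library": [("wants_book", 0.75)],
}
proven = {"near_cafe"}  # already established from context
```

With a goal cost of 100, "going_to_cafe" needs only one assumption (cost 50) against 75 for the alternative, so it is preferred.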
A meta-rule approach to flexible plan recognition in dialogue BIBAKFull-Text 27-53
  Rhonda Eller; Sandra Carberry
Although a number of researchers have demonstrated that reasoning on a model of the user's plans and goals is helpful in language understanding and response generation, current models of plan inference cannot handle naturally occurring dialogue. This paper argues that model building from less than ideal dialogues has a great deal in common with processing ill-formed input. It defines well-formedness constraints for information-seeking dialogues and contends that strategies for interpreting ill-formed input can be applied to the problem of modeling the user's plan during an ill-formed dialogue. It presents a meta-rule approach for hypothesizing the cause of dialogue ill-formedness, and describes meta-rules for relaxing the plan inference process and enabling the consideration of alternative hypotheses. The advantages of this approach are that it provides a unified framework for handling both well-formed and ill-formed dialogue, avoids unnatural interpretations when the dialogue is proceeding smoothly, and facilitates a nonmonotonic plan recognition system.
Keywords: Plan recognition; Dialogue; Ill-formedness
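The meta-rule strategy the abstract describes — strict plan inference first, then relaxation rules that each hypothesize a cause of ill-formedness — can be illustrated with a toy inference loop. The constraint, plan library, and meta-rule here are invented for the example.

```python
# Toy illustration: plans are admitted only if every well-formedness
# constraint holds; on failure, meta-rules relax the constraints one
# hypothesis at a time and record the presumed cause.

def infer_plan(utterance, plan_library, constraints):
    """Plans compatible with the utterance under the given constraints."""
    return [plan for plan, required in plan_library.items()
            if all(c(utterance, required) for c in constraints)]

def infer_with_meta_rules(utterance, plan_library, constraints, meta_rules):
    """Strict inference first; then try each (cause, relaxation) pair."""
    plans = infer_plan(utterance, plan_library, constraints)
    if plans:
        return plans, None  # dialogue is well-formed
    for cause, relax in meta_rules:
        plans = infer_plan(utterance, plan_library, relax(constraints))
        if plans:
            return plans, cause
    return [], "unexplained"

# Invented single constraint and meta-rule:
constraints = [lambda utt, req: utt["topic"] == req["topic"]]
meta_rules = [("topic shift", lambda cs: [])]  # hypothesis: user changed topic
library = {"book_flight": {"topic": "travel"}}
```

A well-formed utterance passes with no hypothesized cause; an off-topic one is still interpreted, with "topic shift" recorded as the cause of ill-formedness.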
Controlling inference in plan recognition BIBAKFull-Text 55-82
  James Mayfield
An algorithm based on an assessment of the completeness of an explanation can be used to control inference in a plan recognition system: If the explanation is complete, inference is stopped. If the explanation is incomplete, inference is continued. If it cannot be determined whether the explanation is complete, then the system weighs the strength of its interest in continuing the analysis against the estimated cost of doing so. This algorithm places existing heuristic approaches to the control of inference in plan recognition into a unified framework. The algorithm rests on the principle that the decision to continue processing should be based primarily on the explanation chain itself, not on external factors. Only when an analysis of the explanation chain proves inconclusive should outside factors weigh heavily in the decision. Furthermore, a decision to discontinue chaining should never be final; other components of the system should have the opportunity to request that an explanation chain be extended. An implementation of the algorithm, called PAGAN, demonstrates the usefulness of this approach.
Keywords: plan recognition; abduction; control of inference; explanation; natural language understanding; consultation systems
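The control decision the abstract outlines has a simple three-way shape, which can be sketched directly. The status labels and the interest/cost comparison are illustrative stand-ins for PAGAN's internal measures.

```python
# Sketch of completeness-driven inference control: the explanation
# chain itself decides when possible; external factors decide only
# when completeness is indeterminate.

COMPLETE, INCOMPLETE, UNKNOWN = "complete", "incomplete", "unknown"

def should_continue(status, interest=0.0, estimated_cost=0.0):
    if status == COMPLETE:
        return False   # explanation suffices: stop chaining
    if status == INCOMPLETE:
        return True    # chain is clearly unfinished: keep going
    # completeness undetermined: weigh interest against expected cost
    return interest > estimated_cost
```

Note that, per the abstract, a False result here need not be final: other components may later request that the chain be extended.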
On the interaction between plan recognition and intelligent interfaces BIBAKFull-Text 83-115
  Bradley A. Goodman; Diane J. Litman
Plan recognition is an active research area in automatic reasoning, as well as a promising approach to engineering interfaces that can exploit models of users' plans and goals. Much research in the field has focused on the development of plan recognition algorithms to support particular user/system interactions, such as those found in naturally occurring dialogues. However, two questions have typically remained unexamined: 1) exactly what kinds of interface tasks can knowledge of a user's plans support across communication modalities, and 2) how can such tasks in turn constrain the development of plan recognition algorithms? In this paper we present a concrete exploration of these issues. In particular, we provide an assessment of plan recognition with respect to its use in enhancing user interfaces. We clarify how the use of a user model containing plans makes interfaces more intelligent and interactive (by providing an intelligent assistant that supports such tasks as advice generation, task completion, context-sensitive responses, and error detection and recovery). We then show how interface tasks in turn impose constraints that must be satisfied in order for any plan recognizer to construct and represent a plan in ways that efficiently support these tasks. Finally, we survey how interfaces are fundamentally limited by current plan recognition approaches, and use these limitations to identify and motivate current research. Our research is developed in the context of CHECS, a plan-based design interface.
Keywords: plan recognition; intelligent interfaces; user models; multimodal communication; computer-aided design
Student modeling to support multiple instructional approaches BIBAKFull-Text 117-154
  Robert V. London
Intelligent computer-assisted instruction (ICAI) systems have continually sought increased flexibility to respond appropriately to the multi-faceted interests of students. Research on the Image student modeler of the Guidon2 ICAI system has developed a multiple-anticipation approach to plan generation and interpretation that directly meets a wide range of communication goals: providing information support, encouraging exploration with interesting elaborations, recognizing strategic mistakes in actions and plans, evaluating success in domain tasks, diagnosing misconceptions, and recommending improvements for mistakes.
   In order to meet pragmatic system constraints, Image must provide its full range of advice simultaneously, continually, and quickly. It drops many of the simplifying assumptions typically used by plan recognition user modelers, including assumptions of closed-world knowledge and of the user's correctness, cooperation, and unified goal. To maintain efficiency for dynamic plan recognition, Image relies instead on two assumptions of cognitive economy, contextual relevance and conceptual easiness, which are operational forms of Grice's maxims of relation and quantity. Its multiple-anticipation approach to plan management provides all of the requisite information together and allows incremental updating and relaxation methods of interpretation, even when students are shifting focus frequently.
Keywords: Student modeling; user modeling; plan recognition; ICAI; instruction
Intention structure and extended responses in a portable natural language interface BIBAKFull-Text 155-179
  Judith Schaffer Sider; John D. Burger
This paper describes discourse processing in King Kong, a portable natural language interface. King Kong enables users to pose questions and issue commands to a back end system. The notion of a discourse is central to King Kong, and underlies much of the intelligent assistance that Kong provides to its users. Kong's approach to modeling discourse is based on the work of Grosz and Sidner (1986). We extend Grosz and Sidner's framework in several ways, principally to allow multiple independent discourse contexts to remain active at the same time. This paper also describes King Kong's method of intention recognition, which is similar to that described in Kautz and Allen (1986) and Carberry (1988). We demonstrate that a relatively simple intention recognition component can be exploited by many other discourse-related mechanisms, for example to disambiguate input and resolve anaphora. In particular, this paper describes in detail the mechanism in King Kong that uses information from the discourse model to form a range of cooperative extended responses to queries in an effort to aid the user in accomplishing her goals.
Keywords: discourse modeling; user modeling; plan recognition; misconception correction; presumption validation; cooperative responses

UMUAI 1992 Volume 2 Issue 3

Generating tailored definitions using a multifaceted user model BIBAFull-Text 181-210
  Margaret H. Sarner; Sandra Carberry
This paper presents a computational strategy for reasoning on a multifaceted user model to generate definitions tailored to the user's needs in a task-oriented dialogue. The strategy takes into account the current focus of attention in the user's partially constructed plan, the user's domain knowledge, and the user's receptivity to different kinds of information. It constructs a definition by weighting both the strategic predicates that might comprise a definition and the propositions that might be used to fill them. These weights are used to construct a definition that includes the information deemed most useful, using information of lesser importance as necessary to adhere to common rhetorical practices. This strategy reflects our hypothesis that beliefs about the appropriate content of a definition should guide selection of a rhetorical strategy, instead of the choice of a rhetorical strategy determining content.
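The content-selection step described above — weighting candidate material and keeping the most useful, padding with lesser items only as rhetoric requires — can be sketched as a ranked filter. The predicates, propositions, weights, and threshold below are invented for illustration; the paper's weights combine plan focus, domain knowledge, and receptivity.

```python
# Sketch of weighted content selection for a tailored definition:
# keep every (predicate, proposition) pair above a usefulness
# threshold, but never return an empty definition.

def select_content(candidates, threshold, minimum=1):
    """candidates: list of (predicate, proposition, weight) triples."""
    ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
    chosen = [c for c in ranked if c[2] >= threshold]
    if len(chosen) < minimum:
        chosen = ranked[:minimum]  # fall back to the best-weighted items
    return [(pred, prop) for pred, prop, _ in chosen]

# Invented candidates for defining a drug term to a novice user:
cands = [
    ("identification", "a beta-blocker", 0.9),
    ("use", "treats hypertension", 0.7),
    ("analogy", "like propranolol", 0.3),
]
```

Lowering the threshold admits the lesser-weighted elaborations; raising it past every weight still yields the single most useful item.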
Generating help for users of application software BIBAKFull-Text 211-248
  C. Tattersall
Help for users of Information Processing Systems (IPSs) is typically based upon the presentation of pre-stored texts written by the system designers for predictable situations. Though advances in user interface technology have eased the process of requesting advice, current on-line help facilities remain tied to a back-end of canned answers, spooled onto users' screens to describe queried facilities.
   This paper argues that the combination of a user's knowledge of an application and the particular states which a system can assume requires different answers for the large number of possible situations. Thus, a marriage of techniques from the fields of text generation and Intelligent Help Systems research is needed to construct responses dynamically. Furthermore, it is claimed that help texts should attempt not only to address the immediate needs of the user, but also to facilitate learning of the system by incorporating a variety of educational techniques to specialise answers in given contexts.
   A computational scheme for help text generation based on schemas of rhetorical predicates is presented. Using knowledge of application programs and their users, it is possible to provide a variety of answers in response to a number of questions. The approach uses object-oriented techniques to combine information from a variety of sources in a flexible manner, yielding responses which are appropriate to the state of the IPS and to the user's level of knowledge.
   Modifications to the scheme which resulted from its evaluation in the EUROHELP project are described, together with ongoing collaborative work and further research developments.
Keywords: Text generation; Intelligent help; knowledge representation
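Schema-driven help generation of the kind the abstract describes can be sketched as instantiating an ordered list of rhetorical predicates, with fillers conditioned on the user's level. The schema, predicates, and UNIX-flavored fillers below are invented for the example, not EUROHELP's actual scheme.

```python
# Sketch: a schema is an ordered list of rhetorical predicates, each
# filled from knowledge of the application; elaborations are skipped
# for expert users, who need less scaffolding.

SCHEMA = ["identification", "operation", "elaboration"]

def generate_help(command, knowledge, user_level):
    parts = []
    for predicate in SCHEMA:
        if predicate == "elaboration" and user_level == "expert":
            continue
        filler = knowledge.get((command, predicate))
        if filler:
            parts.append(filler)
    return " ".join(parts)

# Invented knowledge base for one command:
kb = {
    ("rm", "identification"): "rm removes files.",
    ("rm", "operation"): "Give it one or more file names.",
    ("rm", "elaboration"): "Use -i to confirm each removal.",
}
```

A novice gets all three predicates instantiated; an expert gets only identification and operation.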
Modeling user action planning: A comprehension based approach BIBAKFull-Text 249-285
  Stephanie M. Doane; Suzanne M. Mannes
We review our efforts to model user command production in an attempt to characterize the knowledge users of computers have at various stages of learning. We modeled computer users with a system called NETWORK (Mannes and Kintsch, 1988; 1991) and modeled novice, intermediate, and expert UNIX command production data collected by Doane et al. (1990b) with a system called UNICOM (Doane et al., 1989a; 1991). We use the construction-integration theory of comprehension proposed by Kintsch (1988) as a framework for our analyses. By focusing on how instructions activate the knowledge relevant to the performance of the specified task, we have successfully modeled major aspects of correct user performance by incorporating in the model knowledge about individual commands and knowledge that allows the correct combination of elementary commands into complex, novel commands. Thus, experts can be modeled in both NETWORK and in UNICOM. We further show that salient aspects of novice and intermediate performance can be described by removing critical elements of knowledge from the expert UNICOM model. Results suggest that our comprehension-based approach has promise for understanding user interactions, and implications for system design are discussed.
Keywords: levels of user expertise; human-computer interaction; novel plans; discourse comprehension
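The abstract's key move — deriving novice and intermediate models by deleting knowledge elements from the expert model — amounts to set subtraction over a knowledge base, which can be sketched directly. The knowledge items and level assignments below are invented examples, not UNICOM's actual rules.

```python
# Sketch: a learner model is the expert model minus the elements a
# given expertise level is hypothesized to lack; a command can be
# produced only if all required elements survive the subtraction.

EXPERT = {
    "elementary:ls", "elementary:sort", "elementary:pipe",
    "composition:pipe-combines-commands",
}

def model_for(level, removed_by_level):
    return EXPERT - removed_by_level.get(level, set())

def can_produce(model, task_requirements):
    return task_requirements <= model  # subset test

# Invented deletions per level:
removed = {
    "novice": {"elementary:pipe", "composition:pipe-combines-commands"},
    "intermediate": {"composition:pipe-combines-commands"},
}
complex_task = {"elementary:ls", "elementary:sort",
                "elementary:pipe", "composition:pipe-combines-commands"}
```

The expert model produces the novel composite command; the degraded models fail on exactly the knowledge they are missing.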

UMUAI 1992 Volume 2 Issue 4

Exploiting user feedback to compensate for the unreliability of user models BIBAKFull-Text 287-330
  Johanna D. Moore; Cécile L. Paris
Natural Language is a powerful medium for interacting with users, and sophisticated computer systems using natural language are becoming more prevalent. Just as human speakers show an essential, inbuilt responsiveness to their hearers, computer systems must "tailor" their utterances to users. Recognizing this, researchers devised user models and strategies for exploiting them in order to enable systems to produce the "best" answer for a particular user.
   Because these efforts were largely devoted to investigating how a user model could be exploited to produce better responses, systems employing them typically assumed that a detailed and correct model of the user was available a priori, and that the information needed to generate appropriate responses was included in that model. However, in practice, the completeness and accuracy of a user model cannot be guaranteed. Thus, unless systems can compensate for incorrect or incomplete user models, the impracticality of building user models will prevent much of the work on tailoring from being successfully applied in real systems. In this paper, we argue that one way for a system to compensate for an unreliable user model is to be able to react to feedback from users about the suitability of the texts it produces. We also discuss how such a capability can actually alleviate some of the burden now placed on user modeling. Finally, we present a text generation system that employs whatever information is available in its user model in an attempt to produce satisfactory texts, but is also capable of responding to the user's follow-up questions about the texts it produces.
Keywords: question answering; natural language generation; adaptive systems; text planning; explanation; expert systems; user modeling
Interactive user modeling: An integrative explicit-implicit approach BIBAKFull-Text 331-365
  Eyal Shifroni; Benny Shanon
User modeling issues are examined in the context of a user-adapted guidance system. The system provides users with instructions about natural tasks without introducing a special time-consuming sub-dialog to elicit the user's knowledge. A model for providing such guidance is developed on the basis of a phenomenological analysis of human guidance, and illustrated by a system that gives directions in geographical domains. The main features of the user model design include: (1) Both implicit and explicit acquisition methods are employed in a flexible manner; (2) The guidance instructions and the user model are generated incrementally and interchangeably; (3) Users' responses and non-responses are employed as a source of information for user modeling. The model and the resulting system's performance are examined in light of recent developments in the cognitive literature.
Keywords: User-Model; Guidance Systems; Advisory Systems; Man-Machine Interaction; Natural-Language Generation
Modeling the user knowledge by belief networks BIBAKFull-Text 367-388
  Fiorella De Rosis; Sebastiano Pizzutilo
This paper describes the user modeling component of EPIAIM, a consultation system for data analysis in epidemiology. The component is aimed at representing knowledge of concepts in the domain, so that their explanations can be adapted to user needs. The first part of the paper describes two studies aimed at analysing user requirements. The first one is a questionnaire study which examines the respondents' familiarity with concepts. The second one is an analysis of concept descriptions in textbooks and from expert epidemiologists, which examines how discourse strategies are tailored to the level of experience of the expected audience. The second part of the paper describes how the results of these studies have been used to design the user modeling component of EPIAIM. This module operates in two steps. In the first step, a few trigger questions allow the activation of a stereotype that includes a "body" and an "inference component". The body is the representation of the body of knowledge that a class of users is expected to know, along with the probability that each item is known. In the inference component, the learning process of concepts is represented as a belief network. Hence, in the second step the belief network is used to refine the initial default information in the stereotype's body. This is done by asking a few questions on those concepts where it is uncertain whether or not they are known to the user, and propagating this new evidence to revise the whole situation. The system has been implemented on a workstation under UNIX. An example of the system's functioning is presented, and advantages and limitations of the approach are discussed.
Keywords: formal representation of user models; user stereotypes; levels of user expertise; belief networks; discourse strategies; user tailored explanations
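The second step of the approach — propagating evidence from a few questions to revise the stereotype's default probabilities — can be illustrated with a single two-node fragment of a belief network. The probabilities and the prerequisite relation are invented for the example; EPIAIM's networks are larger.

```python
# Two-node sketch: the stereotype supplies the prior that a
# prerequisite concept is known; an answered question about it is
# propagated to revise belief that a dependent concept is known.

def posterior_known(prior_prereq, p_known_given_prereq, p_known_given_not,
                    prereq_observed=None):
    """P(dependent concept known), optionally conditioning on a direct
    answer about the prerequisite concept."""
    if prereq_observed is None:
        p = prior_prereq          # no question asked: use the stereotype
    else:
        p = 1.0 if prereq_observed else 0.0
    return p * p_known_given_prereq + (1 - p) * p_known_given_not
```

For instance, with a stereotype prior of 0.6 that the user knows "probability", and P(knows "odds ratio" | knows "probability") = 0.9 versus 0.2 otherwise, a "yes" answer raises belief in "odds ratio" from 0.62 to 0.9, and a "no" lowers it to 0.2.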