| A user model neural network for a personal news service | | BIBAK | Full-Text | 1-25 | |
| Andrew Jennings; Hideyuki Higuchi | |||
| User modelling has been widely applied to pedagogic situations, where we are
attempting to infer the user's knowledge. In teaching it is important to know
that the user has mastered the elementary concepts before proceeding with the
advanced topics. However, the application of user modelling to information
retrieval demands a quite different type of user model. Here we construct a
user model for browsing, where the user is uncertain of exactly which
information he desires. This requires a more inexact and robust user model
that can quickly give guidance to the system. We propose a user model based on
neural networks that can be constructed incrementally. Performance of the model
shows some promise for this approach. We discuss the advantages and limitations
of the approach and its implications for user modelling. Keywords: Neural networks; information retrieval; browsing | |||
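The incremental construction described in this abstract can be illustrated with a minimal sketch. This is not the authors' network, only an assumed online scorer over keyword features; the class name, vocabulary, and update rule are all illustrative.

```python
# Illustrative sketch (NOT the paper's model): an online, single-layer
# relevance scorer that is refined after each browsing interaction.

class BrowsingUserModel:
    def __init__(self, vocabulary):
        # One weight per known keyword; starts neutral.
        self.weights = {term: 0.0 for term in vocabulary}

    def score(self, article_terms):
        # Higher score = predicted to be more relevant to this user.
        return sum(self.weights.get(t, 0.0) for t in article_terms)

    def update(self, article_terms, selected, lr=0.1):
        # Incremental update from one observation: reinforce terms of
        # articles the user opened, penalise terms of skipped ones.
        target = 1.0 if selected else -1.0
        for t in article_terms:
            if t in self.weights:
                self.weights[t] += lr * target

model = BrowsingUserModel(["sport", "finance", "weather"])
model.update(["sport"], selected=True)
model.update(["finance"], selected=False)
print(model.score(["sport"]) > model.score(["finance"]))  # True
```

The inexactness the abstract calls for comes cheaply here: each interaction adjusts only a few weights, so the model gives rough guidance long before it converges.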
| Using structural descriptions of interfaces to automate the modelling of user cognition | | BIBAK | Full-Text | 27-64 | |
| Jon May; Philip J. Barnard; Ann Blandford | |||
| One approach to user modelling (Barnard et al., 1988) involves building
approximate descriptions of the cognitive activity underlying task performance
in human-computer interactions. This approach does not aim to simulate exactly
what is going on in the user's head, but to capture the salient features of
their cognitive processing. The technique requires several sets of production
rules. One set maps from a real-world description of an interface design to an
internal theoretical description. Other rules elaborate the theoretical
description, while further rules map from the theoretical description to
properties of user behaviour. This paper is concerned primarily with the first
type of rule, for mapping from interface descriptions to theoretical
description of cognitive activity. Here we show how structural descriptions of
interface designs can be used to model user tasks, visual interface objects and
screen layouts. Included in our treatment are some indications of how
properties of cognitive activity and their behavioural consequences can be
inferred from such structural descriptions. An expert system implementation of
the modelling technique has been developed, and its structure is described,
together with some examples of its use in the evaluation of HCI design
scenarios. Keywords: Cognition; usability; interface; HCI; design; task structure; icons; screen
layout; expert systems | |||
| Adaptive systems: A solution to usability problems | | BIBAK | Full-Text | 65-87 | |
| David Benyon | |||
| Improving the usability of computer systems is perhaps the most important
goal of human-computer interaction research. Current approaches to usability
engineering tend to focus on simply improving the interface. An alternative is
to build intelligence into the system. However, in order to do this a more
comprehensive analysis is required and systems must be designed so that they
can be made adaptive. This paper examines the implications for systems
analysis, design and usability specification if adaptive systems are to be a
realistic solution to usability problems. Keywords: Usability; adaptive systems; analysis; adaptive system architecture | |||
| Model-based cognitive diagnosis | | BIBAK | Full-Text | 89-106 | |
| John Self | |||
| This paper considers the problem of cognitive diagnosis as an instance of
general diagnosis, as studied in artificial intelligence. Cognitive diagnosis
is the process of inferring a cognitive state from observations of performance.
It is thus a key component of any system which attempts to build a dynamic
model of the user of that system. Many issues in cognitive diagnosis,
previously discussed informally, are mapped onto formal techniques, with
consequent increased clarity and rigour. But it is concluded that the 'general'
theories for diagnosis must be broadened to fully encompass the problems of
cognitive diagnosis. Keywords: Student modelling; diagnosis; fault models; hierarchical abstraction;
default reasoning | |||
| Defining the semantics of extended genetic graphs | | BIBAK | Full-Text | 107-153 | |
| L. Niem; B. J. Fugère; P. Rondeau | |||
| In the present work, the semantics of the Extended Genetic Graph (EGG) is
defined in order to eliminate limitations inherent in these graphs in the
modelling of an ideal Student Model. The semantics of extended genetic graphs
can be defined at two representational levels: conceptual and transactional.
First, the student's knowledge as represented by EGG nodes is specified
explicitly at the conceptual level using the conceptual graphs (CGs) as a
representation. Secondly, the criteria for the definition and use of learning
processes such as analogy, generalization, refinement, component, and
deviation/correction are specified at the transactional level. These criteria
are then associated with the conditions of existence of different EGG links as
they are implicitly assumed in the semantics of these graphs. Once the
conditions of their creation are known, the semantics of EGG links can be
represented explicitly by the use of CGs and Predicate Transition Networks
(PrTNs). These representations are then used for detecting different types of
EGG links.
Conceptual graphs combined with PrTNs are able to describe semantic structures equivalent to those contained implicitly in EGGs. However, the semantics of the combined graph, which draw on results from cognitive psychology, natural language processing, and logic, are richer than the semantics of the EGG. Furthermore, the operations provided by conceptual graph theory, combined with the constraint specifications expressed by PrTNs, allow the modification of the learner graph. Thus, our proposed representational framework provides the basis for the construction of a deep, dynamic student model. An example from the Boolean algebra domain demonstrates its feasibility. Keywords: information processing; learning; knowledge representation; CAI; ICAI; AI | |||
| Consulting a user model to address a user's inferences during content planning | | BIBAK | Full-Text | 155-185 | |
| Ingrid Zukerman; Richard McConachy | |||
| Most Natural Language Generation systems developed to date assume that a
user will learn only what is explicitly stated in the discourse. This
assumption leads to the generation of discourse that states explicitly all the
information to be conveyed, and does not address further inferences from the
discourse. In this paper, we describe a student model which provides a
qualitative representation of a student's beliefs and inferences, and a content
planning mechanism which consults this model in order to address the above
problems. Our mechanism performs inferences in backward reasoning mode to
generate discourse that conveys the intended information, and in forward
reasoning mode to draw conclusions from the presented information. The forward
inferences enable our mechanism to address possible incorrect inferences from
the discourse, and to omit information that may be easily inferred from the
discourse. In addition, our mechanism improves the conciseness of the generated
discourse by omitting information known by the student. The domain of our
implementation is the explanation of concepts in high school algebra. Keywords: content planning; student beliefs; inferences; backward reasoning; forward
reasoning | |||
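The forward-inference idea in this abstract — omitting content the user can easily derive — can be sketched with simple Horn-clause chaining. The rule format, function names, and planning loop below are assumptions for illustration, not the paper's mechanism.

```python
# Hedged sketch of forward reasoning during content planning: facts
# derivable from already-stated content (via Horn rules) are omitted.

def forward_close(facts, rules):
    # rules: list of (premises, conclusion) pairs.
    # Repeatedly apply rules until no new conclusions appear.
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in closed and all(p in closed for p in premises):
                closed.add(conclusion)
                changed = True
    return closed

def plan_content(to_convey, known, rules):
    # State a fact only if the user cannot infer it from prior
    # beliefs plus what the discourse has already stated.
    stated = []
    for fact in to_convey:
        if fact not in forward_close(set(stated) | known, rules):
            stated.append(fact)
    return stated

rules = [(("a",), "b")]  # if "a" is believed, "b" follows
print(plan_content(["a", "b", "c"], known=set(), rules=rules))  # ['a', 'c']
```

Here "b" is omitted because stating "a" already licenses it, which is exactly the conciseness gain the abstract describes; detecting possible *incorrect* forward inferences would extend the same closure computation.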
| Adaptive hypertext navigation based on user goals and context | | BIBAK | Full-Text | 193-220 | |
| Craig Kaplan; Justine Fenwick; James Chen | |||
| Hypertext systems allow flexible access to topics of information, but this
flexibility has disadvantages. Users often become lost or overwhelmed by
choices. An adaptive hypertext system can overcome these disadvantages by
recommending information to users based on their specific information needs and
preferences. Simple associative matrices provide an effective way of capturing
these user preferences. Because the matrices are easily updated, they support
the kind of dynamic learning required in an adaptive system.
HYPERFLEX, a prototype of an adaptive hypertext system that learns, is described. Informal studies with HYPERFLEX clarify the circumstances under which adaptive systems are likely to be useful, and suggest that HYPERFLEX can reduce time spent searching for information by up to 40%. Moreover, these benefits can be obtained with relatively little effort on the part of hypertext authors or users. The simple models underlying HYPERFLEX's performance may offer a general and useful alternative to more sophisticated modelling techniques. Conditions under which these models, and similar adaptation techniques, might be most useful are discussed. Keywords: adaptive interface applications; hypertext; user models; human-computer
interaction; associative matrices; intelligent information retrieval; relevance
feedback | |||
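The associative matrix at the heart of this abstract can be sketched in a few lines. The goal/topic layout, learning rate, and update rule below are assumptions in the spirit of HYPERFLEX, not details taken from the paper.

```python
# Sketch of an associative-matrix recommender: rows are user goals,
# columns are hypertext topics, and each cell holds a learned
# relevance strength (all names and values are illustrative).

goals = ["install", "troubleshoot"]
topics = ["setup", "errors", "faq"]
# The associative matrix, initialised to neutral strengths.
M = {g: {t: 0.0 for t in topics} for g in goals}

def visit(goal, topic, useful, lr=0.2):
    # Easily updated: one cell changes per observed interaction,
    # which is what makes in-session adaptive learning cheap.
    M[goal][topic] += lr * (1.0 if useful else -1.0)

def recommend(goal):
    # Rank topics by their learned association with the stated goal.
    return sorted(topics, key=lambda t: -M[goal][t])

visit("install", "setup", useful=True)
visit("install", "faq", useful=False)
print(recommend("install"))  # "setup" ranked first
```

The simplicity is the point the abstract makes: a single cell update per interaction needs no retraining step, so the system adapts continuously with little effort from authors or users.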
| User modelling in interactive explanations | | BIBAK | Full-Text | 221-247 | |
| Alison Cawsey | |||
| In this paper I consider how user modelling can be used to improve the
provision of complex explanations, and discuss in detail the user modelling
component of the EDGE explanation system. This allows a user model to be both
updated and used in an explanatory dialogue with the user. The model is updated
based on the interactions with the user, relationships between concepts and a
revisable expertise level. The model in turn influences the planning of the
explanation, allowing a more understandable explanation to be generated. I
argue that both user modelling and an "interactive" style of presentation are
important for explanations to be acceptable and understandable, and that each
reinforces the other. Keywords: user-modelling; explanation; dialogue; tutorial system | |||
| The intelligent help system COMFOHELP | | BIBAK | Full-Text | 249-282 | |
| Jürgen Krause; Eva Mittermaier | |||
| The paper is concerned with the question of whether and under what
conditions active help systems with plan recognition components that have been
developed in the environment of artificial intelligence research are able to
prove their value in the real context of commercial application programs. The
question is investigated using the development of the COMFOHELP intelligent
help system as an example. COMFOHELP supports the COMFOTEX graphical text
processing program and has been developed by the Linguistic Information Science
Group at the University of Regensburg since 1988. The system recognizes
erroneous and suboptimal plans pursued by the user by analyzing the dialog
history and comparing them with the correct plan for achieving the user's goal.
Section 2 discusses the research situation and elaborates on the problems that have so far prevented research concepts for plan recognition and intelligent help systems from being applied in practice. Testing error situations empirically is a first prerequisite, since potential erroneous plans can only be established in real-world tests. The second prerequisite is a special system architecture that counteracts the problem of ambiguities in plan recognition. Section 3 introduces a first, still-restricted prototype version of COMFOHELP whose efficiency was verified in a statistical hypothesis test: users performing their text-processing tasks with the support of COMFOHELP performed significantly better than members of a reference group working without the intelligent help. Section 4 shows that the proposed COMFOHELP system architecture is reconfirmed by the results of extensive empirical investigations (with more than 100 users) of erroneous plans when using a more complex version of COMFOTEX. The architecture still proves worthwhile even when functionality is increased by a factor of three to four. Keywords: intelligent help system; plan recognition; user modeling; adaptive systems;
artificial intelligence; practicality | |||
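The plan-recognition step this abstract describes can be illustrated with a toy plan library. The plans, action names, and prefix-matching strategy below are assumptions for illustration, not COMFOHELP's actual architecture.

```python
# Hedged sketch of plan recognition over a dialog history: candidate
# plans are those whose action sequence begins with the observed
# actions. Ambiguity (several surviving candidates) is the core
# difficulty the paper's architecture is designed to counteract.

PLAN_LIBRARY = {
    "set-bold":    ["select-text", "open-format-menu", "choose-bold"],
    "set-font":    ["select-text", "open-format-menu", "choose-font"],
    "delete-word": ["select-word", "press-delete"],
}

def candidate_plans(history):
    # Keep every plan consistent with the dialog history so far.
    return [name for name, steps in PLAN_LIBRARY.items()
            if steps[:len(history)] == history]

def next_step(history):
    # Offer help only once the plan is unambiguous.
    matches = candidate_plans(history)
    if len(matches) == 1:
        steps = PLAN_LIBRARY[matches[0]]
        return steps[len(history)] if len(history) < len(steps) else None
    return None

print(candidate_plans(["select-text", "open-format-menu"]))
# both formatting plans still match: more history is needed
```

Comparing such candidates against the history is also where erroneous or suboptimal plans would be flagged: an observed action matching no library prefix signals a deviation from every correct plan.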
| User-model-driven generation of instructions | | BIBAK | Full-Text | 289-319 | |
| Gerhard Peter; Dietmar Rösner | |||
| There has been a great deal of research on specific issues of user modeling
(e.g., generation of explanations (Paris 88), implicit knowledge acquisition
(Kass, Finin 87), and exploitation of user feedback to compensate for the
unreliability of user models (Moore, Paris 92)), but to our knowledge no work
has been done on how to integrate this work into a single system. In TECHDOC-I
we combine a number of ideas and work from various areas in a single system,
add some unique features (e.g., application of a double-stereotype mechanism to
plan representation), and apply them to a new domain (instructions for car
maintenance). TECHDOC-I provides support to a user in the area of car
maintenance. All maintenance activities are represented by plans, which consist
of plan steps. The dialogue between user and system is based on these plans. A
user model ensures that the system adapts the content of the output to the
user. The commands and explanations are given in natural language that is
generated by a (multilingual) text generator. Keywords: Intelligent help; user modeling; text generation | |||
| Forming user models by understanding user feedback | | BIBAK | Full-Text | 321-358 | |
| Alex Quilici | |||
| An intelligent advisory system should be able to provide explanatory
responses that correct mistaken user beliefs. This task requires the ability to
form a model of the user's relevant beliefs and to understand and address
feedback from users who are not satisfied with its advice. This paper presents
a method by which a detailed model of the user's relevant domain-specific,
plan-oriented beliefs can gradually be formed by trying to understand user
feedback in an on-going advisory dialog. In particular, we consider the problem
of constructing an automated advisor capable of participating in a dialog
discussing which UNIX command should be used to perform a particular task. We
show how to construct a model of a UNIX user's beliefs about UNIX commands from
several different classes of user feedback. Unlike other approaches to
inferring user beliefs, our approach focuses on inferring only the small set of
beliefs likely to be relevant in contributing to the user's misconception. And
unlike other approaches to providing advice, we focus on the task of
understanding the user's descriptions of perceived problems with that advice. Keywords: advice-giving systems; dialog systems; misconception detection and repair;
constructing user models; understanding user feedback; UNIX advising | |||
| Workshop on Adaptivity and User Modeling in Interactive Software Systems | | BIB | Full-Text | 359-367 | |
| Alfred Kobsa; Wolfgang Pohl | |||