| PERSONAF: framework for personalised ontological reasoning in pervasive computing | | BIBAK | Full-Text | 1-40 | |
| William T. Niu; Judy Kay | |||
| Pervasive computing creates possibilities for presenting highly personalised
information about the people, places and things in a building. One of the
challenges for such personalisation is the creation of a system that can
support ontological reasoning for several key tasks: reasoning about location;
personalisation of information about location at the right level of detail; and
personalisation to match each person's conceptions of the building based on
their own use of it and their relationship to other people in the building.
From a pragmatic perspective, it should be inexpensive to create the ontology
for each new building. It is also critical that users should be able to
understand and control pervasive applications. To address these challenges,
we created PERSONAF (personalised pervasive scrutable ontological framework),
a new abstract framework for pervasive ontological reasoning. We report its
evaluation at three levels. First, we assessed the
power of the ontology for reasoning about noisy and uncertain location
information, showing that PERSONAF can improve location modelling. Notably, the
best ontological reasoner varies across users. Second, we demonstrate the use
of the PERSONAF framework in Adaptive Locator, an application built upon it,
using our low-cost mechanisms for non-generic layers of the ontology. Finally,
we report a user study, which evaluated the PERSONAF approach as seen by users
in the Adaptive Locator. We assessed both the personalisation performance and
the understandability of explanations of the system reasoning. Together, these
three evaluations show that the PERSONAF approach supports the building of
low-cost ontologies that can achieve flexible ontological reasoning about
smart buildings and the people in them, and that this reasoning can be used
to build applications which present personalised information together with
understandable explanations of the reasoning underlying the personalisation.
Keywords: Personal ontology; Ontological reasoning; Pervasive personalisation;
Scrutable personalisation; Pervasive computing | |||
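To make the first evaluation level concrete, here is a minimal sketch of one
kind of ontological location reasoning the abstract describes: generalising
noisy room-level beliefs to the most specific node of a building hierarchy
that accumulates enough probability mass. The hierarchy, names, probabilities
and threshold are illustrative assumptions, not PERSONAF's actual reasoners.

```python
# Illustrative sketch (not PERSONAF's actual reasoners): generalise noisy
# room-level beliefs to the most specific node in a building hierarchy
# that accumulates enough probability mass.
PARENT = {"room_331": "level_3", "room_332": "level_3", "level_3": "building"}

def ancestors(loc):
    """Chain from a location up to the root, including the location itself."""
    chain = [loc]
    while loc in PARENT:
        loc = PARENT[loc]
        chain.append(loc)
    return chain

def generalise(beliefs, threshold=0.8):
    """beliefs: {room: probability}. Sum each node's subtree mass and
    return the deepest node whose mass meets the threshold."""
    mass = {}
    for leaf, p in beliefs.items():
        for node in ancestors(leaf):
            mass[node] = mass.get(node, 0.0) + p
    candidates = [n for n, m in mass.items() if m >= threshold]
    return max(candidates, key=lambda n: len(ancestors(n)))

# Sensor evidence splits between two rooms on the same level:
print(generalise({"room_331": 0.6, "room_332": 0.3}))  # -> 'level_3'
```

The same hierarchy walk is what would let an application report location at a
coarser or finer level of detail for different users.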
| A query expansion and user profile enrichment approach to improve the performance of recommender systems operating on a folksonomy | | BIBAK | Full-Text | 41-86 | |
| Pasquale De Meo; Giovanni Quattrone | |||
| In this paper we propose a query expansion and user profile enrichment
approach to improve the performance of recommender systems operating on a
folksonomy, storing and classifying the tags used to label a set of available
resources. Our approach builds and maintains a profile for each user. When a
user submits a query (consisting of a set of tags) on this folksonomy to
retrieve a set of resources of interest, the approach automatically finds
further "authoritative" tags to enrich the query and proposes them to the
user. All "authoritative" tags the user considers interesting are exploited
to refine the query and, along with the tags the user specified directly, are
stored in the user's profile so as to enrich it. The expansion of user
queries and the enrichment of user profiles allow any content-based
recommender system operating on the folksonomy to retrieve and suggest a
greater number of resources matching user needs and desires. Moreover,
enriched user profiles can guide any collaborative filtering recommender
system to proactively discover and suggest many relevant resources to a user,
even when the user has not explicitly searched for them.
Keywords: Folksonomies; Query expansion; Recommender systems; Tag ranking;
Social tagging; Personalised query answering | |||
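As a rough illustration of how "authoritative" expansion tags might be
proposed, the sketch below ranks candidate tags by how often they co-occur
with the query tags in the folksonomy. The data and the co-occurrence scoring
are assumptions for illustration, not the authors' actual tag-ranking method.

```python
from collections import Counter, defaultdict

# Toy folksonomy: each resource is labelled with a set of tags.
FOLKSONOMY = {
    "r1": {"python", "tutorial", "programming"},
    "r2": {"python", "data", "pandas"},
    "r3": {"tutorial", "video", "programming"},
}

def cooccurrence(folksonomy):
    """Count how often each pair of tags labels the same resource."""
    counts = defaultdict(Counter)
    for tags in folksonomy.values():
        for t in tags:
            for u in tags - {t}:
                counts[t][u] += 1
    return counts

def expand_query(query_tags, folksonomy, k=3):
    """Propose the k tags that co-occur most often with the query tags."""
    counts = cooccurrence(folksonomy)
    scores = Counter()
    for t in query_tags:
        scores.update(counts[t])
    for t in query_tags:          # never re-propose a tag already in the query
        scores.pop(t, None)
    return [tag for tag, _ in scores.most_common(k)]

print(expand_query({"python"}, FOLKSONOMY))  # e.g. ['tutorial', 'programming', 'data']
```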
| Exploring the feasibility of web form adaptation to users' cultural dimension scores | | BIBAK | Full-Text | 87-108 | |
| Matías Recabarren; Miguel Nussbaum | |||
With many daily tasks now performed on the Internet, productivity and
efficiency in working with web pages have become universal necessities. Many
of these tasks involve inputting user information, obliging the user to
interact with a webform. Research has demonstrated that productivity depends
largely on users' personal characteristics, implying that it will vary from
user to user. The webform development process must therefore include
modeling of its intended users to ensure the interface design is
appropriate. Taking all potential users into account is difficult, however,
primarily because their identity is unknown, and some may be effectively
excluded by the final design. Such discrimination can be avoided by
incorporating rules that allow webforms to adapt automatically to the
individual user's characteristics, the principal one being the person's
culture. In this paper we report two studies that validate this approach. We
begin by determining the relationships between a user's cultural dimension
scores and their behavior when faced with a webform. We then validate the
notion that rules based on these relationships can be established for the
automatic adaptation of a webform in order to reduce the time taken to complete
it. We conclude that automatically adapting webforms to users' cultural
dimensions improves their performance.
Keywords: User modeling; Human-computer interaction; Usability; User culture;
Adaptive webform; Webform design; Hofstede's cultural dimensions | |||
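A minimal sketch of what "rules that allow webforms to adapt automatically"
could look like is given below, mapping Hofstede dimension scores (0-100) to
presentation choices. The thresholds and rules here are invented for
illustration; the paper derives its adaptation rules empirically from
observed completion behaviour.

```python
# Hypothetical adaptation rules from Hofstede dimension scores (0-100) to
# webform presentation choices; thresholds and rules are illustrative only.
def adapt_form(scores):
    config = {"inline_help": False, "fields_per_page": 10, "show_examples": False}
    if scores.get("uncertainty_avoidance", 50) > 70:
        config["inline_help"] = True       # more guidance for high-UAI users
        config["show_examples"] = True
    if scores.get("individualism", 50) < 40:
        config["fields_per_page"] = 5      # shorter pages, more frequent feedback
    return config

print(adapt_form({"uncertainty_avoidance": 85, "individualism": 30}))
# {'inline_help': True, 'fields_per_page': 5, 'show_examples': True}
```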
| Automatic detection of users' skill levels using high-frequency user interface events | | BIBAK | Full-Text | 109-146 | |
| Arin Ghazarian; S. Majid Noorhosseini | |||
| Computer users have different levels of system skills. Moreover, each user
has different levels of skill across different applications and even in
different portions of the same application. Additionally, users' skill levels
change dynamically as they gain more experience with a user interface. In order
to adapt user interfaces to the different needs of user groups with different
levels of skills, automatic methods of skill detection are required. In this
paper, we present the experiments and methods we used to build automatic
skill classifiers for desktop applications. Machine learning algorithms were
used to build statistical predictive models of skill. Attribute values were
extracted from high-frequency user interface events, such as mouse motions
and menu interactions, and were used as inputs to our models. We have
built both task-independent and task-dependent classifiers with promising
results.
Keywords: Expertise; Skill; User modeling; Machine learning; Graphical user
interfaces; Intelligent user interfaces; Adaptive user interfaces; GOMS | |||
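The pipeline sketched below mirrors the abstract's description, summarising a
stream of high-frequency mouse events into a fixed-size attribute vector and
feeding it to a statistical classifier, here on synthetic data. The features,
the episode generator, and the random-forest learner are stand-ins chosen for
illustration, not the paper's actual attributes or algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def motion_features(events):
    """Summarise (t, x, y) mouse samples into a fixed-size vector:
    mean/std of pointer speed, mean inter-event gap, and long-pause count."""
    ts = np.array([e[0] for e in events])
    xy = np.array([(e[1], e[2]) for e in events], dtype=float)
    dt = np.maximum(np.diff(ts), 1e-6)
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt
    return [speed.mean(), speed.std(), dt.mean(), float((dt > 0.5).sum())]

# Synthetic episodes: "experts" move fast with few pauses, "novices" slower.
rng = np.random.default_rng(0)
def episode(expert):
    gaps = rng.exponential(0.05 if expert else 0.2, size=50)
    ts = np.cumsum(gaps)
    xy = np.cumsum(rng.normal(0, 8 if expert else 3, size=(50, 2)), axis=0)
    return list(zip(ts, xy[:, 0], xy[:, 1]))

X = [motion_features(episode(expert=i % 2 == 0)) for i in range(40)]
y = [i % 2 == 0 for i in range(40)]          # True = expert, False = novice
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```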
| Multimodal semi-automated affect detection from conversational cues, gross body language, and facial features | | BIBAK | Full-Text | 147-187 | |
| Sidney K. D'Mello; Arthur Graesser | |||
| We developed and evaluated a multimodal affect detector that combines
conversational cues, gross body language, and facial features. The multimodal
affect detector uses feature-level fusion to combine the sensory channels and
linear discriminant analyses to discriminate between naturally occurring
experiences of boredom, engagement/flow, confusion, frustration, delight, and
neutral. Training and validation data for the affect detector were collected in
a study in which 28 learners completed a 32-minute tutorial session with AutoTutor,
an intelligent tutoring system with conversational dialogue. Classification
results supported a channel × judgment type interaction, where the face
was the most diagnostic channel for spontaneous affect judgments (i.e., at any
time in the tutorial session), while conversational cues were superior for
fixed judgments (i.e., every 20 s in the session). The analyses also indicated
that the accuracy of the multichannel model (face, dialogue, and posture) was
statistically higher than the best single-channel model for the fixed but not
spontaneous affect expressions. However, multichannel models reduced the
discrepancy (i.e., variance in the precision of the different emotions) of the
discriminant models for both judgment types. The results also indicated that
the combination of channels yielded superadditive effects for some affective
states, but additive, redundant, and inhibitory effects for others. We explore
the structure of the multimodal linear discriminant models and discuss the
implications of some of our major findings.
Keywords: Multimodal affect detection; Conversational cues; Gross body
language; Facial features; Superadditivity; AutoTutor; Affective computing;
Human-computer interaction | |||
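Feature-level fusion, as used by the detector, simply concatenates the
per-channel feature vectors before a single discriminant model is trained.
The sketch below shows this shape with synthetic data and scikit-learn's LDA;
the dimensions and labels are illustrative, not the study's actual features.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Feature-level fusion: concatenate per-channel feature vectors into one
# vector per observation, then train one discriminant model on the result.
# Dimensions and data are synthetic stand-ins for the paper's channels.
rng = np.random.default_rng(1)
n = 120
dialogue = rng.normal(size=(n, 10))    # conversational-cue features
posture  = rng.normal(size=(n, 6))     # gross-body-language features
face     = rng.normal(size=(n, 12))    # facial features
labels   = rng.integers(0, 6, size=n)  # boredom, flow, confusion, ...

fused = np.hstack([dialogue, posture, face])   # feature-level fusion
lda = LinearDiscriminantAnalysis().fit(fused, labels)
print(lda.predict(fused[:5]))
```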
| User-adaptive explanatory program visualization: evaluation and insights from eye movements | | BIBAK | Full-Text | 191-226 | |
| Tomasz D. Loboda; Peter Brusilovsky | |||
| User-adaptive visualization and explanatory visualization have been
suggested to increase educational effectiveness of program visualization. This
paper presents an attempt to assess the value of these two approaches. The
results of a controlled experiment indicate that explanatory visualization
allows students to substantially increase their understanding of a new
programming topic. Furthermore, an educational application that features
explanatory visualization and employs a user model to track users' progress
allows students to interact with a larger amount of material than an
application that does not track users' activity. However, no evidence was
found of a difference in short-term knowledge gain between the two
applications. Nevertheless, students report that they prefer the version that
estimates and visualizes their progress and adapts the learning content to
their level of understanding. They also use the application's estimates to
pace their work.
The differences in eye movement patterns between the applications employing
adaptive and non-adaptive explanatory visualizations are investigated as well.
Gaze-based measures show that adaptive visualization captures attention more
than its non-personalized counterpart and is more interesting to students.
Natural language explanations also attract a large portion of students'
attention. Furthermore, the results indicate that working memory span can
mediate the perception of adaptation. It is possible that user-adaptation in an
educational context provides a different service to people with different
mental processing capabilities.
Keywords: User-adaptation; Program visualization; Explanatory visualization;
Eye movements; Eye tracking; Evaluation; User study; Working memory | |||
| Towards personality-based user adaptation: psychologically informed stylistic language generation | | BIBAK | Full-Text | 227-278 | |
| François Mairesse; Marilyn A. Walker | |||
| Conversation is an essential component of social behavior, one of the
primary means by which humans express intentions, beliefs, emotions, attitudes
and personality. Thus the development of systems to support natural
conversational interaction has been a long-term research goal. In natural
conversation, humans adapt to one another across many levels of utterance
production via processes variously described as linguistic style matching,
entrainment, alignment, audience design, and accommodation. A number of recent
studies strongly suggest that dialogue systems that adapted to the user in a
similar way would be more effective. However, a major research challenge in
this area is the ability to dynamically generate user-adaptive utterance
variations. As part of a personality-based user adaptation framework, this
article describes PERSONAGE, a highly parameterizable generator that exposes
a large number of parameters for adapting output to a user's linguistic
style. We show how we can systematically apply results from psycholinguistic
studies that document the linguistic reflexes of personality in order to
develop models that control PERSONAGE's parameters and produce utterances
matching particular personality profiles. When we evaluate these outputs with
human judges, the results indicate that humans perceive the personality of
system utterances in the way that the system intended.
Keywords: Natural language generation; Linguistic style; Personality;
Individual differences; Big Five traits; Dialogue; Recommendation | |||
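A toy sketch of the trait-to-parameter idea follows: a Big Five extraversion
score drives a handful of stylistic generation parameters. The parameter
names and the linear mapping are assumptions for illustration, not the
article's actual parameter-control models, which are derived from
psycholinguistic findings.

```python
# Illustrative sketch: derive stylistic generation parameters from a Big Five
# extraversion score on a 1-7 scale. Parameter names and the linear mapping
# are assumptions, not PERSONAGE's actual control models.
def generation_params(extraversion):
    e = (extraversion - 1) / 6          # normalise to [0, 1]
    return {
        "verbosity": e,                 # extraverts tend to say more
        "exclamation_rate": 0.5 * e,    # ... and to exclaim more
        "hedge_rate": 0.6 * (1 - e),    # introverts hedge more often
        "content_polarity": 0.4 + 0.6 * e,
    }

print(generation_params(6.5))           # an extravert profile
```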
| Using affective parameters in a content-based recommender system for images | | BIBAK | Full-Text | 279-311 | |
| Marko Tkalčič; Urban Burnik; Andrej Košir | |||
| There is an increasing amount of multimedia content available to end users.
Recommender systems help these end users by selecting a small but relevant
subset of items for each user based on their preferences. This paper
investigates the influence of affective metadata (metadata that describe the
user's emotions) on the performance of a content-based recommender (CBR) system
for images. The underlying assumption is that affective parameters are more
closely related to the user's experience than generic metadata (e.g. genre) and
are thus more suitable for separating relevant items from non-relevant ones.
We propose a novel affective modeling approach based on users' emotive
responses. We conducted a user-interaction session and compared the performance
of the recommender system with affective versus generic metadata. The results
of the statistical analysis showed that the proposed affective parameters yield
a significant improvement in the performance of the recommender system.
Keywords: Affective modeling; Content-based recommender system; Emotion
induction; IAPS; Item profile; Machine learning; Metadata; User profile;
Valence-arousal-dominance | |||
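A minimal sketch of affective content-based recommendation follows: items
carry profiles in valence-arousal-dominance (VAD) space, the user profile is
the mean VAD of liked items, and items are ranked by distance to that
profile. The values and the nearest-profile rule are illustrative
assumptions, not the paper's exact modeling approach.

```python
import numpy as np

# Each image carries an affective profile in VAD space; the user profile is
# the mean VAD of items the user rated as relevant. Values are illustrative.
items = {
    "img1": np.array([0.8, 0.6, 0.5]),   # pleasant, arousing
    "img2": np.array([0.2, 0.9, 0.4]),   # unpleasant, arousing
    "img3": np.array([0.7, 0.3, 0.6]),   # pleasant, calm
}
liked = [items["img1"], items["img3"]]
user_profile = np.mean(liked, axis=0)

def recommend(items, profile, k=2):
    """Rank items by VAD distance to the user's affective profile."""
    ranked = sorted(items, key=lambda i: np.linalg.norm(items[i] - profile))
    return ranked[:k]

print(recommend(items, user_profile))    # ['img1', 'img3'] for these values
```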
| Towards affective camera control in games | | BIBAK | Full-Text | 313-340 | |
| Georgios N. Yannakakis; Héctor P. Martínez | |||
| Information about interactive virtual environments, such as games, is
perceived by users through a virtual camera. While most interactive
applications let users control the camera, in complex navigation tasks within
3D environments users often get frustrated with the interaction. In this paper,
we propose the inclusion of camera control as a vital component of affective
adaptive interaction in games. We investigate the impact of camera viewpoints
on the psychophysiology of players through preference surveys collected from a test
game. Data is collected from players of a 3D prey/predator game in which player
experience is directly linked to camera settings. Computational models of
discrete affective states of fun, challenge, boredom, frustration, excitement,
anxiety and relaxation are built on biosignal (heart rate, blood volume pulse
and skin conductance) features to predict the pairwise self-reported emotional
preferences of the players. For this purpose, automatic feature selection and
neuro-evolutionary preference learning are combined, yielding highly accurate
affective models. The performance of the artificial neural network models on
unseen data reveals accuracies above 80% for the majority of discrete
affective states examined. The generality of the obtained models is tested in
different test-bed game environments and the use of the generated models for
creating adaptive affect-driven camera control in games is discussed.
Keywords: Camera control; Player experience modeling; Skin conductance;
Blood volume pulse; Neuro-evolution; Preference learning | |||
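The training signal here is pairwise: players report which of two sessions
they preferred, and a model must rank biosignal feature vectors accordingly.
The sketch below uses a logistic model on feature differences as a simple
stand-in for the paper's neuro-evolutionary preference learning; the data and
the hidden weighting are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pairwise preference learning: each example is a pair of sessions (feature
# vectors a, b) plus a self-report of which felt more "fun". A linear model
# on feature differences is a simple stand-in for neuro-evolution.
rng = np.random.default_rng(2)
true_w = np.array([1.0, -0.5, 0.8])      # hidden "fun" weighting (toy)

pairs, prefs = [], []
for _ in range(200):
    a, b = rng.normal(size=3), rng.normal(size=3)
    pairs.append(a - b)                  # difference representation
    prefs.append(int(true_w @ a > true_w @ b))

model = LogisticRegression().fit(pairs, prefs)
print(model.score(pairs, prefs))         # should be close to 1.0 here
```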
| User preferences can drive facial expressions: evaluating an embodied conversational agent in a recommender dialogue system | | BIBAK | Full-Text | 341-381 | |
| Mary Ellen Foster; Jon Oberlander | |||
| Tailoring the linguistic content of automatically generated descriptions to
the preferences of a target user has been well demonstrated to be an effective
way to produce higher-quality output that may even have a greater impact on
user behaviour. It is known that the non-verbal behaviour of an embodied agent
can have a significant effect on users' responses to content presented by that
agent. However, to date no one has examined the contribution of non-verbal
behaviour to the effectiveness of user tailoring in automatically generated
embodied output. We describe a series of experiments designed to address this
question. We begin by introducing a multimodal dialogue system designed to
generate descriptions and comparisons tailored to user preferences, and
demonstrate that the user-preference tailoring is detectable to an overhearer
when the output is presented as synthesised speech. We then present a
multimodal corpus consisting of the annotated facial expressions used by a
speaker to accompany the generated tailored descriptions, and verify that the
most characteristic positive and negative expressions used by that speaker are
identifiable when resynthesised on an artificial talking head. Finally, we
combine the corpus-derived facial displays with the tailored descriptions to
test whether the addition of the non-verbal channel improves users' ability to
detect the intended tailoring, comparing two strategies for selecting the
displays: one based on a simple corpus-derived rule, and one making direct use
of the full corpus data. The performance of the subjects who saw displays
selected by the rule-based strategy was not significantly different from that
of the subjects who received only the linguistic content, while the subjects who saw
the data-driven displays were significantly worse at detecting the correctly
tailored output. We propose a possible explanation for this result, and also
make recommendations for developers of future systems that may make use of an
embodied agent to present user-tailored content.
Keywords: Embodied conversational agents; Evaluation of generated output;
Multimodal corpora; User-preference modelling | |||
| Layered evaluation of interactive adaptive systems: framework and formative methods | | BIBAK | Full-Text | 383-453 | |
| Alexandros Paramythis; Stephan Weibelzahl | |||
| The evaluation of interactive adaptive systems has long been acknowledged to
be a complicated and demanding endeavour. Some promising approaches in the
recent past have attempted to tackle the problem of evaluating adaptivity by
"decomposing" and evaluating it in a "piece-wise" manner. Separating the
evaluation of different aspects can help to identify problems in the adaptation
process. This paper presents a framework that can be used to guide the
"layered" evaluation of adaptive systems, and a set of formative methods that
have been tailored or specially developed for the evaluation of adaptivity. The
proposed framework unifies previous approaches in the literature and has
already been used, in various guises, in recent research work. The presented
methods are related to the layers in the framework and the stages in the
development lifecycle of interactive systems. The paper also discusses
practical issues surrounding the employment of the above, and provides a brief
overview of complementary and alternative approaches in the literature.
Keywords: Layered evaluation; Evaluation framework; Formative evaluation
methods; Design | |||
| Learners' navigation behavior identification based on trace analysis | | BIBAK | Full-Text | Erratum | 455-494 | |
| Nabila Bousbia; Issam Rebaï; Jean-Marc Labat | |||
| Identifying learners' behaviors and learning preferences or styles in a
Web-based learning environment is crucial for organizing tracking and for
specifying how and when assistance is needed. Moreover, it helps online course
designers to adapt the learning material in a way that guarantees
individualized learning, and helps learners to acquire meta-cognitive
knowledge. The goal of this research is to identify learners' behaviors and
learning styles automatically during training sessions, based on trace
analysis. In this paper, we focus on the identification of learners' behaviors
through our system: Indicators for the Deduction of Learning Styles. We first
present our trace analysis approach. We then propose a 'navigation type'
indicator for analyzing learners' behaviors and define a method for
calculating it. To this end, we build a decision tree based on semantic
assumptions and tests. To validate our approach and improve the proposed
calculation method, we present and discuss the results of two experiments we
conducted.
Keywords: Navigation type; Indicator; Trace; Web behavior analysis;
Educational Hypermedia System | |||
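The 'navigation type' indicator is computed by a decision tree over trace
features. The rules below are invented for illustration (the paper's tree is
built from semantic assumptions and validated experimentally), but they show
the shape of such a classifier.

```python
# Illustrative decision rules for a 'navigation type' indicator; the feature
# names, thresholds and labels are hypothetical, not the paper's actual tree.
def navigation_type(trace):
    if trace["revisit_ratio"] > 0.5:
        return "revisiting"               # cycling back over seen pages
    if trace["avg_time_per_page_s"] < 10:
        return "flipping"                 # rapid, shallow scanning
    if trace["course_order_ratio"] > 0.8:
        return "sequential"               # following the course structure
    return "exploratory"                  # self-directed browsing

print(navigation_type({"revisit_ratio": 0.1,
                       "avg_time_per_page_s": 45,
                       "course_order_ratio": 0.9}))   # -> 'sequential'
```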