
Proceedings of the 2005 International Conference on Intelligent User Interfaces

Fullname: International Conference on Intelligent User Interfaces
Editors: John Riedl; Anthony Jameson
Location: San Diego, California, USA
Dates: 2005-Jan-10 to 2005-Jan-13
Publisher: ACM
Standard No: ACM ISBN 1-58113-894-6; ACM Order Number 608050; ACM DL: Table of Contents; hcibib: IUI05
Papers: 68
Pages: 332
Links: Conference Home Page
  1. Invited Talks
  2. Panel Discussion
  3. Workshops
  4. Tutorials
  5. Long papers: affective computing
  6. Long papers: multimodal interaction
  7. Long papers: personal assistants
  8. Long papers: visualization and presentation
  9. Long papers: natural language and gestural input
  10. Long papers: recommendation and instruction
  11. Long papers: knowledge acquisition and knowledge-based design
  12. Long papers: smart environments and ubiquitous computing
  13. Short papers: affective computing
  14. Short papers: human-robot interaction
  15. Short papers: personal assistants
  16. Short papers: visualization and presentation
  17. Short papers: speech- and vision-based interfaces
  18. Short papers: knowledge acquisition and knowledge-based design
  19. Short papers: smart environments and ubiquitous computing

Invited Talks

Attention-reactive user interface for sensemaking BIBAFull-Text 2
  Stuart Card
I will talk about an emerging class of user interfaces that, if not exactly intelligent, are at least attention-reactive. They are being developed to handle "sensemaking" tasks, in which users find, analyze, and create products or actions from large collections of documents. Applications might be expected to develop in law, education, scholarship, security, and medicine. These interfaces have a focus + context visualization on the front end and a semantic contextual computing engine on the back end. Ultimately they can be expected to have mixed initiatives. These interfaces require the development of a supporting science of human information interaction that stresses interaction between the user and information and deemphasizes the platform through which this occurs.
Oral tradition, aboral coordination: building rapport with embodied conversational agents BIBAFull-Text 2
  Justine Cassell
Harmony or rapport between people is essential for relationships as diverse as seller-buyer and teacher-learner. In this talk I describe the kinds of verbal behaviors -- such as common interactional structures and narrative resonance -- and non-verbal behaviors -- such as attention, positivity, and coordination -- that function together to establish a sense of rapport between two people in conversation. These studies are used as the basis for the implementation of virtual peers -- adults, but also more recently embodied conversational virtual children who are capable of acting as friends and learning partners with real children from different ethnic traditions, collaborating to tell stories from the child's own cultural context, and aiding children in making the transition between home and school language.
Adaptive information access and the quest for the personalization-privacy sweetspot BIBAFull-Text 2
  Barry Smyth
In 2000 the entire World-Wide Web consisted of just 21 terabytes of information; now it grows by 3 times this every single day. This phenomenal growth frames the information overload problem that is threatening to stall the information revolution going forward. In short, users are finding it increasingly difficult to locate the right information at the right time in the right way. Search engine technologies are struggling to cope with the sheer quantity of information that is available, a problem that is greatly exacerbated by the apparent inability of Web users to formulate effective search queries that accurately reflect their current information needs. This talk will focus on how so-called personalization techniques are being used in response to the information overload problem.
   Personalization research brings together ideas from artificial intelligence, user profiling, information retrieval and user-interface design to provide users with more proactive and intelligent information services that are capable of predicting the needs of individuals and adapting to their implicit preferences. We will describe how personalization techniques have been successfully applied to the two dominant modes of information access, browsing and search, with reference to deployed applications in the mobile Internet and Web search arenas. Particular attention will be paid to the natural tension that exists between the potential value of personalization, on the one hand, and the perceived privacy risk associated with profiling, on the other. We will highlight certain recent approaches to personalization that appear to achieve a useful balance between personalization and privacy and argue that realizing this personalization-privacy sweet spot may be the key to the large-scale success of personalization technologies in the future.

Panel Discussion

The usability crisis in high-tech home products: an opportunity for intelligent user interfaces? BIBAFull-Text 4
  Boris de Ruyter; Yogendra Jain; David Keyson; Charles Rich
Ordinary people already have great difficulty using the advanced features of digitally-operated household devices, such as personal video recorders, DVD burners, etc., and "white goods," such as washing machines, microwave ovens, programmable thermostats, etc. And the problem is getting worse as more customization and programming features are continually being added. This is a challenging and practical application area for intelligent user interface research, and one in which new ideas are badly needed. This panel brings together industrial and academic researchers as well as business people to report on their activities and stimulate others to join.

Workshops

Multi-user and ubiquitous user interfaces: (MU3I 2005) BIBAFull-Text 6
  Christian Kray; Andreas Butz; Antonio Kruger; Albrecht Schmidt; Helmut Prendinger
This second workshop on Multi-User and Ubiquitous User Interfaces aims at further investigating two major issues identified at last year's MU3I: control and consistency. The former relates to how a user gains control of devices in a ubiquitous computing environment, how control is passed, and how it is shared in such a setting. The latter concerns interfaces that span multiple devices or move from one set of devices to another. Both issues will be discussed in this year's workshop (with a focus on consistency).
Affective interactions: the computer in the affective loop BIBAFull-Text 7
  Cristina Conati; Stacy Marsella; Ana Paiva
There has been an increasing interest in exploring how recognition of a user's affective state can be exploited in creating more effective human-computer interaction. It has been argued that IUIs may be able to improve interaction by including affective elements in their communication with the user (e.g., by showing empathy via adequate phrasing of feedback). This workshop will address a variety of issues related to the development of what we will call the affective loop: detection/modeling of relevant user states, selection of appropriate system responses (including responses that are designed to influence the user's affective state but are not overtly affective), as well as synthesis of the appropriate affective expressions.
Beyond personalization: the next stage of recommender systems research BIBAFull-Text 8
  Mark van Setten; Sean M. McNee; Joseph A. Konstan
This workshop intends to bring recommender systems researchers and practitioners together in order to discuss the current state of recommender systems research, both on existing and emerging research topics, and to determine how research in this area should proceed. We are at a pivotal point in recommender systems research where researchers are both looking inward at what recommender systems are and looking outward at where recommender systems can be applied, and the implications of applying them out 'in the wild.' This creates a unique opportunity to both reassess the current state of research and directions research is taking in the near and long term.

Tutorials

Interaction with embodied conversational agents BIBAFull-Text 10
  Lewis Johnson
Embodied Conversational Agents (ECAs) are computer-controlled synthetic characters that can engage in dialog with users. This tutorial will present an overview of techniques and methods relating to the design, construction, and evaluation of ECAs that interact appropriately with users. It will introduce the major technologies for controlling ECA behavior. It will then consider the problem of how to design a successful interactive interface that incorporates ECAs. Finally, it will discuss how to evaluate ECA-enhanced interfaces, including evaluation methods and factors that can influence the evaluation.
Intelligent interfaces for preference-based search BIBAFull-Text 10
  Pearl Pu; Boi Faltings
Preference-based search, defined as finding the most preferred item in a large collection, is becoming an increasingly important subject in computer science with many applications: multi-attribute product search, constraint-based plan optimization, configuration design, and recommendation systems. Decision theory formalizes what the most preferred item is and how it can be identified. In recent years, decision theory has pointed out discrepancies between the normative models of how people should reason and empirical studies of how they in fact think and decide. However, many search tools are still based on the normative model, thus ignoring some of the fundamental cognitive aspects of human decision making. Consequently these search tools do not find accurate results for users. This tutorial starts by giving an overview of recent literature in decision theory, explaining the differences between descriptive and normative approaches. It then describes some of the principles derived from behavioral decision theory and how they can be turned into principles for developing intelligent user interfaces that help users make better choices while searching. It develops in particular the issues of how to model user preferences with limited interaction effort, how to support tradeoffs, and how to implement practical search tools using these principles.
Gaze-based human-computer interaction BIBAFull-Text 10
  Kari-Jouko Raiha; Aulikki Hyrskykari; Paivi Majaranta
The tutorial provides examples, experiences, and design guidelines for using eye gaze in human-computer interaction. The goal of the tutorial is to give insight into exploiting information about gaze direction in human-computer interaction. The participants will learn the basics of eye tracking, but the focus of the tutorial is on interaction issues. After the tutorial, participants will understand the pros and cons of using gaze for real-time input. The tutorial consists of lectures and live demonstrations with a state-of-the-art eye-tracking device.

Long papers: affective computing

Experimental evaluation of polite interaction tactics for pedagogical agents BIBAFull-Text 12-19
  Ning Wang; W. Lewis Johnson; Paola Rizzo; Erin Shaw; Richard E. Mayer
Recent research shows that instructors commonly use politeness strategies to achieve affective scaffolding in educational contexts. The importance of affective factors such as self-confidence and interest that contribute to learner motivation is well recognized. In this paper, we describe the results of a Wizard-of-Oz experiment to study the effect of politeness strategies on both cognitive and motivational factors. We compare the results of two different politeness strategies, direct and polite, in assisting seventeen students in a computer-based learning task. We find that politeness can affect students' motivational state and help students learn difficult concepts. The results of the experiment provide a basis for the design of a polite pedagogical agent and its tutorial intervention strategies.
Recognising emotions in human and synthetic faces: the role of the upper and lower parts of the face BIBAFull-Text 20-27
  Erica Costantini; Fabio Pianesi; Michela Prete
Embodied Conversational Agents that can express emotions are a popular topic. Yet, despite recent attempts, reliable methods are still lacking to assess the quality of facial displays. This paper extends and refines the work in [6], focusing on the role of the upper and the lower portions of the face. We analysed the recognition rates and errors from the responses of 74 subjects to presentations of dynamic (human and synthetic) faces. The results point to a) the possibility of addressing the issue of the naturalness of synthetic faces, and b) a greater importance of the upper part of the face.
Extraction and classification of facemarks BIBAFull-Text 28-34
  Yuki Tanaka; Hiroya Takamura; Manabu Okumura
We propose methods for extracting facemarks (emoticons) in text and classifying them into emotional categories. In text-based communication, facemarks have gained popularity, since they help us understand what writers imply. However, there are two problems in text-based communication using facemarks: the first is the variety of facemarks, and the second is the lack of good comprehension in using facemarks. These problems are more serious in areas where 2-byte characters are used, because 2-byte characters can generate a very large number of different facemarks. We therefore propose methods for the extraction and classification of facemarks. Treating the extraction of facemarks as a chunking task, we automatically assign a tag to each character in the text. In the classification of the extracted facemarks, we apply the dynamic time alignment kernel (DTAK) and the string subsequence kernel (SSK) for scoring in the k-nearest neighbor (k-NN) method and for extending standard Support Vector Machines (SVMs) to accept sequential data such as facemarks. We empirically show that our methods work well in the extraction and classification of facemarks, with appropriate parameter settings.
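To make the classification step concrete, here is a minimal sketch of kernel-style k-NN over facemarks treated as character sequences. It is not the authors' implementation: the similarity function is a simple normalized longest-common-subsequence score standing in for DTAK/SSK, and the training facemarks and labels are illustrative.

```python
# Kernel-style k-NN over facemarks as character sequences (illustrative sketch).
from collections import Counter

def lcs_similarity(a, b):
    """Normalized longest-common-subsequence similarity between two strings."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n] / max(m, n) if max(m, n) else 0.0

def knn_classify(query, training, k=3):
    """Label a facemark by majority vote among its k most similar training examples."""
    scored = sorted(training, key=lambda ex: lcs_similarity(query, ex[0]), reverse=True)
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]

training = [("(^_^)", "joy"), ("(T_T)", "sadness"), ("(^o^)/", "joy"), ("(;_;)", "sadness")]
print(knn_classify("(^-^)", training))   # -> "joy"
```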

Long papers: multimodal interaction

Two-way adaptation for robust input interpretation in practical multimodal conversation systems BIBAFull-Text 35-42
  Shimei Pan; Siwei Shen; Michelle X. Zhou; Keith Houck
Multimodal conversation systems allow users to interact with computers effectively using multiple modalities, such as natural language and gesture. However, these systems have not been widely used in practical applications mainly due to their limited input understanding capability. As a result, conversation systems often fail to understand user requests and leave users frustrated. To address this issue, most existing approaches focus on improving a system's interpretation capability. Nonetheless, such improvements may still be limited, since they would never cover the entire range of input expressions. Alternatively, we present a two-way adaptation framework that allows both users and systems to dynamically adapt to each other's capability and needs during the course of interaction. Compared to existing methods, our approach offers two unique contributions. First, it improves the usability and robustness of a conversation system by helping users to dynamically learn the system's capabilities in context. Second, our approach enhances the overall interpretation capability of a conversation system by learning new user expressions on the fly. Our preliminary evaluation shows the promise of this approach.
Linguistic theories in efficient multimodal reference resolution: an empirical investigation BIBAFull-Text 43-50
  Joyce Y. Chai; Zahar Prasov; Joseph Blaim; Rong Jin
Multimodal conversational interfaces provide a natural means for users to communicate with computer systems through multiple modalities such as speech, gesture, and gaze. To build effective multimodal interfaces, understanding user multimodal inputs is important. Previous linguistic and cognitive studies indicate that user language behavior does not occur randomly, but rather follows certain linguistic and cognitive principles. Therefore, this paper investigates the use of linguistic theories in multimodal interpretation. In particular, we present a greedy algorithm that incorporates Conversation Implicature and Givenness Hierarchy for efficient multimodal reference resolution. Empirical studies indicate that this algorithm significantly reduces the complexity in multimodal reference resolution compared to a previous graph-matching approach. One major advantage of this greedy algorithm is that the prior linguistic and cognitive knowledge can be used to guide the search and significantly prune the search space. Because of its simplicity and generality, this approach has the potential to improve the robustness of interpretation and provide a more practical solution to multimodal input interpretation.
Multimodal new vocabulary recognition through speech and handwriting in a whiteboard scheduling application BIBAFull-Text 51-58
  Edward C. Kaiser
Our goal is to automatically recognize and enroll new vocabulary in a multimodal interface. To accomplish this our technique aims to leverage the mutually disambiguating aspects of co-referenced, co-temporal handwriting and speech. The co-referenced semantics are spatially and temporally determined by our multimodal interface for schedule chart creation. This paper motivates and describes our technique for recognizing out-of-vocabulary (OOV) terms and enrolling them dynamically in the system. We report results for the detection and segmentation of OOV words within a small multimodal test set. On the same test set we also report utterance, word and pronunciation level error rates both over individual input modes and multimodally. We show that combining information from handwriting and speech yields significantly better results than achievable by either mode alone.
Multimodal interaction for pedestrians: an evaluation study BIBAFull-Text 59-66
  Matthias Jost; Jochen Haussler; Matthias Merdes; Rainer Malaka
What are the most suitable interaction paradigms for navigational and informative tasks for pedestrians? Is there an influence of social and situational context on multimodal interaction? Our study takes a closer look at a multimodal system on a handheld device that was recently developed as a prototype for mobile navigation assistance. The system allows visitors of a city to navigate, to get information on sights, and to use and manipulate map information. In an outdoor evaluation, we studied the usability of such a system on site. The study yields insight into how multimodality can enhance the usability of handheld devices and their future services. We show, for example, that for our more complicated tasks multimodal interaction is superior to classical unimodal interaction.

Long papers: personal assistants

Automated email activity management: an unsupervised learning approach BIBAFull-Text 67-74
  Nicholas Kushmerick; Tessa Lau
Many structured activities are managed by email. For instance, a consumer purchasing an item from an e-commerce vendor may receive a message confirming the order, a warning of a delay, and then a shipment notification. Existing email clients do not understand this structure, forcing users to manage their activities by sifting through lists of messages. As a first step to developing email applications that provide high-level support for structured activities, we consider the problem of automatically learning an activity's structure. We formalize activities as finite-state automata, where states correspond to the status of the process, and transitions represent messages sent between participants. We propose several unsupervised machine learning algorithms in this context, and evaluate them on a collection of e-commerce email.
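As a concrete illustration of the finite-state view described above, here is a minimal sketch (not the authors' algorithm) that estimates transition probabilities between message types from observed activity threads; the message labels and example threads are illustrative.

```python
# Estimate a finite-state model of an e-commerce email activity from message sequences.
from collections import defaultdict

def learn_transitions(threads):
    """Count transitions between consecutive message types across activity threads."""
    counts = defaultdict(lambda: defaultdict(int))
    for thread in threads:
        states = ["START"] + thread + ["END"]
        for src, dst in zip(states, states[1:]):
            counts[src][dst] += 1
    # Normalize counts into transition probabilities.
    return {s: {d: c / sum(dsts.values()) for d, c in dsts.items()}
            for s, dsts in counts.items()}

threads = [
    ["order_confirmation", "shipment_notification"],
    ["order_confirmation", "delay_warning", "shipment_notification"],
]
for state, dsts in learn_transitions(threads).items():
    print(state, dsts)
```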
TaskTracer: a desktop environment to support multi-tasking knowledge workers BIBAFull-Text 75-82
  Anton N. Dragunov; Thomas G. Dietterich; Kevin Johnsrude; Matthew McLaughlin; Lida Li; Jonathan L. Herlocker
This paper reports on TaskTracer -- a software system being designed to help highly multitasking knowledge workers rapidly locate, discover, and reuse past processes they used to successfully complete tasks. The system monitors users' interaction with a computer, collects detailed records of users' activities and resources accessed, associates (automatically or with users' assistance) each interaction event with a particular task, and enables users to access records of past activities and quickly restore task contexts. We present a novel Publisher-Subscriber architecture for collecting and processing users' activity data, describe several different user interfaces tried with TaskTracer, and discuss the possibility of applying machine learning techniques to recognize/predict users' tasks.
Intelligent data entry assistant for XML using ensemble learning BIBAFull-Text 83-89
  Danico Lee; Costas Tsatsoulis
XML has emerged as the primary standard of data representation and data exchange [13]. Although many software tools exist to assist the XML implementation process, data must be manually entered into the XML documents. Current form filling technologies are mostly for simple data entry and do not provide support for the complexity and nested structures of XML grammars. This paper presents SmartXAutofill, an intelligent data entry assistant for predicting and automating inputs for XML documents based on the contents of historical document collections in the same XML domain. SmartXAutofill incorporates an ensemble classifier, which integrates multiple internal classification algorithms into a single architecture. Each internal classifier uses approximate techniques to propose a value for an empty XML field, and, through voting, the ensemble classifier determines which value to accept. As the system operates it learns which internal classification algorithms work better for a specific XML document domain and modifies its weights (confidence) in their predictive ability. As a result, the ensemble classifier adapts itself to the specific XML domain, without the need to develop special learners for the infinite number of domains that XML users have created. We evaluated our system performance using data from eleven different XML domains. The results show that the ensemble classifier adapted itself to different XML document domains, and most of the time (for 9 out of 11 domains) produced predictive accuracies as good as or better than the best individual classifier for a domain.
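The weighted-voting idea can be sketched as follows. This is a simplified illustration rather than SmartXAutofill's actual architecture; the classifier interface and the multiplicative weight update are assumptions made for the example.

```python
# Weighted-voting ensemble for predicting values of empty XML fields (illustrative sketch).
from collections import defaultdict

class EnsemblePredictor:
    def __init__(self, classifiers):
        # classifiers: mapping name -> callable(field, context) -> proposed value
        self.classifiers = classifiers
        self.weights = {name: 1.0 for name in classifiers}

    def predict(self, field, context):
        votes = defaultdict(float)
        proposals = {}
        for name, clf in self.classifiers.items():
            value = clf(field, context)
            proposals[name] = value
            votes[value] += self.weights[name]
        return max(votes, key=votes.get), proposals

    def update(self, proposals, accepted_value, rate=0.1):
        # Reward classifiers that proposed the accepted value, penalize the rest.
        for name, value in proposals.items():
            self.weights[name] *= (1 + rate) if value == accepted_value else (1 - rate)

# Toy usage with two stand-in internal classifiers.
clfs = {"naive_bayes": lambda field, ctx: "USD", "decision_tree": lambda field, ctx: "EUR"}
ensemble = EnsemblePredictor(clfs)
value, proposals = ensemble.predict("currency", context={})
ensemble.update(proposals, accepted_value="USD")   # user kept the suggested value
```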
Active preference learning for personalized calendar scheduling assistance BIBAFull-Text 90-97
  Melinda T. Gervasio; Michael D. Moffitt; Martha E. Pollack; Joseph M. Taylor; Tomas E. Uribe
We present PLIANT, a learning system that supports adaptive assistance in an open calendaring system. PLIANT learns user preferences from the feedback that naturally occurs during interactive scheduling. It contributes a novel application of active learning in a domain where the choice of candidate schedules to present to the user must balance usefulness to the learning module with immediate benefit to the user. Our experimental results provide evidence of PLIANT's ability to learn user preferences under various conditions and reveal the tradeoffs made by the different active learning selection strategies.

Long papers: visualization and presentation

The centrality of pivotal points in the evolution of scientific networks BIBAFull-Text 98-105
  Chaomei Chen
In this paper, we describe the development of CiteSpace as an integrated environment for identifying and tracking thematic trends in scientific literature. The goal is to simplify the process of finding not only highly cited clusters of scientific articles, but also pivotal points and trails that are likely to characterize fundamental transitions of a knowledge domain as a whole. The trails of an advancing research field are captured through a sequence of snapshots of its intellectual structure over time in the form of Pathfinder networks. These networks are subsequently merged with a localized pruning algorithm. Pivotal points in the merged network are algorithmically identified and visualized using the betweenness centrality metric. An example of finding clinical evidence associated with reducing risks of heart diseases is included to illustrate how CiteSpace could be used. The contribution of the work is its integration of various change detection algorithms and interactive visualization capabilities to simplify users' tasks.
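As an illustration of the betweenness-centrality step, the following sketch flags bridging nodes in a toy merged network; it assumes the networkx library and is not CiteSpace code.

```python
# Rank nodes of a toy merged network by betweenness centrality to surface pivotal points.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("C", "A"),      # one cluster
    ("D", "E"), ("E", "F"), ("F", "D"),      # another cluster
    ("C", "P"), ("P", "D"),                  # P bridges the two clusters
])

centrality = nx.betweenness_centrality(G)
top = sorted(centrality.items(), key=lambda kv: -kv[1])[:3]
print(top)   # P, the bridge between the two clusters, ranks highest
```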
Interfaces for networked media exploration and collaborative annotation BIBAFull-Text 106-113
  Preetha Appan; Bageshree Shevade; Hari Sundaram; David Birchfield
In this paper, we present our efforts towards creating interfaces for networked media exploration and collaborative annotation. The problem is important since online social networks are emerging as conduits for the exchange of everyday experiences, yet these networks do not currently provide media-rich communication environments. Our approach has two parts -- collaborative annotation and a media exploration framework. The collaborative annotation takes place through a web-based interface and provides each user with personalized recommendations, based on media features and a common-sense inference toolkit. We develop three media exploration interfaces that allow for two-way interaction amongst the participants -- (a) spatio-temporal evolution, (b) event cones, and (c) viewpoint-centric interaction. We also analyze user activity to determine important people and events for each user, and develop subtle visual interface cues for activity feedback. Preliminary user studies indicate that the system performs well and is well liked by the users.
A graph-matching approach to dynamic media allocation in intelligent multimedia interfaces BIBAFull-Text 114-121
  Michelle X. Zhou; Zhen Wen; Vikram Aggarwal
To aid users in exploring large and complex data sets, we are building an intelligent multimedia conversation system. Given a user request, our system dynamically creates a multimedia response that is tailored to the interaction context. In this paper, we focus on the problem of media allocation, a process that assigns one or more media, such as graphics or speech, to best convey the intended response content. Specifically, we develop a graph-matching approach to media allocation, whose goal is to find a set of data-media mappings that maximizes the satisfaction of various allocation constraints (e.g., data-media compatibility and presentation consistency constraints). Compared to existing rule-based or plan-based approaches to media allocation, our work offers three unique contributions. First, we provide an extensible computational framework that optimizes media assignments by dynamically balancing all relevant constraints. Second, we use feature-based metrics to uniformly model various allocation constraints, including those cross-content and cross-media constraints, which often require special treatment in existing approaches. Third, we further improve the quality of a response by automatically detecting and repairing undesired allocation results. We have applied our approach to two different applications and our preliminary study has shown the promise of our work.
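To make the constraint-satisfaction view concrete, here is a minimal sketch that scores candidate data-to-media assignments by compatibility plus a simple consistency term. It uses exhaustive enumeration on toy data rather than the paper's graph-matching formulation; the items, media, and scores are illustrative.

```python
# Pick the data-to-media assignment that best satisfies compatibility and consistency.
from itertools import product

data_items = ["house_price", "house_photo", "agent_comment"]
media = ["graphics", "speech"]

compatibility = {            # how well each medium conveys each data item (toy scores)
    ("house_price", "graphics"): 0.9, ("house_price", "speech"): 0.6,
    ("house_photo", "graphics"): 1.0, ("house_photo", "speech"): 0.1,
    ("agent_comment", "graphics"): 0.3, ("agent_comment", "speech"): 0.9,
}

def consistency(assignment):
    # Mild preference for not splitting the presentation across too many media.
    return 1.0 / len(set(assignment.values()))

best = max(
    (dict(zip(data_items, choice)) for choice in product(media, repeat=len(data_items))),
    key=lambda a: sum(compatibility[(d, m)] for d, m in a.items()) + consistency(a),
)
print(best)
```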
A location representation for generating descriptive walking directions BIBAFull-Text 122-129
  Gary Look; Buddhika Kottahachchi; Robert Laddaga; Howard Shrobe
An expressive representation for location is an important component in many applications. However, while many location-aware applications can reason about space at the level of coordinates and containment relationships, they have no way to express the semantics that define how a particular space is used. We present Lair, an ontology that addresses this problem by modeling both the geographical relationships between spaces as well as the functional purpose of a given space. We describe how Lair was used to create an application that produces walking directions comparable to those given by a person, and a pilot study that evaluated the quality of these directions. We also describe how Lair can be used to evaluate other intelligent user interfaces.

Long papers: natural language and gestural input

User interfaces with semi-formal representations: a study of designing argumentation structures BIBAFull-Text 130-136
  Timothy Chklovski; Varun Ratnakar; Yolanda Gil
When designing mixed-initiative systems, full formalization of all potentially relevant knowledge may not be cost-effective or practical. This paper motivates the need for semi-formal representations that combine machine-processable structures with free text statements, and discusses the need to design them in a way that makes the free text more amenable to automated structuring and processing. Our work is done in the context of argumentation systems, and has explored a range of tradeoffs in combining informal free-text statements with formal connectors. The paper compares alternative argument representations which combine structured argument connectors with free text. We discuss merits of the systems based on a variety of analysis structures that we have collected from Web users to date.
An agent-based approach to dialogue management in personal assistants BIBAFull-Text 137-144
  Anh Nguyen; Wayne Wobcke
Personal assistants need to allow the user to interact with the system in a flexible and adaptive way such as through spoken language dialogue. In this research we focus on an application in which the user can use a variety of devices to interact with a collection of personal assistants each specializing in a task domain such as email or calendar management, information seeking, etc. We propose an agent-based approach for developing the dialogue manager that acts as the central point maintaining continuous user-system interaction and coordinating the activities of the assistants. In addition, this approach enables development of multi-modal interfaces. We describe our initial implementation which contains an email management agent that the user can interact with through a spoken dialogue and an interface on PDAs. The dialogue manager was implemented by extending a BDI agent architecture.
Sketch recognition with continuous feedback based on incremental intention extraction BIBAFull-Text 145-150
  Junfeng Li; Xiwen Zhang; Xiang Ao; Guozhong Dai
On-line synchronous sketch recognition has the advantages of convenient input and natural interaction. However, among existing algorithms, some can process only simple sketches, while others have computational complexity too high to satisfy real-time demands. To address this problem of efficiency and coverage, a sketch recognition algorithm based on incremental intention extraction is presented. By defining a lag window, the algorithm understands the user's sketch intention on the basis of incremental intention extraction. Moreover, the algorithm can update existing intention sections according to the latest information, so that the recognition results stay in line with the user's sketch intention. Experiments show that the algorithm can recognize various kinds of sketches in real time.
Relaxing stylus typing precision by geometric pattern matching BIBAFull-Text 151-158
  Per-Ola Kristensson; Shumin Zhai
Fitts' law models the inherent speed-accuracy trade-off constraint in stylus typing. Users attempting to go beyond the Fitts' law speed ceiling will tend to land the stylus outside the targeted key, resulting in erroneous words and increasing users' frustration. We propose a geometric pattern matching technique to overcome this problem. Our solution can be used either as an enhanced spell checker or as a way to enable users to escape the Fitts' law constraint in stylus typing, potentially resulting in higher text entry speeds than what is currently theoretically modeled. We view the hit points on a stylus keyboard as a high resolution geometric pattern. This pattern can be matched against patterns formed by the letter key center positions of legitimate words in a lexicon. We present the development and evaluation of an "elastic" stylus keyboard capable of correcting words even if the user misses all the intended keys, as long as the user's tapping pattern is close enough to the intended word.
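A minimal sketch of the core matching idea follows: the sequence of stylus hit points is compared against the letter-center pattern of each lexicon word. The key coordinates, lexicon, and plain mean-distance score are illustrative simplifications, not the system's actual normalization and scoring.

```python
# Match a sequence of stylus hit points against letter-center patterns of lexicon words.
import math

KEY_CENTERS = {           # (x, y) centers on a toy QWERTY layout, one unit per key
    "t": (4, 0), "h": (5, 1), "e": (2, 0), "r": (3, 0), "y": (5, 0), "o": (8, 0),
}

def pattern_distance(taps, word):
    """Mean Euclidean distance between tap points and the word's key centers."""
    centers = [KEY_CENTERS[c] for c in word]
    if len(taps) != len(centers):
        return float("inf")
    return sum(math.dist(p, q) for p, q in zip(taps, centers)) / len(taps)

lexicon = ["the", "thy", "toe"]
taps = [(4.4, 0.3), (5.2, 0.8), (2.3, -0.2)]   # sloppy taps aimed at "t", "h", "e"
print(min(lexicon, key=lambda w: pattern_distance(taps, w)))   # -> "the"
```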

Long papers: recommendation and instruction

Improving proactive information systems BIBAFull-Text 159-166
  Daniel Billsus; David M. Hilbert; Dan Maynes-Aminzade
Proactive contextual information systems help people locate information by automatically suggesting potentially relevant resources based on their current tasks or interests. Such systems are becoming increasingly popular, but designing user interfaces that effectively communicate recommended information is a challenge: the interface must be unobtrusive, yet communicate enough information at the right time to provide value to the user. In this paper we describe our experience with the FXPAL Bar, a proactive information system designed to provide contextual access to corporate and personal resources. In particular, we present three features designed to communicate proactive recommendations more effectively: translucent recommendation windows increase the user's awareness of particularly highly-ranked recommendations, query term highlighting communicates the relationship between a recommended document and the user's current context, and a novel recommendation digest function allows users to return to the most relevant previously recommended resources. We present empirical evidence supporting our design decisions and relate lessons learned for other designers of contextual recommendation systems.
Trust in recommender systems BIBAFull-Text 167-174
  John O'Donovan; Barry Smyth
Recommender systems have proven to be an important response to the information overload problem, by providing users with more proactive and personalized information services. And collaborative filtering techniques have proven to be a vital component of many such recommender systems as they facilitate the generation of high-quality recommendations by leveraging the preferences of communities of similar users. In this paper we suggest that the traditional emphasis on user similarity may be overstated. We argue that additional factors have an important role to play in guiding recommendation. Specifically we propose that the trustworthiness of users must be an important consideration. We present two computational models of trust and show how they can be readily incorporated into standard collaborative filtering frameworks in a variety of ways. We also show how these trust models can lead to improved predictive accuracy during recommendation.
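One simple way to fold trust into a collaborative filtering prediction is to weight each neighbor by the product of profile similarity and trust, as in the sketch below. The ratings, similarities, and trust values are illustrative, and this combination rule is only one of the strategies the paper considers.

```python
# Trust-weighted collaborative filtering prediction (illustrative sketch).
def predict(target_mean, neighbors):
    """neighbors: list of (rating, neighbor_mean, similarity, trust) tuples."""
    num = sum((r - mean) * sim * trust for r, mean, sim, trust in neighbors)
    den = sum(abs(sim * trust) for _, _, sim, trust in neighbors)
    return target_mean + (num / den if den else 0.0)

neighbors = [
    (5.0, 3.5, 0.8, 0.9),   # similar and trustworthy neighbor who loved the item
    (2.0, 3.0, 0.7, 0.2),   # similar but historically unreliable neighbor
]
print(round(predict(3.2, neighbors), 2))
```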
Experiments in dynamic critiquing BIBAFull-Text 175-182
  Kevin McCarthy; James Reilly; Lorraine McGinty; Barry Smyth
Conversational recommender systems are commonly used to help users to navigate through complex product-spaces by alternatively making product suggestions and soliciting user feedback in order to guide subsequent suggestions. Recently, there has been a surge of interest in developing effective interfaces that support user interaction in domains of limited user expertise. Critiquing has proven to be a popular and successful user feedback mechanism in this regard, but is typically limited to the modification of single features. We review a novel approach to critiquing, dynamic critiquing, that allows users to modify multiple features simultaneously by choosing from a range of so-called compound critiques that are automatically proposed based on their current position within the product-space. In addition, we introduce the results of an important new live-user study that evaluates the practical benefits of dynamic critiquing.
Animating an interactive conversational character for an educational game system BIBAFull-Text 183-190
  Andrea Corradini; Manish Mehta; Niels-Ole Bernsen; Marcela Charfuelan
Within the framework of the project NICE (Natural Interactive Communication for Edutainment) [2], we have been developing an educational and entertaining computer game that allows children and teenagers to interact with a conversational character impersonating the fairy tale writer H.C. Andersen (HCA). The rationale behind our system is to make kids learn about HCA's life, fairy tales and historical period while playing and having fun. We report on the character's generation and realization of both verbal and 3D graphical non-verbal output behaviors, such as speech, body gestures and facial expressions. This conveys the impression of a human-like agent with relevant domain knowledge, and distinct personality. With the educational goal in the foreground, coherent and synchronized output presentation becomes mandatory, as any inconsistency may undermine the user's learning process rather than reinforcing it.

Long papers: knowledge acquisition and knowledge-based design

Task learning by instruction in Tailor BIBAFull-Text 191-198
  Jim Blythe
In order for intelligent systems to be applicable in a wide range of situations, end users must be able to modify their task descriptions. We introduce Tailor, a system that allows users to modify task information through instruction. In this approach, the user enters a short sentence to describe the desired change. The system maps the sentence into valid, plausible modifications and checks for unexpected side-effects they may have, working interactively with the user throughout the process. We conducted preliminary tests in which subjects used Tailor to make modifications to domains drawn from the eHow website, applying modifications posted by readers as 'tips'. In this way the subjects acted as interpreters between Tailor and the human-generated descriptions of modifications. Almost all the subjects were able to make all modifications to the process descriptions with Tailor, indicating that the interpreter role is quite natural for users.
ClaimSpotter: an environment to support sensemaking with knowledge triples BIBAFull-Text 199-206
  Bertrand Sereno; Simon Buckingham Shum; Enrico Motta
Annotating a document with an interpretation of its contents raises a number of challenges that we hope to address via the creation of a supporting environment. We present these challenges and motivate an approach based on the notion of suggestions to support document annotation, with the aim that these suggestions act as leads for annotators to follow, thereby reducing some of the difficulties inherent in the task. The environment resulting from this approach, ClaimSpotter, is presented. Aspects of its evaluation are also given, using the findings of a study involving a group of participants faced with a document annotation task.
Suggesting novel but related topics: towards context-based support for knowledge model extension BIBAFull-Text 207-214
  Ana Maguitman; David Leake; Thomas Reichherzer
Much intelligent user interfaces research addresses the problem of providing information relevant to a current user topic. However, little work addresses the complementary question of helping the user identify potential topics to explore next. In knowledge acquisition, this question is crucial to deciding how to extend previously-captured knowledge. This paper examines requirements for effective topic suggestion and presents a domain-independent topic-generation algorithm designed to generate candidate topics that are novel but related to the current context. The algorithm iteratively performs a cycle of topic formation, Web search for connected material, and context-based filtering. An experimental study shows that this approach significantly outperforms a baseline at developing new topics similar to those chosen by an expert for a hand-coded knowledge model.
The UI pilot: a model-based tool to guide early interface design BIBAFull-Text 215-222
  Angel Puerta; Michael Micheletti; Alan Mak
In this paper, we introduce the User Interface Pilot, a model-based software tool that enables designers and engineers to create the initial specifications for the pages of a website, or for the screens of a desktop or mobile application. The tool guides the design of these specifications, commonly known as wireframes, in a user-centered fashion by framing the context of the design within the concepts of user tasks, user types, and data objects. Unlike previous model-based tools, the User Interface Pilot does not impose a rigid model-driven methodology and functions well within common software engineering development processes. The tool has been used in over twenty real-world user interface design projects.

Long papers: smart environments and ubiquitous computing

Dimensions of adaptivity in mobile systems: personality and people's attitudes BIBAFull-Text 223-230
  Ilenia Graziola; Fabio Pianesi; Massimo Zancanaro; Dina Goren-Bar
In this work, we present a study of adaptation in a mobile museum guide, aimed at investigating the relationships between personality traits and attitudes toward some basic dimensions of adaptivity. Each participant was exposed to two simulated systems that realized an adaptive and a non-adaptive version, respectively, on each of the dimensions investigated. The study showed interesting effects of Big Five personality traits -- in particular conscientiousness, creativity, and stability -- on acceptance of the adaptivity dimensions. Locus of control seemed to have a limited yet quite selective effect on delegating to the system the choice of follow-ups.
CASIS: a context-aware speech interface system BIBAFull-Text 231-238
  Lee Hoi Leong; Shinsuke Kobayashi; Noboru Koshizuka; Ken Sakamura
In this paper, we propose a robust natural language interface called CASIS for controlling devices in an intelligent environment. CASIS is novel in the sense that it integrates physical context acquired from sensors embedded in the environment with traditionally used context to reduce the system error rate and to disambiguate deictic references and elliptical inputs. The n-best result of the speech recognizer is re-ranked by a score calculated using a Bayesian network that combines information from the input utterance and the context. In our prototype system, which uses device states, brightness, speaker location, chair occupancy, speech direction, and action history as context, the system error rate has been reduced by 41% compared to a baseline system that does not leverage context information.
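The re-ranking step can be illustrated with a minimal sketch in which a context score is blended with the recognizer's acoustic score; here a simple lookup stands in for the paper's Bayesian network, and the hypotheses and scores are made up.

```python
# Re-rank a speech recognizer's n-best list using a context score (illustrative sketch).
def rerank(nbest, context_score, weight=0.5):
    """nbest: list of (hypothesis, acoustic_score); returns hypotheses re-ranked."""
    return sorted(nbest,
                  key=lambda h: (1 - weight) * h[1] + weight * context_score(h[0]),
                  reverse=True)

context = {"turn on the light": 0.9,      # user is facing the lamp, room is dark
           "turn on the lite": 0.05}

nbest = [("turn on the lite", 0.62), ("turn on the light", 0.58)]
print(rerank(nbest, lambda hyp: context.get(hyp, 0.1)))
```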
SmartCanvas: a gesture-driven intelligent drawing desk system BIBAFull-Text 239-243
  Zhenyao Mo; J. P. Lewis; Ulrich Neumann
This paper describes SmartCanvas, an intelligent desk system that allows a user to perform freehand drawing on a desk or similar surface with gestures. Our system requires one camera and no touch sensors. The key underlying technique is a vision-based method that distinguishes drawing gestures and transitional gestures in real time, avoiding the need for "artificial" gestures to mark the beginning and end of a drawing stroke. The method achieves an average classification accuracy of 92.17%. Pie-shaped menus and a "rotate-to-and-select" approach eliminate the need for a fixed menu display, resulting in an "invisible" interface.

Short papers: affective computing

Person-independent estimation of emotional experiences from facial expressions BIBAFull-Text 246-248
  Timo Partala; Veikko Surakka; Toni Vanhala
The aim of this research was to develop methods for the automatic person-independent estimation of experienced emotions from facial expressions. Ten subjects watched series of emotionally arousing pictures and videos, while the electromyographic (EMG) activity of two facial muscles: zygomaticus major (activated in smiling) and corrugator supercilii (activated in frowning) was registered. Based on the changes in the activity of these two facial muscles, it was possible to distinguish between ratings of positive and negative emotional experiences at a rate of almost 70% for pictures and over 80% for videos. Using these methods, the computer could adapt its behavior according to the user's emotions during human-computer interaction.
Preliminary design guidelines for pedagogical agent interface image BIBAFull-Text 249-250
  Amy L. Baylor
Pedagogical agent image is a key feature for animated interface agents. Experimental research indicates that agent interface images should be carefully designed, considering both the relevant outcomes (learning or motivational) together with student characteristics. This paper summarizes empirically-derived design guidelines for pedagogical agent image.
Emotive alert: HMM-based emotion detection in voicemail messages BIBAFull-Text 251-253
  Zeynep Inanoglu; Ron Caneel
Voicemail has become an integral part of our personal and professional communication. The number of messages that accumulate in our voice mailboxes necessitates new ways of prioritizing them. Currently, we are forced to actively listen to all messages in order to find out which ones are important and which ones can be attended to later. In this paper, we describe Emotive Alert, a system that can detect some of the significant emotions in a new message and notify the account owner along various affective axes, including urgency, formality, valence (happy vs. sad), and arousal (calm vs. excited). We use a purely acoustic, HMM-based approach for identifying the emotions, which allows the system to be applied to all messages independent of language.
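The classification scheme can be sketched as one HMM per affective class, with a new message assigned to the class whose model gives it the highest likelihood. The sketch below assumes the hmmlearn library and uses random placeholder features rather than real acoustic data.

```python
# One-HMM-per-class scoring of a voicemail along an affective axis (illustrative sketch).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def train_model(sequences):
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

# Placeholder "pitch/energy" frames for calm vs. excited messages.
calm = [rng.normal(0.0, 0.5, size=(40, 2)) for _ in range(5)]
excited = [rng.normal(2.0, 1.0, size=(40, 2)) for _ in range(5)]
models = {"calm": train_model(calm), "excited": train_model(excited)}

new_message = rng.normal(1.9, 1.0, size=(40, 2))
print(max(models, key=lambda label: models[label].score(new_message)))  # likely "excited"
```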

Short papers: human-robot interaction

Vision based GUI for interactive mobile robots BIBAFull-Text 254-256
  Randeep Singh; Bhartendu Seth; Uday B. Desai
Interactive mobile robots are an active area of research. This paper presents a framework for designing a real-time, vision-based hand-body gesture user interface for such robots. The framework works in real-world lighting conditions with complex backgrounds, and can handle intermittent motion of the camera. The input signal is captured using a single monocular color camera; vision is the only feedback sensor used. It is assumed that the gesturer is wearing clothes that differ slightly from the background. We have tested this framework on a gesture database consisting of 11 hand-body gestures and have recorded recognition accuracy of up to 90%.
User intentions funneled through a human-robot interface BIBAFull-Text 257-259
  Michael T. Rosenstein; Andrew H. Fagg; Shichao Ou; Roderic A. Grupen
We describe a method for predicting user intentions as part of a human-robot interface. In particular, we show that funnels, i.e., geometric objects that partition an input space, provide a convenient means for discriminating individual objects and for clustering sets of objects for hierarchical tasks. One advantage of the proposed implementation is that a simple parametric model can be used to specify the shape of a funnel, and a straightforward heuristic for setting initial parameter values appears promising. We discuss the possibility of adapting the user interface with machine learning techniques, and we illustrate the approach with a humanoid robot performing a variation of a standard peg-insertion task.

Short papers: personal assistants

Context-based similar words detection and its application in specialized search engines BIBAFull-Text 260-262
  Hisham Al-Mubaid; Ping Chen
This paper presents a new context-based method for the automatic detection and extraction of similar and related words from text. Finding similar words is an important task for many NLP applications, including anaphora resolution, document retrieval, text segmentation, and text summarization. Here we use word similarity to improve search quality for search engines in general and specific domains. Our method is based on rules for extracting the words in the neighborhood of a target word and then connecting this with the surroundings of other occurrences of the same word in the (training) text corpus. This is ongoing work and is still under extensive testing; the preliminary results, however, are promising and encourage more work in this direction.
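A minimal sketch of the general idea: represent each word by the counts of words appearing around its occurrences, then compare those context vectors to find related words. The window size, toy corpus, and cosine measure are illustrative; the paper's extraction rules are more elaborate.

```python
# Compare words by the similarity of their surrounding-word count vectors.
import math
from collections import Counter, defaultdict

def context_vectors(tokens, window=2):
    vectors = defaultdict(Counter)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                vectors[w][tokens[j]] += 1
    return vectors

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

tokens = "the doctor treated the patient and the physician examined the patient".split()
vecs = context_vectors(tokens)
print(cosine(vecs["doctor"], vecs["physician"]))   # nonzero: shared contexts
```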
Interactively building agents for consumer-side data mining BIBAFull-Text 263-265
  Rattapoom Tuchinda; Craig A. Knoblock
Integrating and mining data from different web sources can make end-users well informed when they make decisions. One of the many limitations that bar end-users from taking advantage of such a process is the complexity of each of the steps required to gather, integrate, monitor, and mine data from different websites. We present the idea of combining data integration, monitoring, and mining into a single process in the form of an intelligent assistant that guides end-users to specify their mining tasks by simply answering questions. This easy-to-use approach, which trades off complexity in terms of available operations against ease of use, can provide interesting insight into data that would otherwise require days of human effort to gather, combine, and mine manually from the web.
Adaptive teaching strategy for online learning BIBAFull-Text 266-268
  Jungsoon Yoo; Cen Li; Chrisila Pettey
Finding the optimal teaching strategy for an individual student is difficult even for an experienced teacher. Identifying and incorporating multiple optimal teaching strategies for different students in a class is even harder. This paper presents an Adaptive tutor for online Learning, AtoL, for Computer Science laboratories that identifies and applies the appropriate teaching strategies for students on an individual basis. The optimal strategy for a student is identified in two steps. First, a basic strategy for a student is identified using rules learned from a supervised learning system. Then the basic strategy is refined to better fit the student using models learned using an unsupervised learning system that takes into account the temporal nature of the problem solving process. The learning algorithms as well as the initial experimental results are presented.
Providing intelligent help across applications in dynamic user and environment contexts BIBAFull-Text 269-271
  Ashwin Ramachandran; R. Michael Young
The problem of providing help for complex application interfaces has been a source of interest for a number of research efforts. As the computational power of computers increases, typical applications not only increase in functionality but also in the degree of interaction with the computational environment in which they reside. This paper describes an ongoing project to design an Intelligent Help System (IHS) that provides context-sensitivity not only through its modeling of application states but also through its modeling of the interaction between applications and between an application and the environment in which it resides.

Short papers: visualization and presentation

ScentHighlights: highlighting conceptually-related sentences during reading BIBAFull-Text 272-274
  Ed H. Chi; Lichan Hong; Michelle Gumbrecht; Stuart K. Card
Researchers have noticed that readers are increasingly skimming instead of reading in depth. Skimming also occurs in re-reading activities, where the goal is to recall specific topical facts; bookmarks and highlighters were invented precisely to achieve this goal. For skimming activities, readers need effective ways to direct their attention toward the most relevant passages within a text. We describe how we have enhanced skimming activity by conceptually highlighting sentences within electronic text that relate to search keywords. We perform the conceptual highlighting by computing which conceptual keywords are related to each other via word co-occurrence and spreading activation. Spreading activation is a cognitive model developed in psychology to simulate how memory chunks and conceptual items are retrieved in the brain. We describe the method used and illustrate the idea with realistic scenarios using our system.
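A minimal sketch of spreading activation over a word co-occurrence graph follows: activation starts at the search keywords and decays as it propagates to co-occurring terms, and sentences containing highly activated terms would then be highlighted. The graph weights and decay factor are illustrative, not ScentHighlights' actual parameters.

```python
# Spread activation from seed keywords over a toy co-occurrence graph.
co_occurrence = {                      # symmetric co-occurrence weights
    "memory": {"brain": 0.8, "chunk": 0.6},
    "brain": {"memory": 0.8, "neuron": 0.7},
    "chunk": {"memory": 0.6},
    "neuron": {"brain": 0.7},
}

def spread_activation(seeds, decay=0.5, steps=2):
    activation = {term: 1.0 for term in seeds}
    for _ in range(steps):
        updates = {}
        for term, level in activation.items():
            for neighbor, weight in co_occurrence.get(term, {}).items():
                updates[neighbor] = max(updates.get(neighbor, 0.0), level * weight * decay)
        for term, level in updates.items():
            activation[term] = max(activation.get(term, 0.0), level)
    return activation

print(spread_activation({"memory"}))   # related terms receive decayed activation
```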
Personal reporting of a museum visit as an entrypoint to future cultural experience BIBAFull-Text 275-277
  Charles Callaway; Tsvi Kuflik; Elena Not; Alessandra Novello; Oliviero Stock; Massimo Zancanaro
Museum visitors can continue interacting with museum exhibits even after they have left the museum. We can help them do this by creating a report that includes a basic, personalized narration of their visit, the items and relationships they found most interesting, pointers to additional related online information, and suggestions for future visits to the current and other museums. In this work we describe the automatic generation of personalized natural language reports to help create one episode in an ongoing coherent sequence of cultural activities.

Short papers: speech- and vision-based interfaces

How to wreck a nice beach you sing calm incense BIBAFull-Text 278-280
  Henry Lieberman; Alexander Faaborg; Waseem Daher; Jose Espinosa
A principal problem in speech recognition is distinguishing between words and phrases that sound similar but have different meanings. Speech recognition programs produce a list of weighted candidate hypotheses for a given audio segment, and choose the "best" candidate. If the choice is incorrect, the user must invoke a correction interface that displays a list of the hypotheses and choose the desired one. The correction interface is time-consuming, and accounts for much of the frustration of today's dictation systems. Conventional dictation systems prioritize hypotheses based on language models derived from statistical techniques such as n-grams and Hidden Markov Models.
   We propose a supplementary method for ordering hypotheses based on Commonsense Knowledge. We filter acoustical and word-frequency hypotheses by testing their plausibility with a semantic network derived from 700,000 statements about everyday life. This often filters out possibilities that "don't make sense" from the user's viewpoint, and leads to improved recognition. Reducing the hypothesis space in this way also makes possible streamlined correction interfaces that improve the overall throughput of dictation systems.
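The filtering idea can be sketched by scoring each hypothesis according to how many pairs of its content words are linked in a semantic network and preferring the hypothesis that "makes sense." The toy network, stopword list, and hypotheses below are illustrative stand-ins for the 700,000-statement knowledge base.

```python
# Prefer the recognition hypothesis whose content words are semantically connected.
from itertools import combinations

links = {frozenset(p) for p in [("recognize", "speech"), ("wreck", "ship"), ("beach", "sand")]}
STOPWORDS = {"how", "to", "a", "nice"}

def plausibility(hypothesis):
    words = [w for w in hypothesis.lower().split() if w not in STOPWORDS]
    return sum(1 for pair in combinations(words, 2) if frozenset(pair) in links)

hypotheses = ["how to wreck a nice beach", "how to recognize speech"]
print(max(hypotheses, key=plausibility))   # -> "how to recognize speech"
```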
HMM-based efficient sketch recognition BIBAFull-Text 281-283
  Tevfik Metin Sezgin; Randall Davis
Current sketch recognition systems treat sketches as images or a collection of strokes, rather than viewing sketching as an interactive and incremental process. We show how viewing sketching as an interactive process allows us to recognize sketches using Hidden Markov Models. We report results of a user study indicating that in certain domains people draw objects using consistent stroke orderings. We show how this consistency, when present, can be used to perform sketch recognition efficiently. This novel approach enables us to have polynomial time algorithms for sketch recognition and segmentation, unlike conventional methods with exponential complexity.
Interaction techniques using prosodic features of speech and audio localization BIBAFull-Text 284-286
  Alex Olwal; Steven Feiner
We describe several approaches for using prosodic features of speech and audio localization to control interactive applications. This information can be applied to parameter control, as well as to speech disambiguation. We discuss how characteristics of spoken sentences can be exploited in the user interface; for example, by considering the speed with which a sentence is spoken and the presence of extraneous utterances. We also show how coarse audio localization can be used for low-fidelity gesture tracking, by inferring the speaker's head position.
Doubleshot: an interactive user-aided segmentation tool BIBAFull-Text 287-289
  Tom Yeh; Trevor Darrell
In this paper, we describe an intelligent user interface designed for camera phones to allow mobile users to specify the object of interest in the scene simply by taking two pictures: one with the object and one without the object. By comparing these two images, the system can reliably extract the visual appearance of the object, which can be useful to a wide-range of applications such as content-based image retrieval and object recognition.
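A minimal sketch of the two-shot comparison: subtract the "without object" photo from the "with object" photo and threshold the difference to obtain a rough object mask. Image registration and robust appearance extraction are omitted; the arrays stand in for aligned grayscale photos.

```python
# Rough object mask from a "with object" / "without object" image pair.
import numpy as np

def object_mask(with_object, without_object, threshold=30):
    diff = np.abs(with_object.astype(int) - without_object.astype(int))
    return diff > threshold

background = np.full((4, 4), 100, dtype=np.uint8)
scene = background.copy()
scene[1:3, 1:3] = 200          # the "object" occupies a 2x2 patch
print(object_mask(scene, background).astype(int))
```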
Generating semantic contexts from spoken conversation in meetings BIBAFull-Text 290-292
  Jurgen Ziegler; Zoulfa El Jerroudi; Karsten Bohm
SemanticTalk is a tool for supporting face-to-face meetings and discussions by automatically generating a semantic context from spoken conversations. We use speech recognition and topic extraction from a large terminological database to create a network of discussion topics in real-time. This network includes concepts explicitly addressed in the discussion as well as semantically associated terms, and is visualized to increase conversational awareness and creativity in the group.
Conventions in human-human multi-threaded dialogues: a preliminary study BIBAFull-Text 293-295
  Peter A. Heeman; Fan Yang; Andrew L. Kun; Alexander Shyrokov
In this paper, we explore the conventions that people use in managing multiple dialogue threads. In particular, we focus on where in a thread people interrupt when switching to another thread. We find that some subjects are able to vary where they switch depending on how urgent the interrupting task is. When time allowed, they switched at the end of a discourse segment, which we hypothesize is less disruptive to the interrupted task when it is later resumed.
Towards automatic transcription of expressive oral percussive performances BIBAFull-Text 296-298
  Amaury Hazan
We describe a tool for transcribing voice-generated percussive rhythms. The system consists of: (a) a segmentation component, which separates the monophonic input stream into percussive events; (b) a descriptor-generation component, which computes a set of acoustic features from each of the extracted segments; and (c) a machine learning component, which assigns a symbolic class to each of the segmented sounds in the input stream. We describe each of these components and compare different machine learning strategies that can be used to obtain a symbolic representation of the oral percussive performance.
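The three-stage pipeline can be sketched as follows: energy-based onset segmentation, a pair of toy acoustic descriptors, and nearest-centroid classification standing in for the machine learning component; all frame sizes, features, and class centroids are illustrative assumptions.

```python
import numpy as np

def segment(signal, frame=256, onset_ratio=4.0):
    """Frame indices whose energy jumps well above the previous frame."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).mean(axis=1)
    return [i for i in range(1, len(energy))
            if energy[i] > onset_ratio * (energy[i - 1] + 1e-9)]

def features(samples):
    """Two toy descriptors: mean energy and zero-crossing rate."""
    zcr = np.mean(np.abs(np.diff(np.sign(samples)))) / 2
    return np.array([(samples ** 2).mean(), zcr])

def classify(feat, centroids):
    """Nearest-centroid assignment to a symbolic class ('kick', 'hihat', ...)."""
    return min(centroids, key=lambda c: np.linalg.norm(feat - centroids[c]))

rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(512), rng.uniform(-1, 1, 256), np.zeros(256)])
onsets = segment(signal)
feats = features(signal[onsets[0] * 256:(onsets[0] + 1) * 256])
centroids = {"kick": np.array([0.5, 0.05]), "hihat": np.array([0.05, 0.6])}
print(onsets, classify(feats, centroids))        # the noisy burst lands near 'hihat'
```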
Communicating user's focus of attention by image processing as input for a mobile museum guide BIBAFull-Text 299-301
  Adriano Albertini; Roberto Brunelli; Oliviero Stock; Massimo Zancanaro
The paper presents a first prototype of a handheld museum guide that delivers contextualized information based on the recognition of drawing details selected by the user through the guide's camera. The resulting interaction modality has been analyzed and compared to previous approaches. Finally, alternative, more scalable solutions are presented that preserve the most interesting features of the system described.

Short papers: knowledge acquisition and knowledge-based design

ComicKit: acquiring story scripts using common sense feedback BIBAFull-Text 302-304
  Ryan Williams; Barbara Barry; Push Singh
At the Media Lab we are developing a resource called StoryNet, a very large database of story scripts that can be used for commonsense reasoning by computers. This paper introduces ComicKit, an interface for acquiring StoryNet scripts from casual Internet users. The core element of the interface is its ability to dynamically make commonsense suggestions that guide the user's story construction. We describe the encouraging results of a preliminary user study, and discuss future directions for ComicKit.
Metafor: visualizing stories as code BIBAFull-Text 305-307
  Hugo Liu; Henry Lieberman
Every program tells a story. Programming, then, is the art of constructing a story about the objects in the program and what they do in various situations. So-called programming languages, while easy for the computer to accurately convert into code, are, unfortunately, difficult for people to write and understand.
   We explore the idea of using descriptions in a natural language as a representation for programs. While we cannot yet convert arbitrary English to fully specified code, we can use a reasonably expressive subset of English as a visualization tool. Simple descriptions of program objects and their behavior generate scaffolding (underspecified) code fragments that can be used as feedback for the designer. Roughly speaking, noun phrases can be interpreted as program objects, verbs as functions, and adjectives as properties. A surprising amount of what we call programmatic semantics can be inferred from linguistic structure. We present a program editor, Metafor, that dynamically converts a user's stories into program code; in a user study, participants found it useful as a brainstorming tool.
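A crude sketch of the scaffolding-generation step is shown below: given a hand-supplied parse of one sentence as (noun, adjectives, verbs), it emits an underspecified class. Metafor itself parses free English; the parsing is simply assumed here.

```python
def scaffold(noun, adjectives=(), verbs=()):
    """Emit an underspecified Python class for one story entity."""
    lines = [f"class {noun.capitalize()}:"]
    lines += [f"    {adj} = True   # property from an adjective" for adj in adjectives]
    for verb in verbs:
        lines += [f"    def {verb}(self):   # behavior from a verb",
                  "        pass            # body left for the designer"]
    if len(lines) == 1:
        lines.append("    pass")
    return "\n".join(lines)

# "There is a hungry customer who orders and pays."
print(scaffold("customer", adjectives=["hungry"], verbs=["order", "pay"]))
```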
Task aware information access for diagnosis of manufacturing problems BIBAFull-Text 308-310
  Larry Birnbaum; Wallace Hopp; Seyed Iravani; Kevin Livingston; Biying Shou; Thomas Tirpak
Pinpoint is a promising first step towards using a rich model of task context in proactive and dynamic IR systems. Pinpoint allows a user to navigate decision tree representations of problem spaces, built by domain experts, while dynamically entering annotations specific to their problem. The system then automatically generates queries to information repositories based on both the user's annotations and location in the problem space, producing results that are both task focused and problem specific. Initial feedback from users and domain experts has been positive.
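The query-generation step can be sketched as combining the labels along the user's current path through the decision tree with the user's free-text annotations; the example path and annotation below are invented.

```python
def make_query(path, annotations):
    """Build a repository query from task context.

    path: node labels from the tree root to the user's current node.
    annotations: the user's free-text notes about the specific problem.
    """
    terms = [t for node in path for t in node.split()]
    terms += annotations.split()
    return " ".join(dict.fromkeys(terms))        # de-duplicate, keep order

print(make_query(["solder defects", "bridging"], "fine-pitch QFP reflow"))
# -> "solder defects bridging fine-pitch QFP reflow"
```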
Designing interfaces for guided collection of knowledge about everyday objects from volunteers BIBAFull-Text 311-313
  Timothy Chklovski
A new generation of intelligent applications can be enabled by broad-coverage knowledge repositories about everyday objects. We distill lessons in the design of intelligent user interfaces that collect such broad-coverage knowledge from untrained volunteers. We motivate the knowledge-driven, template-based approach adopted in Learner2, a second-generation proactive acquisition interface for eliciting such knowledge. We present the volume, accuracy, and recall of knowledge collected by fielding the system for 5 months. Learner2 has so far acquired 99,018 general statements, emphasizing knowledge about parts of and typical uses of objects.
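A minimal sketch of template-based elicitation in this spirit: the system keeps a small set of fill-in templates per object and asks volunteers the ones not yet answered. The templates, objects, and answers below are invented, and Learner2's actual strategy for choosing what to ask next is not modeled.

```python
TEMPLATES = [
    "A typical {obj} has parts such as ...",
    "A {obj} is typically used for ...",
]

knowledge = []   # (object, template, volunteer_answer) triples

def next_prompt(obj):
    """Prefer a template this object has not yet been asked about."""
    answered = {t for o, t, _ in knowledge if o == obj}
    for template in TEMPLATES:
        if template not in answered:
            return template.format(obj=obj)
    return None

def record(obj, template, answer):
    knowledge.append((obj, template, answer))

print(next_prompt("bicycle"))                    # asks about parts first
record("bicycle", TEMPLATES[0], "wheels, a frame, pedals, a chain")
print(next_prompt("bicycle"))                    # then about typical uses
```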
An ontology-based interface for machine learning BIBAFull-Text 314-316
  Mathias Bauer; Stephan Baldes
Machine learning (ML) is a complex process that can hardly be carried out by non-expert users. Yet even naive users may be confronted with learning systems, especially when using adaptive systems that interpret and exploit observations of the user to modify their behavior according to the user's perceived preferences. This paper presents an approach to making non-expert users understand and influence an ML system, so as to improve trust in and acceptance of the overall system behavior.

Short papers: smart environments and ubiquitous computing

A framework for designing intelligent task-oriented augmented reality user interfaces BIBAFull-Text 317-319
  Leonardo Bonanni; Chia-Hsun Lee; Ted Selker
A task-oriented space can benefit from an augmented reality interface that overlays the existing tools and surfaces with useful information to make cooking easier, safer, and more efficient. To serve experienced users as well as novices, augmented reality interfaces need to adapt modalities to the user's expertise and allow for multiple ways to perform tasks. We present a framework for designing an intelligent user interface that informs and choreographs multiple tasks in a single space according to a model of tasks and users. A residential kitchen has been outfitted with systems that gather data from tools and surfaces and project multi-modal interfaces back onto those tools and surfaces. Based on user evaluations of this augmented reality kitchen, we propose a system that tailors information modalities to the spatial and temporal qualities of the task and to the expertise, location, and progress of the user. The intelligent augmented reality user interface choreographs multiple tasks in the same space at the same time.
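As a hypothetical illustration of such tailoring, the rule-based sketch below picks an output modality from a task model and a user model; the attribute names and rules are assumptions, not the framework's actual model.

```python
def choose_modality(task, user):
    """task and user are simple dicts describing the current situation."""
    if task.get("safety_critical"):
        return "projected warning + audio alert"      # hard to miss
    if task.get("hands_busy"):
        return "audio prompt"                         # no need to look away
    if user.get("expertise") == "novice":
        return "step-by-step projected instructions"
    return "brief projected status icon"              # experts get less detail

print(choose_modality({"safety_critical": False, "hands_busy": True},
                      {"expertise": "expert"}))       # -> "audio prompt"
```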
Seamless user notification in ambient soundscapes BIBAFull-Text 320-322
  Andreas Butz; Ralf Jung
We describe a method for notifying users through auditory cues embedded in an ambient soundscape in the environment. It uses pieces of music composed in such a way that particular instruments or motifs can be added or omitted without losing the aesthetic quality of the overall composition. This allows for very subtle modifications of the soundscape that are noticed only by those users who have previously chosen that particular instrument or motif as "their" notification instrument. As a side effect, the soundscape itself can be used to subtly influence the mood of users. The method has been implemented in a prototype, which we briefly discuss. The prototype is built on a spatial audio framework and can hence notify users from particular directions.
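A minimal sketch of the layering idea, with audio playback and spatialization left out: each user registers a private motif, and notifying that user simply adds the motif to the set of currently playing layers. The layer and motif names are invented.

```python
class Soundscape:
    def __init__(self, base_layers):
        self.playing = set(base_layers)
        self.motif_of = {}                 # user -> their chosen motif layer

    def register(self, user, motif):
        self.motif_of[user] = motif

    def notify(self, user):
        """Add the user's motif to the mix; unnoticeable to everyone else."""
        self.playing.add(self.motif_of[user])

    def clear(self, user):
        self.playing.discard(self.motif_of[user])

s = Soundscape({"pad", "bass"})
s.register("alice", "marimba motif")
s.notify("alice")
print(sorted(s.playing))                   # ['bass', 'marimba motif', 'pad']
```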
Building intelligent shopping assistants using individual consumer models BIBAFull-Text 323-325
  Chad Cumby; Andrew Fano; Rayid Ghani; Marko Krema
This paper describes an Intelligent Shopping Assistant designed for a shopping-cart-mounted tablet PC that enables individual interactions with customers. We use machine learning algorithms to predict a shopping list for the customer's current trip and present this list on the device. As customers navigate through the store, personalized promotions are presented using consumer models derived from each individual's loyalty card data. For shopping assistant devices to be effective, we believe that they have to be powered by algorithms that are tuned for individual customers and can make accurate predictions about an individual's actions. We formally frame shopping list prediction as a classification problem, describe the algorithms and methodology behind our system, and show that shopping list prediction can be done with high levels of accuracy, precision, and recall. Beyond the prediction of shopping lists, we briefly introduce other aspects of the shopping assistant project, such as the use of consumer models to select appropriate promotional tactics, and the development of promotion-planning simulation tools that enable retailers to plan personalized promotions delivered through such a shopping assistant.
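A minimal sketch of the classification framing, with a simple per-item frequency rule standing in for the paper's learned models: an item is predicted for the next trip if it appeared in at least a given fraction of the customer's past trips. The threshold and the example history are invented.

```python
def predict_list(trips, threshold=0.5):
    """trips: list of sets of items from one customer's past visits."""
    counts = {}
    for trip in trips:
        for item in trip:
            counts[item] = counts.get(item, 0) + 1
    return {item for item, c in counts.items() if c / len(trips) >= threshold}

history = [{"milk", "bread", "eggs"}, {"milk", "bread"}, {"milk", "coffee"}]
print(sorted(predict_list(history)))       # ['bread', 'milk']
```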
Adaptive navigation support with public displays BIBAFull-Text 326-328
  Christian Kray; Gerd Kortuem; Antonio Kruger
In this paper, we describe a public navigation system that uses adaptive displays as directional signs. The displays are mounted on walls, where they provide passersby with directional information. Each sign is an autonomous, wirelessly networked digital display connected to a central server. The signs are position-aware and able to adapt their displayed content according to their current position. Advantages of such a navigation system include improved flexibility, dynamic adaptation, and ease of setup and maintenance.
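As a hypothetical sketch of position-aware adaptation, the snippet below has a sign compute which coarse arrow to show for a destination, given the sign's own position and facing direction in a shared floor-plan coordinate frame; the coordinates and thresholds are invented.

```python
import math

def arrow(sign_pos, sign_heading_deg, destination):
    """Return a coarse arrow ('ahead', 'left', 'right', 'back') for this sign."""
    dx, dy = destination[0] - sign_pos[0], destination[1] - sign_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    rel = (bearing - sign_heading_deg + 180) % 360 - 180   # relative angle, -180..180
    if abs(rel) <= 45:
        return "ahead"
    if abs(rel) >= 135:
        return "back"
    return "left" if rel > 0 else "right"

# A sign at (0, 0) facing along +x, guiding to a room at (3, 4):
print(arrow((0, 0), 0, (3, 4)))            # bearing ~53 degrees -> "left"
```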