
Proceedings of the 2015 International Conference on Intelligent User Interfaces

Fullname: Proceedings of the 20th International Conference on Intelligent User Interfaces
Editors: Oliver Brdiczka; Polo Chau; Giuseppe Carenini; Shimei Pan; Per Ola Kristensson
Location: Atlanta, Georgia
Dates: 2015-Mar-29 to 2015-Apr-01
Volume: 1
Publisher: ACM
Standard No: ISBN: 978-1-4503-3306-1; ACM DL: Table of Contents; hcibib: IUI15-1
Papers: 57
Pages: 457
Links: Conference Website
  1. IUI 2015-03-29 Volume 1
    1. Keynotes
    2. Education / Crowdsourcing / Social
    3. Multimodal / Touch / Gesture
    4. Interactive Machine Learning / Decision Making / Topic Modeling / Robotics
    5. Recommenders / Web
    6. Personalization / Adaptation / Recommendation / Sentiment
    7. Visualization / Video / Augmented Reality
    8. Affect / Health
    9. Tutorials
    10. Workshops

IUI 2015-03-29 Volume 1

Keynotes

Intelligent Control of Crowdsourcing BIBAFull-Text 1
  Daniel S. Weld
Crowd-sourcing labor markets (e.g., Amazon Mechanical Turk) are booming, because they enable rapid construction of complex workflows that seamlessly mix human computation with computer automation. Example applications range from photo tagging to audio-visual transcription and interlingual translation. Similarly, workflows on citizen science sites (e.g. GalaxyZoo) have allowed ordinary people to pool their effort and make interesting discoveries. Unfortunately, constructing a good workflow is difficult, because the quality of the work performed by humans is highly variable. Typically, a task designer will experiment with several alternative workflows to accomplish a task, varying the amount of redundant labor, until she devises a control strategy that delivers acceptable performance. Fortunately, this control challenge can often be formulated as an automated planning problem ripe for algorithms from the probabilistic planning and reinforcement learning literature. I describe our recent work on the decision-theoretic control of crowd sourcing and suggest open problems for future research.
Blurring of the Boundary Between Interactive Search and Recommendation BIBAFull-Text 2
  Ed H. Chi
Search and recommendation engines are increasingly intelligent. They have become more personalized and social as well as more interactive. No longer just offering ten blue links, search engines have increasingly been integrated with task and item recommenders directly, for example, to offer news, movie, music, and dining suggestions. Vice versa, recommendation systems have increasingly become more search-like by offering capabilities that enable users to tune and direct recommendation results instantly. As the two technologies evolve toward each other, there is increasingly a blurring of the boundary between these two approaches to interactive information seeking. On the search side, this is driven by the merging of question answering capabilities with search, led by systems like Google Now and Apple Siri that move search toward intelligent personal assistants. On the recommendation side, there has been a merging of techniques from not just keyword search but also faceted search, along with user-based and item-based collaborative filtering techniques and other more proactive recommenders.
   This blurring has prompted critical re-thinking not just about how to architect the systems by merging and sharing backend components common to both types of systems, but also about how to structure the user interactions and experiences.
Recognizing Stress, Engagement, and Positive Emotion BIBAFull-Text 3-4
  Rosalind W. Picard
An intelligent interaction should not typically call attention to emotion. However, it almost always involves emotion: For example, it should engage, not inflict undesirable stress and frustration, and perhaps elicit positive emotions such as joy or delight. How would the system sense or recognize if it was succeeding in these elements of intelligent interaction? This keynote talk will address some ways that our work at the MIT Media Lab has advanced solutions for recognizing user emotion during everyday experiences.

Education / Crowdsourcing / Social

Improving Inquiry-Driven Modeling in Science Education through Interaction with Intelligent Tutoring Agents BIBAFull-Text 5-16
  David A. Joyner; Ashok K. Goel
This paper presents the design and evaluation of a set of intelligent tutoring agents constructed to teach teams of students an authentic process of inquiry-driven modeling. The paper first presents the theoretical grounding for inquiry-driven modeling as both a teaching strategy and a learning goal, and then presents the need for guided instruction to improve learning of this skill. However, guided instruction is difficult to provide in a one-to-many classroom environment, and thus, this paper makes the case that interaction with a metacognitive tutoring system can help students acquire the skill. The paper then describes the design of an exploratory learning environment, the Modeling and Inquiry Learning Application (MILA), and an accompanying set of metacognitive tutors (MILA-T). These tools were used in a controlled experiment with 84 teams (237 total students) in which some teams received and interacted with the tutoring system while other teams did not. The effects of this experiment on teams' demonstration of inquiry-driven modeling are presented.
Automated Social Skills Trainer BIBAFull-Text 17-27
  Hiroki Tanaka; Sakriani Sakti; Graham Neubig; Tomoki Toda; Hideki Negoro; Hidemi Iwasaka; Satoshi Nakamura
Social skills training is a well-established method to decrease human anxiety and discomfort in social interaction and to help people acquire social skills. In this paper, we attempt to automate the process of social skills training by developing a dialogue system named "automated social skills trainer," which provides social skills training through human-computer interaction. The system includes a virtual avatar that recognizes user speech and language information and gives feedback to users to improve their social skills. Its design is based on conventional social skills training performed by human participants, including defining target skills, modeling, role-play, feedback, reinforcement, and homework. An experimental evaluation measuring the relationship between social skill and speech and language features shows that these features are related to autistic traits. Additional experiments measuring the effect of performing social skills training with the proposed application show that most participants improve their skills by using the system for 50 minutes.
Evaluating Subjective Accuracy in Time Series Pattern-Matching Using Human-Annotated Rankings BIBAFull-Text 28-37
  Philipp Eichmann; Emanuel Zgraggen
Finding patterns is a common task in time series analysis which has gained a lot of attention across many fields. A multitude of similarity measures have been introduced to perform pattern searches. The accuracy of such measures is often evaluated objectively using a one nearest neighbor classification (1NN) on labeled time series or through clustering. Prior work often disregards the subjective similarity of time series which can be pivotal in systems where a user-specified pattern is used as input and a similarity-based ranking is expected as output (query-by-example). In this paper, we describe how a human-annotated ranking based on real-world queries and datasets can be created using simple crowdsourcing tasks and use this ranking as ground-truth to evaluate the perceived accuracy of existing time series similarity measures. Furthermore, we show how different sampling strategies and time series representations of pen-drawn queries affect the precision of these similarity measures and provide a publicly available dataset which can be used to optimize existing and future similarity search algorithms.
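As an illustration of the query-by-example setting described above, the following minimal sketch (not the authors' code; plain Euclidean distance stands in for whichever similarity measure is being evaluated) ranks candidate time series against a pen-drawn query, producing the kind of system ranking that would be compared to a human-annotated ground-truth ranking:

```python
# Minimal sketch (not the authors' code): ranking candidate time series
# against a pen-drawn query with a simple Euclidean similarity measure.
import numpy as np

def euclidean_distance(query, candidate):
    # Assumes both series are resampled to the same length beforehand.
    return np.linalg.norm(np.asarray(query) - np.asarray(candidate))

def rank_by_similarity(query, candidates):
    # Returns candidate indices ordered from most to least similar,
    # i.e. the system-produced ranking that would be compared against
    # a human-annotated ground-truth ranking.
    distances = [euclidean_distance(query, c) for c in candidates]
    return np.argsort(distances)

# Toy usage: three candidate series, one hand-drawn query.
query = [0.0, 0.5, 1.0, 0.5, 0.0]
candidates = [[0.1, 0.6, 0.9, 0.4, 0.0],    # close match
              [1.0, 1.0, 1.0, 1.0, 1.0],    # flat line
              [0.0, -0.5, -1.0, -0.5, 0.0]] # inverted shape
print(rank_by_similarity(query, candidates))  # [0 1 2]
```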
Cohort Comparison of Event Sequences with Balanced Integration of Visual Analytics and Statistics BIBAFull-Text 38-49
  Sana Malik; Fan Du; Megan Monroe; Eberechukwu Onukwugha; Catherine Plaisant; Ben Shneiderman
Finding the differences and similarities between two datasets is a common analytics task. With temporal event sequence data, this task is complex because of the many ways single events and event sequences can differ between the two datasets (or cohorts) of records: the structure of the event sequences (e.g., event order, co-occurring events, or event frequencies), the attributes of events and records (e.g., patient gender), or metrics about the timestamps themselves (e.g., event duration). In exploratory analyses, running statistical tests to cover all cases is time-consuming and determining which results are significant becomes cumbersome. Current analytics tools for comparing groups of event sequences emphasize a purely statistical or purely visual approach for comparison. This paper presents a taxonomy of metrics for comparing cohorts of temporal event sequences, showing that the problem-space is bounded. We also present a visual analytics tool, CoCo (for "Cohort Comparison"), which implements balanced integration of automated statistics with an intelligent user interface to guide users to significant, distinguishing features between the cohorts. Lastly, we describe two early case studies: the first with a research team studying medical team performance in the emergency department and the second with pharmacy researchers.
Real-Time Community Question Answering: Exploring Content Recommendation and User Notification Strategies BIBAFull-Text 50-61
  Qiaoling Liu; Tomasz Jurczyk; Jinho Choi; Eugene Agichtein
Community-based Question Answering (CQA) services allow users to find and share information by interacting with others. A key to the success of CQA services is the quality and timeliness of the responses that users get. With the increasing use of mobile devices, searchers increasingly expect to find more local and time-sensitive information, such as the current special at a cafe around the corner. Yet, few services provide such hyper-local and time-aware question answering. This requires intelligent content recommendation and careful use of notifications (e.g., recommending questions to only selected users). To explore these issues, we developed RealQA, a real-time CQA system with a mobile interface, and performed two user studies: a formative pilot study with the initial system design, and a more extensive study with the revised UI and algorithms. The research design combined qualitative survey analysis and quantitative behavior analysis under different conditions. We report our findings of the prevalent information needs and types of responses users provided, and of the effectiveness of the recommendation and notification strategies on user experience and satisfaction. Our system and findings offer insights and implications for designing real-time CQA systems, and provide a valuable platform for future research.
Seeing the Big Picture from Microblogs: Harnessing Social Signals for Visual Event Summarization BIBAFull-Text 62-66
  Jiejun Xu; Tsai-Ching Lu
We propose an approach to automatically select a set of representative images to generate a concise visual summary of a real-world event from the Tumblr microblogging platform. Central to our approach is a unified graph model with heterogeneous nodes and edges to capture the interrelationship among various entities (e.g., users, posts, images, and tags) in online social media. With the graph representation, we then cast the summarization problem as a graph-based ranking problem by identifying the most representative images with respect to an event. The intuition behind our work is that not only can we crowdsource social media users as sensors to capture and share data, but we can also use them as filters to identify the most useful information through analyzing their interactions in the microblogging network. In addition, we propose a greedy algorithm to encourage diversity among top ranked results for the generation of temporal highlights of targeted events. Our approach is flexible enough to support different query tasks and is adaptable to additional graph entities and relationships.
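The diversity-encouraging greedy step can be illustrated with a small sketch. The selection criterion below is a generic MMR-style trade-off between ranking score and redundancy, shown only as an analogy to the kind of greedy re-ranking described above; it is not the authors' algorithm:

```python
# Illustrative sketch (not the authors' implementation): greedily selecting a
# diverse subset of top-ranked images, trading off ranking score against
# similarity to already-selected images (an MMR-style criterion).
import numpy as np

def greedy_diverse_selection(scores, similarity, k, lam=0.7):
    """scores: ranking score per image; similarity: pairwise matrix in [0, 1]."""
    selected = []
    remaining = list(range(len(scores)))
    while remaining and len(selected) < k:
        def gain(i):
            redundancy = max((similarity[i][j] for j in selected), default=0.0)
            return lam * scores[i] - (1 - lam) * redundancy
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: image 1 is nearly a duplicate of image 0, so image 2 is picked second.
scores = np.array([0.9, 0.85, 0.6])
similarity = np.array([[1.0, 0.95, 0.1],
                       [0.95, 1.0, 0.1],
                       [0.1, 0.1, 1.0]])
print(greedy_diverse_selection(scores, similarity, k=2))  # [0, 2]
```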

Multimodal / Touch / Gesture

BeyondTouch: Extending the Input Language with Built-in Sensors on Commodity Smartphones BIBAFull-Text 67-77
  Cheng Zhang; Anhong Guo; Dingtian Zhang; Caleb Southern; Rosa Arriaga; Gregory Abowd
While most smartphones today have a rich set of sensors that could be used to infer input (e.g., accelerometer, gyroscope, microphone), the primary mode of interaction is still limited to the front-facing touchscreen and several physical buttons on the case. To investigate the potential opportunities for interactions supported by built-in sensors, we present the implementation and evaluation of BeyondTouch, a family of interactions to extend and enrich the input experience of a smartphone. Using only existing sensing capabilities on a commodity smartphone, we offer the user a wide variety of additional tapping and sliding inputs on the case of and the surface adjacent to the smartphone. We outline the implementation of these interaction techniques and demonstrate empirical evidence of their effectiveness and usability. We also discuss the practicality of BeyondTouch for a variety of application scenarios.
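As a toy illustration of inferring input from a built-in sensor (this is not the BeyondTouch implementation, which combines several sensors and more robust processing), a naive tap detector can look for short spikes in accelerometer magnitude relative to the resting baseline:

```python
# Illustrative sketch (not the BeyondTouch implementation): a naive tap detector
# that flags short spikes in accelerometer magnitude above the resting baseline.
# Real systems would add debouncing, multiple sensors, and machine learning.
import math

def detect_taps(samples, threshold=2.0, refractory=5):
    """samples: (x, y, z) accelerometer readings in m/s^2; returns sample indices."""
    taps, last_tap = [], -refractory
    for i, (x, y, z) in enumerate(samples):
        magnitude = math.sqrt(x * x + y * y + z * z)
        deviation = abs(magnitude - 9.81)  # deviation from gravity at rest
        if deviation > threshold and i - last_tap >= refractory:
            taps.append(i)
            last_tap = i
    return taps

# Toy usage: a spike at sample 3 and another at sample 9.
readings = [(0, 0, 9.8)] * 3 + [(0, 0, 14.0)] + [(0, 0, 9.8)] * 5 + [(0, 0, 3.0)]
print(detect_taps(readings))  # [3, 9]
```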
Hairware: The Conscious Use of Unconscious Auto-contact Behaviors BIBAFull-Text 78-86
  Katia Vega; Marcio Cunha; Hugo Fuks
Beauty Technology is a wearable computing paradigm that uses the body's surface as an interactive platform by integrating technology into beauty products applied directly to one's skin, fingernails and hair. Hairware is a Beauty Technology Prototype that connects chemically metalized hair extensions to a microcontroller, turning them into an input device for triggering different objects. Hairware acts as a capacitive touch sensor that detects touch variations on hair and uses machine learning algorithms to recognize the user's intention. Normally, when someone touches her own hair, she is unconsciously bringing comfort to herself and at the same time emitting a non-verbal message decodable by an observer. However, when she replays that touch on Hairware, she is not just emitting a message to an observer, because touching her hair triggers an object, creating in this way a concealed interface to different devices. Therefore, Hairware brings the opportunity to make conscious use of an unconscious auto-contact behavior. We present Hairware's hardware and software implementation.
Math Boxes: A Pen-Based User Interface for Writing Difficult Mathematical Expressions BIBAFull-Text 87-96
  Eugene M. Taranta; Joseph J. LaViola, Jr.
We present math boxes, a novel pen-based user interface for simplifying the task of handwriting difficult mathematical expressions. Visible bounding boxes around certain subexpressions are automatically generated as the system detects specific relationships including superscripts, subscripts, and fractions. Subexpressions contained in a box can then be extended by adding new terms directly into its given bounds. Upon accepting new characters, box boundaries are dynamically resized and neighboring terms are translated to make room for the larger box. Feedback on structural recognition is given via the boxes themselves. We also provide feedback on character recognition by morphing the user's individual characters into a cleaner version stored in our ink database.
   To evaluate the usefulness of our proposed method, we conducted a user study in which participants write a variety of equations ranging in complexity from a simple polynomial to the more difficult expected value of the logistic distribution. The math boxes interface is compared against the commonly used offset typeset (small) method, where recognized expressions are typeset in a system font near the user's unmodified ink. In our initial study, we find that the fluidness of the offset method is preferred for simple expressions but as difficulty increases, our math boxes method is overwhelmingly preferred.
Predicting Task Execution Time on Natural User Interfaces based on Touchless Hand Gestures BIBAFull-Text 97-109
  Orlando Erazo; José A. Pino
Model-based evaluation has been widely used in HCI. However, current predictive models are insufficient to evaluate Natural User Interfaces based on touchless hand gestures. The purpose of this paper is to present a model based on KLM to predict performance time for doing tasks using this interface type. The required model operators were defined considering the temporal structure of hand gestures (i.e. using gesture units) and performing a systematic bibliographic review. The times for these operators were estimated by a user study consisting of various parts. Finally, the model's empirical evaluation gave acceptable results (root-mean-square error = 10%, R² = 0.936) when compared to similar models developed for other interaction styles. Thus, the proposed model should be helpful to software designers to carry out usability assessments by predicting performance time without user participation.
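The KLM-style prediction itself is simple to illustrate: task time is the sum of the unit times of the operators in the task's operator sequence. The operator names and times below are hypothetical placeholders, not the values estimated in the paper:

```python
# Minimal sketch of KLM-style prediction (operator names and times below are
# hypothetical placeholders, not the values estimated in the paper): task time
# is predicted by summing the unit times of the operators in a task's sequence.
HYPOTHETICAL_OPERATOR_TIMES = {
    "preparation": 0.5,   # seconds; bring the hand into the sensing volume
    "stroke":      0.8,   # perform the gesture stroke
    "retraction":  0.4,   # return the hand to rest
    "mental":      1.2,   # mentally prepare the next action
}

def predict_task_time(operator_sequence, operator_times=HYPOTHETICAL_OPERATOR_TIMES):
    return sum(operator_times[op] for op in operator_sequence)

# Toy usage: a task modeled as mental preparation followed by two gestures.
task = ["mental", "preparation", "stroke", "retraction",
        "preparation", "stroke", "retraction"]
print(predict_task_time(task))  # 1.2 + 2 * (0.5 + 0.8 + 0.4) = 4.6 seconds
```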
TouchML: A Machine Learning Toolkit for Modelling Spatial Touch Targeting Behaviour BIBAFull-Text 110-114
  Daniel Buschek; Florian Alt
Pointing tasks are commonly studied in HCI research, for example to evaluate and compare different interaction techniques or devices. A recent line of work has modelled user-specific touch behaviour with machine learning methods to reveal spatial targeting error patterns across the screen. These models can also be applied to improve accuracy of touchscreens and keyboards, and to recognise users and hand postures. However, no implementation of these techniques has been made publicly available yet, hindering broader use in research and practical deployments. Therefore, this paper presents a toolkit which implements such touch models for data analysis (Python), mobile applications (Java/Android), and the web (JavaScript). We demonstrate several applications, including hand posture recognition, on touch targeting data collected in a study with 24 participants. We consider different target types and hand postures, changing behaviour over time, and the influence of hand sizes.
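The core idea of such touch-offset models can be sketched as follows (this is a generic illustration, not the TouchML API): fit a user-specific regression from touch locations to targeting offsets on calibration data, then subtract the predicted offset from new touches:

```python
# Illustrative sketch of the underlying idea (this is not the TouchML API):
# learn a user-specific mapping from touch locations to targeting offsets,
# then use it to correct new touches.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical calibration data: recorded touch positions and the targets aimed at.
touches = np.array([[0.10, 0.20], [0.52, 0.48], [0.88, 0.79], [0.33, 0.65]])
targets = np.array([[0.12, 0.18], [0.55, 0.45], [0.90, 0.75], [0.35, 0.62]])

# Model the offset (target - touch) as a function of where the touch landed.
offset_model = LinearRegression().fit(touches, targets - touches)

def correct_touch(touch_xy):
    touch_xy = np.asarray(touch_xy).reshape(1, -1)
    return (touch_xy + offset_model.predict(touch_xy)).ravel()

print(correct_touch([0.50, 0.50]))  # corrected coordinates for a new touch
```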
From One to Many Users and Contexts: A Classifier for Hand and Arm Gestures BIBAFull-Text 115-120
  David Costa; Carlos Duarte
On-body interaction techniques are gaining traction and opening up new avenues to control interactive systems. At the same time, they reveal potential to increase the accessibility of systems like touch-based smartphones and other mobile devices for visually impaired users. However, for this potential to be realised, it is paramount that these techniques can be used in a multitude of contextual settings, and, ideally, do not impose training and calibration procedures. Our approach optimizes signal filtering, feature extraction parameters and classifier configurations for each defined gesture. The results show a 98.35% accuracy for the optimized classifier. We proceeded to conduct a validation study (15 participants) in three contexts: seated, standing and walking. Our findings show that, despite the classifier being trained by someone not participating in the study, the average accuracy was 94.67%. We also concluded that, while walking, false positives can impact its usefulness.
Spatio-Temporal Detection of Divided Attention in Reading Applications Using EEG and Eye Tracking BIBAFull-Text 121-125
  Mathieu Rodrigue; Jungah Son; Barry Giesbrecht; Matthew Turk; Tobias Höllerer
Reading is central to learning and communicating; however, divided attention in the form of distraction may be present in learning environments, resulting in a limited understanding of the reading material. This paper presents a novel system that can spatio-temporally detect divided attention in users during two different reading applications: typical document reading and speed reading. Eye tracking and electroencephalography (EEG) monitor the user during reading and provide a classifier with data to decide the user's attention state. The multimodal data informs the system where the user was distracted spatially in the user interface and when the user was distracted. Classification was evaluated with two exploratory experiments. The first experiment was designed to divide the user's attention with a multitasking scenario. The second experiment was designed to divide the user's attention by simulating a real-world scenario where the reader is interrupted by unpredictable audio distractions. Results from both experiments show that divided attention may be detected spatio-temporally well above chance on a single-trial basis.

Interactive Machine Learning / Decision Making / Topic Modeling / Robotics

Principles of Explanatory Debugging to Personalize Interactive Machine Learning BIBAFull-Text 126-137
  Todd Kulesza; Margaret Burnett; Weng-Keen Wong; Simone Stumpf
How can end users efficiently influence the predictions that machine learning systems make on their behalf? This paper presents Explanatory Debugging, an approach in which the system explains to users how it made each of its predictions, and the user then explains any necessary corrections back to the learning system. We present the principles underlying this approach and a prototype instantiating it. An empirical evaluation shows that Explanatory Debugging increased participants' understanding of the learning system by 52% and allowed participants to correct its mistakes up to twice as efficiently as participants using a traditional learning system.
Binary Space Partitioning Layouts To Help Build Better Information Dashboards BIBAFull-Text 138-147
  Patrick Hertzog
Information dashboards are a widely used means to present data at a glance, and their quality (i.e., their ability to communicate the data in a clear and efficient way) depends not only on the way the data are represented but also on how visual elements are placed on the screen -- the dashboard layout. In many cases the dashboards are predefined by the application, but we are now in an era where the user wants more freedom. Unfortunately, this freedom is currently limited by applications using layout managers which either do not offer the needed flexibility or are too complex to be used by non-experts.
   In this paper, we propose a novel layout manager more suitable for dashboards; it provides layouts based on a binary space partitioning (BSP) and we show how the user can interactively build and modify them. Then we present how we can compute an optimal solution for such layouts, i.e., how to actually compute the position and size of all visual elements in order for them to be as close as possible to their preferred size while respecting the arrangement decided by the user. This is achieved by automatically generating a set of constraints which are then solved to find the optimal solution. Finally we describe the actual implementation of a lightweight constraint solver in JavaScript -- named QPSolver -- that can be embedded in a web application and we compare its performance with Cassowary [4], a well-known solver for constraint-based GUI layouts notably used by Apple in OS X and iOS.
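For intuition about the optimization step, consider a single split of a BSP layout: the child sizes should sum to the container size while staying as close as possible to their preferred sizes. Under that assumed least-squares formulation (a simplification for illustration, not QPSolver itself), the solution has a closed form in which leftover space is shared equally:

```python
# Minimal sketch (assumed formulation, not QPSolver itself): for one horizontal
# split of a BSP layout, choose child widths that sum to the container width
# while staying as close as possible to each child's preferred width.
# Minimizing sum_i (w_i - p_i)^2 subject to sum_i w_i = W has the closed form
# w_i = p_i + (W - sum_j p_j) / n, i.e. the leftover space is shared equally.

def solve_split(preferred, container_size):
    slack = (container_size - sum(preferred)) / len(preferred)
    return [p + slack for p in preferred]

# Toy usage: two charts prefer 400px and 500px inside a 1000px-wide container.
print(solve_split([400, 500], 1000))  # [450.0, 550.0]
```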
Counteracting Serial Position Effects in the CHOICLA Group Decision Support Environment BIBAFull-Text 148-157
  Martin Stettinger; Alexander Felfernig; Gerhard Leitner; Stefan Reiterer; Michael Jeran
Decisions are often suboptimal due to the fact that humans apply simple heuristics which cause different types of decision biases. CHOICLA is an environment that supports decision making for groups of users. It supports the determination of recommendations for groups and also includes mechanisms to counteract decision biases. In this paper we give an overview of the CHOICLA environment and report the results of a user study which analyzed two voting strategies with regard to their potential of counteracting serial position (primacy/recency) effects when evaluating decision alternatives.
User-directed Non-Disruptive Topic Model Update for Effective Exploration of Dynamic Content BIBAFull-Text 158-168
  Yi Yang; Shimei Pan; Yangqiu Song; Jie Lu; Mercan Topkara
Statistical topic models have become a useful and ubiquitous text analysis tool for large corpora. One common application of statistical topic models is to support topic-centric navigation and exploration of document collections at the user interface by automatically grouping documents into coherent topics. For today's constantly expanding document collections, topic models need to be updated when new documents become available. Existing work on topic model update focuses on how to best fit the model to the data, and ignores an important aspect that is closely related to the end user experience: topic model stability. When the model is updated with new documents, the topics previously assigned to old documents may change, which may result in a disruption of end users' mental maps between documents and topics, thus undermining the usability of the applications. In this paper, we describe a user-directed non-disruptive topic model update system, nTMU, that balances the tradeoff between finding the model that fits the data and maintaining the stability of the model from end users' perspective. It employs a novel constrained LDA algorithm (cLDA) to incorporate pair-wise document constraints, which are converted from user feedback about topics, to achieve topic model stability. Evaluation results demonstrate advantages of our approach over previous methods.
ConVisIT: Interactive Topic Modeling for Exploring Asynchronous Online Conversations BIBAFull-Text 169-180
  Enamul Hoque; Giuseppe Carenini
In the last decade, there has been an exponential growth of asynchronous online conversations thanks to the rise of social media. Analyzing and gaining insights from such conversations can be quite challenging for a user, especially when the discussion becomes very long. A promising solution to this problem is topic modeling, since it may help the user to quickly understand what was discussed in the long conversation and explore the comments of interest. However, the results of topic modeling can be noisy and may not match the user's current information needs. To address this problem, we propose a novel topic modeling system for asynchronous conversations that revises the model on the fly based on the user's feedback. We then integrate this system with interactive visualization techniques to support the user in exploring long conversations, as well as revising the topic model when the current results are not adequate to fulfill her information needs. An evaluation with real users illustrates the potential benefits of our approach for exploring conversations, when compared to both a traditional interface as well as an interactive visual interface that does not support human-in-the-loop topic modeling.
Applying the CASSM Framework to Improving End User Debugging of Interactive Machine Learning BIBAFull-Text 181-185
  Marco Gillies; Andrea Kleinsmith; Harry Brenton
This paper presents an application of the CASSM (Concept-based Analysis of Surface and Structural Misfits) framework to interactive machine learning for a bodily interaction domain. We developed software to enable end users to design full body interaction games involving interaction with a virtual character. The software used a machine learning algorithm to classify postures based on examples provided by users. A longitudinal study showed that training the algorithm was straightforward, but that debugging errors was very challenging. A CASSM analysis showed that there were fundamental mismatches between the users' concepts and the workings of the learning system. This resulted in a new design which aimed to better align both the learning algorithm and the user interface with users' concepts. This work provides an example of how HCI methods can be applied to machine learning in order to improve its usability and provide new insights into its use.
Path Bending: Interactive Human-Robot Interfaces With Collision-Free Correction of User-Drawn Paths BIBAFull-Text 186-190
  Jared Alan Frank; Vikram Kapila
Enabling natural and intuitive communication with robots calls for the design of intelligent user interfaces. As robots are introduced into applications with novice users, the information obtained from such users may not always be reliable. This paper describes a user interface approach to process and correct intended paths for robot navigation as sketched by users on a touchscreen. Our approach demonstrates that by processing video frames from an overhead camera and by using composite Bézier curves to interpolate smooth paths from a small set of significant points, low-resolution occupancy grid maps (OGMs) with numeric potential fields can be continuously updated to correct unsafe user-drawn paths at interactive speeds. The approach generates sufficiently complex paths that appear to bend around static and dynamic obstacles. The results of an evaluation study show that our approach captures the user intent while relieving the user from being concerned about her path-drawing abilities.
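To make the interpolation step concrete, the sketch below evaluates one cubic Bézier segment of a composite curve; a smooth corrected path can be built by chaining such segments through a small set of significant points. This is a generic illustration, not the paper's path-correction pipeline:

```python
# Minimal sketch (not the paper's path-correction pipeline): evaluating one cubic
# Bezier segment of a composite curve. A smooth path is interpolated by chaining
# such segments through a small set of significant points.
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier segment at parameter t in [0, 1]."""
    s = 1.0 - t
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Toy usage: sample a segment between two waypoints with two control points.
segment = [(0.0, 0.0), (0.3, 1.0), (0.7, 1.0), (1.0, 0.0)]
samples = [cubic_bezier(*segment, t=i / 10) for i in range(11)]
print(samples[0], samples[5], samples[10])  # (0.0, 0.0) (0.5, 0.75) (1.0, 0.0)
```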

Recommenders / Web

Unsupervised Modeling of Users' Interests from their Facebook Profiles and Activities BIBAFull-Text 191-201
  Preeti Bhargava; Oliver Brdiczka; Michael Roberts
User interest profiles have become essential for personalizing information streams and services, and user interfaces and experiences. In today's world, social networks such as Facebook or Twitter provide users with a powerful platform for interest expression and can, thus, act as a rich content source for automated user interest modeling. This, however, poses significant challenges because the user-generated content on them consists of free unstructured text. In addition, users may not explicitly post or tweet about everything that interests them. Moreover, their interests evolve over time. In this paper, we propose a novel unsupervised algorithm and system that addresses these challenges. It models a broad range of an individual user's explicit and implicit interests from her social network profile and activities without any user input. We perform an extensive evaluation of our system and algorithm with a dataset consisting of 488 active Facebook users' profiles and demonstrate that it can accurately estimate a user's interests in practice.
Personalized Search: Reconsidering the Value of Open User Models BIBAFull-Text 202-212
  Jae-wook Ahn; Peter Brusilovsky; Shuguang Han
Open user modeling has been perceived as an important mechanism to enhance the effectiveness of personalization. However, several studies have reported that open and editable user models can harm the performance of personalized search systems. This paper re-examines the value of open and editable user models in the context of personalized search. We implemented a personalized search system with 2D user manipulatable visualization and concept-based user model components. A user study result suggests that the proposed visualization-based open user modeling approach can be beneficial for adaptive search.
Learning Higher-Order Interactions for User and Item Profiling Based on Tensor Factorization BIBAFull-Text 213-224
  Xiaoyu Tang; Yue Xu; Shlomo Geva
User profiling techniques play a central role in many Recommender Systems (RS). In recent years, multidimensional data are getting increasing attention for making recommendations. Additional metadata help algorithms better understand users' behaviors and decisions. Existing user/item profiling techniques for Collaborative Filtering (CF) RS in multidimensional environments mostly analyze data by splitting the multidimensional relations. However, this leads to the loss of multidimensionality in user-item interactions, whereas the interactions are naturally multidimensional since users' choices are often affected by contextual information. In this paper, we propose a unified profiling approach which models users/items with latent higher-order interaction factors. We demonstrate that the proposed profiling approach is intimately related to two-dimensional profiling based on Matrix Factorization techniques. We further propose to integrate the profiling approach into three neighborhood-based CF recommenders for item recommendation. Finally, we empirically show on real-world social tagging datasets that the proposed recommenders outperform state-of-the-art CF recommendation approaches in accuracy.
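A generic sketch of the kind of higher-order factorization involved (a plain CP-style model trained with stochastic gradient descent, not the paper's exact formulation) looks like this: each user, item, and tag gets a latent vector, and an interaction is predicted by their three-way inner product:

```python
# Illustrative sketch (a generic CP-style tensor factorization, not the paper's
# exact model): learn latent factors for users, items, and tags so that an
# interaction is predicted by the three-way inner product of the factor vectors.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_tags, k = 5, 6, 4, 3
U = rng.uniform(0.2, 0.5, size=(n_users, k))  # user factors
V = rng.uniform(0.2, 0.5, size=(n_items, k))  # item factors
T = rng.uniform(0.2, 0.5, size=(n_tags, k))   # tag (context) factors

# Observed (user, item, tag, value) interactions, e.g. 1.0 = "user tagged item".
observations = [(0, 1, 2, 1.0), (0, 3, 2, 1.0), (2, 1, 0, 1.0), (4, 5, 3, 1.0)]

lr, reg = 0.1, 0.01
for _ in range(300):  # plain SGD over the observed entries
    for u, i, t, x in observations:
        pred = np.sum(U[u] * V[i] * T[t])  # three-way inner product
        err = x - pred
        U[u] += lr * (err * V[i] * T[t] - reg * U[u])
        V[i] += lr * (err * U[u] * T[t] - reg * V[i])
        T[t] += lr * (err * U[u] * V[i] - reg * T[t])

print(np.sum(U[0] * V[1] * T[2]))  # should approach the observed value 1.0
```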
Exploring Personalized Command Recommendations based on Information Found in Web Documentation BIBAFull-Text 225-235
  Md Adnan Alam Khan; Volodymyr Dziubak; Andrea Bunt
Prior work on command recommendations for feature-rich software has relied on data supplied by a large community of users to generate personalized recommendations. In this work, we explore the feasibility of using an alternative data source: web documentation. Specifically, our approach uses QF-Graphs, a previously proposed technique that maps higher-level tasks (i.e., search queries) to commands referenced in online documentation. Our approach uses these command-to-task mappings as an automatically generated plan library, enabling our prototype system to make personalized recommendations for task-relevant commands. Through both offline and online evaluations, we explore potential benefits and drawbacks of this approach.
Minimal Interaction Search in Recommender Systems BIBAFull-Text 236-246
  Branislav Kveton; Shlomo Berkovsky
While numerous works study algorithms for predicting item ratings in recommender systems, the area of user-recommender interaction remains largely under-explored. In this work, we look into user interaction with the recommendation list, aiming to devise a method that allows users to discover items of interest in a minimal number of interactions. We propose generalized linear search (GLS), a combination of linear and generalized searches that brings together the benefits of both approaches. We prove that GLS performs at least as well as generalized search and compare our method to several baselines and heuristics. Our evaluation shows that GLS is liked by users and achieves the shortest interactions.
Improving Controllability and Predictability of Interactive Recommendation Interfaces for Exploratory Search BIBAFull-Text 247-251
  Antti Kangasrääsiö; Dorota Glowacka; Samuel Kaski
In exploratory search, when a user directs a search engine using uncertain relevance feedback, usability problems regarding controllability and predictability may arise. One problem is that the user is often modelled as a passive source of relevance information, instead of an active entity trying to steer the system based on evolving information needs. This may cause the user to feel that the response of the system is inconsistent with her steering. Another problem arises due to the sheer size and complexity of the information space, and hence of the system, as it may be difficult for the user to anticipate the consequences of her actions in this complex environment. These problems can be mitigated by interpreting the user's actions as setting a goal for an optimization problem regarding the system state, instead of passive relevance feedback, and by allowing the user to see the predicted effects of an action before committing to it. In this paper, we present an implementation of these improvements in a visual user-controllable search interface. A user study involving exploratory search for scientific literature gives some indication on improvements in task performance, usability, perceived usefulness and user acceptance.

Personalization / Adaptation / Recommendation / Sentiment

Adaptive Recommendation-based Modeling Support for Data Analysis Workflows BIBAFull-Text 252-262
  Dietmar Jannach; Michael Jugovac; Lukas Lerche
RapidMiner is a software framework for the development and execution of data analysis workflows. Like many modern software development environments, the tool comprises a visual editor which allows the user to design processes on a conceptual level, thereby abstracting technical details and helping the user focus on the core modeling task. The large set of pre-implemented data analysis operations available in the framework, as well as their logical dependencies, can, however, be overwhelming, in particular for novice users.
   In this work we present an intelligent add-on to the RapidMiner framework that supports the user during the modeling phase by recommending additional operations to insert into the currently developed data analysis workflow. In the paper, we first propose different recommendation techniques and evaluate them in an offline setting using a pool of several thousand existing workflows. Second, we present the results of a laboratory study, which show that our tool helps users to significantly increase the efficiency of the modeling process.
Intelligent Computing in Personal Informatics: Key Design Considerations BIBAFull-Text 263-274
  Fredrik Ohlin; Carl Magnus Olsson
An expanding range of apps supported by wearable and mobile devices are being used by people engaged in personal informatics in order to track and explore data about themselves and their everyday activities. While the aspect of data collection is easier than ever before through these technologies, more advanced forms of support from personal informatics systems are not presently available. This lack of next-generation personal informatics systems presents research with an important role to fill, and this paper presents a two-step contribution to this end. The first step is to present a new model of human cooperation with intelligent computing, which collates key issues from the literature. The second step is to apply this model to personal informatics, identifying twelve key considerations for integrating intelligent computing in the design of future personal informatics systems. These design considerations are also applied to an example system, which illustrates their use in eliciting new design directions.
Aurigo: an Interactive Tour Planner for Personalized Itineraries BIBAFull-Text 275-285
  Alexandre Yahi; Antoine Chassang; Louis Raynaud; Hugo Duthil; Duen Horng (Polo) Chau
Planning personalized tour itineraries is a complex and challenging task for both humans and computers. Doing it manually is time-consuming; approaching it as an optimization problem is computationally NP-hard. We present Aurigo, a tour planning system combining a recommendation algorithm with interactive visualization to create personalized itineraries. This hybrid approach enables Aurigo to take into account both quantitative and qualitative preferences of the user. We conducted a within-subject study with 10 participants, which demonstrated that Aurigo helped them find points of interest quickly. Most participants chose Aurigo over Google Maps as their preferred tool for creating personalized itineraries. Aurigo may be integrated into review websites or social networks, to leverage their databases of reviews and ratings and provide better itinerary recommendations.
Rhema: A Real-Time In-Situ Intelligent Interface to Help People with Public Speaking BIBAFull-Text 286-295
  M. Iftekhar Tanveer; Emy Lin; Mohammed (Ehsan) Hoque
A large number of people rate public speaking as their top fear. What if these individuals were given an intelligent interface that provides live feedback on their speaking skills? In this paper, we present Rhema, an intelligent user interface for Google Glass to help people with public speaking. The interface automatically detects the speaker's volume and speaking rate in real time and provides feedback during the actual delivery of speech. While designing the interface, we experimented with two different strategies of information delivery: 1) continuous streams of information, and 2) sparse delivery of recommendations. We evaluated our interface with 30 native English speakers. Each participant presented three speeches (avg. duration 3 minutes) with 2 different feedback strategies (continuous, sparse) and a baseline (no feedback) in a random order. The participants were significantly more pleased (p < 0.05) with their speech when using the sparse feedback strategy than with the continuous one or no feedback.
Managing Smartphone Interruptions through Adaptive Modes and Modulation of Notifications BIBAFull-Text 296-299
  Hugo Lopez-Tovar; Andreas Charalambous; John Dowell
Smartphones are capable of alerting their users to different kinds of digital interruption using different modalities and with varying modulation. Smart notification is the capability of a smartphone for selecting the user's preferred kind of alert in particular situations using the full vocabulary of notification modalities and modulations. It therefore goes well beyond attempts to predict if or when to silence a ringing phone call. We demonstrate smart notification for messages received from a document retrieval system while the user is attending a meeting. The notification manager learns users' notification preferences from their judgements about videos of meetings. It takes into account the relevance of the interruption to the meeting, whether the user is busy and the sensed location of the smartphone. Through repeated training, the notification manager learns to reliably predict the preferred notification modes for users, and this learning continues to improve with use.
IntentStreams: Smart Parallel Search Streams for Branching Exploratory Search BIBAFull-Text 300-305
  Salvatore Andolina; Khalil Klouche; Jaakko Peltonen; Mohammad Hoque; Tuukka Ruotsalo; Diogo Cabral; Arto Klami; Dorota Glowacka; Patrik Floréen; Giulio Jacucci
The user's understanding of information needs and the information available in the data collection can evolve during an exploratory search session. Search systems tailored for well-defined narrow search tasks may be suboptimal for exploratory search where the user can sequentially refine the expressions of her information needs and explore alternative search directions. A major challenge for exploratory search systems design is how to support such behavior and expose the user to relevant yet novel information that can be difficult to discover by using conventional query formulation techniques. We introduce IntentStreams, a system for exploratory search that provides interactive query refinement mechanisms and parallel visualization of search streams. The system models each search stream via an intent model allowing rapid user feedback. The user interface allows swift initiation of alternative and parallel search streams by direct manipulation that does not require typing. A study with 13 participants shows that IntentStreams provides better support for branching behavior compared to a conventional search system.
Building Image Sentiment Dataset with an Online Rating Game BIBAFull-Text 306-310
  Chanhee Yoon; KeumHee Kang; Eun Yi Kim
In this paper, an online rating game called Image-Battle is developed to build a ground truth dataset for image sentiment analysis. Our goal is to provide a more interesting and intuitive interface to users and to collect images with more accurate sentiment scores with less human intervention. For this, two schemes are designed: 1) a pair-wise competition and 2) a ranking algorithm based on visual link analysis. First, the system shows two images and asks the user which image is closer to a given sentiment. Thereafter, the ranking algorithm assigns sentiment scores to the images based on all the competition results: the main idea is to give higher scores to images that win more or that beat images with high scores. To evaluate the proposed system, it was used to collect ground truth for 30,000 Photo.net images, each of which was labeled with six emotions. The ground truth was used to develop the sentiment recognition system, and its result was compared with that of another rating system. The results demonstrate the advantages of the proposed method in terms of accuracy and user satisfaction.
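One standard way to turn such pairwise outcomes into per-image scores, where beating a highly rated image counts for more, is an Elo-style update; the sketch below is only an analogy, since the paper's actual ranking algorithm is based on visual link analysis:

```python
# Illustrative sketch: an Elo-style update turns pairwise "which image fits this
# sentiment better?" votes into per-image scores, where beating a highly rated
# image raises the winner's rating more. (The paper itself uses a ranking
# algorithm based on visual link analysis; this is only an analogy.)
def elo_update(winner_rating, loser_rating, k=32.0):
    expected_win = 1.0 / (1.0 + 10 ** ((loser_rating - winner_rating) / 400.0))
    delta = k * (1.0 - expected_win)
    return winner_rating + delta, loser_rating - delta

ratings = {"img_a": 1500.0, "img_b": 1500.0, "img_c": 1500.0}
votes = [("img_a", "img_b"), ("img_a", "img_c"), ("img_c", "img_b")]  # (winner, loser)
for winner, loser in votes:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])
print(sorted(ratings, key=ratings.get, reverse=True))  # ['img_a', 'img_c', 'img_b']
```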

Visualization / Video / Augmented Reality

Augmenting the Driver's View with Peripheral Information on a Windshield Display BIBAFull-Text 311-321
  Renate Häuslschmid; Sven Osterwald; Marcus Lang; Andreas Butz
Windshield displays (WSDs) are information displays covering the entire windshield. Current WSD test setups place information at different distances, but always within the driver's foveal field of view. We built two WSD test setups, which present information not only at various distances within the driver's visual focus, but also in the peripheral field of view. Then we evaluated the display of information in the periphery on both WSD setups in a user study. While making sure the participants would look at the peripheral information, we measured the display's impact on driving performance. Subjects were also asked about their driving experience with the windshield displays and their preference between the two setups.
Attention Engagement and Cognitive State Analysis for Augmented Reality Text Display Functions BIBAFull-Text 322-332
  Takumi Toyama; Daniel Sonntag; Jason Orlosky; Kiyoshi Kiyokawa
Human eye gaze has recently been used as an effective input interface for wearable displays. In this paper, we propose a gaze-based interaction framework for optical see-through displays. The proposed system can automatically judge whether a user is engaged with virtual content in the display or focused on the real environment and can determine his or her cognitive state. With these analytic capacities, we implement several proactive system functions including adaptive brightness, scrolling, messaging, notification, and highlighting, which would otherwise require manual interaction. The goal is to manage the relationship between virtual and real, creating a more cohesive and seamless experience for the user. We conduct user experiments including attention engagement and cognitive state analysis, such as reading detection and gaze position estimation in a wearable display towards the design of augmented reality text display applications. The results from the experiments show robustness of the attention engagement and cognitive state analysis methods. A majority of the experiment participants (8/12) stated the proactive system functions are beneficial.
Content-driven Multi-modal Techniques for Non-linear Video Navigation BIBAFull-Text 333-344
  Kuldeep Yadav; Kundan Shrivastava; S. Mohana Prasad; Harish Arsikere; Sonal Patil; Ranjeet Kumar; Om Deshmukh
The growth of Massive Open Online Courses (MOOCs) has been remarkable in the last few years. A significant amount of MOOC content is in the form of videos, and participants often use non-linear navigation to browse through a video. This paper proposes the design of a system that provides non-linear navigation in educational videos using features derived from a combination of the audio and visual content of a video. It provides multiple dimensions for quickly navigating to a given point of interest in a video, i.e., a customized dynamic time-aware word-cloud, video pages, and a 2-D timeline. In the word-cloud, the relative placement of the words indicates their temporal ordering in the video whereas color codes are used to represent acoustic stress. The 2-D timeline is used to present multiple occurrences of a keyword/concept in the video in response to a user click in the word-cloud. Additionally, visual content is analyzed to identify frames with "maximum written content", known as video pages. We conducted a user study with 20 users to evaluate the proposed system and compared it with transcription-based interfaces used by major MOOC providers. Our findings suggest that the proposed system leads to statistically significant navigation time savings, especially on multimodal navigation tasks.
Getting the Message?: A Study of Explanation Interfaces for Microblog Data Analysis BIBAFull-Text 345-356
  James Schaffer; Prasanna Giridhar; Debra Jones; Tobias Höllerer; Tarek Abdelzaher; John O'Donovan
In many of today's online applications that facilitate data exploration, results from information filters such as recommender systems are displayed alongside traditional search tools. However, the effect of prediction algorithms on users who are performing open-ended data exploration tasks through a search interface is not well understood. This paper describes a study of three interface variations of a tool for analyzing commuter traffic anomalies in the San Francisco Bay Area. The system supports novel interaction between a prediction algorithm and a human analyst, and is designed to explore the boundaries, limitations and synergies of both. The degree of explanation of underlying data and algorithmic process was varied experimentally across each interface. The experiment (N=197) was performed to assess the impact of algorithm transparency/explanation on data analysis tasks in terms of search success, general insight into the underlying data set and user experience. Results show that 1) presence of recommendations in the user interface produced a significant improvement in recall of anomalies, 2) participants were able to detect anomalies in the data that were missed by the algorithm, 3) participants who used the prediction algorithm performed significantly better when estimating quantities in the data, and 4) participants in the most explanatory condition were the least biased by the algorithm's predictions when estimating quantities.
Prediction of Users' Learning Curves for Adaptation while Using an Information Visualization BIBAFull-Text 357-368
  Sébastien Lallé; Dereck Toker; Cristina Conati; Giuseppe Carenini
User performance and satisfaction when working with an interface is influenced by how quickly the user can acquire the skills necessary to work with the interface through practice. Learning curves are mathematical models that can represent a user's skill acquisition ability through parameters that describe the user's initial expertise as well as her learning rate. This information could be used by an interface to provide adaptive support to users who may otherwise be slow in learning the necessary skills. In this paper, we investigate the feasibility of predicting in real time a user's learning curve when working with ValueChart, an interactive visualization for decision making. Our models leverage various data sources (a user's gaze behavior, pupil dilation, cognitive abilities), and we show that they outperform a baseline that leverages only knowledge on user task performance so far. We also show that the best performing model achieves good accuracies in predicting users' learning curves even after observing users' performance only on a few tasks. These results are promising toward the design of user-adaptive visualizations that can dynamically support a user in acquiring the necessary skills to complete visual tasks.
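As one concrete illustration of what such a learning curve looks like (a standard parametric form, not necessarily the exact model fitted in the paper), the power law of practice expresses task completion time as a function of the amount of practice:

```latex
% Power law of practice: a common parametric learning curve.
% T_n : time to complete the n-th task
% a   : time on the first task (initial expertise)
% b   : learning rate (how quickly performance improves with practice)
T_n = a \, n^{-b}
```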
Halo Content: Context-aware Viewspace Management for Non-invasive Augmented Reality BIBAFull-Text 369-373
  Jason Orlosky; Kiyoshi Kiyokawa; Takumi Toyama; Daniel Sonntag
In mobile augmented reality, text and content placed in a user's immediate field of view through a head-worn display can interfere with day-to-day activities. In particular, messages, notifications, or navigation instructions overlaid in the central field of view can become a barrier to effective face-to-face meetings and everyday conversation. Many text and view management methods attempt to improve text viewability, but fail to provide a non-invasive personal experience for the user.
   In this paper, we introduce Halo Content, a method that proactively manages movement of multiple elements such as e-mails, texts, and notifications to make sure they do not interfere with interpersonal interactions. Through a unique combination of face detection, integrated layouts, and automated content movement, virtual elements are actively moved so that they do not occlude conversation partners' faces or gestures. Unlike other methods that often require tracking or prior knowledge of the scene, our approach can deal with multiple conversation partners in unknown, dynamic situations. In a preliminary experiment with 14 participants, we show that the Halo Content algorithm results in a 54.8% reduction in the number of times content interfered with conversations compared to standard layouts.
Emotar: Communicating Feelings through Video Sharing BIBAFull-Text 374-378
  Tiffany C. K. Kwok; Michael Xuelin Huang; Wai Cheong Tam; Grace Ngai
Affect exchange is essential for healthy physical and social development [7], and friends and family communicate their emotions to each other instinctively. In particular, watching movies has always been a popular mode of socialization and video sharing is increasingly viewed as an effective way to facilitate communication of feelings and affects, even when the parties are not in the same location. We present an asynchronous video-sharing platform that uses Emotars to facilitate affect sharing in order to create and enhance the sense of togetherness through the experience of asynchronous movie watching. We investigate its potential impact and benefits, including a better viewing experience, supporting relationships, and strengthening engagement, connectedness and emotion awareness among individuals.

Affect / Health

Automatic Detection of Learning-Centered Affective States in the Wild BIBAFull-Text 379-388
  Nigel Bosch; Sidney D'Mello; Ryan Baker; Jaclyn Ocumpaugh; Valerie Shute; Matthew Ventura; Lubin Wang; Weinan Zhao
Affect detection is a key component in developing intelligent educational interfaces that are capable of responding to the affective needs of students. In this paper, computer vision and machine learning techniques were used to detect students' affect as they used an educational game designed to teach fundamental principles of Newtonian physics. Data were collected in the real-world environment of a school computer lab, which provides unique challenges for detection of affect from facial expressions (primary channel) and gross body movements (secondary channel) -- up to thirty students at a time participated in the class, moving around, gesturing, and talking to each other. Results were cross validated at the student level to ensure generalization to new students. Classification was successful at levels above chance for off-task behavior (area under the receiver operating characteristic curve, AUC = .816) and each affective state including boredom (AUC = .610), confusion (.649), delight (.867), engagement (.679), and frustration (.631), as well as a five-way overall classification of affect (.655), despite the noisy nature of the data. Implications and prospects for affect-sensitive interfaces for educational software in classroom environments are discussed.
Exploring Peripheral Physiology as a Predictor of Perceived Relevance in Information Retrieval BIBAFull-Text 389-399
  Oswald Barral; Manuel J. A. Eugster; Tuukka Ruotsalo; Michiel M. Spapé; Ilkka Kosunen; Niklas Ravaja; Samuel Kaski; Giulio Jacucci
Peripheral physiological signals, as obtained using electrodermal activity and facial electromyography over the corrugator supercilii muscle, are explored as indicators of perceived relevance in information retrieval tasks. An experiment with 40 participants is reported, in which these physiological signals are recorded while participants perform information retrieval tasks. Appropriate feature engineering is defined, and the feature space is explored. The results indicate that features in the window of 4 to 6 seconds after the relevance judgment for electrodermal activity, and from 1 second before to 2 seconds after the relevance judgment for corrugator supercilii activity, are associated with the users' perceived relevance of information items. A classifier verified the predictive power of the features and showed up to a 14% improvement in predicting relevance. Our research can help the design of intelligent user interfaces for information retrieval that can detect the user's perceived relevance from physiological signals and complement or replace conventional relevance feedback.
Light-Bulb Moment?: Towards Adaptive Presentation of Feedback based on Students' Affective State BIBAFull-Text 400-404
  Beate Grawemeyer; Wayne Holmes; Sergio Gutiérrez-Santos; Alice Hansen; Katharina Loibl; Manolis Mavrikis
Affective states play a significant role in students' learning behaviour. Positive affective states can enhance learning, whilst negative affective states can inhibit it. This paper describes a Wizard-of-Oz study which investigates whether the way feedback is presented should change according to the affective state of a student, in order to encourage affect change if that state is negative. We presented 'high-interruptive' feedback in the form of pop-up windows in which messages were immediately viewable; or 'low-interruptive' feedback, a glowing light bulb which students needed to click in order to access the messages. Our results show that when students are confused or frustrated high-interruptive feedback is more effective, but when students are enjoying their activity, there is no difference. Based on the results, we present guidelines for adaptively tailoring the presentation of feedback based on students' affective states when interacting with learning environments.
BayesHeart: A Probabilistic Approach for Robust, Low-Latency Heart Rate Monitoring on Camera Phones BIBAFull-Text 405-416
  Xiangmin Fan; Jingtao Wang
Recent technological advances have demonstrated the feasibility of measuring people's heart rates through commodity cameras by capturing users' skin transparency changes, color changes, or involuntary motion. However, such raw image data collected during everyday interactions (e.g. gaming, learning, and fitness training) are often noisy and intermittent, especially in mobile contexts. Such interference causes increased error rates, latency, and even detection failures for most existing algorithms. In this paper, we present BayesHeart, a probabilistic algorithm that extracts both heart rates and distinct phases of the cardiac cycle directly from raw fingertip transparency signals captured by camera phones. BayesHeart is based on an adaptive hidden Markov model, requires minimal training data, and is user-independent. Through a comparative study of twelve state-of-the-art algorithms covering the design space of noise reduction and pulse counting, we found that BayesHeart outperforms existing algorithms in both accuracy and speed for noisy, intermittent signals.
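An illustrative sketch only: BayesHeart itself is an adaptive hidden Markov model, whereas the simplified version below fits a plain two-state Gaussian HMM (rising vs. falling phase of the pulse waveform) with hmmlearn and estimates heart rate from decoded state transitions; all parameters are assumptions.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def estimate_heart_rate(signal, fs):
        X = np.asarray(signal, dtype=float).reshape(-1, 1)   # fingertip brightness samples
        model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
        model.fit(X)                                  # unsupervised fit to this user's signal
        states = model.predict(X)                     # most likely phase sequence (Viterbi)
        beats = np.sum(np.diff(states) != 0) / 2.0    # one beat ~ one rise-and-fall cycle
        return 60.0 * beats / (len(signal) / fs)      # beats per minute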
Mixed-Initiative Real-Time Topic Modeling & Visualization for Crisis Counseling BIBAFull-Text 417-426
  Karthik Dinakar; Jackie Chen; Henry Lieberman; Rosalind Picard; Robert Filbin
Text-based counseling and support systems have seen an increasing proliferation in the past decade. We present Fathom, a natural language interface to help crisis counselors on Crisis Text Line, a new 911-like crisis hotline that takes calls via text messaging rather than voice. Text messaging opens up the opportunity for software, as well as people, to read the messages and to provide assistance to the human counselors who give clients emotional and practical support. Crisis counseling is a tough job that requires dealing with emotionally stressed people in possibly life-critical situations, under time constraints. Fathom is a system that provides topic modeling of calls and graphical visualization of topic distributions, updated in real time. We develop a mixed-initiative paradigm to train coherent topic and word distributions and use them to power real-time visualizations aimed at reducing counselor cognitive overload. We believe Fathom to be the first real-time computational framework to assist in crisis counseling.
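A rough sketch, not Fathom's implementation, of keeping topic distributions updated as new messages arrive, using scikit-learn's online variational LDA; the hashing vocabulary, topic count, and message source are assumptions.
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import HashingVectorizer

    vectorizer = HashingVectorizer(n_features=2**12, alternate_sign=False, norm=None,
                                   stop_words="english")
    lda = LatentDirichletAllocation(n_components=20, learning_method="online", random_state=0)

    def update_topics(new_messages):
        """Fold a new batch of messages into the model and return their topic mixtures."""
        X = vectorizer.transform(new_messages)
        lda.partial_fit(X)        # incremental update, no full refit
        return lda.transform(X)   # per-message topic distributions for the visualization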
Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study BIBAFull-Text 427-431
  Edison Thomaz; Cheng Zhang; Irfan Essa; Gregory D. Abowd
Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in-the-wild where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device for an average of 5 hours during one day while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors.
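A minimal sketch of the person-independent evaluation style mentioned above: hold out each participant in turn, train on the rest, and report the F-score. The classifier and feature matrix stand in for the authors' audio pipeline and are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import LeaveOneGroupOut

    def person_independent_f1(X, y, participant_ids):
        scores = []
        for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=participant_ids):
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X[train_idx], y[train_idx])
            scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))  # eating vs. not eating
        return float(np.mean(scores))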
Learning Therapy Strategies from Demonstration Using Latent Dirichlet Allocation BIBAFull-Text 432-436
  Hee-Tae Jung; Richard G. Freedman; Tammie Foster; Yu-Kyong Choe; Shlomo Zilberstein; Roderic A. Grupen
The use of robots in stroke rehabilitation has become a popular trend in rehabilitation robotics. However, despite the acknowledged value of customized service for individual patients, research on programming adaptive therapy for individual patients has received little attention. The goal of the current study is to model teletherapy sessions in the form of a generative process for autonomous therapy that approximates the demonstrations of the therapist. The resulting autonomous therapy programs may imitate the strategy that the therapist might have employed and reinforce therapeutic exercises between teletherapy sessions. We propose to encode the therapist's decision criteria in terms of the patient's motor performance features. Specifically, in this work, we apply Latent Dirichlet Allocation to the batch data collected during teletherapy sessions between a single stroke patient and a single therapist. Using the resulting models, therapeutic exercise targets are generated and verified with the same therapist who provided the demonstrations.
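A loose sketch of the modelling step, under the assumption that each session's motor-performance measurements have been discretized into token-like features; the tokenization scheme and topic count are invented for illustration, and gensim is used here in place of whatever implementation the authors employed.
    from gensim import corpora, models

    def fit_session_topics(session_tokens, num_topics=5):
        # session_tokens: one list of discrete feature tokens per teletherapy session,
        # e.g. ["reach_time=slow", "grip_force=low", ...] (hypothetical encoding).
        dictionary = corpora.Dictionary(session_tokens)
        corpus = [dictionary.doc2bow(tokens) for tokens in session_tokens]
        lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary,
                              passes=10, random_state=0)
        return lda, dictionary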

Tutorials

Speech-based Interaction: Myths, Challenges, and Opportunities BIBAFull-Text 437-438
  Cosmin Munteanu; Gerald Penn
HCI research has long been dedicated to facilitating information transfer between humans and machines in better and more natural ways. Unfortunately, humans' most natural form of communication, speech, is also one of the most difficult modalities for machines to understand -- despite, and perhaps because, it is the highest-bandwidth communication channel we possess. While significant research efforts, from engineering to linguistics to the cognitive sciences, have been spent on improving machines' ability to understand speech, the HCI community has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the relatively discouraging levels of accuracy in understanding speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing and especially evaluating speech and natural language interfaces.
   The goal of this course is to inform the IUI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn more about how speech recognition and speech synthesis work, what their limitations are, and how they could be used to enhance current interaction paradigms. Through this, we hope that IUI researchers and general HCI, UI, and UX practitioners will learn how to combine recent advances in speech processing with user-centred principles in designing more usable and useful speech-based interactive systems.
Tutorial on Personalization for Behaviour Change BIBAFull-Text 439-442
  Judith Masthoff; Julita Vassileva
Digital behaviour interventions aim to encourage and support people to change their behaviour, for their own or communal benefits. Personalization plays an important role in this, as the most effective persuasive and motivational strategies are likely to depend on user characteristics. This tutorial covers the role of personalization in behaviour change technology, and methods and techniques to design personalized behaviour change technology.
Modelling User Affect and Sentiment in Intelligent User Interfaces: A Tutorial Overview BIBAFull-Text 443-446
  Björn W. Schuller
The computer-based automatic analysis of human sentiment and affect is broadly expected to play a major role, likely making 'that difference', in future Intelligent User Interfaces, as it bears the promise of lending interactive systems emotional intelligence. Such interfaces comprise intelligent digital games, e.g., for empowerment and inclusion, tutoring systems, information systems, and virtual companions, e.g., in the car, to name but a few. This tutorial aims to give a good introduction to the related fields of user Sentiment Analysis and user Affect Modelling. Its intention is to show the general technology and its current reliability, the ways to technically integrate and efficiently embed solutions in a user interface context, and the latest trends in this young and still-emerging field. Emphasis is placed on highlighting the range of toolkits available at this moment, with the aim of empowering attendees to immediately craft their own solutions. This description contains the general motivation, goals, objectives, and topics.

Workshops

PATCH 2015: Personalized Access to Cultural Heritage BIBAFull-Text 447-449
  Liliana Ardissono; Cristina Gena; Lora Aroyo; Tsvika Kuflik; Alan J. Wecker; Johan Oomen; Oliviero Stock
Since 2007, the PATCH workshop series (https://patchworkshopseries.wordpress.com/) has successfully gathered researchers and professionals from various countries and institutions to discuss digital access to cultural heritage, and specifically the personalization aspects of this process. Thanks to this rich history, the reach of the PATCH workshop across various research communities is extensive.
IDGEI 2015: 3rd International Workshop on Intelligent Digital Games for Empowerment and Inclusion BIBAFull-Text 450-452
  Lucas Paletta; Björn Schuller; Peter Robinson; Nicolas Sabouret
Digital Games for Empowerment and Inclusion have the potential to improve our society by preparing particular groups of people to meet social challenges in their everyday lives, and to do so in an enjoyable way through games. These games are developing rapidly, exploiting new algorithms for computational intelligence, supported by the increasing availability of computing power, to analyze players' behavior, monitor their motivation and interest, and adapt the progress of the games accordingly. The workshop on Intelligent Digital Games for Empowerment and Inclusion (IDGEI) explores the use of machine intelligence in serious digital games. In this context, we summarize the third international workshop on IDGEI, held at the International Conference on Intelligent User Interfaces (IUI) 2015.
SmartObjects: Fourth Workshop on Interacting with Smart Objects BIBAFull-Text 453-454
  Dirk Schnelle-Walka; Max Mühlhäuser; Stefan Radomski; Oliver Brdiczka; Jochen Huber; Kris Luyten; Tobias Grosse-Puppendahl
The increasing number of smart objects in our everyday life shapes how we interact beyond the desktop. In this workshop we discussed how the interaction with these smart objects should be designed from various perspectives. This year's workshop put a special focus on affective computing with smart objects, as reflected by the keynote talk.
IUI-TextVis 2015: Fourth Workshop on Interactive Visual Text Analytics BIBAFull-Text 455-457
  Jaegul Choo; Christopher Collins; Wenwen Dou; Alex Endert
Analyzing text documents has been a key research topic in many areas. Countless approaches have been proposed to tackle this problem, and they are largely categorized into fully automated approaches (via statistical techniques) or human-involved exploratory ones (via interactive visualization). The primary purpose of this workshop is to bring together researchers from both sides and provide them with opportunities to discuss ways to harmonize the power of these two complementary approaches. The combination will allow us to push the boundary of text analytics. The detailed workshop schedule, proceedings, and agenda will be available at http://www.textvis.org.